| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
236990380 | pes2o/s2orc | v3-fos-license | Phytophthora capsici CBM1‐containing protein CBP3 is an apoplastic effector with plant immunity‐inducing activity
Abstract Carbohydrate‐binding module family 1 (CBM1) is a cellulose‐binding domain that is almost exclusively found in fungi and oomycetes. CBM1‐containing proteins (CBPs) have diverse domain architectures and play pivotal roles in the plant–microbe interaction. However, only a few CBPs have been functionally investigated. In this study, we identified PcCBP3 in an oomycete pathogen, Phytophthora capsici. PcCBP3 contains two tandem CBM1 domains and its orthologs from other Phytophthora species exhibit diversity, including gene loss, pseudogenization, and variations in sequence and domain structure. PcCBP3 is upregulated during infection and knockout of PcCBP3 results in significantly decreased virulence. Moreover, PcCBP3 requires its signal peptide to induce BAK1‐dependent cell death in Nicotiana benthamiana. Further studies indicate that PcCBP3‐triggered cell death and plant immunity require its N‐terminal region, which is conserved in CBM1‐containing proteins and other small, secreted, cysteine‐rich proteins from oomycetes. These results suggest that PcCBP3 is an apoplastic effector and could be perceived by the plant immune system.
CBPs have variable domain architectures, whereas only CBEL from Phytophthora parasitica has been functionally characterized.
Phytophthora capsici is one of the most notorious oomycete phytopathogens and infects a wide range of crop plants (Kamoun et al., 2015; Lamour et al., 2012b). Moreover, P. capsici also infects the model plants Nicotiana benthamiana and Arabidopsis thaliana, making it a model pathogen for understanding Phytophthora-plant interactions (Lamour et al., 2012b; Wang et al., 2013). In this study, we identified a virulence-essential protein, PcCBP3, which contains two CBM1 domains and triggers BAK1-dependent cell death in N. benthamiana.
The Ser24 site at the conserved N-terminal region is essential for PcCBP3-induced cell death.
| PcCBP3 is a CBM1-containing protein that triggers cell death in N. benthamiana
To identify potential CBM1-containing proteins in P. capsici, we searched its genome sequence (Lamour et al., 2012a) using the hmmsearch program and obtained nine candidates containing CBM1 domains (Table S1). Among them, one shares 88.4% protein sequence similarity with PpCBEL, a known Phytophthora MAMP (Gaulin et al., 2006). The other eight exhibit different domain architectures and were named PcCBP1 to PcCBP8 accordingly (Figure 1a). All nine CBPs contain an N-terminal signal peptide, suggesting that they are probably secreted proteins. Intriguingly, none of them contains a catalytic domain, which is distinct from most fungal CBPs. PcCBPs possess different numbers of CBM1 domains that are separated by low-complexity regions.
Previously reported CBPs such as PpCBEL, VdCUT11, and VdEG3 can activate plant immunity and trigger cell death in N. benthamiana (Gaulin et al., 2006; Gui et al., 2017, 2018). Therefore, to reveal the roles of P. capsici CBPs during interactions with plants, all nine CBPs were cloned from P. capsici strain LT263 and transiently expressed in N. benthamiana by agroinfiltration. PcCBP3 induced cell death in N. benthamiana leaves, whereas all other CBPs failed to trigger cell death (Figure 1c). Immunoblot assays showed that all the CBP proteins were normally expressed in N. benthamiana (Figure 1d).
Interestingly, we noticed that PcCBEL could not induce cell death, an activity distinct from that of its close homolog from P. parasitica, PpCBEL (Gaulin et al., 2006). We inferred that sequence divergence may account for the difference between these two proteins (Figure S1). Therefore, we focused on PcCBP3 for further study.
| CBP3 is moderately conserved in Phytophthora species
We investigated PcCBP3 orthologs from 37 Phytophthora species with available genome sequences. In total, 24 CBP3 orthologs were identified in 23 species (Figure 2a). Ten species lacked a CBP3 ortholog and three species had only pseudogenes. Moreover, the 24 CBP3 orthologs were divided into six variants based on the domain architecture (Figure 2b and Table S2). Most of these orthologs showed a similar domain architecture to PcCBP3. Multiple sequence alignment of these orthologs showed that the CBM1 domain is highly conserved with >90% similarity, whereas the region linking two CBM1s is highly variable with only 38.94% similarity (Figure 2c). Transient expression of four CBP3 orthologs from Phytophthora species that cause major crop diseases led to the identification of PpCBP3 from P. parasitica that also induced cell death in N. benthamiana, while PsCBP3, PiCBP3, and PcmCBP3 did not (Figure 2d).
Immunoblot analysis showed that all the proteins were expressed in N. benthamiana leaves (Figure 2e). These results suggest that CBP3 from different Phytophthora species are divergent and probably have different roles in the Phytophthora-plant interaction. Interestingly, we found that PcCBP3 and PpCBP3 showed similar western blot bands that contained ladder-like larger bands, whereas PsCBP3, PiCBP3, and PcmCBP3 did not have these larger bands (Figure 2e).
PcCBP3 knockout mutants (ΔPcCBP3-2 and ΔPcCBP3-4) were generated using the CRISPR/Cas9 system described below. In addition, a negative transformant (NT), in which PcCBP3 remained intact, was selected as a control (Figure 3b). Knockout of PcCBP3 did not affect the growth or mycelial morphology of P. capsici. The virulence of the knockout mutants was evaluated on N. benthamiana. The results showed that the average lesion sizes caused by both ΔPcCBP3-2 and ΔPcCBP3-4 were significantly smaller than that caused by NT (Figure 3c,d). Moreover, quantitative reverse transcription PCR (RT-qPCR) analysis showed that PcCBP3 transcripts were induced at the early stage of infection of N. benthamiana (Figure 3e). Previously reported transcriptomes of the P. capsici-A. thaliana interaction (Ma et al., 2018) were also used to examine the expression profile of PcCBP3. The data showed that PcCBP3 was upregulated during infection of A. thaliana (Figure S2). Among the nine CBP genes of P. capsici, seven were induced during interaction with the host plant (Figure S2), suggesting their potentially important roles in virulence.
These results indicate that PcCBP3 is required for virulence during infection.
| PcCBP3 is an apoplastic protein and induces BAK1-dependent cell death
To determine whether PcCBP3 possesses a functional signal peptide (SP), we fused its SP to yeast invertase using the yeast secretion system. SP^PcCBP3 and the positive control SP^Avr1b could lead to the secretion of invertase, which reduced triphenyltetrazolium chloride (TTC) to red formazan. In contrast, no colour change was observed for the negative control SP^Mg87 or the empty vector (Figure 4a). Moreover, PcCBP3 lacking its SP failed to induce cell death in N. benthamiana.
VdEG3 is an apoplastic CBP whose signal peptide is required for its cell death-inducing activity (Gui et al., 2017). We thus replaced the signal peptide of VdEG3 with SP^PcCBP3. As shown in Figure 4b, deletion of SP^VdEG3 abolished VdEG3-induced cell death, which could be rescued by SP^PcCBP3. Immunoblot analysis showed that all the proteins were expressed in N. benthamiana leaves (Figure 4c). Furthermore, we investigated the subcellular localization of mCherry-tagged PcCBP3 in N. benthamiana. The mCherry signal was mainly observed at the cell edge (Figure 4d). To distinguish the apoplastic and plasma membrane mCherry signals, N. benthamiana cells were plasmolysed with 30% sucrose. PcCBP3 accumulated mainly in the apoplastic region, as shown in Figure 4d. These results indicate that PcCBP3 is an apoplastic protein.
The plant cell surface-localized receptor-like kinases BAK1 and SOBIR1 are essential for cell death and/or immunity triggered by many apoplastic effectors (Heese et al., 2007; Liebrand et al., 2013). To determine whether they are required for PcCBP3-induced cell death, we silenced BAK1 or SOBIR1 in N. benthamiana by virus-induced gene silencing; PcCBP3-induced cell death was compromised in BAK1-silenced but not in SOBIR1-silenced plants. The transcript levels of BAK1 or SOBIR1 in BAK1- or SOBIR1-silenced plants were c.70% lower than those in GFP-silenced plants, as confirmed by RT-qPCR analysis (Figure 4g). These results suggest that PcCBP3 acts in the extracellular space and is perhaps perceived by a BAK1-dependent receptor complex.
| Unknown larger bands are required for PcCBP3-induced cell death and plant resistance
Plants mount a series of defence responses after perception of apoplastic effectors (Yu et al., 2017). Consistently, lesion areas caused by P. capsici were reduced on leaves treated with PcCBP3. As noted above, PcCBP3 reproducibly showed ladder-like larger bands in immunoblot assays (Figures 1d, 2e and 4c). To explore the larger bands of PcCBP3, we first predicted potential N-linked glycosylation sites using the NetNGlyc server. However, no glycosylation site was predicted in PcCBP3. The ladder-like immunoblot bands resemble SDS-resistant covalent dimers and polymers (Xiang et al., 2015). To verify the self-association of PcCBP3, a coimmunoprecipitation (CoIP) assay was performed to assess PcCBP3 polymer formation in planta. Hemagglutinin (HA)-tagged and FLAG-tagged PcCBP3 were coexpressed in N. benthamiana leaves. However, HA-tagged PcCBP3 did not coimmunoprecipitate with FLAG-tagged PcCBP3 (Figure S3), suggesting that PcCBP3 does not associate with itself.

FIGURE 2 CBP3 is moderately conserved in Phytophthora species. (a) Phylogeny of 37 Phytophthora species with released genome sequences in the NCBI genome database. The phylogenetic tree was constructed based on nucleotide sequences of EF1α retrieved from the GenBank database. Pythium vexans was used as the outgroup. Black and green species names indicate CBP3 orthologs or variants, respectively. All these species have only one CBP3 except for P. betacei, which has two CBP3 orthologs. Red and blue species names indicate no CBP3 ortholog and pseudogene, respectively.
| Ser24 is required for cell death-inducing activity of PcCBP3
PcCBP3 is a small, secreted protein that contains two CBM1 domains and two small linkers, L1 and L2 (Figure 6a). To map the region responsible for PcCBP3-induced cell death, four deletion mutants lacking a CBM1 domain or a linker were generated and expressed in N. benthamiana. As shown in Figure 6a, deletion of either CBM1 or L2 did not affect the cell death-inducing activity of PcCBP3, whereas deletion of L1 abolished PcCBP3-induced cell death. Intriguingly, the L1 deletion mutant expressed in N. benthamiana leaves lacked the larger bands, while all other mutants contained larger bands, as confirmed by immunoblot (Figure 6b). We further generated a mutant without both CBM1 domains (SP + L1 + L2), which contains no cysteine but still induced cell death (Figure S4). However, a mutant containing only the SP and L1 regions (SP + L1) failed to induce cell death (Figure S4). These findings indicate that CBM1 domains are not necessary for PcCBP3-induced cell death.

FIGURE 4 PcCBP3 is an apoplastic protein and induces BAK1-dependent cell death. (a) Validation of the N-terminal signal peptide (SP) of PcCBP3 by the yeast secretion system. The yeast strain YTK12 carrying the pSUC2 vector can grow on CMD−W medium. The SP was fused to mature yeast invertase. Secreted invertase can reduce triphenyltetrazolium chloride (TTC) to red formazan. The SPs of Mg87 and Avr1b were used as the negative and positive controls, respectively. (b) The SP of PcCBP3 is required for cell death-inducing activity. VdEG3 is a known apoplastic protein that needs to be targeted to the apoplast to induce cell death in Nicotiana benthamiana.
Because the L1 motif is not annotated in public databases, including Pfam and SMART, we examined its distribution. Of the nine CBPs from P. capsici, PcCBP2, PcCBP3, and PcCBP7 contain an L1 motif after the signal peptide. To explore whether non-CBM1 proteins also contain an L1 motif, we performed Blastp searches against the GenBank database. In total, 19 proteins were identified from Phytophthora and other oomycete species (Figure S5). All of these are small, secreted cysteine-rich (SCR) proteins. Multiple sequence alignment of representative sequences showed that most of these proteins are otherwise unrelated (Figure S5). Based on the multiple sequence alignment of CBP3 orthologs, we generated point mutants at the conserved sites by site-directed mutagenesis. First, the four most conserved basic amino acids (H26, R28, H30, and K32) were changed to alanine. However, none of these mutants exhibited impaired cell death in N. benthamiana (Figure S6).
Next, two serine residues (S24 and S31), which are possible O-linked glycosylation sites, were changed to alanine. Importantly, the S24A mutant failed to induce cell death, whereas S31A did not affect the cell death-inducing activity (Figure 6c). Cell death was confirmed by viewing under UV light (Figure 6d). Immunoblot analysis showed that both S24A and S31A exhibited a remarkable reduction of the larger bands (Figure 6e). These findings suggest that Ser24 in the L1 motif is indispensable for the cell death-inducing activity of PcCBP3.
| DISCUSSION
CBM1 is a noncatalytic domain with cellulose-binding function. CBM1 is found almost exclusively in fungi and oomycetes (Larroque et al., 2012). The CBM1 domain contains four conserved cysteines that form two disulphide bonds and are required for cellulose-binding activity (Gilkes et al., 1991). CBPs are widely distributed in plant-interacting microbes, suggesting their important roles for pathogens (Larroque et al., 2012). The fungal pathogen V. dahliae contains 28 CBPs and at least two of them (VdCUT11 and VdEG3) are required for cotton infection and are also recognized by plants (Gui et al., 2017, 2018). In this study, nine CBPs were found in the oomycete phytopathogen P. capsici, among which PcCBP3 was shown to be involved in the P. capsici-plant interaction. PcCBP3 is required for virulence and triggers BAK1-dependent cell death in N. benthamiana. Different regions have been reported to underlie the elicitor activity of CBPs (Brotman et al., 2008; Gaulin et al., 2006). VdCUT11 probably activates plant immunity indirectly by degrading the plant cell wall because mutagenesis of its active sites in the cutinase domain abolishes elicitor activity (Gui et al., 2018). A 63 amino acid region in the glycoside hydrolase domain of VdEG3 is sufficient for cell death-inducing activity (Gui et al., 2017).
The region required for elicitor function of PcCBP3 was mapped to a c.10 amino acid linker after the signal peptide (L1 motif). However, the L1 motif is required but not sufficient for PcCBP3-induced cell death.
Posttranslational modification (PTM) of proteins is a versatile regulatory process that is pivotal for many pathogen effectors and plant immune signalling proteins (Tahir et al., 2019; Withers & Dong, 2017). Examples of PTMs are phosphorylation, ubiquitination, sumoylation, glycosylation, oligomerization, and proteolytic cleavage (Tahir et al., 2019). PTMs of some apoplastic effectors are also essential for their perception by plants. For instance, glycosylation of P. sojae XEG1 is indispensable because XEG1 is glycosylated when expressed in N. benthamiana, and Pichia pastoris-expressed but not E. coli-expressed XEG1 is active. The small cysteine-rich secreted protein PC2 from P. infestans is cleaved by apoplastic subtilisin-like proteases, which releases an immunogenic peptide and activates plant immunity (Wang et al., 2021b). PcCBP3 might undergo PTM, as indicated by the ladder-like larger bands in immunoblot analysis, which is indispensable for triggering cell death and plant resistance. However, what kind of PTM is responsible for the larger bands of PcCBP3 remains obscure. The conserved L1 motif after the signal peptide was required for PcCBP3-induced cell death, and immunoblot analysis revealed that the ΔL1 mutant probably lacked these larger bands. By site-directed mutagenesis, the conserved Ser24 in the L1 motif was found to be required for the larger bands and indispensable for PcCBP3-induced cell death.
Besides the different immunogenic regions of CBPs, the receptor-like kinases BAK1 and SOBIR1 also play different roles in the perception of CBPs. CBEL-triggered defence responses, but not cell death, required BAK1, while the role of SOBIR1 in CBEL detection remains undetermined (Larroque et al., 2013). Both BAK1 and SOBIR1 are required for VdCUT11-triggered cell death (Gui et al., 2018), while PcCBP3- and VdEG3-triggered cell death only required BAK1 but not SOBIR1 (Gui et al., 2017). However, the defence responses stimulated by VdEG3 are probably SOBIR1-dependent because silencing of SOBIR1 impairs defence responses but not cell death induced by PsXEG1, a homolog of VdEG3. These findings together suggest that plants can perceive diverse microbial CBPs via different mechanisms.
In summary, we identified a CBP from P. capsici (PcCBP3) consisting of two CBM1 domains. PcCBP3 is an apoplastic effector and triggers BAK1-dependent cell death in N. benthamiana. Ser24 in the conserved L1 motif is indispensable for the detection of PcCBP3 by plants.
| Bioinformatics analyses
To identify the CBPs in P. capsici, we retrieved the aligned CBM1 sequences from the Pfam database (PF00734) as a query. The hmmsearch program in the HMMER v. 3.2 package (Eddy, 2009) was used to search the predicted proteome of P. capsici strain LT1534 downloaded from the Department of Energy (DOE) Joint Genome Institute (Grigoriev et al., 2011). The identified CBPs were subjected to domain annotation. The N-terminal signal peptide was predicted using the web servers SignalP v. 4.1 (Petersen et al., 2011) and Phobius (Käll et al., 2007). The domain architectures of CBPs were annotated by SMART (a Simple Modular Architecture Research Tool) (Letunic & Bork, 2018) and NCBI CDD (Conserved Domain Database) (Marchler-Bauer et al., 2015), and were displayed by IBS (Illustrator for Biological Sequences) (Liu et al., 2015).
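As a rough illustration of this step, the sketch below wraps the hmmsearch call and parses its tabular output; the file names and the E-value cutoff are assumptions for illustration, not values reported by the authors, and HMMER must be installed on the system.

```python
# Hedged sketch of the CBM1 search: run hmmsearch (HMMER 3) against the
# predicted proteome and collect the IDs of proteins matching the CBM1 HMM.
# File names and the E-value cutoff are illustrative assumptions.
import subprocess

def search_cbm1(hmm_file, proteome_fasta, out_table):
    subprocess.run(
        ["hmmsearch", "--tblout", out_table, "-E", "1e-5",
         hmm_file, proteome_fasta],
        check=True,
    )
    hits = set()
    with open(out_table) as fh:
        for line in fh:
            if not line.startswith("#"):       # skip HMMER comment lines
                hits.add(line.split()[0])      # column 1 = target protein ID
    return sorted(hits)

if __name__ == "__main__":
    # PF00734 is the Pfam CBM_1 family cited in the text (hypothetical file names).
    print(search_cbm1("PF00734_CBM1.hmm", "Pcapsici_LT1534_proteins.fasta",
                      "cbm1_hits.tbl"))
```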
To identify PcCBP3 orthologs in other Phytophthora species, we performed tblastn searches against the genomes of 37 Phytophthora species deposited in the NCBI genome database. The maximum-likelihood species tree of these 37 Phytophthora species was constructed using IQ-TREE (Nguyen et al., 2015) based on their EF1α DNA sequences (Yang et al., 2017). To explore the distribution of the L1 motif, we searched the GenBank NR database using Blastp.
Domain annotation and multiple sequence alignment were performed as described above. The N-terminal signal peptide was predicted using the online tool SignalP v. 4.1 (http://www.cbs.dtu.dk/services/SignalP-4.1/). Potential N-linked glycosylation sites were predicted by the NetNGlyc server (http://www.cbs.dtu.dk/services/NetNGlyc/).
| Plant growth conditions and inoculation assays
N. benthamiana plants were grown in soil in a growth room at 25 ℃ with 60% relative humidity and a 16 hr day/8 hr night photoperiod. P. capsici strain LT263 and the knockout mutants were maintained on 20% (vol/vol) V8 juice agar in the dark. For the inoculation assay, P. capsici was grown on V8 medium for 2 days and mycelial plugs from the colony edge were taken with a 5-mm diameter corkborer. Zoospores were prepared as described previously. Mycelial plugs or c.100 zoospores were inoculated onto the abaxial side of detached N. benthamiana leaves, which were then placed in a plastic box with high humidity. Inoculated leaves were maintained at 25 ℃ in the dark for 24-48 hr.
| Plasmid construction
The CBP genes were amplified from genomic DNA of the cognate Phytophthora species by PCR and cloned into the binary vector pSuper with a 3 × FLAG tag. Truncated versions and point mutations of PcCBP3 were generated by overlap PCR. The pTRV2 constructs used for silencing NbBAK1 or NbSOBIR1 were generated as described before (Nie et al., 2019). The primers used in this study are listed in Table S3.
| Transient expression and virus-induced gene silencing in N. benthamiana
Agrobacterium-mediated transient expression and TRV-based gene silencing were performed as described previously (Nie et al., 2019;Zhang et al., 2020). The levels of BAK1 and SOBIR1 in TRV-treated N. benthamiana plants were determined by RT-qPCR using NbAct as the reference gene. The primers are listed in Table S3.
| RT-qPCR analysis of PcCBP3 during infection
P. capsici hyphae inoculated on N. benthamiana leaves were collected at 0, 1.5, 6, 12, 24 and 36 hr postinoculation. Total RNA was extracted using a Plant Total RNA Kit (ZomanBio) and cDNA was synthesized with PrimeScript RT Master Mix (Takara) according to the manufacturer's instructions. Real-time PCR was performed using TB Green Premix Ex Taq II (Takara) on an ABI QuantStudio 6 Flex system (Thermo Fisher). The gene-specific primers used for RT-qPCR are listed in Table S3.
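For orientation, relative transcript levels from such Ct data are commonly computed with the 2^-ΔΔCt method; the sketch below is a minimal version of that arithmetic, with hypothetical Ct values and reference gene, not data from this study.

```python
# Minimal 2^-ΔΔCt computation (Livak method). All numbers below are
# hypothetical illustrations, not measurements from the paper.
def fold_change(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Fold change of the target gene relative to a calibrator sample."""
    d_ct = ct_target - ct_ref                # normalize to the reference gene
    d_ct_cal = ct_target_cal - ct_ref_cal    # same for the calibrator (e.g. 0 hpi)
    return 2.0 ** -(d_ct - d_ct_cal)

# Hypothetical example: PcCBP3 at 12 hpi vs. the 0 hpi calibrator
print(fold_change(24.1, 20.3, 27.8, 20.5))   # ~11-fold induction
```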
| CRISPR/Cas9-mediated knockout of PcCBP3
Gene knockout in P. capsici using the CRISPR/Cas9 system was performed as described previously. Briefly, the sgRNA targeting PcCBP3 was designed using EuPaGDT (Peng & Tarleton, 2015) and potential off-targets were assessed by performing a Blastn search against the genome of P. capsici LT1534. The secondary structure of the sgRNA was predicted using RNAstructure (Reuter & Mathews, 2010).
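To illustrate the core of the design step, the toy function below enumerates SpCas9 protospacer candidates (a 20-nt spacer immediately followed by an NGG PAM) on one strand; unlike EuPaGDT, it does not score specificity, off-targets, or sgRNA secondary structure, and the input sequence is made up.

```python
# Toy SpCas9 sgRNA candidate finder: 20-nt protospacer followed by an NGG PAM.
# Illustration only; the real design used EuPaGDT plus Blastn off-target checks.
import re

def sgrna_candidates(seq):
    """Yield (position, spacer, PAM) for each 20-mer followed by NGG."""
    seq = seq.upper()
    # A lookahead makes overlapping sites visible.
    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", seq):
        yield m.start(), m.group(1), m.group(2)

gene = "ATGGCTCGTTTCAGTGCTGCTGTTCTCGCTGGTGCTGCCCTTGTCGCTGCCAGTGGTAA"  # made-up
for pos, spacer, pam in sgrna_candidates(gene):
    print(pos, spacer, pam)
```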
Protoplast transformation was performed as previously described (Fang & Tyler, 2016). The transformants were selected by G418 antibiotic, PCR detection, and sequencing of target genomic DNA. The primers are listed in Table S3.
| Yeast secretion trap assay
The yeast secretion trap assay was used for functional evaluation of the signal peptide of PcCBP3 according to a protocol described previously (Yin et al., 2018). Briefly, SP PcCBP3 was fused to the invertase gene in the pSUC2 vector and then transformed into the yeast strain YTK12. Positive transformants were confirmed by growth on CMD−W medium. To detect invertase secretion, yeast cultures grown in YPAD liquid medium were used for TTC assay.
| Immunoblot analysis
To detect proteins expressed in N. benthamiana leaves, a 7-mm diameter leaf disc was taken with a corkborer. After adding 70 μl of Tris-buffered saline, the leaf disc was homogenized and 20 μl of 5× Laemmli sample buffer was added. The sample was boiled for 5 min and centrifuged at room temperature for 5 min at 16,873 × g. The supernatant was used for SDS-PAGE and western blotting with α-FLAG or α-HA antibodies (Abcam). Immunoprecipitations were performed using anti-FLAG M2 Affinity Gel (Sigma-Aldrich) according to the manufacturer's instructions.
| Purification of E. coli-expressed PcCBP3
To express PcCBP3 in E. coli, PcCBP3 without signal peptide was cloned into pET28a and then transformed into E. coli BL21 (DE3).
E. coli was cultured in LB medium containing 50 μg/ml kanamycin at 28 ℃ to an OD600 of 0.6. Then, 0.3 mM isopropyl β-D-1-thiogalactopyranoside (IPTG) was added to the medium and protein expression was induced at 16 ℃ overnight. E. coli cells were collected by centrifugation at 4 ℃ for 1 min at 12,396 × g, and washed with phosphate-buffered saline (PBS, pH 7.4) three times. Cells were resuspended in 10 ml PBS and disrupted by sonication. His-tagged proteins were purified by affinity chromatography using HisPur Ni-NTA Resin (Thermo Scientific) according to the manufacturer's instructions.
| Luminol-based chemiluminescence assay
ROS production induced by E. coli-expressed PcCBP3 was measured as described previously (Albert et al., 2015). In brief, 0.125 cm² leaf discs from N. benthamiana leaves were taken with a corkborer and incubated overnight in a 96-well plate with 200 μl of water. The water was then replaced with a buffer containing 1 μM protein or flg22 peptide, 20 μM L-012 (Wako), and 20 μg/ml horseradish peroxidase (Sigma-Aldrich). The chemiluminescent signal was measured immediately using a luminometer (Tecan F200).
| Subcellular localization
Imaging of the mCherry signal was performed as previously described (Wang et al., 2021a). Briefly, PcCBP3-mCherry was transiently expressed in N. benthamiana leaves for 48 hr and then imaged by a confocal laser-scanning microscope (DMI8; Leica). Plasmolysis was performed by treating leaves with 30% sucrose for 10 min.
ACKNOWLEDGEMENTS
We thank Dr Xiangxiu Liang of the China Agricultural University
DATA AVAILABILITY STATEMENT
The genes used in this study are deposited in the GenBank database at https://www.ncbi.nlm.nih.gov/genbank/ with the following ac- | 2021-08-13T06:16:46.652Z | 2021-08-11T00:00:00.000 | {
"year": 2021,
"sha1": "62848a8ff6bc1b5b2fbdf5e7904f3a9519f30ecd",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/mpp.13116",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "aa1aca79e13436cbb0c93e7cc2c38d0e38e46e45",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221015961 | pes2o/s2orc | v3-fos-license | Reply to Miller et al.: Airway Disease Presenting as Restrictive Impairment
anatomic basis for observations by a number of researchers in a variety of respiratory disorders, including World Trade Center (WTC) lung disease (2), asthma/reactive airway dysfunction syndrome (3), and coal workers' lung disease (4). These observations have been characterized as restrictive (2) in the presence of decreased FVC, often described as parallel to a decrease in FEV1 (4), and a resultant normal FEV1/FVC ratio. In nonobstructive chronic bronchitis, follow-up spirometry showed parallel decreases in FVC and FEV1 and a "restrictive pattern" in 14% (5). Similar findings have been described as "GOLD (Global Initiative for Chronic Obstructive Lung Disease)-unclassified," "PRISm" (preserved ratio impaired spirometry), "nonspecific," or simply "low FVC." Findings include characteristic airway symptoms (cough, sputum, and wheezing); flow rates at low lung volumes may be decreased but are often not reported, and oscillometry is consistent with small airway dysfunction. Unlike the iconic restrictive impairment caused by interstitial lung disease or chest bellows deficit, this "restrictive dysfunction" worsens with bronchoprovocation and improves with bronchodilatation. Unlike classic airway obstruction in chronic obstructive pulmonary disease and most cases of asthma, the FEV1/FVC ratio is maintained, and FRC or residual volume is not or is only minimally increased.
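To make the contrast concrete, a schematic decision rule along these lines is sketched below; the fixed 0.70 ratio cutoff and the 80%-of-predicted threshold are conventional values (GOLD and the PRISm literature) assumed here for illustration, not thresholds proposed in this correspondence.

```python
# Schematic spirometric pattern classifier. The cutoffs are conventional
# assumptions (fixed ratio 0.70; 80% of predicted), not values from this letter.
def spirometric_pattern(fev1_pct_pred, fvc_pct_pred, fev1_fvc_ratio):
    if fev1_fvc_ratio < 0.70:
        return "obstructive"
    if fvc_pct_pred < 80 or fev1_pct_pred < 80:
        # preserved ratio with impaired spirometry: "PRISm" / "low FVC" pattern
        return "restrictive pattern (PRISm / low FVC)"
    return "within normal limits"

# Values from the participant shown in the authors' Figure 1 (reply below):
print(spirometric_pattern(76, 78, 0.74))  # -> restrictive pattern (PRISm / low FVC)
```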
Eddy and associates demonstrated loss of subsegmental airways seen on computed tomography (1). This correlated with increased bronchial wall thickness, decreased luminal area, and ventilatory defects on hyperpolarized 3He magnetic resonance imaging in patients with severe asthma (FEV1 64-65% of predicted, FEV1/FVC 0.58-0.64) compared with less severe disease (FEV1 88% predicted, FEV1/FVC 0.74). Eddy and colleagues' Figure 1 showing the difference in airway count in patients with severe asthma vividly illustrates the anatomic deficit. Recently, the Mount Sinai WTC group reported increased bronchial wall area on quantitative computed tomography in 167 exposed workers and volunteers with the "Low FVC Spirometric Pattern," confirming Eddy and colleagues' report (6).
Restrictive impairment attributable to asthma was described in 32 of 413 (8%) patients with asthma seen in a small inner-city hospital over 2 years (3). No patients had evidence of another disorder causing restrictive impairment. Plethysmographic FRC was normal or decreased in 22 of 25 patients in whom it was measured. Restriction as opposed to obstruction was attributed to airway closure rather than narrowing, an explanation consonant with Eddy and colleagues' demonstration of airway loss. Restrictive impairment in asthma was not generally recognized before this publication despite two illustrative reports almost a half-century ago cited in this article; Colp and Williams described in 1973 a "restrictive pattern of ventilatory impairment" in two patients with asthma. One patient had mucus plugging of main and lobar bronchi and resultant massive atelectasis clearly explaining her restriction. The other had "diffuse small airway involvement" on pathologic examination, which would cause the loss of airways described by Eddy and colleagues. Three years later, Hudgel and colleagues reported "reversible restrictive lung disease" in a young patient with asthma whose TLC decreased from 5.3 to 2.6 L during an acute episode.
Loss of airways similarly helps explain the characteristic findings of low FVC, normal FEV1/FVC, and small airway obstruction on oscillometry reported in first responders and area residents with "WTC Lung Disease" (2) and the accelerated "parallel decline" in FVC and FEV1 reported in 11 coal miners in the absence of radiographic fibrosis (4).
From the Authors:
We appreciate the thought-provoking comments of Dr. Miller and colleagues in response to our report on "missing" airways in participants with asthma and, in particular, severe asthma (1). We investigated chest X-ray computed tomography (CT) total airway count alongside magnetic resonance imaging ventilation across patients with a range of asthma severity (1). Our findings may help corroborate recent results in World Trade Center workers and volunteers (2, 3), in whom there was restrictive airflow (spirometry) and CT evidence of airway wall remodeling. We can also confirm that in our study of asthma, there was no CT evidence of restrictive parenchymal abnormalities or fibrosis and/or interstitial disease. There was, however, CT evidence of intraluminal airway plugs in 20 of 70 participants, which appeared to have minimally influenced total airway count measurements.
In response to the suggestions of Dr. Miller and colleagues, we retrospectively inspected all spirometry measurements from our study and identified that 10 of 70 (14%) participants showed diminished FEV1 and FVC with a preserved FEV1/FVC ratio, which is similar to a previously reported rate (8%) in patients of a small inner-city hospital (4). From our reported study, we now provide Figure 1 for a representative participant with such findings. Magnetic resonance imaging ventilation is shown coregistered to the patient-specific CT airway tree, alongside oscillometry plots and pre- and postbronchodilator pulmonary function measurements (1). In this participant with severe (Global Initiative for Asthma 5) asthma, the CT total airway count was 129, less than one-half of what is expected. FEV1 and FVC both improved after bronchodilator, whereas FRC, residual volume, and TLC were not changed after bronchodilator. Oscillometry was also performed during the original study visit (1). Both resistance and reactance were abnormally elevated, and the frequency dependence of resistance and reactance suggested heterogeneously narrowed and stiffened small airways.
The substantially reduced CT total airway count and oscillometry evidence of small airways dysfunction may help explain the low FEV 1 and FVC with normal FEV 1 /FVC in this participant with asthma. Because postmortem studies of such patients are exceedingly rare, in vivo physiologic tests such as oscillometry and multiple-breath nitrogen washout may help establish any potential relationships between reduced CT total airway count, spirometry evidence of restrictive lung disease and small airway dysfunction in patients with asthma. Prospective investigations of the relationships between CT total airway count and oscillometry are currently underway.
As Dr. Miller and colleagues suggest, total airway count may help explain low FVC and observations of rapid lung function decline in World Trade Center workers (3) and coal workers with lung disease (5). Truncated airway trees measured using CT imaging, concomitant with oscillometry evidence of small airway obstruction, challenge our understanding and assumptions about the role of airway disease in all patients with chronic lung disease.

FIGURE 1. MRI ventilation coregistered with the CT airway tree (yellow) and oscillometry for a 61-year-old female with severe (Global Initiative for Asthma 5) asthma, abnormally low FEV1 (1.96 L, 76% predicted) and FVC (2.64 L, 78% predicted), and preserved FEV1/FVC ratio (0.74). Both FEV1 and FVC improved after bronchodilator. CT total airway count (129) and MRI ventilation defect percent (6%) were abnormal, and ventilation defect percent did not improve after bronchodilator. Oscillometry resistance and reactance were abnormally elevated and frequency dependent, which is suggestive of small airway disease. BD = bronchodilator; pred = predicted; R = resistance; RV = residual volume; X = reactance.
Lipid-Laden Macrophages Are Not Diagnostic of Pulmonary Alveolar Proteinosis Syndrome and Can Indicate Lung Injury
To the Editor: We read with interest the recent case report by Israel and colleagues that describes a young woman who presented with acute hypoxemia, bilateral pulmonary infiltrates, and a history of e-cigarette use (1). The authors concluded that this was a case of pulmonary alveolar proteinosis (PAP) secondary to vaping-associated lung injury on the basis of the radiological and cytological findings presented. The case presented is undoubtedly interesting, and the report raises several important topical issues, including the spectrum of e-cigarette- or vaping-associated lung injury (EVALI) and the utility of lipid-laden macrophages in BAL fluid. However, we have some remarks regarding this case and the suggested association between EVALI and PAP.
PAP is a rare syndrome characterized by progressive alveolar surfactant accumulation and hypoxemic respiratory failure and is categorized as primary, secondary, or congenital. Primary PAP accounts for the vast majority of cases and is caused by the disruption of GM-CSF (granulocyte-macrophage colony-stimulating factor) signaling, by GM-CSF autoantibodies (autoimmune PAP, accounting for 90% of cases), or by genetic mutations involving the GM-CSF receptor. Secondary PAP occurs in various conditions that cause altered function or a reduced number of alveolar macrophages resulting in abnormal surfactant clearance in the lung (2).
The case presented by Israel and colleagues is not entirely convincing for secondary PAP, and we believe it is more likely that either infection or EVALI was the principal issue for this patient. First, "crazy-paving" is not pathognomonic of PAP, and there are many other causes, including acute lung injury and lipoid pneumonia, both of which could be present as a result of EVALI in this case (3). Second, the presence of lipid-laden macrophages in BAL fluid is nonspecific, and although Oil-Red-O-positive cells are certainly a feature of PAP, they are present in many types of lung disease (4). Furthermore, the presence of periodic acid-Schiff-positive material again is not indicative of PAP alone and can be seen in a spectrum of pulmonary pathology (5). In this case, no biopsy was performed, and a label of secondary PAP was made on the basis of BAL and computed tomography findings. This is not the current best practice; indeed, all patients should have GM-CSF autoantibodies checked when PAP is suspected, and if there is no known secondary cause of PAP and GM-CSF signaling is intact, then a lung biopsy is needed to truly determine the presence of PAP syndrome (2). Finally, the rapid response to antibiotics and steroids, neither of which is an effective therapy for primary or secondary PAP, goes against this being a case of secondary PAP. Moreover, it would take several months for the alveolar macrophage pool to replenish/repair and export accumulated lipids, which is evidenced by the delayed response to inhaled GM-CSF seen in cases of autoimmune PAP (6). We conclude that this case more likely represents either infectious or inflammatory acute lung injury possibly related to EVALI, but the paucity of evidence cannot confirm secondary PAP.
Although we disagree that this is a case of secondary PAP, it highlights the importance of carefully interpreting the presence of lipid-laden macrophages in the lung. It has been demonstrated that in a mouse model of EVALI, there was altered surfactant phospholipid homeostasis and foamy macrophages but no histological evidence of PAP lung disease (7). There have been numerous reports of Oil-Red-O-positive macrophages in EVALI (8), but this likely represents lung injury resulting in abnormal surfactant production from type II pneumocytes or from altered macrophage function resulting in lipid accumulation. Hence, the interpretation of lipid-laden macrophages must be treated cautiously. With the increased recognition of EVALI as a novel pulmonary condition, there has been renewed focus on lipid-laden macrophages, but we conclude that foamy macrophages in EVALI likely indicate lung injury, and caution should be given to using this finding as a diagnostic marker (9). Author disclosures are available with the text of this letter at www.atsjournals.org. | 2020-10-17T05:07:32.434Z | 2020-08-05T00:00:00.000 | {
"year": 2020,
"sha1": "850c3842a43c04bcd0d624361fa55a3d2f241f6e",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1164/rccm.202005-2034le",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "850c3842a43c04bcd0d624361fa55a3d2f241f6e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
251159792 | pes2o/s2orc | v3-fos-license | How does decision-making change during challenging times?
Prospect Theory, proposed and developed by Kahneman and Tversky, demonstrated that people do not make rational decisions based on expected utility, but are instead biased by specific cognitive tendencies leading them to neglect, under-, or over-consider information, depending on the context of presentation. In this vein, the present paper focuses on whether and how individual decision-making attitudes are prone to change in the presence of globally challenging events. We ran three partial replications of the Kahneman and Tversky (1979) paper, focusing on a set of eight prospects, after a terror attack (Paris, November 2015; 134 subjects) and during the Covid-19 pandemic, both during the first lockdown in Italy (Spring 2020; 176 subjects) and after the first reopening (140 subjects). The results confirm patterns of choice characterizing uncertain times, as shown by previous literature. In particular, we note a significant increase of risk aversion, both in the gain and in the loss domains, that consistently emerged in the three replications. Given the nature of our sample, and the heterogeneity between the three periods investigated, we suggest that the phenomenon we present can be explained by stress-related effects on decision making rather than by other economic effects, such as the income effect.
Introduction
Decision-making under uncertainty is an important topic of research with implications that span well beyond the domain of academic psychology: it has obvious consequences for finance [1], economics [2], policy-making [3] and many other domains where individual choices play a central role.
For much of the 20th century, expected utility theory (EUT) has been the mainstream approach to the study of decisions in risky or uncertain contexts and its original formulation posits that individuals make their choices based on the comparison between expected utility values [4]. One of the most successful descriptions of human decision-making under uncertainty, however, has been prospect theory (PT) [5], together with its refinement, the Cumulative Prospect Theory (CPT) [6]. Tversky and Kahneman analyzed the answers of a sample of participants to a set of financial decision-making problems, inspired by paradoxes in the choice patterns initially discovered by Allais [7]. They documented a set of deviations from the predictions of EUT, such as utility functions depending on the domain of reference (concave in the gain domain and convex in the loss domain). Besides, they showed specific 'paradoxes', such as the preference for small certain gains over large uncertain gains even when the expected outcome is identical (certainty effect); the tendency to be risk seeking when trying to maximize gains, but risk averse when trying to minimize losses (reflection effect); the shifting in preference patterns when the same outcome is presented in terms of gains rather than losses from a reference point (framing effect). These results have drawn enormous interest from both psychologists and economists but, up until recently, systematic large-sample replications of the original studies were missing. Recently, a large replication study by Ruggeri and colleagues [8] substantially confirmed the original findings, albeit with attenuated effects, in a study enrolling 4098 participants from 19 countries. While this recent publication confirms the descriptive power of CPT in a set of circumstances, a series of works noted that decision-making can be affected by various factors, including events that have an impact at the collective level. For instance, Sacco and colleagues [9] studied decision patterns in the aftermath of the 9/11 terrorist attacks. While the authors confirmed the majority of the results originally reported by Kahneman and Tversky, they also found evidence of more widespread risk aversion, both in the gain and in the loss domains, as well as the loss of the reflection and framing effects for some pairs of prospects. In line with these results, a study [10] showed that the heightened fear of dread risks (low-probability, high-consequence events) linked to air travel after 9/11 led to an increased preference for land transportation (and, paradoxically, driving accidents). However, other experiments [11] carried out after heavy snowfalls and an earthquake in China (Wenchuan earthquake, 2008) did not completely replicate these tendencies: while finding increased risk-aversion in the loss domain, they also found an increased preference for small probabilities in the gain domain. Recently, one study [12] explored the interaction between risk attitudes during the COVID-19 pandemic and previous life experiences, suggesting that the impact of the pandemic becomes influential only in those participants who had been previously affected by negative life events.
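To make these deviations concrete, the sketch below contrasts the expected value of a 50/50 gamble with its prospect-theoretic valuation, using the median parameter estimates reported by Tversky and Kahneman (1992) (α = 0.88, λ = 2.25, γ = 0.61); the 2000-vs-4000 amounts mirror one of the prospect pairs used later in this paper.

```python
# Certainty effect under prospect theory, with the Tversky & Kahneman (1992)
# median parameters. Amounts mirror prospect A below (sure 2000 vs 50% of 4000).
ALPHA, LAMBDA, GAMMA = 0.88, 2.25, 0.61

def value(x):
    """Concave for gains, convex and steeper (loss aversion) for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def weight(p):
    """Inverse-S weighting: overweights small p, underweights moderate-to-large p."""
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

sure = value(2000)                  # w(1) = 1, so the sure option is just v(2000)
gamble = weight(0.5) * value(4000)  # single-outcome gamble
print(round(sure), round(gamble))   # ~803 vs ~623: the sure win is preferred
# Both options have the same expected value (2000), yet PT predicts the
# certainty effect observed empirically.
```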
The goal of this paper is by no means an attempt to verify or falsify PT; rather, it focuses on the short- and medium-term effects of external shocks on choice patterns. Indeed, although individual risk preferences are assumed to be stable over time by classical economic theories, more recent experimental work showed that risk preferences can vary with time [13]. In this study, we intended to replicate what Sacco and colleagues found following the September 2001 terror attacks, as those attacks can be seen as a prototype of globally challenging events with a potential impact on risk preferences. Indeed, similar cognitive effects could occur following other disruptive events. In this paper we investigate risk preferences using a set of economic prospects, both in the gain and loss domains. In particular, we focused on the prospects where Sacco and colleagues found significant differences from the patterns identified by Kahneman and Tversky [5]. The differences between the results of the two works are briefly summarized here. When medium probabilities of winning or losing medium amounts of money after an initial gain are compared with sure options (Prospects A and A' in the present paper), the preference for a sure win of smaller magnitude was still present, while the preference for a probable loss of larger magnitude was not replicated, suggesting heightened loss aversion. When dealing with very unlikely events vs. a sure small gain/loss (Prospects B and B'), the overweighting of suffering a large loss with very low probability (the tendency that leads people to pay high insurance premiums) was still present, while the overweighting of a large gain with very low probability (the tendency that leads people to buy lottery tickets) was not replicated, in line with national lottery ticket sales. When the two options are small and quite similar (Prospects C and C'), the preference for the slightly lower probability of winning a bigger amount of money disappeared, while the tendency to prefer the slightly higher probability of losing a smaller amount of money was similar.
When the two options involve very unlikely events (Prospects D and D') both the preference for the less probable event of winning a bigger amount of money and the more probable event of losing a smaller amount of money were not replicated; on the contrary, an opposite tendency of preferring most probable events emerged in the gain domain. Such deviations from PT predictions were interpreted by the authors as the effect of the catastrophic event and the subsequent change in decision-making attitudes: "a shared loss biases decision-making in favor of a search for security" [9].
To increase the generalizability of the results, we recorded data during two separate events: the Paris terror attack in 2015 (Experiment 1) and the COVID-19 pandemic (Experiment 2a: data collected during the first lockdown in Spring 2020; and Experiment 2b: data collected during the subsequent reopening phase in Summer 2020).
Furthermore, we explored some other potentially impactful factors: personality traits and the severity of the crisis. The reason to explore possible correlations between personality traits and decision-making derives from the presence of contrasting results in the relevant literature. Some papers found that personality traits significantly correlate with risk-taking [14][15][16] and that, in the financial domain, extraversion is linked with a higher likelihood to pay excessive prices for risky assets in a simulated market while high neuroticism is linked to holding fewer risky assets [17]. Other studies, however, did not confirm such results [18]. It is therefore worth investigating whether personality traits have an impact on decision-making processes during periods of crisis characterized by increased uncertainty. To check whether the severity of the pandemic had an impact on risk preferences, we used the daily increase in deaths, active cases of COVID-19 and Emergency Room (ER) admissions, as well as self-reported fear about the pandemic, as measures of severity.
Finally, this work also affords us the opportunity of addressing an issue raised by Li and colleagues [11], who pointed out that, by converting US dollars into Italian liras, Sacco et al. modified the magnitude of the values included in the prospects (5$ became 30000£), possibly introducing a confounding factor. As euros and dollars are expressed in amounts of the same order of magnitude, this paper can shed light on this issue.
Materials and methods
The 4 pairs of prospects used in the present study (see Table 1 for a list) are those where Sacco and colleagues [9] found a distribution of answers different from that predicted by the CPT, thus representing decision-making attitudes during periods of global uncertainty. The questions were used in both Experiment 1 [Bataclan] and Experiment 2 [Covid]; for example, Prospect D' reads: "Choose between one possibility out of 1000 of losing 6000€ or two possibilities out of 1000 of losing 3000€". The monetary values were based on the ones previously used, adjusted for inflation to 2020 values using the official tool provided by the Italian Bureau of Statistics (Istituto Nazionale di Statistica; tool available at http://rivaluta.istat.it), converted to euros and rounded to the most significant digit (e.g., 3000€ instead of 3120€). For Experiment 2 only, we also administered an Italian version of the 10-item Big Five personality Inventory [19]. In addition, we collected data related to the ongoing COVID-19 pandemic: the number of patients in intensive care, the variation in the number of COVID-19 positives and the number of deaths were obtained through an official open-source repository maintained by the emergency management agency of the Italian Government (Dipartimento della Protezione Civile). Furthermore, participants were asked two questions about their worry levels about the pandemic ('How much are you worried about the economic consequences of the COVID-19 pandemic?' and 'How much are you worried about the health consequences of the COVID-19 pandemic?'), as well as one question about their level of knowledge ('How much are you informed about the COVID-19 pandemic?'). The questions were rated on a 5-point Likert scale. Table 2 reports descriptive statistics for the personality traits and the pandemic-related questions.
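A quick sanity check that each pair offers options of matched expected value, for the two prospects whose amounts are spelled out in the extracted text (A, given later in the Results, and D' above), might look as follows; the remaining pairs are listed in Table 1 and are not reproduced here.

```python
# Expected values of the two prospect pairs whose amounts appear in the text.
pairs = {
    "A":  [(1.0, 2000), (0.5, 4000)],        # sure 2000 € vs 50% of 4000 €
    "D'": [(0.001, -6000), (0.002, -3000)],  # from the D' wording quoted above
}
for name, options in pairs.items():
    print(name, [p * x for p, x in options])
# A: [2000.0, 2000.0]   D': [-6.0, -6.0]  -> matched expected values
```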
Participants and procedures
Experiment 1-2015 Paris terror attacks (Bataclan). The experimental sample consisted of students from the Faculty of Psychology, Università di Torino (Turin, Italy). Participants received a paper form and answered the questions by marking their preferred options. We created eight sequences (four were randomly generated orders, four were obtained by reversing the random orders), and participants randomly received one of the eight possible forms. We received 134 fully completed forms (mean age 19.8 years, SD 1.6; 75% women); the answers were manually transcribed on an electronic spreadsheet, checked and stored for further analyses.
Experiment 2-Covid-19. A list of 922 university students was drafted in March 2020. The list included students enrolled at the Pontificia Università Salesiana-IUSTO Rebaudengo and at the Università degli Studi di Torino (Turin, Italy), pooled together. Students were randomly assigned to one of four groups, receiving the first invitation on March 23rd, April 26th, July 26th and September 21st, respectively. We received 92 and 84 answers from the first two groups (mean age 22.4 years, SD 4.7, 85% women), and 65 and 75 answers from the last two groups (mean age 22.2 years, SD 3.6, 78% women). Participants received an invitation via e-mail with a brief explanation of the study. The experiment was conducted entirely online due to COVID-19 limitations, and all questions were administered through a LimeSurvey-hosted questionnaire. The questions were administered in a fully randomized order, and the answer options were randomized.
To verify whether our results differed from the recent replication of Kahneman and Tversky's findings [8], we computed expected proportions of answers from two age-matched subsamples of their publicly available dataset. The first one included only 117 Italian subjects aged 18-25 years old (mean age = 22.11, 52% men), while the second one expanded the sample to include all countries (N = 1425, mean age = 22.47, 48% men). The same approach was used to compare our results with the study of risk preferences after the 9/11 attacks [9]. The study was approved by the internal review board of the Università degli Studi di Torino (Prot. n. 142238). In order to take part in the study, participants had to explicitly confirm that they had read the informed consent form and agreed to participate in the study.
Data analysis
The data were analysed using SPSS 26.0. We compared the answers to each question using a chi-squared test against a uniform distribution. For Experiment 2, we also carried out point-biserial correlations between the answers to the questions and the personality scales, as well as between the answers and the COVID-19 data. Chi-squared tests were also used to compare our samples with the frequencies expected based on [8]. The results of the chi-squared tests were corrected for multiple comparisons, experiment-wise, using the Holm-Bonferroni method [20]; the results of the correlations were corrected using the False Discovery Rate approach [21].

Experiment 1: Decision-making after the 2015 Paris terror attacks

No effect of presentation order emerged (all corrected p values non-significant). There was no association between gender and the answers to any prospect (all pc > 0.05). 77% of the respondents chose the option with the highest probability of happening in prospect A, 66% in prospect A', 51% in prospect B, 78% in prospect B', 53% in prospect C, 48% in prospect C', 57% in prospect D and 51% in prospect D'. Fig 1 shows the relative frequencies of the more likely options (e.g., the percentage of people choosing a sure win of 2000 € over a 50% chance of winning 4000 €) for each pair of prospects in Experiment 1, as well as for the data gathered during the pandemic (Experiment 2) and reference values drawn from previous work [5,9].

Experiment 2: Decision-making during the Covid-19 pandemic

Answers collected during the lockdown (Experiment 2a) and after the reopening (Experiment 2b) did not differ (χ² = 1.03, all corrected p values non-significant). An association was found between gender and the answer to prospect D' (χ² = 4.46), but it was non-significant after correction for multiple comparisons. 74% of the respondents chose the option with the highest probability of happening in prospect A, 68% in prospect A', 46% in prospect B, 66% in prospect B', 53% in prospect C, 49% in prospect C', 56% in prospect D and 46% in prospect D'.
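A minimal sketch of this testing pipeline is given below; the answer counts are hypothetical but consistent with the 77% and 51% reported for prospects A and B in Experiment 1 (n = 134), and scipy/statsmodels supply the uniform-distribution chi-squared test and the corrections.

```python
# Chi-squared tests against a uniform split, with Holm-Bonferroni correction.
# Counts are hypothetical, chosen to match the percentages reported above.
from scipy.stats import chisquare
from statsmodels.stats.multitest import multipletests

answer_counts = {"A": [103, 31], "B": [68, 66]}   # [option 1, option 2], n = 134
pvals = [chisquare(obs).pvalue for obs in answer_counts.values()]
# method="holm" for the chi-squared family; method="fdr_bh" would give the
# False Discovery Rate correction used for the correlations.
reject, p_corr, _, _ = multipletests(pvals, alpha=0.05, method="holm")
for name, p, sig in zip(answer_counts, p_corr, reject):
    print(f"prospect {name}: corrected p = {p:.3g}, significant = {sig}")
```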
Correlations with personality traits and epidemiological data. The answers to the questions were not significantly correlated with levels of worry about the pandemic, self-perceived information about the pandemic, or metrics related to the COVID-19 pandemic (all FDR-corrected p > 0.05). It is, however, worth noting that only 3.8% of participants reported being not informed or not very informed about the pandemic; 13% were not worried or only slightly worried about the health consequences and 7% not worried or only slightly worried about the economic consequences. Only two correlations between personality and answers to the questions were significant after FDR correction: stability and A (r = -.19, pFDR = .017) and conscientiousness and A' (r = -.19, pFDR = .035).
Robustness checks. When pooling all three of our experiments, our results differed from the Italian subsample of Ruggeri and colleagues for the majority of prospects: A (χ² = 34.11, pc < 0.001), A' (χ² = 6.66, pc = 0.03), B (χ² = 145.52, pc < 0.001), C (χ² = 11.89, pc = 0.002), D (χ² = 102.89, pc < 0.01), and D' (χ² = 15.19, pc < 0.001), but not for prospects B' (χ² = 0.10) or C' (χ² = 2.88). In prospects A, A', B, and D we observed a higher proportion of choices for the more certain prospects with respect to Ruggeri and colleagues (76% vs 62% for A; 64% vs 58% for A'; 57% vs 31% for B), while in C and D' we did not observe a distribution different from chance, in line with what was noted in the analysis of the single experiments.
Discussion
Prospect Theory [5] demonstrated that people do not make rational decisions based on expected utility, but are instead biased by specific cognitive tendencies leading them to neglect, under-weight, or over-weight information, depending on the context of presentation. Ruggeri and colleagues [8] recently replicated the theory's main findings, despite some divergences.
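For reference, the discussion below repeatedly invokes the value and weighting functions of Prospect Theory; in the commonly used parametric form (a standard textbook formulation, not parameters estimated from the present data), these are:

```latex
V=\sum_i \pi(p_i)\,v(x_i),\qquad
v(x)=\begin{cases} x^{\alpha}, & x\ge 0,\\ -\lambda(-x)^{\beta}, & x<0, \end{cases}
\qquad 0<\alpha,\beta\le 1,\ \lambda>1,
```

where λ > 1 captures loss aversion and the weighting function π(p) overweights small probabilities while underweighting moderate-to-large ones.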
A previous paper [9] began to investigate changes in decision-making in the presence of globally challenging events, suggesting a tendency towards risk aversion after the terrorist attacks of 9/11. However, subsequent papers found contrasting results. On one side, Li and colleagues [11] showed that, after a natural disaster, people tend to overweight small probabilities, favouring both protection from rare but catastrophic events and very small chances of reaching a large gain. Voors and colleagues [22] found more risk-seeking behaviour after a violent conflict in Burundi, as well as altruistic behaviour. Eckel and colleagues and Shupp and colleagues [23,24] found mixed results after two different hurricanes, and Page and colleagues [25] found that homeowner victims of the 2011 floods were more risk-seeking, as they were more likely to choose risky gambles. On the other side, and in agreement with Sacco and colleagues, other studies found that an external shock, such as a natural catastrophe, can increase risk aversion (e.g., Cameron and Shah [26] after floods or earthquakes in rural Indonesia; Cassar and colleagues [27] after the 2004 tsunami in Thailand; Reynaud and Aubert [28] after floods in Vietnam; and Beine and collaborators [29] after earthquakes in Albania). The picture is therefore still unclear. Here, we present data gathered after different international events generating global disquiet: the first event (the 2015 Paris terror attacks) was very similar to that studied by Sacco et al. [9], as it was again a terrorist attack, while the second one, the Covid-19 pandemic, was very different in nature.
We discuss our results (both Experiment 1 and Experiment 2) in light of previous data [9] and of the original Kahneman and Tversky results ([5]; K&T henceforth).
In Prospect A, all studies showed a significant preference for the sure win of medium size over a 50% probability of winning double the sum. Very differently, in Prospect A', the preference for a probable loss of greater magnitude, found by K&T, was not replicated either in Sacco and colleagues [9] or in the experiments of the present study. During the Covid pandemic, the safest option was significantly preferred, showing a heightened loss aversion and an inversion of preferences with respect to the PT predictions. In Prospect B, the overweighting occurring when subjects had to choose between a small sure win or a very unlikely larger win (a 0.1% chance of winning, equivalent to the choice made when considering buying a lottery ticket), found by K&T, was not replicated either in Sacco and colleagues [9] or in the experiments of the present study. On the contrary, in Prospect B', the overweighting of very low probabilities of suffering a loss was significantly present in all studies. We found less striking results for Prospects C-C' and D-D'. Indeed, in Prospect C-C', which involved similar alternatives, the preference for the slightly lower probability of winning a bigger amount of money disappeared during all periods of uncertainty. On the other hand, a weak (non-significant) tendency to choose the slightly higher probability of losing a smaller amount of money was present in all the studies reported in Fig 1. In Prospect D-D', concerning very unlikely events, both the preference for the less probable event of winning a bigger amount of money and the preference for the more probable event of losing a smaller amount of money, found by K&T, were not replicated either in Sacco and colleagues or in the experiments of the present study.
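To make the overweighting logic of Prospect B concrete, one can work through K&T's original amounts (a 0.1% chance of 5000 against 5 for sure); the parameter values used here (α = 0.88 and π(0.001) ≈ 0.01) are purely illustrative, not fitted to any of the samples discussed:

```latex
\mathrm{EV}_{\text{lottery}} = 0.001\times 5000 = 5 = \mathrm{EV}_{\text{sure}},\\
\pi(0.001)\,v(5000)\approx 0.01\times 5000^{0.88}\approx 18
\;>\; v(5)\approx 5^{0.88}\approx 4.1,
```

so an overweighted 0.1% probability predicts a preference for the lottery in the gain domain (and, symmetrically, for insuring against the rare loss in B'); the data above reproduce this pattern only in the loss domain (B').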
It is possible that sociodemographic characteristics of our sample, such as age [30] and nationality, can in part explain why our findings diverged from Kahneman and Tversky's. For instance, the authors of [31] estimated the parameters used in the formulation of Cumulative Prospect Theory in 53 nations, finding substantial heterogeneity. However, it is worth noting that they report that 94% of the Italian participants showed the reflection effect, which is largely lost in the data presented here (prospects B-B'). Furthermore, drawing age-matched subsamples from the recent replication effort by Ruggeri and collaborators does not significantly change the picture: even when comparing our results with more recent and age-matched reference samples drawn during times not characterized by natural disasters or other disrupting events, our data show a heightened preference for less uncertain prospects. Furthermore, the results of our experiments largely overlapped those obtained by Sacco et al. [9], showing an even greater preference for less uncertain options in prospects A and A'. More generally, and taking into consideration their entire sample, Ruggeri and colleagues completely replicated Prospects A-A' and B-B', i.e., those comparing probable vs. sure prospects. On the contrary, all our data, collected during very uncertain historical periods, show a preference towards outcomes marked by no uncertainty. The other prospects, those where options with different probabilities are compared, were not completely replicated by Ruggeri and colleagues. In particular, in Prospect C Ruggeri could not replicate the effect originally found by K&T in the gain domain, and in Prospect D' Ruggeri found a strong attenuation of the effect found by K&T in the loss domain. These results are in line with what we found in all our experiments. Therefore, while the tendencies characterizing Prospects A-A' and B-B' seem to be specific to periods of uncertainty, the same conclusion cannot be stated for decisions taken in Prospects C-C' and D-D'. In particular, Prospect C was not replicated by Ruggeri and colleagues, and in C' even Kahneman and Tversky [5] found no preference between the options. Besides, the distributions of answers in C-C' and D-D' are, in large part, not different from chance, showing indifference between the options. It must be said that the two options presented in these two pairs of prospects were similar in the probability of gain/loss and that the probabilities of winning/losing could be very small (D-D'): choices made between options that seem very similar could therefore be less salient.
A few other studies have analyzed risk preferences during the Covid-19 pandemic. Most of them come from the economic literature, and thus the methodologies used differ from those of the present work. Therefore, their results cannot be directly compared with ours, but they nonetheless offer interesting suggestions. Some studies failed to detect any change in risk preferences before and after the start of the pandemic [32][33][34]. Angrisani and colleagues, for instance, used a laboratory task (Bomb Risk Elicitation Task) to test risk preference in a group of subjects before (February-March 2019) and after (April 2020) the emergence of COVID-19, finding no shift in risk preferences. A similar stability of risk preferences was noted by Drichoutis and Nayga and by Lohmann, whose studies included a task similar to the one used in this study (lottery choice) and used a student sample, as we did. While our results are largely at odds with Angrisani and colleagues, we nevertheless agree with their conclusion that risk preferences in the experimental task are not modified by negative expectations about one's future financial situation. Different results come from a longitudinal study that tried to assess Prospect Theory parameters [35] during the early stages of the COVID-19 pandemic (March 13 to May 11, 2020), and found an increased tolerance of non-tail risks, but a decreased tolerance of tail (extreme) risks, in the loss domain. In partial agreement with them, in this study we found a stronger search for security in the prospects (B-B') characterized by tail risks, in both the loss and gain domains, but observed the same behaviour in the prospects (A-A') characterized by non-tail risks. Shachat et al. [36], using similar tasks, actually found a more complex picture: besides an increase in prosocial and cooperative behaviour and an increased risk tolerance in the gain domain during the early stages of the pandemic, they found a decrease in risk tolerance in the loss domain. In addition, they found a transient correlation between the decrease in risk and ambiguity tolerance and the death of a prominent doctor, Li Wenliang, involved in the early stages of the fight against Covid-19. Their results in the loss domain are in line with the main result of the present paper, namely the cognitive tendency towards risk aversion during periods of crisis. In the same direction, Bu and colleagues [37] found that exposure to the virus leads to an increase in risk aversion, and that as exposure increases (i.e., residents of the city of Wuhan vs. residents of Hubei vs. residents of other provinces of China), risk aversion increases too, as measured both by lower amounts of planned risk taking and by lower allocation of money to risky investment decisions.
Therefore, this study is not alone in identifying increased risk avoidance, and this finding could be explained by different mechanisms. For instance, Ikeda and collaborators [35] suggested that stress could be the driver behind their results. Their hypothesis stems from earlier results [38] in which experimenters manipulated stress hormone levels by administering doses of hydrocortisone and found that this increased risk aversion and the overweighting of small probabilities in the gain domain.
A more recent study [39] supports this idea, as it found that individuals with chronic anxiety disorders exhibited enhanced levels of risk aversion relative to healthy controls, although this did not extend to loss aversion. A growing literature is highlighting the link between anxiety and a negative disposition towards risk and uncertainty [40].
Psychological stress is thus a strong candidate to explain our results: participants in the present study reported both a high level of information and high levels of worry related to the pandemic and its consequences. This conclusion is reinforced by the fact that, as could be expected, early studies reported psychological distress linked to the COVID-19 pandemic [41].
The role of stress could be mediated by limbic mechanisms specific to shocking public events, especially for the ones that can cause "flashbulb memories" [42].
However, Sharot et al. [43] found evidence of such a limbic mechanism only for people who directly experienced traumatic events (such as being in the immediate proximity of the World Trade Center), a condition that, in this study, could only hold for the Covid-19 experiment. Interestingly, the idea that psychological stress is a possible explanation for our results is also in line with the results of [44], who found increased risk aversion after the elicitation of fearful emotions (namely, watching short fragments of horror movies).
Other causes, such as the income effect [25] caused by economic damage and the loss of earning opportunities, have been linked to risk aversion. Both the composition of our samples and the fact that we observed a similarly heightened risk aversion after events characterized by widely varying economic effects (doubtlessly lower for the Bataclan attacks than for the 9/11 terror attacks or the COVID-19 pandemic) push us away from supporting explanations based simply on economic effects. However, the lack of socioeconomic data makes it hard to either prove or disprove this specific hypothesis.
Furthermore, this study did not aim to support or disconfirm a specific decision-making model. We note that some authors [45,46] found, in humans and non-human primates, evidence of flexible decision-making strategies (i.e., shifting from multiplicative to additive combination of reward magnitude and probability), dependent on the task circumstances and the sequence of the events (i.e., a pseudorandom sequence of choices versus blocks of repeated choices). However, this study did not contrast different decision-making scenarios, nor was the limited set of prospects used in this replication study sufficient to estimate Prospect Theory parameters, as even the full set of prospects replicated by Ruggeri et al. is "[. . .] grossly insufficient when it comes to obtaining precise and reliable estimates of Prospect Theory parameters" [47]. Further studies are needed, both to explore in more detail what could be the main driver behind the phenomenon described in this study and to test different models of decision making after a natural catastrophe.
In summary, this study replicates the findings of Sacco and colleagues [9], showing an increase in loss aversion after an external shock. The effect is not substantially modulated by personality factors, by the type of event (it generalizes across very different situations, from the Covid-19 pandemic to terrorist attacks), by its instantaneous perceived magnitude (1st versus 2nd wave of data gathered during Covid-19), by levels of fear, or by the damage caused by the event (measured as loss of lives, daily admissions to ICUs, and daily variation in the number of deaths): it seems that such variables have little to no effect when there is a sudden and substantial increase in the level of background risk. | 2022-07-30T06:16:51.337Z | 2022-07-29T00:00:00.000 | {
"year": 2022,
"sha1": "50dbc46fe2c3c5b93a23b337124da3bbd06bad80",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "25945188e538ed7abf6231a3bef7ca1ba4f7c0d8",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
91929993 | pes2o/s2orc | v3-fos-license | Human and Veterinary Vaccines against Pathogenic Escherichia coli
Pathogenic Escherichia coli constitute an important current public health and animal production problem. Efforts have been made to fight the infections caused by these bacteria, and in this chapter, we present the progress made to date on the vaccines generated for this purpose. Different vaccines have been tested against the pathotypes responsible for human diseases such as diarrhea and urinary infections. The poultry sector has also attracted research efforts to obtain a product that fights the E. coli strains causing disease in birds. Finally, advances are also presented for the zoonotic enterohemorrhagic E. coli (EHEC), which pose a different problem: they are of low importance as a disease factor in cattle, but are a very important pathogen in humans. In several of these fields, authorized products have been developed and are currently being marketed.
Introduction
This chapter deals with the current developments on human and veterinary vaccines against pathogenic Escherichia coli of the following pathotypes: enterohemorrhagic E. coli (EHEC) and Shiga toxin-producing E. coli (STEC), enterotoxigenic E. coli (ETEC), extraintestinal pathogenic E. coli (ExPEC), in particular uropathogenic E. coli (UPEC), and avian pathogenic E. coli (APEC). Other pathotypes were not considered because vaccine development for them is less advanced. In some cases, only vaccines tested in the target species (human, cattle, chicken, etc.) were considered, due to the high abundance of publications in which experimental vaccines were tested on rodents or other animal models.
Vaccines against EHEC/STEC for humans
Different factors make it difficult to develop a vaccine to prevent EHEC/STEC infection and hemolytic uremic syndrome (HUS) in humans: the lack of knowledge about what type of immune response may confer protection; the multiplicity of infection routes, comprising bovine-derived food products, leafy green vegetables, pool or drinking water, and person-to-person transmission [1]; and the lack of reliable animal models. Szu and Ahmed developed polysaccharide conjugate vaccines composed of detoxified lipopolysaccharide (LPS) from E. coli O157 covalently linked to a carrier protein, a recombinant exoprotein of Pseudomonas aeruginosa (rEPA) that has been used for conjugation of polysaccharides and proteins [2]. Phase I and Phase II clinical studies were conducted in adults and in children ranging from 2 to 5 years old, respectively [3]. The E. coli O157 conjugate vaccines were safe for all ages, and a positive humoral IgG response with bactericidal activity was found in both age populations. However, there were certain limitations to using LPS-based vaccines. For example, LPS failed to induce a long-lasting humoral immune response, especially in children, and STEC non-O157 serotypes were not covered. In one attempt to compensate for this shortcoming, the same group conjugated the O-polysaccharide with the B subunit of Shiga toxin 1 (Stx1) [2]. However, this formulation did not neutralize Shiga toxin 2 (Stx2), the toxin type most frequently found in severe HUS cases.
The main virulence factor of STEC/EHEC is the Shiga toxin (Stx); in consequence, it is an optimal target for eliciting neutralizing antibodies, and various Stx-based vaccine approaches have been attempted. A vaccine consisting of poly-N-acetylglucosamine (PNAG, a surface polysaccharide of STEC) conjugated to the B subunit of Stx1 was produced. The antibodies raised in rabbits neutralized Stx1 potently, but Stx2 only modestly. Passive transfer of antibodies indicates that anti-PNAG could confer protection, but the cross-reacting neutralization of Stx2 is limited [4].
To date, no vaccines have been approved for human use, exposing a void in both treatment and prevention of EHEC O157:H7 infections. Vaccine research and development efforts have therefore been oriented toward cattle as the main reservoir.
Vaccines against EHEC for cattle
To date, different vaccine compositions have been tested to reduce the colonization of the bovine host and the environmental dissemination of EHEC O157:H7. These vaccines have different immunogens, adjuvants, inoculation routes, and numbers of doses, and of course differ in their level of development and evaluation under experimental and natural conditions. Here, we consider the proposals whose protective capacity was evaluated in cattle.
The key factor for achieving a protective immune response in the animal is the immunogen. Looking at the available literature, we can observe that there are several candidates, mainly colonization factors, which can be classified as: type III secretion system (T3SS) components, siderophore receptors and porin proteins, bacterins, whole-cell envelopes, flagellin, Shiga toxin toxoids, attenuated Salmonella, and combinations of more than one of these.
Vaccines based on T3SS components
The components of the T3SS were the first to be used as vaccines, because it was already known for the essential role that proteins such as intimin, Tir, EspA, and EspB play in the adhesion of EHEC O157:H7 to the host cell [5][6][7]. In 2004, Potter et al.
[8] tested a vaccine composed of a protein supernatant of EHEC O157:H7 (containing various Esps and Tir) with the adjuvant VSA3, in animals that were later challenged with E. coli O157:H7, as well as in animals in a clinical trial. They observed a significant increase in serum antibodies against T3SS proteins and the O157 lipopolysaccharide. There was also a decrease in the number of bacteria in feces, in the number of shedder animals, and in the duration of excretion in the vaccinated group. The clinical trial showed a reduced prevalence of EHEC O157:H7 under typical feedlot conditions when cattle were vaccinated. In 2005, Van Donkersgoed et al. [9] published a field trial in nine feedlots using a vaccine similar to that of Potter et al. [8], and they did not observe a significant association between vaccination and pen prevalence of fecal E. coli O157:H7. Differences in the preparation of the secreted proteins (in this case with formalin), a different adjuvant, and a different vaccination strategy could probably explain the failure. Later, this same preparation, without formalin treatment and with the VSA3 adjuvant, was standardized and analyzed in studies in commercial beef cattle feedlots with a two-dose regimen. The authors evaluated the probability of detecting the microorganism in the terminal rectal mucosa as a measure of gut colonization [10], and carried out other large-scale clinical trials on commercially fed cattle to test the efficacy of the regimen in reducing the environmental transmission of EHEC O157:H7 [11]. They concluded that the two-dose vaccine regimen was effective in reducing the probability of E. coli O157:H7 colonization of the terminal rectum of cattle at slaughter and reduced the probability of environmental transmission of the bacteria within commercial cattle feeding systems [12]. This evidence was accompanied by the generation of a commercial product known as Econiche™, developed by the Canadian company Bioniche Life Sciences. The vaccine was approved in Canada and the United Kingdom [13,14] and had a pending conditional license in the U.S. [15], but in 2014 the Bioniche Animal Health business was purchased by Vétoquinol SA [16], and production of the vaccine was discontinued.
On the other hand, other groups evaluated recombinant factors of the T3SS in various combinations. Van Diemen et al.
[17] evaluated the carboxy-terminal 280 amino acids of intimin γ and β, alone or combined with portions of Efa-1 (EHEC factor for adherence). Immunization of calves induced antigen-specific serum IgG and, in some cases, salivary IgA responses, but did not reduce the magnitude or duration of excretion of EHEC O26:H- (intimin β) or EHEC O157:H7 (intimin γ) after an experimental challenge. Similarly, immunization of calves with the truncated Efa-1 protein did not protect against intestinal colonization by EHEC O157:H7.
The vaccination of calves with recombinant EspA by intramuscular and intranasal routes induced high titers of antigen-specific IgG and salivary IgA, but these responses did not protect calves from intestinal colonization after a challenge with E. coli O157:H7 [18].
The authors of [19] assessed whether three purified proteins, intimin (C-terminal 531 amino acids), EspA, and Tir, could reduce shedding of EHEC O157:H7. Furthermore, they evaluated whether the inclusion of purified H7 flagellin in the vaccine could modify the vaccination efficacy. They used the intramuscular route and the rectal submucosal route and obtained significantly increased serum anti-EspA, anti-intimin, and anti-Tir IgG responses. When H7 flagellin was present, mucosal anti-H7 IgA and IgG were generated. After experimental infection with EHEC O157:H7, cattle showed that immunization with these purified antigens could significantly reduce the total levels of bacterial excretion and that the addition of H7 flagellin can improve this effect. More recently [20], this group optimized the formulation of this vaccine and concluded that immunization with a combination of EspA, intimin, and H7 flagellin causes a reduction in shedding of EHEC O157:H7 significant enough to impact transmission between animals.
The authors of [21] evaluated a vaccine composed of the C-terminal 280 amino acids of intimin γ and EspB. Intramuscular immunization elicited significantly high levels of serum IgG antibodies. Antigen-specific IgA and IgG were also induced in saliva, but only the IgA response was significant. Following experimental challenge with E. coli O157:H7, a significant reduction in bacterial shedding was observed in vaccinated calves.
Vaccines based on siderophore receptors (SRP) and porin proteins
This proposal is based on reducing the ability of the bacterium to obtain iron from the environment in order to decrease the level of infection [22]. Thornton et al. [23] assessed the efficacy of an SRP-based vaccine (Epitopix LLC) in reducing the prevalence and fecal excretion of EHEC O157:H7 in calves after an experimental infection. A significant response in serum anti-SRP antibody titers was detected, and they concluded that the vaccination tended to decrease the fecal prevalence and concentration of EHEC O157:H7. In another study [24], this group evaluated the vaccine for controlling the burden of E. coli O157:H7 in feedlot cattle under field conditions. Vaccination with SRP was associated with a reduction in the fecal concentration of EHEC O157:H7 and was suggested to reduce the burden of these bacteria on cattle. In a third assay, the vaccine was evaluated in feedlot cattle naturally shedding E. coli O157, using two different vaccine volumes, 2 and 3 ml. They concluded that the SRP vaccine at the 3 ml dose reduced the prevalence of E. coli O157. These results led to the commercial elaboration of a product known as the E. coli bacterial extract vaccine with SRP® technology [25], manufactured by Pfizer Animal Health (now Zoetis Services LLC). It has a conditional license from the U.S. Department of Agriculture.
Vaccines based on bacterins and bacterial envelopes
To evaluate the protection conferred by a bacterin of EHEC O157:H7, van Diemen et al.
[17] prepared a formalin-inactivated bacterin from the EDL933nalR strain that was inoculated in a combined schedule by the intramuscular (with Alu-Oil) and intranasal (mixed with cholera toxin B subunit) routes. It elicited significant IgG responses against intimin and LPS from E. coli O157:H7, but did not confer protection against intestinal colonization by EHEC O157:H7 after challenge.
In 2011, Sharma et al.
[26] evaluated three heat-inactivated bacterins for reducing the fecal shedding of E. coli O157:H7. They used a hha+ strain of E. coli O157:H7 and constructed hha and hha sepB deletion mutants. These deletions enhance the expression and intracellular accumulation of T3SS proteins, respectively. There was a significant increase in IgG against LEE-encoded proteins in calves vaccinated with the hha or hha sepB mutant bacterins compared to the wild-type strain, and a reduction in the number of animals shedding EHEC O157:H7 and in the duration of fecal shedding of the bacteria was also observed.
An alternative to bacterins was assayed by Vilte et al.
[27] by means of empty envelopes of EHEC O157:H7 known as bacterial ghosts (BGs). These envelopes retain all surface components in a nondenatured form. Animals vaccinated with BGs (without adjuvants) by the subcutaneous route developed significant levels of specific IgG in serum. Following oral challenge with E. coli O157:H7, a significant reduction in both the duration and the total amount of bacterial shedding was observed in vaccinated calves.
Vaccines based on flagellin
In 2008, McNeilly et al.
[28] assayed systemic (intramuscular) and mucosal (intrarectal) immunization with purified H7 flagellin to evaluate its effects on the colonization of EHEC O157:H7 after a challenge. Vaccination by intramuscular injection induced high titers of anti-H7 IgG and IgA antibodies in both serum and nasal secretions, but the intrarectal route failed to generate any response against H7. With respect to colonization by EHEC O157:H7, they concluded that immunization reduced colonization rates and delayed peak shedding, but did not affect total fecal bacterial shedding.
Vaccines based on attenuated Salmonella
In 2010, Khare et al.
[29] assessed a live attenuated recombinant Salmonella enterica serovar Dublin aroA strain expressing intimin. The recombinant Salmonella was inoculated three times by the oral route, but this did not produce a significant increase in intimin-specific IgA in serum or feces. Interestingly, they observed a transient clearance of E. coli O157:H7 in feces from vaccinated calves, which subsequently reduced colonization and shedding of bacteria after an experimental challenge.
Vaccines based on Shiga toxins
The Shiga toxins (Stx), the most important virulence factor with respect to human health, constitute an attractive research target in cattle. In fact, Stx modulates cellular immune responses in cattle [30][31][32]. Accordingly, in 2018, Schmidt et al.
[33] evaluated the response of a calf cohort to immunization with genetically inactivated recombinant Shiga toxoids (rStx1MUT/rStx2MUT). Calves were vaccinated passively (colostrum from immunized cows) and actively (intramuscularly), which generated a significant difference in serum antibody titers compared with a control group. There was no EHEC O157:H7 challenge, but the natural presence of fecal STEC was monitored, and fewer fecal-positive (by PCR) samples were observed from vaccinated calves than from control animals. Notably, this investigation was not restricted to a particular serotype of EHEC.
In another study, Martorelli et al.
[34] combined recombinant intimin and EspB with the B subunit of Stx2 fused to Brucella lumazine synthase (BLS-Stx2B) in order to evaluate whether the presence of Stx was able to improve the effect of the vaccine on fecal shedding of EHEC O157:H7 following an experimental inoculation. The immunization generated antibodies against Stx2B in serum and intestinal mucosa, but no superior level of protection compared with the use of intimin and EspB alone was observed.
As seen above, there have been numerous efforts to find a solution that reduces the contamination of cattle and their environment with EHEC O157:H7 and other dangerous serotypes. Two commercial products have even been achieved, one of which has unfortunately been removed from the market. However, the fact that this pathogen does not constitute a direct problem for farmers, because EHEC are not a cause of severe illness in cattle, makes this work more challenging. We not only have to find an adequate immunogen, formulation, or dosing scheme that elicits a good response; the product must also be attractive enough for farmers to adopt it as a possible and desirable way to collaborate with a One Health perspective.
Vaccines against ETEC
ETEC is one of the leading bacterial causes of diarrhea, responsible for 200 million diarrheal cases and between 170,000 and 380,000 deaths annually in the world [35,36]. Children under 5 years of age in developing countries are the most affected by ETEC infections, and 42,000 deaths were reported in 2013 alone [37]. ETEC infections are also the main cause of diarrhea reported in persons who travel to Latin America, Africa, and Asia [38]; approximately 10 million traveler's diarrhea cases are reported worldwide per year [39,40]. There have been several attempts to obtain a vaccine against ETEC. The greatest efforts have focused on virulence factors such as the fimbriae called colonization factor antigens (CFA) and colonization surface antigens (CS), and on two enterotoxins, the heat-labile (LT) and heat-stable (ST) toxins. These virulence factors are extremely important during the pathogenesis of ETEC. CFA promote attachment to enterocytes in the small intestine and are critical for colonization. After attachment, ETEC releases LT and/or ST enterotoxins that disrupt fluid and electrolyte homeostasis in small intestinal epithelial cells [41]. Therefore, a vaccine directed against CFA could prevent adherence and intestinal colonization, avoiding the subsequent release of enterotoxins by ETEC. Although 23 immunologically distinct CFA adhesins have been identified, their high variation among the different strains circulating worldwide has prevented the development of a protective vaccine [42][43][44]. Studies of killed whole-cell vaccines demonstrated the development of IgA antibodies against colonization factor antigen I (CFA/I) and LT, but these were only protective against homologous strains [45,46]. To date, ETEC isolates can be divided into 42 different clonal groups, each with a singular combination of colonization factors (CFs) and toxins [47]. Alternative approaches targeting CS antigens have been evaluated. CFA/I fimbriae, CS3, CS5, and CS6 are immunologically related to the more prevalent CFs, covering 50-80% of clinical ETEC isolates. ACE527 and rCTB-CF are two whole-cell vaccines that include a wide repertory of CFs. Five CFA adhesins (CFA/I, CS2, CS3, CS5, and CS6), one CFA subunit (CS1), and the LT-B subunit compose the ACE527 vaccine, represented by three live attenuated ETEC strains [48,49]. The orally inoculated ACE527 protected challenged adults against homologous strains [49,50]; however, it had adverse effects on volunteers [51]. The rCTB-CF vaccine is composed of five formalin-killed ETEC strains presenting the CFA/I, CS1, CS2, CS3, CS4, and CS5 adhesins, supplemented with the recombinant B subunit of the cholera toxin (rCTB) [52,53]. The immune response induced by the rCTB-CF vaccine was shown to reduce the risk of developing diarrhea in adult travelers [54], but it conferred little protection and had some adverse effects in young children [55,56]. Despite the improvements made to rCTB-CF and ACE527 [50,51,57], these vaccines fail to protect against some ETEC strains since they do not contain the heat-stable toxin a (STa) or LT-A antigens.
Neutralizing the effects of these enterotoxins is considered a highly effective approach for preventing ETEC diarrhea. However, the development of vaccines from toxoids has not presented satisfactory results either. Both LT and ST are potent toxins; therefore, neither toxin can be used directly as a vaccine antigen. However, detoxified derivatives of LT, including the B subunit (non-toxic LT-B), have demonstrated immunological properties, even as an adjuvant, in many animal models [58][59][60]. The A subunit of LT (LT-A) is also included in ETEC vaccine studies; the purpose of this incorporation is to induce a more protective immune response [61,62]. On the other hand, STa, unlike LT, is poorly immunogenic due to its small size.
Recent progress in toxoid antigens enhances the potential for developing an effective and safe subunit vaccine against ETEC diarrhea. A skin patch vaccine containing LT toxin was applied to humans. Immunized adults developed strong IgG and IgA antibody responses to LT [63,64], which reduced the incidence of moderate-to-severe diarrhea caused by ETEC in healthy adults traveling to Mexico or Guatemala [65]. A secondary study demonstrated that the LT patch provided protection against LT+ ETEC diarrhea but no protection against STa+ ETEC [66]. Therefore, the use of the LT patch alone cannot be considered a suitable approach for vaccinating against ETEC [67].
A subunit vaccine based on a mutant LT toxin (mLT) has been proposed. Although it is safer than LT, up to now mLT has not demonstrated wide efficacy in protecting against diarrhea caused by ETEC [66]. However, it has been explored mainly as a vaccine adjuvant: mLT improved the protective efficacy of whole-cell ETEC vaccine candidates and of a CFA+ candidate adhesin subunit vaccine [68]. Its function as an adjuvant therefore favors a greater response to the candidate vaccine as well as allowing the generation of an anti-LT response.
Most of the ETEC strains isolated from patients with diarrhea are STa+ alone or LT+. The low immunogenicity of STa and the high need to generate an immune response against it led researchers to develop mLT-STa fusions. Results of mouse immunization studies showed that the LT-STaN12S toxoid fusion induces neutralizing anti-STa antibodies [69]. The high titers against both toxoids obtained in mice make it a promising antitoxin subunit vaccine.
As alternatives, the adhesin tip subunit CfaE and a multiepitope fusion antigen (MEFA) have been used as conserved antigens for the development of a broadly protective ETEC antiadhesin vaccine [70]. Nonhuman primates immunized with CfaE showed protection against a CFA/I ETEC challenge [71]. However, the coadministration of CfaE and mLT did not protect against ETEC strains expressing STa. MEFA comprises epitopes from the seven most important CFA adhesins expressed by ETEC strains and was strongly immunogenic, inducing high titers of antibodies specific to all adhesins [72]. This combination is an efficient means of developing a vaccine for antigenically heterogeneous pathogens like ETEC.
Novel antigens, such as the glycoprotein EtpA and the outer membrane adhesin EaeH, have been identified by genome sequencing [73]. Antibodies against EtpA produced a significant reduction in the colonization of mice by the challenge ETEC strain (H10407) [74]. The identification of new antigens could be the way to incorporate epitopes that allow a greater range of protection against the different ETEC strains. These new epitopes, incorporated into candidate vaccines that contain the most conserved and representative virulence factors of ETEC, could enhance protection against diarrhea caused by ETEC.
ETEC is the most common cause of E. coli diarrhea in farm animals and, in the first four days of a calf's life, can be responsible for severe diarrhea with high mortality [75]. The strains are characterized by surface fimbrial adhesins, with F5, F7, and F17 most frequently involved in diarrhea in calves [76][77][78][79]. In addition, the CS31 adhesin is prevalent in isolates from calves with E. coli septicemia [80,81]. With regard to toxins, STa is the only toxin associated with disease in neonatal calves infected with ETEC [82]; LT is rarely identified [76,83]. Commercial vaccines for calves contain killed ETEC possessing F5 fimbriae or purified F5 fimbriae. These vaccines do not contain F17, CS31, or STa; however, the impact of their absence is unknown. Maternal vaccination with these products protects against neonatal ETEC infections through passive colostral and lactogenic immunity [84,85]. Once the lactation stage is over, the cattle become more resistant [86]. In this way, vaccination of dams is an effective strategy to prevent ETEC diarrhea in neonatal calves [87,88].
Vaccines against ExPEC
ExPEC causes the vast majority of urinary tract infections (UTIs), mostly in women, in whom recurrent episodes are highly common. ExPEC pathotypes causing UTI are called uropathogenic E. coli (UPEC). A recent review by Nesta and Pizza describes progress in UPEC vaccines [89]. Most of the vaccines are aimed at stimulating the mucosal immune system. Initial attempts at the development of vaccines against ExPEC infections were unsuccessful [90,91]. The immunogens in these vaccines were single purified virulence factors such as hemolysin [92], pilin, or the O-specific polysaccharide of LPS, conjugated to either Pseudomonas aeruginosa toxin A (TA) or cholera toxin (CT) as carrier proteins [93,94]. Because of the high heterogeneity of the O-specific polysaccharide, the design of a polysaccharide vaccine able to prevent ExPEC infections has been extremely challenging [95]. The O18 polysaccharide conjugated to either cholera toxin or to P. aeruginosa exoprotein A (EPA) was safe and able to induce antibodies with opsonophagocytic killing (OPK) activity in human volunteers. IgG purified from immunized individuals was protective in mice in an E. coli O18 challenge sepsis model [93]. However, a further test with a 12-valent O-antigen showed difficulties with cross-protection.
Three vaccines against UTI have reached market status in different countries. Vaccines based on whole or lysed fractions of inactivated E. coli have been evaluated in human clinical trials and have so far been the most effective in inducing some degree of protection in patients with recurrent urinary tract infections. The sublingual vaccine Uromune, an inactivated whole-cell preparation of E. coli, Klebsiella pneumoniae, Proteus vulgaris, and Enterococcus faecalis, evaluated as a prophylactic treatment in a multicenter retrospective observational study, demonstrated a certain degree of clinical benefit in terms of a reduced recurrence rate in women suffering recurrent UTI [96].
The Solco Urovac vaccine, a vaginal suppository polymicrobial vaccine consisting of 10 inactivated uropathogenic bacteria, including six E. coli serotypes and Proteus mirabilis, Morganella morganii, K. pneumoniae, and E. faecalis strains, showed minimal efficacy in Phase I and two Phase II trials in women suffering from recurrent UTIs [97][98][99]. However, in two additional clinical studies, the vaginal mucosal vaccine given over a 14-week period increased the time to reinfection in UTI-susceptible women, representing a valuable alternative to antibiotic-based prophylactic regimens [98,100].
One of the first vaccines tested, based on an E. coli extract, was presented by Frey et al. [101]. This development led to Uro-Vaxom, a commercial vaccine that was assessed in larger clinical trials a few years later [102], leading to the recommendation of Uro-Vaxom for the prophylactic treatment of patients with recurrent urinary tract infections. The OM-89/Uro-Vaxom vaccine demonstrated modest protection in women [103]. However, in a more recent trial on 451 female subjects, the lyophilized lysate of 18 E. coli strains, OM-89/Uro-Vaxom, manufactured using a modified lytic process based on alkaline chemical lysis and autolysis, failed to show a preventive effect on recurrent uncomplicated UTIs [104].
Other vaccines have reached clinical trial status. The development of ExPEC4V, a novel tetravalent bioconjugate vaccine developed by GlaxoSmithKline against extraintestinal pathogenic E. coli, started with an epidemiological screening of the prevalent E. coli serotypes causing infection in women in Switzerland, Germany, and the USA. The authors selected the O antigens of the LPS from the prevalent serotypes; the O antigens were conjugated by glycoengineering in E. coli. The vaccine was evaluated for safety, immunogenicity, and clinical efficacy in a placebo-controlled Phase Ib trial [105]. It was well tolerated and elicited a robust antibody response in patients suffering from recurrent UTIs. Data indicated a reduced incidence of UTIs after vaccination, especially for higher bacterial loads. The clinical trial, performed in a population of healthy women with a history of recurrent UTI, allowed for an additional, preliminary assessment of the candidate's clinical efficacy. In a multicenter Phase Ib clinical trial, 92 healthy adult women with a history of recurrent UTI received a single injection of either intramuscular ExPEC4V or placebo. The authors concluded that the tetravalent E. coli bioconjugate vaccine candidate was well tolerated and elicited functional antibody responses against all vaccine serotypes [106]. Mobley et al. investigated four defined antigens (IreA, Hma, IutA, and FyuA) associated with iron uptake as immunogens to prevent UTI [107]. The adjuvant used was cholera toxin. They tested the formulation in mice and observed antigen-specific IgG responses. High antibody titers correlated with low colony-forming units (CFUs) of UPEC following transurethral challenge of vaccinated mice. In addition, sera from women with and without histories of UTI have been tested for antibody levels to the vaccine antigens. The results indicated that iron uptake components are a suitable target for vaccination against UTI. Later, it was observed that the iron receptor FyuA is present in 77% of UPEC isolates and is highly conserved among them [108]. FyuA immunization of mice reduced the colonization of UPEC in the bladder and kidney. Adhesins and bacterial appendages such as flagella have a long history as single immunogenic antigen components of experimental vaccines against UTI. FliC (flagellin) and FimH (from type 1 fimbriae) were administered to mice as a fusion or mixed, and elicited high levels of serum and mucosal antibodies. Different combinations and adjuvants elicited good protection against UPEC [109].
Vaccines against APEC
APEC, which belongs to the ExPEC pathotype, is a major causative agent of colibacillosis, aerosacculitis, polyserositis, septicaemia, and other diseases in chickens, turkeys, and other avian species, and is responsible for significant losses to the poultry industry. The main APEC serogroups associated with disease are O1, O2, and O78.
An ideal vaccine for poultry has to induce cross-protection against the various APEC serogroups capable of causing disease. It must be deliverable via a mass immunization method, such as administration of the antigens in drinking water or feed, in ovo, or by spray, in order to immunize thousands of broiler chickens. Finally, the vaccine has to be administered at a young age so that the birds develop a protective immune response by the age of 21 days, when they are most vulnerable to APEC infection [110].
Inactivated bacterin vaccines or autovaccines of APEC are frequently used in the field, but their protective efficacy has not been demonstrated. Landman and van Eck studied the protection conferred in laying hens against the E. coli peritonitis syndrome (EPS). Vaccines formulated either as an aqueous suspension or as a water-in-oil emulsion induced protection against homologous challenge, while protection against heterologous challenge was inconclusive. However, another study [111] indicated no protection against challenge with a homologous or heterologous strain, in spite of a rise in IgY titers in vaccinated animals.
Recombinant Salmonella enterica serovar Typhimurium strains expressing the heterologous O polysaccharides of E. coli O1 and O2 were used to immunize chickens and elicited the production of serum IgG and mucosal sIgA antibodies against the LPS of APEC O1 and O2. The immune response induced was protective against a lethal dose of both APEC serogroup strains [112]. An attenuated Salmonella (Δlon, ΔcpxR, and ΔasdA16) delivery system containing the genes encoding P-fimbriae (papA and papG), the aerobactin receptor (iutA), and the CS31A surface antigen (clpG) of APEC was constructed, and its potential as a vaccine candidate against APEC infection in chickens was evaluated. It induced an immune response and effective protection against colibacillosis caused by APEC [113].
Mixed recombinant APEC surface proteins, namely EtsC (a type I secretion system protein), the porins OmpA and OmpT, and TraT, were used as antigens to immunize chickens, seeking broad protection against several serotypes of APEC. The experimental vaccine elicited specific IgY and the induction of diverse cytokines in the spleen, and resulted in a reduction of lesion scores in different organs and of bacterial loads in blood and organs [114]. A commercial vaccine (Gall N tect CBL) against avian colibacillosis for layer hens has been produced and marketed in Japan since 2012. It consists of a live attenuated O78 APEC strain with a Δcrp deletion. A large trial in layer hens [115,116] demonstrated that it prevents avian colibacillosis infection and improves productivity. Live attenuated APEC strains have been used as experimental vaccines by various research groups in the colibacillosis field; strains deleted in aroA [117], carAB [118], and galE [119] were tested. Another commercial vaccine, based on subunit components, is Nobilis (MSD), composed of the F11 and FT antigens of APEC in a water-in-oil emulsion. No trials have been reported by the company, but Gregersen et al. in 2010 [120] observed in a controlled trial that vaccine application did not affect the overall mortality rate between the vaccinated and control flocks, although mortality due to E. coli infections made up only 8.2% in vaccinated birds compared with 24.6% in unvaccinated birds. No differences were found in average first-week mortality, average weight at 38 days, or feed conversion rate between vaccinated and control birds.
Conclusion
High interest in the development of vaccines against pathogenic E. coli has arisen in recent years, related to pathotypes affecting both human and animal health. Few vaccines have been licensed and reached the market and public health use. There is an intrinsic difficulty in directing the immune response against a bacterial species that is commonly part of the animal microbiota; the state of the art consists of identifying antigenic components that are exclusive to pathogenic subtypes.
In spite of these difficulties, science has gained relevant knowledge of the virulence, pathogenicity, genomics, and epidemiology of pathogenic E. coli, and this will no doubt benefit vaccinology concerning pathogenic E. coli.
Conflict of interest
The authors declare no conflict of interest. | 2019-04-03T13:10:15.421Z | 2019-01-24T00:00:00.000 | {
"year": 2019,
"sha1": "43901bb2459e8e7f91deb16a2389ab0441034581",
"oa_license": "CCBYNCSA",
"oa_url": "https://ri.conicet.gov.ar/bitstream/11336/148237/2/CONICET_Digital_Nro.0d862bbe-1686-4c2f-9a3d-c3e7f1859b91_A.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "9dbd07356b5db311757608c1361c18726ace2aa2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
220843222 | pes2o/s2orc | v3-fos-license | Genetic diversity and population structure of Aedes aegypti after massive vector control for dengue fever prevention in Yunnan border areas
Dengue fever is a mosquito-borne disease caused by the dengue virus. Aedes aegypti (Ae. aegypti) is considered the primary vector of dengue virus transmission in Yunnan Province, China. With increased urbanization, Ae. aegypti populations have significantly increased over the last 20 years. Despite all the efforts made to control virus transmission, especially in the border areas between Yunnan and Laos, Vietnam, and Myanmar (dengue-endemic areas), the epidemic has not yet been eradicated. Thus, a further understanding of the genetic diversity, population structure, and invasive strategies of Ae. aegypti populations in the border areas is vital to uncovering the vector's invasion and distribution dynamics, and essential for controlling the infection. In this study, we analyzed the genetic diversity and population structure of eight adult Ae. aegypti populations collected along the border areas of Yunnan Province in 2017 and 2018. Nine nuclear microsatellite loci and mitochondrial DNA (mtDNA) sequences were used to achieve a better understanding of the genetic diversity and population structure. One hundred and fourteen alleles were found in total. The polymorphic information content values, together with the expected heterozygosity (He) and observed heterozygosity (Ho) values, showed high genetic diversity in all mosquito populations. Clustering analysis based on a Bayesian algorithm, together with UPGMA and DAPC analyses, revealed that the eight Ae. aegypti populations can be divided into three genetic groups. Based on the mtDNA results, all Ae. aegypti individuals were divided into 11 haplotypes. The Ae. aegypti populations in the border areas of Yunnan Province presented high genetic diversity, which might be ascribed to the continuous incursion of Ae. aegypti.
Results
Microsatellite genetic diversity. The polymorphic information content (PIC) value is one of the indicators used to measure the allele richness of genes. A total of 114 alleles were found for the nine genetic markers, and all alleles were sequenced. As shown in Table 1, the microsatellite locus SQM 6 had the highest number of alleles (20), while marker SQM 1 had the lowest (7). The PIC values were high, ranging from 0.392 to 0.886, with an average of 0.672, which indicated that the sites were highly polymorphic and can reflect the genetic characteristics of all Ae. aegypti populations 24. The microsatellite locus SQM 6 had the highest PIC value (0.886), whereas locus SQM 1 had the lowest (0.392) (Table 1).
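For reference, PIC is computed from the allele frequencies at each locus using the formula of Botstein et al. (1980); the following is a minimal sketch with hypothetical frequencies, not frequencies from this study:

```python
from itertools import combinations

def pic(freqs):
    """Polymorphic information content at one locus (Botstein et al., 1980):
    1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2."""
    het = 1.0 - sum(p ** 2 for p in freqs)
    correction = sum(2 * (pi ** 2) * (pj ** 2) for pi, pj in combinations(freqs, 2))
    return het - correction

# Hypothetical locus with four equally frequent alleles
print(round(pic([0.25, 0.25, 0.25, 0.25]), 3))  # -> 0.703
```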
Normally, the genetic diversity of a mosquito population is positively related to its expected heterozygosity (He) and observed heterozygosity (Ho) values. The He and Ho values of all Ae. aegypti populations ranged from 0.385 to 0.605, suggesting that the genetic diversity of all mosquito populations is relatively high (Fig. 1). Except for CY, the Ho value in every region was lower than the He value, which indicates that there may be inbreeding within the species. In addition, CY may have received a large influx of external individuals or may have experienced a bottleneck.
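He and Ho follow directly from allele frequencies and genotype counts; a minimal sketch is shown below (the genotypes and frequencies are hypothetical, and the small-sample corrections applied by dedicated software are omitted):

```python
def expected_het(freqs):
    """Expected heterozygosity He = 1 - sum(p_i^2) (Nei's gene diversity)."""
    return 1.0 - sum(p ** 2 for p in freqs)

def observed_het(genotypes):
    """Observed heterozygosity Ho = fraction of heterozygous individuals."""
    return sum(a != b for a, b in genotypes) / len(genotypes)

genotypes = [(1, 2), (1, 1), (2, 3), (3, 3), (1, 3)]  # hypothetical diploid calls
freqs = [0.4, 0.3, 0.3]                               # hypothetical allele frequencies
print(expected_het(freqs), observed_het(genotypes))   # 0.66 and 0.6
```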
Nearly all the FIS values in the Ae. aegypti populations, except population YJ, were positive, ranging from 0.05882 to 0.24657 (Table 2), which further indicated that these populations contained different degrees of inbreeding and heterozygote deficiency; this may also be the reason for the deviation of all populations from Hardy-Weinberg equilibrium (HWE).
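The inbreeding coefficient reported here relates the two heterozygosities directly; with hypothetical values within the observed range (He = 0.55, Ho = 0.42):

```latex
F_{IS} = \frac{H_e - H_o}{H_e} = 1 - \frac{H_o}{H_e}
\approx \frac{0.55 - 0.42}{0.55} \approx 0.24,
```

so a positive FIS corresponds to a heterozygote deficit, as seen in all populations except YJ.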
Microsatellite genetic structure. The FIT value over all populations was 0.309, which indicated that there were significant differences among the individuals screened. The AMOVA results indicated that the largest proportions of genetic variation in the Ae. aegypti populations existed within individuals and among individuals within populations, accounting for 69.14% and 15.69% of the variation, respectively (Table 3). Although the sampling site was a significant factor (P < 0.0001), its proportion of the variance was relatively low (Table 3). The bottleneck analysis revealed that nearly all populations, except populations ML and RL, were at mutation-drift equilibrium (Table 4).
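For reference, in a hierarchical AMOVA on diploid data the total variance splits into among-population (σ²a), among-individual-within-population (σ²b), and within-individual (σ²c) components, from which the F-statistics follow:

```latex
F_{ST}=\frac{\sigma_a^2}{\sigma_a^2+\sigma_b^2+\sigma_c^2},\qquad
F_{IS}=\frac{\sigma_b^2}{\sigma_b^2+\sigma_c^2},\qquad
F_{IT}=\frac{\sigma_a^2+\sigma_b^2}{\sigma_a^2+\sigma_b^2+\sigma_c^2}.
```

Consistently with Table 3, the 69.14% of variance residing within individuals gives FIT = 1 − 0.6914 ≈ 0.309, matching the value reported above.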
The IBD analysis displayed that the genetic distance of all eight Ae. aegypti populations were positively related to geographic distance, which meant the geographical isolation was the primary cause of genetic diversity of Ae. aegypti (Fig. 5). While the Ae. aegypti can only move hundreds of meters around their larval habitats, which suggests that the transmission of Ae. aegypti in Yunnan Province does not depend on its own activities, but on other factors, such as human activities.
The pairwise FST values of Ae. aegypti ranged from 0.061 to 0.220 (Table 5), showing significant genetic differences. All P values were significant (P < 0.05) after Bonferroni corrections were applied.
Haplotype networks and diversity. Based on the mitochondrial COI, ND4, and ND5 regions of Ae. aegypti, all individuals were divided into 11 haplotypes, among which H1, 2, 3, 7, and 8 were the main haplotypes in all populations. The distribution of these haplotypes in each population is shown in Fig. 6. All the new sequences generated in this study are available from GenBank (Table 6). Haplotype H1 had the most individuals, mostly from MD, MH, CY, LC, and RL, while H3 and H8 were mainly distributed in Xishuangbanna prefecture, and H2 and H7 were only distributed in YJ and JH, respectively (Fig. 6). The vast majority of individuals carried shared haplotypes, and most sampling sites contained individuals belonging to shared haplotypes. Neutrality tests and mismatch analysis based on the mitochondrial genes ND4 and ND5 of Aedes aegypti showed that the distribution of nucleotide mismatches in this population had a unimodal structure, indicating that the Ae. aegypti population had experienced at least one significant population expansion (Fig. 7).
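The mismatch analysis behind Fig. 7 is essentially a histogram of pairwise nucleotide differences among aligned sequences; a minimal sketch on hypothetical fragments is given below (a unimodal histogram is the signature of expansion):

```python
from itertools import combinations
from collections import Counter

def mismatch_distribution(seqs):
    """Counts of pairwise nucleotide differences among aligned sequences."""
    diffs = [sum(a != b for a, b in zip(s1, s2))
             for s1, s2 in combinations(seqs, 2)]
    return Counter(diffs)

# Hypothetical aligned mtDNA fragments
seqs = ["ACGTACGT", "ACGTACGA", "ACGAACGA", "ACGAACGT"]
print(sorted(mismatch_distribution(seqs).items()))  # -> [(1, 4), (2, 2)]
```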
Discussion
Information on the invasion and spread of mosquito vectors is essential for understanding vector-borne disease outbreaks and transmission dynamics among human populations, and for implementing effective mosquito control programs 25. These are all important factors influencing mosquito population dynamics, genetic structure patterns, and pathogen transfer through vector populations 26. Ae. aegypti is the most important epidemic vector of DF and dengue hemorrhagic fever (DHF) in humans and is mainly distributed in southeast China. The most suitable habitats of Ae. aegypti include Hainan, Guangdong, Guangxi, the western and southern border areas of Yunnan, and parts of the southern Guizhou region 27. Yet, due to climate change and increased urbanization, a significant northward shift has occurred in the northern Chinese region over recent years 28.
Ae. aegypti is an invasive species and a potential vector of disease agents in China, with a significant impact on public health. In Yunnan Province, Ae. aegypti was first reported in 2002 at Jiegao Port, near Ruili City, Dehong prefecture 29. In 2009, Ae. aegypti was detected for the first time at Guanlei Port, Mengla city, Xishuangbanna prefecture 30, and later (2014) in Mengding county, Lincang 31. The distribution range and abundance of Ae. aegypti have increased significantly, and the species has become established in at least eight cities in Yunnan Province. Therefore, monitoring of Ae. aegypti is essential for preventing and controlling vector-borne infectious diseases. In our study, all Ae. aegypti samples were collected from eight sampling places in three prefectures of Yunnan Province (Xishuangbanna prefecture, Lincang city, and Dehong prefecture), the prefectures from which most DF cases in Yunnan Province originate.
Our population genetics analyses of the populations from the Yunnan border area, based on two types of genetic markers (microsatellites and mtDNA), revealed the genetic structure and population distribution within this region. The PIC, He, and Ho values are important parameters for measuring the genetic diversity of a population; the higher the values, the more complex the population structure. Our results revealed that Ae. aegypti in the Yunnan border region has great allelic variation. Ae. aegypti mosquitoes can easily transmit the virus to humans and usually find shelter in indoor habitats. Their flight range is limited, which means they can only move hundreds of meters around their larval habitats 25,32. The relatively high genetic diversity of all mosquito populations is most likely caused by invasion events and human activities 33. The results of the IBD analysis support this conclusion that the dispersal of Ae. aegypti is aided by human activities and transportation in Yunnan Province. The Bayesian clustering analysis showed that all Ae. aegypti populations could be divided into three genetic groups. The first group comprised four populations, three in Xishuangbanna prefecture (JH, MH, and ML) plus RL, which might be related to the close tourism and commercial trade exchanges between these two regions. The second group represented two populations from Lincang City. The third group was composed of LC and YJ. In contrast to the other six populations, the YJ population was closely related to the LC population, which differs from the UPGMA result; this discrepancy may stem from differences in sample size and sampling range.
Except for Yingjiang (YJ), the inbreeding coefficient (FIS) values of the other Ae. aegypti populations were positive. Combined with the UPGMA and DAPC analyses, these results indicate that there may have been a recent invasion and colonization of Ae. aegypti in YJ. Given the species' limited flight range, this phenomenon is common for Ae. aegypti populations on a small spatial scale 33. In 2016, Xishuangbanna prefecture established a "Spring Patriotic Health Movement" with the aim of providing integrated control of infectious disease vectors. This may explain the bottleneck effect observed in ML and RL. MtDNA markers have been widely used to evaluate the genetic diversity of Ae. aegypti populations 34,35. In our study, the degree of polymorphism found in the COI and ND4 sequences was relatively high (the eight populations were divided into eleven haplotypes). H1, the dominant haplotype, was found at five sites. All mosquito samples from the two localities in Lincang City (MD and CY) showed only one haplotype (H1) for each gene. Dehong prefecture is close to the Myanmar border, and intensive movement of people has led to a large number of invasion events. The abundant waterways and commercial activities in Xishuangbanna prefecture have also contributed to many invasion events. This idea is supported by the high levels of polymorphism detected in Xishuangbanna and Dehong prefectures (six and seven haplotypes, respectively), which may be the main entry points of Ae. aegypti into Yunnan Province. The H2 haplotype was only distributed in YJ, independent of other regions. Combined with the negative FIS value of population YJ, Ae. aegypti likely invaded this part of Yunnan Province in recent years.
Our research shows that Ae. aegypti populations have invaded Dehong and Xishuangbanna prefectures through continuous tourist and business activities. Inspection and quarantine need to be strengthened at the border ports, and further investigation and research on mosquito vectors should be carried out. The government needs to formulate effective prevention and control measures, strengthen environmental governance in the border areas, and implement mosquito control measures.
Conclusion
The nuclear microsatellite markers and mtDNA sequences (COI, ND4, and ND5) were used to uncover the population genetics of Ae. aegypti in the border area of Yunnan Province. Although several attempts have been made by the government of Yunnan Province to control the mosquito vectors, the Ae. aegypti populations in this region showed high genetic diversity and clear genetic structure due to continuous invasion and increased urbanization. Our research confirms that, in recent years, a significant Ae. aegypti invasion event occurred in YJ, and that Xishuangbanna and Dehong prefectures were important areas for Ae. aegypti invasion. In summary, our results suggest that the control of Ae. aegypti in Yunnan Province is still a demanding task that needs to be taken seriously. Thus, monitoring of suspected DF cases and the vectors should be enhanced.
Materials and methods
Mosquito sampling and DNA isolation. All adult Ae. aegypti samples were collected from the following eight locations along the border area of Yunnan Province between May 2017 and September 2018 (Fig. 8, Table 7). Each collection site covered an area of approximately 500 m in diameter. Following the Surveillance Methods for Vector Density-Mosquito (GB/T 23797-2009), a hand-held aspirator was used to collect adult mosquitoes (intercepted before biting). All samples were identified in the field by analysis of morphological characteristics 36 and preserved in 100% ethanol at 4 °C for the isolation of genomic DNA 37.
Genomic DNA was isolated from individual mosquito samples following the standard extraction procedure with the TaKaRa MiniBEST Universal Genomic DNA Extraction Kit (Takara, Dalian, China); the quality and quantity of the extracted DNA were analyzed using a NanoDrop 1000, after which samples were stored at −20 °C until further analysis. PCR amplification and microsatellite genotyping. Nine polymorphic microsatellite loci were screened from 58 loci described in previous studies, using denaturing polyacrylamide gel electrophoresis 38,39. The primer sequences and related information are summarized in Table 8. Each PCR cycle included annealing at the locus-specific temperature (Table 8) for 45 s and extension at 72 °C for 45 s; the final extension was performed at 72 °C for 10 min. All PCR amplification products were verified by electrophoresis of 3 μL on a 1.5% agarose gel. Formamide was mixed with the LIZ 500-labeled size standard at a ratio of 100:1, and 15 μL of the mixture was added to the sample plate. The PCR products were diluted 1:10, 1 μL was added into the reaction, and the plate was then run on an ABI 3730XL capillary sequencer (Applied Biosystems, Foster City, USA). All microsatellite alleles were evaluated using GeneMapper software (Applied Biosystems) 14.
Microsatellite data analysis. Genetic diversity.
The PIC values of all nine loci were calculated with PIC-Calc 0.6 14. The genetic diversity of all Ae. aegypti populations was characterized by expected heterozygosity (He) and observed heterozygosity (Ho), using POPGENE version 1.32. The FIS value of each mosquito population was also calculated. Statistical significance was tested with the exact tests available in POPGENE 40.
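The quantities named here have simple closed forms: He = 1 − Σ p_i² over allele frequencies, Ho is the observed fraction of heterozygotes, PIC subtracts an additional Σ_{i<j} 2 p_i² p_j² from He, and FIS can be estimated as 1 − Ho/He. The sketch below computes them for one locus from invented genotypes; the authors used PIC-Calc and POPGENE rather than custom code:

```python
import numpy as np
from itertools import combinations

def locus_stats(genotypes):
    """He, Ho, PIC, and FIS for one locus; genotypes is a list of allele pairs."""
    alleles = [a for g in genotypes for a in g]
    freqs = np.array([alleles.count(a) for a in set(alleles)]) / len(alleles)

    he = 1 - np.sum(freqs ** 2)                           # expected heterozygosity
    ho = np.mean([a != b for a, b in genotypes])          # observed heterozygosity
    pic = he - sum(2 * p1**2 * p2**2 for p1, p2 in combinations(freqs, 2))
    fis = 1 - ho / he if he > 0 else float("nan")         # inbreeding coefficient
    return he, ho, pic, fis

# Hypothetical allele sizes (bp) for one microsatellite locus in one population
geno = [(233, 239), (233, 233), (235, 239), (239, 239), (233, 235), (235, 235)]
he, ho, pic, fis = locus_stats(geno)
print(f"He={he:.3f} Ho={ho:.3f} PIC={pic:.3f} FIS={fis:.3f}")
```

A positive FIS under this estimator corresponds to a deficit of heterozygotes relative to Hardy-Weinberg expectations, the pattern discussed for most populations above.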
Genetic structure. Genetic variation was examined by AMOVA with Arlequin (version 3.5.2.2) to interpret the genetic variability and structure among different locations and mosquito populations. The AMOVA was evaluated at four hierarchical levels: (1) all samples (non-grouped) were analyzed as a single group to test the overall genetic differences between samples; (2) the samples in one region were analyzed as a unique group; (3) the interregional populations were analyzed as a unique group; and (4) the individuals within each population were analyzed as a unique group. Based on the stepwise mutation model (SMM), recent genetic bottlenecks in each mosquito population were assessed with the software BOTTLENECK 1.2.02. The data were analyzed with the recommended settings: an index statistic closer to 1 indicates that the population is in a stable state, while a very low value indicates that the population has experienced a genetic bottleneck in the past 41. The sign test implemented in the software was used to test for significant heterozygosity excess. Isolation by distance (IBD) was estimated with Mantel's test in R, using the correlation between genetic and geographic distances obtained by regressing pairwise FST/(1 − FST) on the natural logarithm (ln) of the straight-line geographic distance. For the determination of the real number of genetic clusters (K) within all mosquito samples, the Bayesian clustering software STRUCTURE 2.2 was employed. All mosquitoes were assigned to populations under the assumptions of Hardy-Weinberg equilibrium and linkage equilibrium 42. The software parameters were set as follows: the assumed number of populations ranged from 1 to 8; the calculation used the admixture ancestry and independent allele frequency models, with 100,000 burn-in steps followed by 1,000,000 MCMC replicates; and each assumed K was calculated for 10 runs. The optimum K value was estimated with Evanno's ΔK method, based on the second-order rate of change in the log probability of the data among the 10 runs for each assumed K 43, and all the results were uploaded to the web-based utility Structure Harvester for the calculation of the optimum K value (https://taylor0.biology.ucla.edu/struct_harvest/). (Fragment of Table 8 as recoverable: unnamed locus, F: AAT CGT GAC GCG TCT TTT G, R: TAA CTG CAT CGA GGG AAA CC, repeat CT10(TT)CT, 233-239 bp; SQM 2, F: CAA ACA ACG AAC TGC TCA CG, R: TCG CAA TTT CAA CAG GTA GG, repeat GA15, 157-183 bp; SQM 3, F: ATT GGC GTG AGA ACA TTT TG, repeat CAT7, 156-; remainder truncated.) | 2020-07-29T14:58:54.300Z | 2020-07-29T00:00:00.000 | {
"year": 2020,
"sha1": "1c19e769204286d68cbda015afe6a78877f1b9ca",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-69668-7.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1c19e769204286d68cbda015afe6a78877f1b9ca",
"s2fieldsofstudy": [
"Environmental Science",
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
234508852 | pes2o/s2orc | v3-fos-license | Strengthening of Independent Character Through Online Learning in Elementary School
The Covid-19 pandemic in Indonesia has resulted in learning activities being carried out online. Online learning patterns are difficult for teachers, students, and parents because they have to implement a new learning system, especially for students who are still in elementary school. Online learning requires independent learning, discipline, and responsibility from students. Therefore, good adaptation and cooperation between teachers, students, and parents are needed so that online learning can be carried out effectively. This collaboration must also be supported by teaching materials that suit the students' conditions. Teaching materials used in online learning are generally technology-based, but teachers can still use other teaching materials that can support learning, such as printed teaching materials. Online learning can be a place to foster character in students, especially independent character.
INTRODUCTION
The learning process in Indonesia is generally carried out face-to-face between teachers and students in a classical setting. However, this condition has changed due to the Covid-19 pandemic, which has plagued Indonesia since March 2020. Covid-19 is an infectious respiratory tract disease caused by a new type of coronavirus (SARS-CoV-2). The virus can be transmitted through droplets of body fluids and physical contact with infected people. Over time, the number of cases has increased every day, and the death rate has also continued to rise [1]. This condition led the Indonesian government to issue a study-at-home policy. The government took this action because students generally enjoy gathering with their friends at school; if one student tests positive for the virus, it is feared that all the students in the school will also be infected. As a result, learning patterns that are usually carried out face-to-face between teachers and students now have to be conducted online to break the chain of virus transmission. Implementing online learning is new for students, teachers, and parents alike, so not all parties are ready for an online home learning system [2]. There are still obstacles and constraints experienced by everyone implementing online learning.
Teachers carry out online learning by utilizing technology. The use of this relatively new technology often confuses students, especially elementary school students. Online learning is difficult for elementary students because they are not old enough and are not familiar with the use of technology [1]. There need to be teaching materials that are easy for all online learning implementers to understand, especially parents, because in this situation the role of parents is crucial in assisting students to learn at home. Parents must guide their children so that their learning independence can be formed. In essence, the success of learning is not only determined by the teacher but also needs to be supported by the availability of infrastructure, the environment, and the willingness and motivation of students to actively develop their potential [3]. The teacher does not only act as a conveyor of material to students; the teacher is also a motivator, a giver of direction, and a good role model for students. Suitable learning activities will help form quality, useful human resources.
If online learning is implemented correctly, it can build character in students, especially independent character. Independence is the attitude or behavior of an individual who is not easily dependent on others [4]. To realize students' independent character while they study at home, good cooperation between teachers and parents is needed. Teachers need to educate parents about the stages of good learning to foster students' learning independence. This is where online learning plays a role in forming students' independent character. The purpose of this study was to describe the benefits of online learning for building students' independent character. The results of a learning process are seen not only in academic achievement but also in changes toward better behavior and character. In other words, education should form intelligent individuals of good character who will become superior human resources, achieving well and interacting politely in accordance with the noble values of the nation's culture [5]. These characters will later guide students in living in society.
METHOD
This research uses a descriptive qualitative method with a literature study. Literature research is a study of theory, references, and other scientific literature related to the culture, values, and norms that develop in the social situation under study [6]. The objectives of qualitative research are to understand the views of individuals, to report findings, to explain processes, and to extract in-depth information about a limited subject or research setting [7]. In this research, the literature study was carried out by extracting information from books, research results, and other supporting reading materials.
The research steps were as follows. First, determine the focus of the research, namely the topics to be studied through the literature study. Second, determine the sources: books, research results, and other supporting materials matching the research focus. Third, compile literature analysis instruments according to the variables of the topic under study, namely strengthening character, online learning, and the application of both in elementary schools. Fourth, carry out reference collection and analysis according to the previously prepared instruments. Fifth, perform data reduction and draw conclusions according to the focus of the study.
RESULT AND DISCUSSION
The Covid-19 pandemic in Indonesia has had a significant impact on various aspects of life, including economic, social, and tourism aspects. Activities in these areas had to stop because of the pandemic, which requires people to reduce activities outside the home. The education sector has also felt the significant impact of Covid-19. So far, the learning process in Indonesia has been carried out face-to-face, but due to the pandemic, learning activities have had to be carried out online. The Indonesian government issued a study-at-home policy in accordance with Law Number 6 of 2018 concerning Health Quarantine, later confirmed by Presidential Decree Number 21 of 2020 and Regulation of the Minister of Health Number 9 of 2020 concerning Large-Scale Social Restrictions. This was done to prevent wider virus transmission. For junior high school, high school, and university students, the online learning system is not too difficult to implement, but for elementary school students, online learning is less effective if applied over the long term, because elementary students need more assistance.
Online learning is learning that utilizes internet network technology, both computer- and mobile-based, so that there is no face-to-face interaction between teachers and students [8]. Online learning in elementary schools is implemented in various ways according to the circumstances of the students and their respective environments. One study explains that one way teachers implement online learning is by giving assignments to students through a WhatsApp class group consisting of students and parents [9]. Every day the teacher gives assignments along with explanations of the material in the form of videos, either accessed via YouTube or self-made, which are then sent to the group for students to work through. The assignments are completed by students in their assignment books and collected once a week.
The success of learning activities cannot be separated from the role of teaching materials. Dick et al. explain that instructional material contains the content, whether written, taught, or facilitated by an instructor, that a student uses to achieve the objectives, and that it also includes information the learners use to guide their progress [10]. Based on this, it can be concluded that teaching materials are everything that students need in learning activities to achieve specific goals, whether printed or non-printed, which in practice are provided by the teacher. In online learning, teachers generally use technology-based teaching materials.
Besides technology-based teaching materials such as videos and PowerPoint presentations, teachers can use additional teaching materials such as textbooks to support online learning. Books suit the current situation because, in essence, learning from books gives students the freedom to repeat the material being taught at their own learning speed [11]. Textbooks can easily be used by anyone because they contain extensive explanations and material. Some of the uses of books include: (1) serving as an alternative learning resource for students, (2) making it easier for teachers to teach learning materials, (3) giving students the freedom to repeat the material being taught at their own learning speed, (4) motivating students to learn because interesting material is available, (5) assisting teachers in implementing the applicable curriculum, and (6) providing new knowledge for educators and students [11].
There are so many benefits of textbooks for students, teachers, and parents. Books can help smooth class management, add teaching materials, and help students study at home with their parents so that textbooks can become functional supporting teaching materials in the implementation of online learning.
So far, textbooks have been the primary teaching material in classroom learning, at both the primary and secondary school levels. Regulation of the Minister of National Education Number 11 of 2005 explains that textbooks are reference books used in schools that contain learning material for increasing faith and piety, character and personality, mastery of science and technology, sensitivity and aesthetic skills, and physical and health potential, prepared on the basis of national education standards [12]. It can therefore be said that a textbook is a book containing a collection of materials and subject matter used by students and teachers in learning activities. If students want additional study material, textbooks can be an alternative teaching material and learning resource that students and teachers can use.
The material in a textbook can easily be modified by the teacher and adapted to needs. Books can also be a means of cultivating independent character, because the important components that support independent learning are included in textbooks, both material and practice questions. Students can test their understanding through the practice questions contained in books, and they can read material they do not understand over and over again. To support the cultivation of students' independent character, textbooks can also include character education, such as examples of actions that reflect an independent personality. This character education is expected to change students' character indirectly: by reading and observing, students will imitate the attitudes in the book and apply them in their daily lives. With textbooks, parents will also be helped in guiding their children during online learning.
The role of parents is significant in implementing online learning. Parents are the primary guides for learning at home when no teacher is present. As Winingsih explains, parents have four roles during online learning: (1) parents act as teachers at home, tasked with guiding their children when learning cannot take place at school; (2) parents act as facilitators in implementing online learning; (3) parents act as providers of support, motivation, and enthusiasm for their children's learning; and (4) parents act as directors [13]. It can be concluded that when students study at home, parents become their primary educators. Therefore, the teaching materials used to deliver content must be easy to use and understand for parents, who of course have different backgrounds.
There needs to be coordination between teachers, students, and parents before implementing online learning activities so that learning can run effectively. In addition to giving students instructions about the assignments, teachers also need to give parents direction so they can assist and guide learning activities at home. Parents, as substitutes for teachers at home, should only supervise and help when students experience difficulties [14]. This is where parents play an essential role in building students' independent character while they study at home. Parents should only be mentors, not take over the students' role in completing the assigned tasks. Parents need to accustom students to doing the tasks given by the teacher individually. For example, if students get a mathematics assignment from the teacher, parents first give students direction on how to do it correctly, after which students are allowed to work on the problems independently. Parents are only in charge of monitoring student learning, not giving direct answers to the assignments; this is intended so that students do not always depend on their parents and friends, while also training their thinking and improving their understanding.
When students have finished working on the questions, parents should check the students' answers, and if any answers are wrong, the parents indicate where the students went wrong so that the errors can be corrected independently. Parents should not do the whole task or provide answers directly to the questions given by the teacher. Students should be allowed to work on complex assignments independently, although instructions that can serve as specific references are needed as guidelines for working on the questions. Students must be able to do assignments independently and not always depend on others; in that way, learning independence will be formed. Independent learning is a learning process driven by internal encouragement, carried out without depending on others, with one's own responsibility for mastering competencies to solve a problem [16]. In addition to the potential possessed since birth, the development of independence is also influenced by various stimuli from the environment.
Children who have learning independence show unique characteristics in the learning process, which usually appear in their various actions. According to Desmita [17], independence is generally characterized by the ability for self-determination, creativity, and initiative; the ability to regulate one's behavior; responsibility; self-restraint; making one's own decisions; and solving problems without the influence of others. Independent learning is a condition of learning activity that does not depend on others, with willingness and responsibility for solving one's own learning problems. Learning independence that has formed in students will, over time, strengthen their independent character. Independence in learning can also foster high motivation and self-confidence in students [14].
Online learning has many benefits: among other things, it increases the level of learning interaction between teachers and students, allows learning anywhere and anytime, reaches a wide range of students, and facilitates the completion and storage of learning materials [18]. Online learning can be a place to develop students' independent character. Research has shown that online learning is able to foster self-regulated learning [19]. This is reinforced by Kuo et al., who state that online learning is more student-centered, creating responsibility and autonomy in learning [19]. Students are required to prepare and manage their study time and maintain their learning motivation independently. In essence, online learning requires a high degree of learning independence, responsibility, and discipline from students. These attitudes will emerge and become habits if practiced every day. When students can apply online learning effectively and optimally, the character of independent learning will naturally form.
In addition to independent learning, teachers and parents can work together to supervise students' activities at home. For example, students are asked to get used to living independently by making their beds, sweeping the yard, washing their clothes, washing plates, and watering plants. Then, as evidence that students have done these tasks, parents discreetly take photos and send them to the teacher via the WhatsApp application [1]. This activity is one way of instilling independent character in students. Elementary students prefer activities that involve physical activity rather than having to work on questions continuously. The independent activities above are also in line with the government's appeal to adopt clean living habits in daily life.
Independent character needs to be cultivated from elementary school age, because this age is the most appropriate time to lay the first and foremost foundation for developing physical, cognitive, language, artistic, social-emotional, moral, and religious potential, as well as self-concept, discipline, and independence [20]. This is confirmed by Wibowo's view that, psychologically, elementary school age is the dominant period for character and personality formation; if the cultivation of independent character is carried out optimally at this time, it will become the necessary foundation of the student's personality into adulthood [5]. Independence is also one of the components of social life skills, the basic abilities that students must have to adapt to their social environment. Independent character can be seen when students can take the initiative, with or without the help of others, in determining learning activities, formulating learning objectives, identifying resources, and controlling the learning process by themselves [21].
Students who are accustomed to applying independent character tend to have high enthusiasm for learning. This is supported by research stating that students with good learning independence tend to learn well and can evaluate what they do [22]. It also affects students' thinking, making them more critical, able to analyze and solve problems in daily life, active, persistent, responsible, self-confident, and able to use free time for useful activities. It has also been stated that PPK (Penguatan Pendidikan Karakter, the strengthening of character education) is implemented by applying Pancasila values in character education, especially covering the values of religiosity, tolerance, hard work, discipline, creativity, democracy, independence, honesty, curiosity, love of country, respect for achievement, love of peace, communicativeness, love of reading, care for the environment, national spirit, social care, and responsibility [23].
The PPK program launched by the government through the 2013 curriculum aims to ensure that Indonesian education not only prioritizes students' cognitive aspects but also encourages the development of affective and spiritual aspects. This is in line with Law Number 20 of 2003 concerning the National Education System, Article 1 Paragraph 1, which states that education is a conscious effort to realize an active learning process and to develop students' potential in spiritual attitudes, personality, intelligence, and the skills needed in social life. Based on this law, the purpose of education concerns not only cognitive aspects but also the formation of students' character and spiritual attitudes.
Character education is crucial as a basis for behavior in everyday life, so that students have a good personality [1]. Character education itself has three main functions: (1) the function of forming and developing potential, namely creating and developing students' potential to behave in accordance with the Pancasila philosophy; (2) the function of improvement and strengthening, namely improving and strengthening the roles of the family, the educational unit, the community, and the government in participating in and taking responsibility for developing the potential of citizens and nation-building; and (3) the filter function, namely sorting out the nation's own culture and filtering out elements of other countries' cultures that are inconsistent with national cultural values and noble character [3].
Efforts that teachers can make to continue fostering independent character in students during online learning include: (1) providing learning instructions that are clear and easy for students to understand; (2) providing a summary of the material that accords with the core and essential competencies covered by the questions students work on; and (3) varying the assignments given, which do not always have to be written exercises. The teacher can assign reading of the learning material as an effort to implement literacy activities during online learning, so that the literacy culture built at school is maintained even while students learn at home, and the teacher can also give assignments in the form of physical activities, such as cleaning the house, praying dhuha (a morning sunnah prayer for Muslims), and washing clothes and dishes by themselves, activities that can foster an independent character in students. Activities that train independence, if carried out continuously and repeated every day, will become habits that last into the students' adulthood.
Online learning is a new learning system for students, parents, and teachers, so there are various obstacles in its implementation. These obstacles include the ineffectiveness of the teaching and learning system: students find it more difficult to understand the material; continuous use of devices can cause social media addiction and dependency and endanger eye health; and teachers cannot monitor students' learning progress optimally [24]. Other obstacles have also been described: (1) teachers and students are not yet accustomed to applying technology in learning; (2) not all teachers have the same literacy skills, with some adapting relatively easily while others are less able to; (3) not all teachers and students have the minimum devices needed; and (4) connection quality and the availability of data packages are still limited, requiring a large amount of money [25].
Although the effectiveness of online learning is not as good as face-to-face learning, if online learning is well planned and implemented by all learning implementers, the results will also be optimal. Apart from the various obstacles experienced by teachers, parents, and students, as a new form of learning, online learning can be a means to foster character education in students, especially independent character.
CONCLUSION
As a result of the Covid-19 pandemic in Indonesia, teaching and learning activities have been carried out online. Based on the studies reviewed, online learning can foster and strengthen independent character in students, especially elementary school students. Good cooperation between teachers and parents is needed to foster this independent character, as are teaching materials appropriate to the current situation. Textbooks can be an alternative teaching material for online learning alongside technology-based media. Textbooks can accommodate each student's learning pace; with textbooks, students can easily learn independently. Students' independent behavior leads to more critical thinking, the ability to analyze and solve problems in daily life, activeness, persistence, responsibility, self-confidence, and the ability to use free time for useful activities. Parents need to give students the opportunity to do their assignments individually in order to strengthen their independent character.
"year": 2020,
"sha1": "9191d7eda18138226ea6b2f698d961e4e2df917b",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.2991/assehr.k.201214.245",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0b276949868b8a7b3bde4412ff6b553309729ccf",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
249025051 | pes2o/s2orc | v3-fos-license | SERVICE QUALITY, PRICE, CUSTOMER SATISFACTION AND WORD OF MOUTH IN HOSPITAL X OUTPATIENT SERVICES
The aim of this study is to determine the effect of service quality and price on customer satisfaction, and of customer satisfaction on word of mouth. Customer satisfaction is an important factor that the hospital needs to pay attention to, because satisfied customers will tell others about their experience of being treated at the hospital and invite their families and friends to use the hospital's services, a mechanism commonly known as word of mouth. Complaints from outpatients at Hospital X regarding the length of waiting times and the prices charged are a sign of a lack of customer satisfaction. This is an associative quantitative study, carried out as a cross-sectional survey of 155 respondents. Data analysis was performed using Structural Equation Modeling (SEM). The results indicate a correlation between service quality and customer satisfaction, a correlation between price and customer satisfaction, and a correlation between customer satisfaction and word of mouth. These findings indicate that by improving the quality of hospital services and providing appropriate prices, customer satisfaction will increase, which in turn contributes to increased word of mouth.
INTRODUCTION
In the era of globalization, geographical distance feels short and information spreads quickly. In addition to creating opportunities to develop business, this condition also intensifies competition, including for hospital businesses. When designing marketing strategies, business people look for the most effective ways to influence customers' decisions to consume the products or services offered. Customer satisfaction is the main goal, because satisfied customers will discuss and recommend the products or services they use to others, a strategy often referred to as word of mouth (Nugrahani, 2008). Several previous studies state that there is a relationship between satisfaction and word of mouth (Khoironi, Syah and Dongoran, 2018; Tsai and Chang, 2010; Kabir, 2016). Amin and Zahora (2013) say that word of mouth is a situation where customers spread the positive experiences they have had to friends, family, and other people they know.
Positive word of mouth arises from the satisfaction that individuals feel with the quality of the services they use, one of which is the quality of hospital services. Customer satisfaction occurs when the perceived performance of products and services successfully meets customer expectations.
If the perceived performance matches the customer's expectations of the service, they are satisfied; if not, they are dissatisfied. Jung and Yoon (2013) argue that a company is very wise to measure customer satisfaction on a regular basis, because customer satisfaction is one of the keys to customer retention. One factor predicted to play an important role in customer satisfaction is service quality. According to Parasuraman et al. (1985), effective service quality must have five dimensions: tangibles, reliability, responsiveness, assurance, and empathy. Hospitals are businesses that prioritize services as their main product, with unique characteristics directly related to human life. Most hospital customers do not actually want to use hospital services; for that reason, their demands on service quality are more critical than for other service provider organizations (Kotler, 2012). The results of Ismail and Yunan's (2016) research show that the dimensions of service quality, namely tangibles, reliability, responsiveness, assurance, and empathy, have a significant relationship with customer satisfaction and customer loyalty. Al-Borie and Damanhouri (2013) found a significant difference in that the service quality of private hospitals has a higher influence on patient satisfaction than that of public hospitals, and that dimensions such as comfort and an easy-to-reach location, followed by the friendliness of medical and other staff when handling patients, had the strongest effect on patient satisfaction in private hospitals. Several previous studies have likewise shown a significant relationship between service quality and satisfaction (Amin and Nasharuddin, 2013; Shpëtim, 2012; Sohail, 2003; Ayuni and Mulyana, 2019; Susan and Ratnawati, 2017). In contrast, one study by the Ministry of Education & Bags (2018) shows that tangible dimensions such as the clothes and appearance of service personnel are the least significant factors. In addition, the results of Jana's (2014) study show a weak relationship between responsiveness and satisfaction, unlike the dimensions of tangibles, reliability, assurance, and empathy, which had a strong relationship with customer satisfaction at a casual dining restaurant in Ranchi.
In addition to service quality, price is also thought to influence customer satisfaction. Price fairness as perceived by customers plays an important role in customer satisfaction and subsequent behavior. If the price is considered fair, it will produce positive customer behavior such as satisfaction, revisit intention, and loyalty. Conversely, an unfair price will cause negative behavior such as dissatisfaction and complaints (Liu and Jang, 2009). Cakici et al. (2019) conclude from their research that price fairness affects satisfaction and is positively related to customers' intention to visit again. Zhan and Lloyd (2013) explain that when customers pay higher prices, they show a stronger intention to switch stores, and that this effect on dissatisfaction triggers negative word of mouth as the price difference grows. Santos and Basso (2012) find that existing customers who compare their prices with the lower prices offered to prospective customers perceive price inequality, which leads to distrust (a cognitive driver) and the negative emotion of dissatisfaction (an emotional driver). Wu (2014), however, reports a different result, namely a weak relationship between price and satisfaction.
Hospital X is a hospital located in the DKI Jakarta area. It provides outpatient, inpatient, emergency room, pharmacy, radiology, and other services. Quite a lot of patients visit this hospital for outpatient care, more than 200 patients per day. Unfortunately, despite the crowds of visitors, Hospital X is not free from patient complaints. The complaints most frequently raised by patients concern the length of waiting time for registration and the prices charged. The waiting time from registration, for both returning and new patients, until the patient receives treatment can reach 30 minutes to 1 hour.
In addition, patients complained that the hospital's prices were not in accordance with the services they received. This needs to be a concern for the hospital, because it indicates that not all patients feel happy and satisfied with the services provided. The problems above are very important because, in the face of increasingly fierce competition, hospitals need to increase patient satisfaction as much as possible so that loyalty is high, in the hope that patients are willing to return to use the same services, generating strong positive word of mouth.
However, there are gaps in the literature: the results of previous studies are inconsistent regarding the relationships between these variables, and previous empirical work was not aimed at outpatients. The respondents of this study were specifically outpatients at Hospital X, located in the DKI Jakarta area. In addition, the researchers wanted to examine the four variables together, namely service quality, price, satisfaction, and word of mouth, a combination rarely found in previous studies. Specifically, the purpose of this study was to analyze the relationships between service quality and patient satisfaction, between price and patient satisfaction, and between patient satisfaction and word of mouth at Hospital X.
LITERATURE REVIEW
Service quality, according to Parasuraman et al. (1988), is a reflection of customers' evaluative perceptions of the service received at a certain time. The dimensions it contains are tangibles, reliability, responsiveness, assurance, and empathy.
Price, according to Kotler and Armstrong (2001), is the amount of money exchanged for a product or service; more broadly, it is the sum of all the values consumers exchange for the benefits of owning or using a good or service. Price refers to the amount paid for a product or service.
Customer satisfaction, according to Hawkins and Lonney (2003) as quoted in Tjiptono (2014), is the conformity of expectations with what is felt about the performance of a product, which encourages customers to visit and repurchase and to recommend the product to friends or family. The dimensions it contains are conformity of expectations, intention to revisit, and willingness to recommend. Word of mouth, according to a definition from 1987, is a form of informal communication carried out by a customer and addressed to other customers about the characteristics of and experiences gained when using a product or service.
Figure 1. Research Model
Based on the research model, the following hypotheses are arranged: H1: There is a correlation between service quality and patient satisfaction, where the better the quality of service, the higher the level of patient satisfaction.
H2: There is a correlation between price and customer satisfaction, where the fairer the price given, the higher the level of patient satisfaction.
H3: There is a correlation between patient satisfaction and word of mouth, where the higher the level of patient satisfaction, the higher the word of mouth.
METHODS
This is an associative quantitative study. It was conducted without giving any intervention (treatment) to the research variables, with the main objective of analyzing a situation objectively. The study was conducted as a cross-sectional survey.
The data collection technique used in this study was a survey, conducted by distributing questionnaires to respondents at the research site. Data were obtained through questionnaires distributed to respondents, which contained statements to which respondents gave their answers. A pre-test was conducted to check validity using the Kaiser-Meyer-Olkin (KMO) measure, which tests whether there is sufficient correlation between the variables. An acceptable KMO value is above 0.500. The validity of each questionnaire item was assessed with the Anti-Image Matrix test, for which the expected MSA (Measure of Sampling Adequacy) value is a minimum of 0.500 (Malhotra et al., 2012). This was followed by a reliability test to determine the extent to which the measures can be trusted and how consistent the research instrument is.
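As a hedged sketch of these pre-test statistics (not the authors' actual computation, which relied on standard statistical software), the overall KMO can be computed from the correlation matrix and the partial correlations derived from its inverse, and reliability can be checked with Cronbach's alpha. The Likert responses below are simulated, not the study's data:

```python
import numpy as np

def kmo(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy."""
    r = np.corrcoef(data, rowvar=False)
    inv_r = np.linalg.inv(r)
    d = np.sqrt(np.outer(np.diag(inv_r), np.diag(inv_r)))
    partial = -inv_r / d                                  # partial correlations
    np.fill_diagonal(r, 0)                                # keep off-diagonal terms
    np.fill_diagonal(partial, 0)
    return (r**2).sum() / ((r**2).sum() + (partial**2).sum())

def cronbach_alpha(data):
    """Internal-consistency reliability of a set of questionnaire items."""
    k = data.shape[1]
    item_var = data.var(axis=0, ddof=1).sum()             # sum of item variances
    total_var = data.sum(axis=1).var(ddof=1)              # variance of total score
    return k / (k - 1) * (1 - item_var / total_var)

# Simulated Likert responses: 155 respondents x 5 service-quality items
rng = np.random.default_rng(7)
latent = rng.normal(size=(155, 1))                        # shared underlying factor
items = np.clip(np.round(3 + latent + rng.normal(0, 0.8, (155, 5))), 1, 5)

print(f"KMO = {kmo(items):.3f} (acceptable above 0.500)")
print(f"Cronbach's alpha = {cronbach_alpha(items):.3f}")
```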
Hypothesis testing results:
H1: There is a correlation between service quality and patient satisfaction (t-value 8.10; accepted).
H2: There is a correlation between price and customer satisfaction, where the fairer the price given, the higher the level of patient satisfaction (accepted).
H3: There is a correlation between patient satisfaction and word of mouth, where the higher the level of patient satisfaction, the higher the word of mouth (accepted).
Figure 2. Path Diagram T-Value
DISCUSSION
The test results for the effect of service quality on patient satisfaction indicate a positive influence (H1 accepted): the better the quality of service at Hospital X, the higher patient satisfaction. Outpatient assessments of the services provided are, as a whole, in the medium category, with ratings varying across indicators. In the service quality questionnaire, the highest score is found in the statement that Hospital X has a strategic location. This shows that one of Hospital X's strengths is a location that is easily accessible to patients: besides being on a main street in the city center, the means of transportation for reaching Hospital X are quite varied and complete, so patients can reach it easily. This advantage adds value to Hospital X as a hospital of choice.
Conversely, the lowest score is found in the statement that the length of waiting time does not meet the standard, which can be detrimental to patients. This may occur due to the accumulation of patients who register at the same times of day, such as in the morning and after lunch, and a shortage of staff at registration and initial examination. The other questionnaire items score from the moderate to the high category. The overall score is quite satisfactory and directly proportional to the relatively high value of customer satisfaction. It can be seen that the quality of outpatient services at Hospital X is quite good and able to satisfy its patients. Customer satisfaction is the result of an evaluation of whether a product or service meets consumer expectations. These results are in line with previous studies: Ismail and Yunan (2016) found that the service quality dimensions of tangibles, reliability, responsiveness, assurance, and empathy correlate significantly with customer satisfaction; Shpëtim (2012) also states that service quality positively influences satisfaction; Jana (2014) found a positive relationship between each service quality factor and customer satisfaction, with a strong positive correlation between customer satisfaction and customer loyalty; and Ogletree (2014) also found a positive relationship between service quality and customer loyalty.
The test results for the price variable and patient satisfaction show a correlation between the two variables (H2 accepted): because the prices charged by Hospital X are considered reasonable and not inferior to competitors', customer satisfaction is also high. The average assessment of outpatient prices at Hospital X is in the medium category. In the price questionnaire, the highest score is for the statement that the prices of the services offered are in accordance with their quality, which falls in the high category. This shows that patients feel that outpatient services at Hospital X largely meet their expectations; with a strategic location, new equipment, and capable employees and doctors, patients feel the price they pay is comparable to the services they receive. The statement with the lowest score concerns whether the benefits of the services provided are in accordance with the price offered, which falls in the medium category. This can be addressed by continuing to improve the outpatient services the hospital provides, so that patients feel the price they pay matches the service they receive. The test of the second hypothesis (H2) therefore supports the proposed correlation between price and customer satisfaction. Price affects customer satisfaction: if the price is reasonable and comparable to the product or service received, the customer will feel satisfied. Conversely, if customers feel the costs incurred are not comparable to the services they receive, customer satisfaction will decrease. Likewise, if the price charged by the company is far above its competitors' without any special advantage in the services provided, the customer will feel dissatisfied (Goles et al., 2009).
These results are consistent with previous studies. For example, Cakici et al. (2019) found that price fairness increases customer satisfaction and affects customers' revisit intention. Santos and Basso (2012) state that customers who feel they are receiving unfair prices lose trust in the company, which leads to the intention to switch to other companies' products and to spread negative word of mouth.
Based on the data analysis, customer satisfaction affects the level of word of mouth (H3 accepted). At Hospital X, the average patient was satisfied with the services provided, so their desire to share their experiences and invite their acquaintances to seek treatment at Hospital X was quite high. In the customer satisfaction questionnaire, the highest score was for the statement "I am satisfied with the rapid response of nurses at Hospital X", which is in the high category. The lowest score was for the statement "I am satisfied that the service at Hospital X is in accordance with the costs I have incurred." That statement is nonetheless still in the high customer satisfaction category, so Hospital X's customer satisfaction can be judged good. To further improve patient satisfaction, Hospital X can review whether the prices applied are in accordance with the services provided and, where possible, reduce prices for certain services. The word of mouth questionnaire showed that the average level of patients' desire to recommend Hospital X is in the medium category.
The highest score is for the statement "I would recommend Hospital X to friends or family who will seek treatment", which is in the medium category, while the lowest score is for the statement "I am happy if there are friends or family who also subscribe to Hospital X", also in the medium category. These statements show that outpatients at Hospital X are quite interested in recommending the services they have received to their family and friends. This can happen because these patients are satisfied, so they are happy to invite their acquaintances to seek treatment at the hospital. This is of course very good, because it can be an effective way to market the hospital at no additional cost. To continue to improve word of mouth, the hospital needs to keep improving itself and raising customer satisfaction by addressing the aspects that are still considered lacking, as mentioned above. The test of the third hypothesis (H3) thus supports the proposition that satisfied customers increase word of mouth. This can be seen in Hospital X outpatients, who are generally satisfied with the services provided and therefore interested in recommending Hospital X to their family or friends. This result accords with previous studies, such as Ogletree (2014), who found that customers who are satisfied with good service become loyal customers willing to give recommendations and intending to return. Kessler and Mylod (2011) show that patient satisfaction significantly influences which hospital patients will prefer throughout their lives: a patient who is satisfied with the services received during treatment at a hospital will tend to go back to that hospital and recommend it to acquaintances. Söderlund and Gabrielson (2011) suggest that a customer's expression when obtaining a service can indicate whether he will recommend it: if the customer's expression shows excitement and satisfaction, the customer is most likely to recommend the service received.
Conversely, if the customer shows a disappointed or angry expression, they will not give recommendations to those around them, and may even spread negative word of mouth. This has a damaging effect that every company wants to avoid. According to Bearden and Teal (1983), gaining and maintaining customer satisfaction is an important determinant of positive word of mouth.
CONCLUSION
Based on the research conducted at Hospital X in Jakarta, there is a pattern showing a correlation between service quality, price, customer satisfaction, and word of mouth. At Hospital X, the quality of outpatient services is good, and prices perceived as commensurate with the services obtained, and no more expensive than competitors', produce high customer satisfaction. These satisfied customers voluntarily describe their experiences of treatment at Hospital X and invite their family and friends to be treated there. The level of customer satisfaction is directly proportional to the level of word of mouth, which is a key element of marketing strategy. Loyalty, including word of mouth, is greatly influenced by customer satisfaction: satisfied customers' lifelong hospital preferences are significantly shaped, and they tend to return to the hospital and recommend it to their acquaintances. Customer satisfaction at Hospital X is strongly influenced by the quality of services and the prices set by the hospital; although satisfaction is in the high category overall, several points still need improvement, such as the waiting time for treatment. With continuous improvement, patient satisfaction will keep growing, resulting in higher levels of word of mouth.
Finally, this study showed that service quality, price, and customer satisfaction influence word of mouth: good service quality and appropriate prices increase patient satisfaction, which in turn raises the level of word of mouth. Service quality directly influences customer satisfaction; when the quality of service provided is good, customer satisfaction increases. At Hospital X, the quality of outpatient services is quite good, although several aspects need improvement, and patient satisfaction is correspondingly high. Price also affects customer satisfaction: Hospital X outpatients judge the prices of the hospital's services to be affordable, commensurate with the services received, and no higher than competitors', and their satisfaction level was high. Moreover, customer satisfaction directly affects word of mouth; the two rise in direct proportion, so the higher the level of customer satisfaction, the higher the word of mouth. At Hospital X the level of patient satisfaction is high, as is patients' desire to recommend the hospital where they are treated to those around them. | 2022-05-25T15:25:13.326Z | 2021-05-05T00:00:00.000 | {
"year": 2021,
"sha1": "5e4d292f45bb8acf5868db8928af4fb16090d523",
"oa_license": "CCBY",
"oa_url": "https://journals.umkt.ac.id/index.php/JEM/article/download/2101/865",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "36b014c3d03216e2e4f3ab96c615fa165e78e646",
"s2fieldsofstudy": [
"Business",
"Medicine"
],
"extfieldsofstudy": []
} |
212626797 | pes2o/s2orc | v3-fos-license | Agricultural expansion in Uruguayan grasslands and priority areas for vertebrate and woody plant conservation
Habitat loss due to land-use change is the greatest threat to biodiversity on a global scale, and agriculture has been the principal driver of change. In Uruguay, the conversion of native grasslands to croplands (e.g., soybean) and exotic forest plantations (Eucalyptus and Pinus) has accelerated during the last two decades. We studied the vulnerability of vertebrate and woody plant diversity to the loss of grassland areas, driven by agricultural and forestry expansion, to identify priority areas for conservation. We assessed the spatial variability of biodiversity vulnerability as a function of species richness and number of focal species (i.e., prioritized species) of woody plants and terrestrial vertebrates that use grassland ecosystems as habitat. The top 17% of vulnerable sites (51 of 302 cells) were selected as priority conservation areas for Uruguay, following Aichi Target number 11. Approximately 36% of the original continental territory of Uruguay, mainly grasslands, was converted to cropland (28%) and exotic forest plantations (8%) in 2015. Approximately 27% of the priority cells for conservation of vertebrates and woody plant diversity have been transformed, especially in three ecoregions in which habitat loss was between 35% and 45%. We simulated a land-use scenario for 2030, based on national production goals of soybean and exotic forest plantations, projecting that: (1) the overall loss of original habitat (mainly grasslands) would reach 48% of the country’s land area, and (2) 45% of the priority cells would be converted to agricultural lands, especially in four ecoregions, with habitat losses greater than 50%. Our results suggest an urgent need to develop strategies to reduce the rate of natural grassland loss in Uruguay, as well as to conserve biodiversity and ecosystem services associated with these systems. Conservation efforts should focus on prioritized cells, especially those with no protection status and a high likelihood of agricultural conversion in 2030, through expanding public and private protected areas and promoting wildlife-friendly agricultural alternatives, such as beef production in natural grasslands.
INTRODUCTION
Humans have been transforming and replacing ecosystems across most of the terrestrial biosphere throughout history (Ellis et al. 2010). About half of the terrestrial ice-free area has been modified by human activities, through replacing and modifying natural habitats by agricultural and urban systems (Chapin et al. 1997, Kareiva et al. 2007, Ellis et al. 2010). At the global scale, agriculture has been the principal driver of land-use change. The expected increase in global consumption suggests a strong increase of global food demand until 2050 (Green et al. 2005, Bodirsky et al. 2015), thus increasing the pressure to further expand productive areas (e.g., Popp et al. 2017). Land-use and land-cover change (we use "land-use change" for simplicity) are key drivers of the present loss of biodiversity and associated ecosystem services in terrestrial ecosystems (Vitousek 1994, MEA 2005, Haines-Young 2009), which is expected to continue in the future under certain socioeconomic scenarios (Sala et al. 2000, Newbold 2018).
The historical process of land transformation has been highly heterogeneous across the surface of the earth, with some biomes and regions almost entirely transformed and others almost uninfluenced by direct human activity (Ellis et al. 2010). Nowadays, the loss of natural forests (e.g., tropical rainforest) is a focus of attention for scientists and the public, but the highest levels of anthropogenic transformation have occurred in open biomes. The greatest historical land-use changes have occurred in grasslands, savannas, and shrublands, with all of these experiencing more than 80% of land-use conversion from 1700 to 2000 (Ellis et al. 2010). Most of this land-use change was the result of converting both wildlands and seminatural lands to rangelands and croplands. In the case of temperate grasslands, about 41% worldwide have been converted to agricultural use, another 6% to urbanization, and an additional 7.5% to commercial forestry and other disturbances (White et al. 2000). Today temperate grasslands are considered the most altered terrestrial ecosystem on the planet and are recognized as the most endangered ecosystem on most continents because they have the lowest level of protection (about 4%) among the world's 14 biomes (Henwood 2010). The consequences of land-use changes on biodiversity have been relatively less studied in temperate grasslands, particularly in South America (Henwood 1998, IUCN 2009).
The Río de la Plata Grasslands is one of the largest complexes of grasslands in South America, covering more than 750,000 km² in the vast plains of central-east Argentina, southern Brazil, and Uruguay (Soriano et al. 1992, Paruelo et al. 2007). It comprises two ecoregions, the Pampas (Argentina) and the Uruguayan Savannas or Campos (Uruguay, Brazil, and Argentina; Soriano et al. 1992, Dinerstein et al. 1995). During the conservation assessment of the terrestrial ecoregions of Latin America and the Caribbean (Dinerstein et al. 1995), excessive grazing by livestock and the conversion of natural habitats to agriculture were identified as the primary threats to biodiversity. During the last two decades, the rate of grassland conversion to croplands and exotic forest plantations has been alarming in this region, mainly driven by the high price of commodities (e.g., soybean) in the international market (Jobbágy et al. 2006, Baldi and Paruelo 2008, Modernel et al. 2016).
The Río de la Plata Grasslands includes a diversity of ecosystems. In addition to several grassland types, there are other marginal but well-distributed ecosystems, such as native forests, woodlands, savannas, shrublands, wetlands, and several aquatic systems (Soriano et al. 1992). This variety of habitats sustains substantial levels of species diversity, with thousands (2000-4000) of vascular plants including more than 500 graminoid species, approximately 100 species of mammals, and over 500 bird species (Bilenca and Miñaro 2004, Overbeck et al. 2007). Recent evidence has suggested that landscape modification in the Río de la Plata Grasslands due to land-use change could have significant impacts on plant and animal diversity as well as on the provision of ecosystem services (Overbek et al. 2007, Medan et al. 2011, Aspiroz et al. 2012, da Silva et al. 2015, Modernel et al. 2016, Paruelo et al. 2016). The evidence reported in these studies, mainly on birds and mammals of the Argentinean Pampas, shows that agricultural expansion has reduced the geographic ranges and/or abundance, sometimes leading to regional extinction, of many mammal and bird species, including grassland specialists and large herbivores and carnivores. Other species were unaffected (birds) or even benefited (birds, rodents).
In Uruguay, land-use change has been relatively moderate within the context of the Río de la Plata Grasslands, but the conversion of wildlands and rangelands to croplands (mainly soybean) and exotic forest plantations (mainly Eucalyptus) has accelerated over the last two decades (Dinerstein et al. 1995, Baldi and Paruelo 2008, Henwood 2010). The agricultural sector is a crucial component of the Uruguayan economy, highly specialized in commodities and services based on natural resources, which comprise 70% of total exports (Sandonato and Willebald 2018). The economic strategy of Uruguay is heavily based on the growth of this productive sector and, therefore, specific national goals of growth have been delineated for the future (2030) by the Office of Planning and Budget (OPP) of the Presidency of the Republic of Uruguay (OPP 2009). The fulfilment of these goals implies a significant expansion in crop cover and exotic forest plantations in the next decade. This represents an important challenge for the sustainable development of Uruguay. Conservation planning, focusing on species vulnerable to agricultural expansion, is key to developing efficient conservation measures that protect biodiversity in such intensively managed agricultural landscapes. Some spatial prioritization studies for biodiversity conservation have been carried out in the region (e.g., Bilenca and Miñaro 2004, Brazeiro et al. 2008, 2015a, Soutullo and Bartesaghi 2009, Nori et al. 2016), but to our knowledge, research has not focused on vulnerability to agricultural expansion, even though it is recognized as the main threat to biodiversity in our region.
We analyzed the vulnerability of the diversity of vertebrates and woody plants of Uruguay to the loss of grassland areas, driven by agricultural and forestry expansion, to identify priority areas for conservation. Three main questions are addressed: (1) where are the most vulnerable areas for vertebrate and woody plant conservation located? (2) to what extent have the areas of highest vulnerability been converted to croplands and afforestation, or are expected to be impacted in the near future by agricultural expansion? and (3) where to prioritize efforts to conserve vertebrates and woody plants in the face of future agricultural expansion?
Grasslands, including prairies and open woodlands, occupied more than 80% of the territory in the Pre-Hispanic period, representing the matrix ecosystem in the landscape, in combination with dispersed patches of native forests, woodlands, and wetlands (CLAES 2008). The main land uses are livestock, cropping, and exotic forest plantations, covering about 90% of the territory (MGAP 2016). Cattle breeding for meat and milk production on natural/seminatural grasslands is the dominant productive activity, and soybean, wheat, rice, barley, sunflower, and maize are the main annual crops (MGAP 2016). In the forestry sector, Eucalyptus (E. globulus and E. grandis) and Pinus (P. elliottii, P. taeda, P. pinaster) species are the most extensively planted in afforestation systems (Petraglia and Dell'Acqua 2006).
Seven natural ecoregions can be distinguished in Uruguay according to geomorphology, soils, physiography, and biota (vertebrates and woody plants; Brazeiro et al. 2015a), which are used as inputs for conservation planning in the National Strategy of Biological Diversity (MVOTMA 2016) and Protected Areas Plan (SNAP 2015).
Assessing biodiversity vulnerability to agricultural transformation
According to a risk assessment framework (Villa and McLeod 2002), we operationally defined vulnerability as the susceptibility of ecosystems to suffer degradation in their conservation value, due to loss of natural habitat by the implantation of crops or exotic productive forests. Thus, the quantity of valuable and susceptible elements of a given area defines its vulnerability level.
In Uruguay, agricultural expansion affects almost exclusively grasslands and other open ecosystems (shrublands, wetlands), because natural forests, including palm and park savannas, are legally protected (Nº 15.939/1988). Illegal logging of natural forests is very marginal, and there is evidence that forest area has increased during the last 50 years (MGAP 2018, National Forest Strategy). Therefore, grassland and open habitat species (from here, we refer to them as grassland species, for simplicity), are clearly more susceptible than forest species to agricultural transformation. Among grasslands species, those endangered, geographically restricted, endemic, or functionally relevant should be of special conservation concern. Species fulfilling such prioritization criteria were named as focal grassland species in this study. So, we used two kinds of biodiversity indicators commonly included in prioritization studies (e.g., Wilson et al. 2009, Reece and Noss 2014, IUCN 2016) to develop a site vulnerability index (VI): the richness of vulnerable species and focal species. We calculated VI as a function of the richness of grassland species (GS) and the richness of focal grassland species (FGS). The vulnerability index, varying between 0 and 100, was calculated as a weighted sum of the two indicators (weights; GS: 40, FGS: 60), previously standardized to vary between 0 and 1, using the following equation: VI = (GS x 40) + (FGS x 60).
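The index can be reproduced in a few lines. The following Python sketch (the column names and toy data are our own illustration, not the study's dataset) applies the min-max standardization and the 40/60 weighting described above.

```python
import pandas as pd

def vulnerability_index(cells: pd.DataFrame) -> pd.Series:
    """VI = 40*GS' + 60*FGS', where GS' and FGS' are the per-cell richness of
    grassland species and focal grassland species, each min-max standardized
    to [0, 1] across all cells, so VI ranges from 0 to 100."""
    gs = cells["grassland_richness"]
    fgs = cells["focal_richness"]
    gs_std = (gs - gs.min()) / (gs.max() - gs.min())
    fgs_std = (fgs - fgs.min()) / (fgs.max() - fgs.min())
    return gs_std * 40 + fgs_std * 60

# Toy example with three hypothetical cells
cells = pd.DataFrame({"grassland_richness": [120, 300, 210],
                      "focal_richness": [5, 40, 22]})
cells["VI"] = vulnerability_index(cells)
```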
We used the spatial database of records and potential occurrences of 853 species of woody plants and terrestrial vertebrates (i.e., amphibians, reptiles, birds, and mammals) reported by Brazeiro et al. (2015b) to calculate GS and FGS. The records and potential occurrences (obtained from models and expert opinions) of species are given over a grid of 302 cells of 33 x 20 km, covering the entire Uruguayan territory. Previous versions of this database have been used in other publications (e.g., Canavero et al. 2010, Haretche et al. 2012, Pérez-Quesada and Brazeiro 2013, Brazeiro et al. 2015a) and to design the management plan of the National System of Protected Areas of Uruguay (SNAP 2015). From this species assemblage, we selected all species that use grasslands and/or shrublands as exclusive (i.e., habitat specialists) or secondary habitats (i.e., habitat generalists), according to recent local bibliography on woody plants. To obtain FGS, we defined as focal grassland species those grassland woody plants and terrestrial vertebrates included in the national list of priority species for conservation (Soutullo et al. 2013). This priority species list was defined using classic conservation criteria (i.e., endangered, geographically restricted, endemic, functionally relevant, and valuable species) and today is largely utilized in environmental planning and management in Uruguay.
Selecting priority vulnerable areas to agricultural transformation
The selection of the priority-vulnerable areas was performed using a threshold of the 17% highest-vulnerability cells, following the Aichi Target number 11, which aims to ensure that by 2020 at least 17% of ecosystems are protected, especially those of greater importance for biodiversity and ecosystem services. Thus, we decided to highlight 17% of the cells of the country, i.e., 51 cells (of 302), as the priority-vulnerable areas. The 51 cells were proportionally distributed among the 7 ecoregions of Uruguay according to their area, to incorporate the criteria of representativeness and complementarity in the prioritization approach (following Margules and Pressey 2000). The cells with the highest vulnerability of each ecoregion were selected, until the allocated number of cells per ecoregion was reached.
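A minimal sketch of this two-step selection (area-proportional quotas, then the top-VI cells within each ecoregion) could look as follows; the DataFrame columns and the simple rounding rule are assumptions, since the paper does not state how fractional quotas were resolved.

```python
import pandas as pd

def select_priority_cells(cells: pd.DataFrame, ecoregion_area_km2: dict,
                          n_total: int = 51) -> pd.DataFrame:
    """Allocate n_total priority cells among ecoregions in proportion to
    ecoregion area, then pick the highest-VI cells within each ecoregion.
    Plain rounding is used here; any residual cells from rounding would
    need an explicit tie-break rule."""
    total_area = sum(ecoregion_area_km2.values())
    quotas = {eco: round(n_total * a / total_area)
              for eco, a in ecoregion_area_km2.items()}
    picked = [cells[cells["ecoregion"] == eco].nlargest(q, "VI")
              for eco, q in quotas.items()]
    return pd.concat(picked)
```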
We used the official land-cover shapefile of 2015, available at the website of the Ministry of Environment of Uruguay (MVOTMA), to describe the current pattern of land-use change. This shapefile contains the land-cover classification performed by analyzing a set of LANDSAT 5TM scenes, with a spatial resolution of 30 m. Land cover was classified using the FAO system (LCCS), with a total of 48 classes integrated in 8 major classes: (1) cultivated and managed terrestrial areas; (2) artificial surfaces and associated areas; (3) artificial waterbodies, snow, and ice; (4) cultivated aquatic or regularly flooded areas; (5) natural and seminatural vegetation; (6) natural and seminatural aquatic or regularly flooded vegetation; (7) bare areas; and (8) natural waterbodies, snow, and ice. The resulting classification was checked in the field and good levels of accuracy were reported, for example, 94.3% in cultivated and managed terrestrial areas and 94.6% in natural and seminatural vegetation, during the land-cover classification of 2008 (MVOTMA 2012).
The first four major classes were integrated into a superclass, "highly transformed areas," to estimate natural habitat loss since the Pre-Hispanic period. The main class, cultivated and managed terrestrial areas, was disaggregated into two subclasses, i.e., croplands and exotic forest plantations, the most important landuse drivers in Uruguay. To integrate land-use and biodiversity vulnerability data, we crossed the shapefiles of croplands, exotic forest plantations, and highly transformed areas with the shapefile containing the grid (302 cells of 33 x 20 km). Finally, we summed and mapped the areas of croplands, exotic forest plantations, and highly transformed areas by cell. Geoprocessing was performed in QGIS 2.18.
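The same cell-level tallies can be produced outside QGIS. A hedged geopandas equivalent (file and column names are assumed) intersects one land-use layer with the grid and sums the intersected area per cell:

```python
import geopandas as gpd

# Assumed file names; the study performed this step in QGIS 2.18.
grid = gpd.read_file("grid_302_cells.shp")    # one polygon per 33 x 20 km cell
crops = gpd.read_file("croplands_2015.shp")   # cropland polygons, 2015 map

pieces = gpd.overlay(crops, grid[["cell_id", "geometry"]], how="intersection")
pieces["area_ha"] = pieces.geometry.area / 10_000  # assumes a projected, metric CRS
cropland_by_cell = pieces.groupby("cell_id")["area_ha"].sum()
```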
Projecting future land use: scenario 2030
Regional land-use models often adopt a two-phase approach, beginning with an assessment of aggregate quantities of land use for the entire region, and following with a downscaling procedure to create fine-resolution land-use patterns (de Chazal and Rounsevell 2009). The general two-phase approach used in our study is illustrated in the flowchart presented in Figure 1. In our case, the total quantity of land use projected for 2030 (phase one) was derived from the national goals of economic growth for 2030, proposed by the Office of Planning and Budget (OPP) of the Presidency of the Republic of Uruguay, as a target scenario (OPP 2009). These national goals (GDP 2030: US$68,707 x 10^6; growth rate 2008-2030: 5.0%; exports 2030: US$22,028 x 10^6) are based on the goals of production for each economic sector (OPP 2009).
In Uruguay, the main economic sector driving land-use change is agriculture (cropping and exotic forest plantation), which is responsible for 93% of the transformed land cover of Uruguay; the other 7% are urban areas, infrastructures, and artificial water bodies (MVOTMA 2012). To achieve the national production goals for 2030 proposed by OPP (2009), it would be necessary to increase by approximately 1,000,000 ha the area of both croplands and exotic forest plantation. Among crops, soybean has been the main driver of agricultural expansion over the last two decades, whereas the planted area of other crops has remained relatively constant (MGAP 2016). Thus, we focused on soybean expansion to develop the 2030 scenario and assumed that the area of other crops will remain constant until 2030. These production targets are in tune with the growing trend of the international prices of soybean and wood pulp observed from 2000 to date, despite the high variability among years. The downscaling procedure (phase two) to create a fine-resolution spatial pattern of exotic forest plantations for 2030 was based on the following assumptions about forestry expansion: (1) preference for the legally defined priority areas (Decrees 452/988 and 220/06, Forestry Direction/MGAP), according to the observed trend during the last 20 years; (2) within priority areas, the preference is to consolidate the four established forestry regions (northeast, west, centre, and southeast) because of logistic advantages; (3) development of a new forestry region of 100,000 ha in suitable soils (5.02b category, sensu CONEAT 1979) surrounding (< 200 km) the new (2014) cellulose pulp plant of Montes del Plata (MDP) in the locality of Conchillas (Colonia) because of the higher profitability associated with lower transport costs. Montes del Plata has already made efforts in such directions. Using these assumptions, we assessed the conversion likelihood of all natural and seminatural vegetation patches (polygons) detected in the land-cover map of 2015. The assessment included two sequential questions (Fig. 1): (1) is the patch located in forestry priority soil? and (2) is it included within a consolidated forestry region (i.e., < 100 km from the regional centre), with closer patches planted first? If both answers were "yes," we assumed the conversion likelihood of this patch is one, and thus the patch was converted to forest plantation in the 2030 scenario. This logic continued with the assessment of other patches, until the cumulative converted area reached the national expansion goal (phase one), and the process was stopped.
In the case of soybean expansion over natural/seminatural vegetation patches, a fine-scale spatial projection was based on the following assumptions: (1) preference for soils of high aptitude for agriculture because soil suitability is an important determinant of crop profitability. We defined the likelihood of soybean expansion (p) as a function of soil aptitude for soybean crops, using four categories: highly suitable (p = 1), suitable (p = 0.8), marginally suitable (p = 0.5), and unsuitable (p = 0). Spatial information of soil suitability was obtained from the Soils Map of Uruguay (1:1,000,000), using the index of soil suitability for summer crops of Cayssials and Álvarez (1983). (2) Among equally suitable patches, the likelihood of conversion is proportional to the proximity to the centre of the agricultural regions already consolidated (south, southwest, centre, west, northwest, and east) because of logistic advantages. These assumptions were used to assess the conversion (to soybean crop) likelihood of all natural and seminatural vegetation patches detected in the land-cover map of 2015. The assessment also included two main sequential questions (Fig. 1), following the same logic described for exotic forest plantation.
In some cases, the likelihood of conversion to forest plantation and soybean crop were comparable. In such cases, we assumed that soybean was preferred over forest plantation because of its higher economic profitability.
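The downscaling logic for both land uses reduces to a greedy allocation: rank the candidate patches, convert them in order, and stop when the phase-one national goal is met. The sketch below is our reading of that procedure (the field names are assumptions), not the exact implementation used in the study.

```python
def project_expansion(patches, national_goal_ha):
    """Greedy phase-two downscaling: convert natural/seminatural patches in
    order of conversion likelihood (ties broken by distance to the nearest
    consolidated region centre) until the cumulative converted area reaches
    the national 2030 expansion goal."""
    ranked = sorted(patches,
                    key=lambda p: (-p["likelihood"], p["dist_to_centre_km"]))
    converted, cumulative = [], 0.0
    for patch in ranked:
        if patch["likelihood"] <= 0:
            continue  # unsuitable soils (p = 0) are never converted
        converted.append(patch["id"])
        cumulative += patch["area_ha"]
        if cumulative >= national_goal_ha:
            break  # phase-one quantity reached; stop the allocation
    return converted
```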
Finally, to make the land-use scenario for 2030 spatially comparable with our biodiversity data, the patch-level data were summed and mapped over the grid of 302 cells of 33 x 20 km. Geoprocessing was performed in QGIS 2.18.
Biodiversity vulnerability and ecoregional priorities for conservation
Half of the species of woody plants and terrestrial vertebrates of Uruguay use grassland ecosystems as habitat, and about 11% of them are focal species because of their precarious conservation status or high ecological or social value (Table 1). The richness of grassland species showed broad geographic variability, with the west and east fringes and the southeast region being the most diverse (Fig. 2a). Focal species richness showed a somewhat similar pattern to overall grassland species (Fig. 2b), being positively and significantly correlated in space (Spearman rank correlation: rS = 0.91, P < 0.0001). Therefore, the biodiversity vulnerability index to land-use change, derived from the previous indicators, also resembled the spatial pattern of grassland species richness. Highly vulnerable cells are mainly concentrated in five ecoregions: (1) northern zone of the Western Sediment Basin; (2) northern and eastern zones of the Gondwanic Sediment Basin; (3) northeast of Eastern Sierras; (4) southern zone (Atlantic fringe) of the Merin Lagoon Graben; and (5) south (Atlantic fringe) of the Santa Lucía Graben (Fig. 2c).
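The reported rank correlation is straightforward to verify from the cell table; a small scipy sketch (toy data and illustrative column names, not the real 302-cell dataset) mirrors the test:

```python
import pandas as pd
from scipy.stats import spearmanr

# Toy three-cell table; the study used the full 302-cell dataset and
# reports rS = 0.91, P < 0.0001.
cells = pd.DataFrame({"grassland_richness": [120, 300, 210],
                      "focal_richness": [5, 40, 22]})
rho, p_value = spearmanr(cells["grassland_richness"], cells["focal_richness"])
print(f"rS = {rho:.2f}, P = {p_value:.3g}")
```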
We identified 51 cells (~17% of 302) as the priority vulnerable areas of the country, proportionally distributed among the 7 ecoregions (Fig. 2c). All prioritized cells were located in the regions of high vulnerability described above, or nearby. Currently, seven priority cells (13.7%) overlap with protected areas of the national system (SNAP; Fig. 2c).
Land-use change: present patterns and future projections
Land-use dynamics in Uruguay resembled the regional pattern, showing a slow and gradual growth of agricultural lands during the 1990s and an accelerated expansion from 2000 (Fig. 3). Soybean has been the main driver of the acceleration phase, growing from less than 40,000 ha before 2000, to more than 1,200,000 ha in 2015. The other important driver of change has been the forestry sector, which has been encouraged by tax reductions during the late 1980s and 1990s, in certain zones and types of soils (forestry-priority zones, law Nº 15.939 of 1987). This policy triggered a pronounced development of exotic forest plantations, mainly with eucalyptus and pines, which rose from less than 200,000 ha before the 1990s to more than 1,000,000 ha in 2015 (Fig. 3).
According to the land-cover map of 2015, 36.2% of the original continental territory of Uruguay (176,500 km²) has been transformed by croplands (including artificial prairies, 27.5%), exotic forest plantations (7.9%), and urban and other artificial areas (0.8%). Croplands are mainly distributed in the southwest and west regions, and in part in the east (Fig. 4). The forestry sector is mainly distributed in four regions, with the west and northeast regions containing the most extensively planted areas (Fig. 4). At present, the ecoregions most affected by land-use change have been the Santa Lucia Graben (SLG), Crystalline Shield (CS), and the Western Sediment Basin (WSB) with an overall loss of natural habitat, mainly grasslands, of about 50% or greater (Fig. 5a). The Merin Lagoon Graben (MLG) and the Gondwanic Sediment Basin (GSB) ecoregions present intermediate levels of natural habitat conversion (20-30%), whereas the Basaltic Slope (BS) and the Eastern Sierras (ES) ecoregions showed the lowest levels (< 20%; Fig. 5a).
In the projected land-use scenario for 2030, croplands will cover about 32.7% of the territory and exotic forest plantations about 15.2%. If urban and other artificial areas remain at present levels (0.8%), the total loss of original habitat would reach 48.7% of the country's surface area. Land-use change would be intensified in the three ecoregions highly transformed in 2015 (SLG, CS, and WSB), with natural habitat conversion of about 80% (Fig. 5a). The MLG, ES, and GSB ecoregions would lose about 40% of their original habitat, and the BS ecoregion would be unchanged (Fig. 5a).
Land-use change in conservation-priority sites
By 2015, about 27% of the total area of the 51 priority cells had been transformed by land-use change, with substantial variability among ecoregions (Figs. 4 and 5b). Whereas four ecoregions (BS, MLG, ES, and GSB) suffered low conversion (< 25%) within their priority cells, in three ecoregions (SLG, CS, and WSB), the loss of natural habitat was between 30 and 40% (Fig. 5b). Land conversion in SLG was mainly driven by the urbanization of the capital city (Montevideo) and by croplands (Fig. 4). The priority cells of the CS and WSB ecoregions were mainly transformed by croplands and exotic forest plantation, respectively (Fig. 4).
Under the projected scenario for 2030, the overall habitat loss within priority cells would rise to 45%. One ecoregion would remain almost unchanged with less than 25% of habitat loss (Basaltic Slope) and two ecoregions would lose between 35 and 42% (MLG and ES; Fig. 5b). Four ecoregions (GSB, SLG, CS, and WSB) would be highly impacted by habitat loss (50-70%) in their priority cells (Fig. 5b). The expansion of exotic forest plantations would be the main driver of land transformation in the priority cells of the GSB ecoregion, whereas croplands would be the main driver in the other three ecoregions (Fig. 4).
Agricultural expansion and loss of natural grasslands
More than one-third of Uruguay's natural habitats, largely grasslands, had been converted into croplands, exotic forest plantations, and urban areas by 2015. The causes of land-use change in Uruguay during the last 30-40 years, as in the entire Río de la Plata Grassland region, have been largely discussed (see Paruelo et al. 2006, Baldi and Paruelo 2008, Modernel et al. 2016). The high international prices of soybean and wood pulp, the accessibility to new technologies (i.e., no-tillage cropping and genetically modified organisms), and fiscal policies favorable to exotic forestry development (1980-1990s) have been the main drivers.
In the case of soybean, an annual crop, the dynamics of the planted area tracked, with a one- to two-year delay, the price of a metric ton in the Chicago market (http://www.indexmundi.com/commodities/). For example, the peaks of the planted area observed in Uruguay during 2010 and 2015 (Fig. 3) were associated with growing prices during 2006-2008 and 2011-2014, respectively. Likewise, the observed drop in the planted area after the 2010 peak was associated with lower prices during the next years. After the 2015 peak, the historical maximum, a slow-down has occurred in soybean expansion according to producers' declarations (MGAP 2019), differing from our model projection (Fig. 3). This slow-down in the planted area of soybean is also associated with lowering international prices. Despite the small-scale fluctuations, the international prices of soybean and wood pulp have been growing in the mid-long term, driving the agricultural expansion in Uruguay. Thus, we think that beyond the small-scale fluctuations, the global market of these commodities (i.e., soybean and wood pulp) will increase in the mid-long term, promoting the future expansion of the agricultural border in our region, ultimately supporting the production targets of the Uruguayan government for 2030 (OPP 2009).
The achievement of these national production targets of soybean and exotic forestry would imply the loss of almost half (48%) of the natural habitat of Uruguay by 2030. In the case of the forestry sector, there are additional local pressures to expand the planted area. There are two cellulose pulp mills of high production capacity (1.1-1.3 x 10^6 tons per year) operating in the country (one in the west and the other in the south) and recently a project was approved to open a third pulp mill in the centre of the country.
At present, three ecoregions (SLG, CS, and WSB) have lost about 50% of their original grasslands, which could have consequences in the delivery of critical ecosystem services, such as soil conservation, water provision, and habitat provision for diversity, as documented in previous studies in the region (e.g., Overbek et al. 2007, Medan et al. 2011, Aspiroz et al. 2012, da Silva et al. 2015, Modernel et al. 2016, Paruelo et al. 2016). Further land conversion within these ecoregions should be minimized or carefully studied to prevent environmental problems. For example, serious problems with water quality already exist in the Santa Lucía Graben ecoregion, affecting the water supply to the capital city of Uruguay (Montevideo) and the adjacent metropolitan region (Barreto et al. 2017).
Vulnerability of vertebrate and woody plant diversity to grassland loss
We found that almost half of the woody plant and terrestrial vertebrate species of Uruguay are vulnerable to agricultural expansion. These species use the grassland ecosystem, which has been largely converted to croplands and exotic forestry plantations in the country, as habitat. In spatial terms, vulnerability is higher where there are more species and more focal species potentially affected. Consequently, we defined as priority cells the top (17%) vulnerable cells by ecoregion. Although our study of vulnerability and spatial prioritization contributes to conservation planning, we recommend deepening the analysis in future studies by incorporating the herbaceous flora, a very representative and diverse biotic component of grassland ecosystems.
In 2015, about 27% of the priority-cells area had been converted into croplands and forestry plantations, particularly in three of the seven ecoregions of Uruguay, with a grassland loss of 30-40%. According to the projected scenario of agricultural expansion for 2030, the current situation could deteriorate substantially. Almost half (45%) of the priority cells would be transformed, including in four ecoregions in which grassland loss could reach 50% or higher. In this context, the most important question concerning the biodiversity conservation of such relevant areas of the country is probably: How much habitat is required for species persistence?
Forecasting how individual species will be affected by habitat loss is extremely difficult given the variety of interactions among species and threats, nonlinearities, and the emergence of yet unforeseen drivers of change (Balmford and Bond 2005). The relationship between habitat loss and population extinction probability is nonlinear, whereby a threshold appears to exist above which the extinction risk increases from near-zero to near-one following a small additional loss of habitat (Fahrig 2001, 2003). Theoretical studies (models) suggest that threshold values may vary substantially (1-99%) among species and landscape contexts (Fahrig 2001). Nonetheless, many empirical studies have reported negative effects on habitat-specialist species when the amount of suitable habitat in the landscape was reduced to 10-30% (Andrén 1994, Hanski 2011). There is also empirical evidence in the Río de la Plata region showing that grassland specialists have been the most affected species among assemblages of birds (Aspiroz et al. 2012, Brazeiro et al. 2018) and mammals (Andrade-Núñez and Aide 2010) when grasslands were converted.
We do not know the thresholds of suitable habitat for the vertebrate and woody plant species of Uruguay, but using a precautionary threshold of 50%, we found 5 priority cells (of 51) under such value (> 70% of habitat loss) in 2015. We defined such cells as "converted cells" and assigned them a very low conservation priority (Fig. 6). According to our land-use scenario, the number of converted cells (habitat loss > 50%) in 2030 would rise to 19 (37%). At present, 11 of these cells do not have the protection of the National System of Protected Areas (NSPA) and, given their high probability of habitat conversion, we assigned them the highest conservation priority (very high, Fig. 6). We defined as low priority seven cells currently protected, at least partially, by the NSPA (Fig. 6). Among unprotected cells, we classified as medium priority 10 cells with low conversion probability (< 20%), and as high priority 17 cells with habitat loss between 21 and 50% in the 2030 scenario (Fig. 6).
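The four-tier priority rule can be written as a small decision function; the thresholds follow the Fig. 6 caption, and the fractional habitat-loss inputs are an assumption of this sketch.

```python
def conservation_priority(loss_2015: float, loss_2030: float,
                          protected: bool) -> str:
    """Classify a priority cell following the rules described for Fig. 6.
    loss_2015 / loss_2030 are habitat-loss fractions of the cell area."""
    if loss_2015 > 0.50:
        return "very low"   # already converted by 2015
    if protected:
        return "low"        # overlaps the national protected-area system
    if loss_2030 > 0.50:
        return "very high"  # unprotected, high conversion likelihood by 2030
    if loss_2030 > 0.20:
        return "high"
    return "medium"

# e.g., an unprotected cell projected to lose 60% of its habitat by 2030:
assert conservation_priority(0.30, 0.60, protected=False) == "very high"
```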
In addition to habitat loss, agricultural expansion could also affect species viability via habitat fragmentation (Fahrig 2003). The fragmentation of the Río de la Plata Grasslands from 1985 to 2004 was noteworthy, spatially heterogeneous, and higher in landscapes dominated by cropland (Baldi and Paruelo 2008). Additionally, farming and forestry management practices could generate new sources of threats to biodiversity because of initial land clearance, soil tillage, land rotation, soil erosion, changes in water quantity and quality, as well as pesticide inputs (Donald 2004, Jobbágy et al. 2006).
Management recommendations
In comparison to the accelerated transformation of the natural landscape in neighboring countries of the region, land conversion in Uruguay can be considered moderate at present (Baldi and Paruelo 2008, Vega et al. 2009). Due to the lower degree of land-use conversion, the relic grassland-dominated landscapes of Uruguay have a strategic value for regional conservation.
In developing countries of the region, global and national pressures converge to promote agricultural expansion, while increasingly endangering biodiversity. Growing human demand for food and goods raises the international prices of commodities, while at the national level, governments pursue greater economic growth to respond to basic social demands. The dilemma is how to conserve biodiversity in productive landscapes in the context of agricultural expansion and intensification. This is the main challenge for the conservation of grassland biodiversity in Uruguay.
Our scenario of land-use changes for 2030 makes clear the urgent need to develop strategies to reduce the future rate of grassland loss. Protected area implementation is a classic and valuable tool for this aim, and our spatial prioritization study could contribute to future reserve designations. Although valuable, the contribution of the National System of Protected Areas (NSPA) will be insufficient to conserve all vulnerable species from the projected grassland loss. In Uruguay, the NSPA is the most recently implemented in the region (first area incorporated in 2008) and covers 285,265 ha, which represents only 0.90% of the continental Uruguayan territory. The Aichi Target number 11 (i.e., at least 17% of the most relevant zones conserved in protected areas), endorsed by Uruguay as a signatory country of the Convention on Biological Diversity (CBD), is far from being reached in 2020. Currently, only 7 (13.7%) of the 51 priority cells are incorporated in the NSPA. The future expansion of the NSPA will be a very hard task because there is practically no available public land in Uruguay, and the economic resources for the system of protected areas are very limited. However, we believe that the country should continue advancing the expansion of the NSPA, at least to mitigate further grassland loss in the Santa Lucía Graben, Crystalline Shield, and Western Sediment Basin ecoregions, as well as in the high-priority cells for the conservation of vertebrates and woody plants (see Fig. 6).
Fig. 6. Spatial prioritization of the highest vulnerable cells (top 17%) for conserving vertebrate and woody plant diversity in the face of agricultural expansion in Uruguay. Very low priority was assigned to currently converted cells (i.e., habitat loss > 50% in 2015). Protected cells (i.e., overlapping with protected areas of the national system) were classified as low priority for conservation. Currently unprotected cells (i.e., without protected areas) were classified according to the conversion probability in the 2030 scenario as: medium priority (habitat loss < 20%), high priority (habitat loss between 21 and 50%), and very high priority (habitat loss > 50%). The ecoregions of Uruguay are indicated according to the following codes: Western Sediment Basin (WSB), Basaltic Slope (BS), Crystalline Shield (CS), Gondwanic Sediment Basin (GSB), Merin Lagoon Graben (MLG), Santa Lucía Graben (SLG), and Eastern Sierras (ES).
Besides expanding the NSPA, we urgently need to find and promote productive alternatives that conserve biodiversity and the environment. To do that, and in the required time frame to balance the accelerated land-use change occurring now in the region, we highlight three key issues to solve.
First, society, and particularly policymakers, should be better informed and aware of the magnitude of land-use change in the country and its potential environmental and social impacts. The academic sector should undertake this task with greater commitment. The national brand "Uruguay Natural," used to promote the country abroad, also creates the local impression that the country has been little transformed, and therefore that conservation is not an urgent issue at the moment. However, a recent opinion survey (March 2017, 1300 cases) revealed that 59% of respondents believed the brand "Uruguay Natural" is not in line with the country's environmental reality (http://www.opcion.com.uy/opinion-publica/?p=1661).
Second, agricultural and environmental national policies should seek greater articulation and integration. The newly created Watershed Management Committees provide a very good opportunity for coordination, in which the goal of "sustainable intensification" (i.e., greater production with reduced environmental impacts) promoted by the Ministry of Agriculture (MGAP) should be balanced with conservation goals.
Third, the private sector must be better integrated into national conservation policies. Without the contribution of private resources, the expansion of protected areas and productive areas with sustainable management will be insufficient to balance the impacts of land-use changes. We must find alternative agricultural systems that could reach productive and economic targets while minimizing environmental impacts. A promising initiative was carried out with the Uruguayan beef sector, with the aim of building a coordinated agricultural transformation pathway to meet objectives for sustainable development (Kanter et al. 2016). By applying the approach and methodological toolkit developed by the Agricultural Transformation Pathways initiative, productivity and environmental targets for 2030 were developed in tandem with a wide range of stakeholders to maximize productivity, while minimizing a suite of environmental impacts, including impacts on biodiversity. The agreed goal, with respect to biodiversity, is for zero expansion in the amount of land devoted to beef production between 2016 and 2030, meaning that the grazing land remains constant (Kanter et al. 2016). As such, beef production seems to be a viable sustainable alternative for agricultural production in Uruguay, especially with respect to grassland conservation, although overgrazing and pasture modification with forage species (agricultural improvements) could have an effect on biodiversity. The sustainability of beef production is also promoted by the regional initiative "Alianza del Pastizal" (http://www.alianzadelpastizal.org/). The Alliance, promoted by NGOs of Argentina, Brazil, Paraguay, and Uruguay, is primarily concerned with regional bird conservation, by promoting adaptive and wildlife-friendly productive practices, with strong participation from landowners and national authorities. A promising initiative of the Alliance is the certification of meat produced under a sustainability protocol, which could encourage more breeders to adopt conservation practices. There are also viable opportunities to promote conservation efforts within the forestry sector. The international certification of responsible forestry production (e.g., Forest Stewardship Council, FSC) is widespread among forestry companies in Uruguay. This provides the opportunity to advance in the implementation of private reserves in areas of high conservation value, and in the adoption of wildlife-friendly practices of production (Brazeiro et al. 2014). Soybean production is the most complex agricultural sector for incorporating conservation practices. The productive cycle is short; many producers are tenants or foreigners; it is simpler to move to other countries or change the productive activity according to profitability; and therefore farmers' fidelity to the land is less than in other sectors. In this context, it is difficult to promote incentives to adopt sustainable practices, as well as to conduct environmental control. Soybean production is one of the main drivers of land-use change in Uruguay and, given the complexity of the sector, the search for strategies to promote the implementation of reserves and wildlife-friendly practices in these agricultural landscapes is a key challenge for biodiversity conservation in Uruguay.
Responses to this article can be read online at: http://www.ecologyandsociety.org/issues/responses.php/11360
Acknowledgments:
This study was conducted thanks to the results gathered from two projects: Prioridades geográficas para la conservación de la biodiversidad terrestre de Uruguay (PDT32-26, 2008) and Bases para la planificación ecoregional de Uruguay (PPR/MGAP-FAO, 2012). We express our gratitude to Federico Haretche for his collaboration in the construction of the database of grasslands species of Uruguay and to Alejandra Bentancurt for her support in the analysis of the 2015 land-cover map. Christine Lucas helped us very much with the revision of English. We appreciate the detailed reviews by two anonymous reviewers and the suggestions of the subject editor, which substantially improved the manuscript. | 2020-02-27T09:30:29.416Z | 2020-02-24T00:00:00.000 | {
"year": 2020,
"sha1": "3fad51ecd7064ccc8aa760c986c0e9e252d9bb82",
"oa_license": "CCBY",
"oa_url": "http://www.ecologyandsociety.org/vol25/iss1/art15/ES-2019-11360.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c976b56da9a05963d56738542083f29555785f7f",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Geography"
]
} |
17138533 | pes2o/s2orc | v3-fos-license | Systematic computerised cardiovascular health screening for people with severe mental illness
Aims and method People with severe mental illness (SMI) die relatively young, with mortality rates four times higher than average, mainly from natural causes, including heart disease. We developed a computer-based physical health screening template for use with primary care information systems and evaluated its introduction across a whole city against standards recommended by the National Institute for Health and Care Excellence for physical health and cardiovascular risk screening. Results A significant proportion of SMI patients were excluded from the SMI register and only a third of people on the register had an annual physical health check recorded. The screening template was taken up by 75% of GP practices and was associated with better quality screening than usual care, doubling the rate of cardiovascular risk recording and the early detection of high cardiovascular risk. Clinical implications A computerised annual physical health screening template can be introduced to clinical information systems to improve quality of care.
Declaration of interest
The authors have provided a single paid consultation to another primary care organisation that has used the template.
People with a diagnosis of severe mental illness (SMI) such as schizophrenia and bipolar disorder die 15-20 years earlier than the general population, mainly from natural causes. 1 In particular, they have an increased risk of cardiovascular disease. 2 This health inequality was reviewed by the Disability Rights Commission in a 2006 report titled Equal Treatment: Closing the Gap. 3 Deprivation and lifestyle were major factors, but not sufficient to account for the health inequalities. The report proposed that 'diagnostic overshadowing', or clinical blindness to physical problems in people with mental illness, was a form of inadvertent discrimination by health professionals that led to underdiagnosis, underinvestigation and undertreatment of potentially preventable or treatable physical disease in people with mental illness. The Royal College of Psychiatrists has made recommendations to address physical health inequalities through better training of psychiatrists and better collaboration with primary care. 4 Psychiatrists believe that physical health is important and are aware that pharmacological treatment is another factor producing a higher risk of mortality. Antipsychotic medications can cause sudden cardiac death 5 and diabetes, 6 and have a dose-dependent relationship to mortality. 7 Early death in people with SMI has been recognised since the 1990s. 8,9 Since then evidence has grown that there are high death rates from cardiovascular disease and other natural causes. 10-13 The risk of dying from cardiovascular disease alone significantly exceeds the risk of dying from suicide. 12,14 In contrast to suicide risk assessment and prevention, cardiovascular risk assessment is relatively well evidenced, with clinical algorithms for cardiovascular risk prediction and a range of clinical interventions for primary prevention, such as lifestyle advice and treatment for elevated blood pressure and lipids. However, routine screening for cardiovascular risk is less common than screening for suicide risk, especially in secondary care. All people diagnosed with an SMI such as schizophrenia should have an annual physical health check that includes metabolic screening. 15 The National Institute for Health and Care Excellence (NICE) has also recommended a standard cardiovascular disease risk calculation as part of the annual health check since 2002. 16 In this study we focus on the cardiovascular risk assessment element of the computerised physical health check template.
Aims
We planned to carry out a cross-sectional retrospective service evaluation of the quality of physical health monitoring of all registered SMI patients in the Bradford and Airedale region using the standards recommended by NICE for schizophrenia. 17 We designed and implemented a computer template for the primary care information system to support a standard annual physical health check for SMI patients. We wanted to see whether patients who received the template-based screening got better or worse quality care than patients who did not.
Method
All but one general practice in the Bradford region used the same computer system, SystmOne (www.tpp-uk.com/modules), allowing data on almost the whole SMI register to be anonymously and centrally collated. This would have been a huge task if done manually using paper-based checklists.
We designed a physical health screening template for the primary care computer system to help general practitioners (GPs) carry out a high-quality annual health check using standards recommended by NICE for physical health checks in schizophrenia. 17 We designed it to help GPs submit data returns for the Quality and Outcomes Framework (QOF) 18 which makes payments to GP practices for specific tasks, including physical health monitoring in SMI.
The physical health screening template is two pages long (with two further pages of explanation and information). It is updated by the data quality team if NICE standards and QOF criteria change. It guides GPs to collect the clinical information needed to identify a range of physical morbidity and health risks, including cardiovascular risk, without needing to learn the detailed NICE guidance or the requirements of QOF. The template looks like every other template on the system and fits into GPs' normal workflows. It automatically includes any pre-existing data from the patient record in order to increase efficiency. It facilitates the allocation of tasks to the primary care team (e.g. ordering blood tests). Results are fed back through the usual channels in the computer system. This integrates physical health monitoring for SMI patients into normal practice.
We then began a process of promoting the template to GP practices in 2011-2012. All 80 practices using SystmOne were contacted and 48 received a 30-minute staff training session delivered by the data quality specialist (K.B.) and/or the physical health project lead (K.D.). Primary care teams decided if and when to use the template.
We carried out the evaluation of template use retrospectively, in a naturalistic setting, using data that were recorded in the course of day-to-day practice by primary care teams in the year leading up to the assessment date, 1 July 2013. We used CTV3 (Clinical Terms Version 3) Read codes (http://systems.hscic.gov.uk/data/uktc/readcodes/index_html), including codes used in the QOF codes formulary, to construct database reports on template usage. CTV3 Read codes identify elements of activity in the primary care information system and QOF codes are used to generate incentive payments to GP practices to improve service quality. There are specific codes for physical health monitoring in SMI and also for the details of clinical historical data, examination findings and test results. We wrote our reports in the SystmOne reporting module. Almost all practice activity is recorded on the computer system, and our method necessarily disregards any activity not recorded in this way.
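SystmOne's reporting module is proprietary, so the report logic cannot be reproduced verbatim; the pandas sketch below illustrates the general shape of such a query over an extract of coded events. The file, columns, and code values are all placeholders, not real CTV3/QOF code lists.

```python
import pandas as pd

# Illustrative extract: one row per coded event (patient_id, code, event_date).
events = pd.read_csv("coded_events.csv", parse_dates=["event_date"])

SMI_REGISTER_CODES = {"SMI_CODE_1", "SMI_CODE_2"}  # placeholders, not real codes
ANNUAL_REVIEW_CODES = {"REVIEW_CODE_1"}            # placeholder QOF review code

in_window = events["event_date"].between("2012-07-01", "2013-06-30")
on_register = set(events.loc[events["code"].isin(SMI_REGISTER_CODES), "patient_id"])
reviewed = set(events.loc[in_window & events["code"].isin(ANNUAL_REVIEW_CODES),
                          "patient_id"])

coverage = len(reviewed & on_register) / len(on_register)
print(f"Annual physical health check coverage: {coverage:.1%}")
```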
The reports captured activity for all patients registered with SystmOne GPs in the Bradford and Airedale region. We compared the usual practice of annual monitoring of physical health of SMI patients in primary care with the new practice of using a standard physical health screening template in the annual check-up.
We chose to use the standard QRisk2 cardiovascular disease risk calculator (http://qrisk.org/) in our template. The information system already had rules that calculated 'default' QRisk2 scores even without health data entered: average population data are inserted into blank fields within the default QRisk2 calculator, meaning that the scores potentially underrepresent risk in a high-risk population such as SMI patients. We created a new report to identify 'data-rich' QRisk2 scores in which the following four factors were always recorded: systolic blood pressure, HDL:cholesterol ratio, smoking status and body mass index. In doing this we aimed to audit calculations that were more accurate than those provided as a default by the computer system.
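Separating 'data-rich' scores from default ones is then a completeness filter; a minimal sketch, assuming one row per QRisk2 calculation with the four inputs as columns:

```python
import pandas as pd

REQUIRED_INPUTS = ["systolic_bp", "hdl_cholesterol_ratio",
                   "smoking_status", "body_mass_index"]

def data_rich_scores(qrisk_records: pd.DataFrame) -> pd.DataFrame:
    """Keep only QRisk2 calculations where all four inputs were recorded,
    excluding 'default' scores padded with population averages."""
    return qrisk_records.dropna(subset=REQUIRED_INPUTS)
```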
We were aware that there was a second Joint British Societies CVD risk calculator (http://www.jbs3risk.com) available to GPs on the system and recorded when it was used.
We had support from a number of primary care leaders, including nurses and doctors. We also had support from the mental health services in Bradford District NHS Care Trust and the NHS West and South Yorkshire and Bassetlaw Commissioning Support Unit. Ethical boundaries on the use of 'big data' are not yet standardised and we thought it appropriate to have oversight for the project from the relevant employers. No patient identifiable data were used in the evaluation.
Results
Results were derived from reports written for this evaluation in the SystmOne reports module. We used all relevant CTV3 Read codes and QOF codes recorded in the system to construct the reports. Tests for significance in our comparisons were calculated using the chi-square test function in Microsoft Excel. We examined two main areas: the uptake of the template following a single 30-minute promotion and differences in cardiovascular screening outcomes between patients who were and those who were not offered the template-based health check.
On 1 July 2013, there were 568 677 patients fully registered with GPs in Bradford and Airedale. There were 5056 people on the SMI register. The register was incomplete because 576 patients (10.2% of the potential register) were excluded after initial allocation for various reasons. This compares with only 3.3% exclusions of potential diabetic register patients in the region (P < 0.01). Only 32% of people on the SMI register received an annual physical health check recorded by a QOF code.
Sixty general practices (75%) used the screening template at least once during the 12-month period from 1 July 2012; 12 of these had not received the direct health promotion session but had discovered the template on the system independently, and 20 practices used it with at least 10 patients.
Overall, 335 template-based physical health reviews were carried out, which amounted to 20.5% (335/1631) of patients given a physical health review in the 12-month period. Of those, 23% (77/335) had a 'data-rich' QRisk2 recorded compared with only 8.5% (120/1296) of patients who had an annual physical review without a template-based health check (P < 0.01) (Table 1).
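The significance tests reported here were run with the chi-square function in Microsoft Excel; a sketch of the equivalent calculation on the 2x2 table above, using scipy instead, might look like this.

```python
# Chi-square test on the 2x2 table of data-rich QRisk2 scores by review
# type; scipy stands in for the Excel chi-square function the authors used.
from scipy.stats import chi2_contingency

#                 data-rich   not data-rich
table = [[77,  335 - 77],    # template-based reviews
         [120, 1296 - 120]]  # reviews without the template

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")  # p << 0.01, consistent with the text
```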
QRisk2 scores above 20% indicate a need for primary intervention even without overt pathology, because the risk of a fatal cardiovascular event within 10 years is significant. QRisk2 scores greater than 20% were found in 3.9% of template-based reviews, compared with 1.5% of reviews not using the template (P < 0.01). This difference is broadly in line with the increased proportion of 'data-rich' QRisk2 scores associated with template-based reviews. This suggests that use of the template significantly increased the detection of cardiovascular risk, compared with usual practice, and that this may simply be a feature of screening patients more accurately by using a high standard of QRisk2 measurement.
QRisk2 scores greater than 20% were found in 16.7-16.9% of 'data-rich' QRisk2 records, regardless of whether or not the template was used. This rate is somewhat higher than the 9.3% rate found in the general adult population in Bradford and Airedale derived from the GP database (P < 0.05) and the 10.5% population estimate from Dalton. 19 This demonstrates how the health inequality detected in SMI research can also be found using a general practice database not designed for research.
Use of the annual physical health check template was associated with an increased proportion of patients receiving individual measures relevant to calculating cardiovascular risk (P < 0.01). Table 2 compares the frequency of recorded measures that are used in the calculation of cardiovascular risk for the whole SMI register and for those patients who had a template-based review.
Clinical history and examination measures were conducted for about three-quarters of patients on the SMI register, but fewer than half had the necessary blood tests for lipids. By contrast, three-quarters of patients with a template-based review received the recommended lipid screening and over 90% had the history and physical examination measures. This suggests that use of the template had the effect of encouraging primary care teams to collect the data needed to make high-quality cardiovascular risk assessments.
Discussion
Our data are derived from an administrative system rather than a research protocol and therefore rely on clinicians' behaviour, the IT system architecture and reporting capabilities. SystmOne has a powerful reporting module that makes use of CTV3 Read codes and QOF codes. Reports can be built that accurately represent the physical health screening activity offered to patients. It is likely that all activity recorded on the system was captured and this accurately reflects the real rate of physical health checks recorded in primary care.
Based on our reports, we found that people with SMI experienced disadvantages in health screening compared with other high-risk groups. Fewer people with SMI were included in the SMI register compared with the proportion of people with diabetes (another high-risk group) included in the diabetes register. Despite long-standing evidence of high physical and cardiovascular health risks, SMI patients are less likely than patients with diabetes to have access to physical health checks in primary care. The death rate in adults with SMI is four times higher than in the general population 20 and health screening is potentially life-saving in this high-risk group.
Although there are areas of good practice, the systematic prevention and treatment of physical disease in people with SMI has received relatively little attention. Many guidelines have been produced but none have been adequately implemented. 21 De Hert et al 22 have helpfully summarised a range of actions that could be taken, but were unclear on the mechanism to bring about these quality improvements. Health screening using a paper-based template is one possible mechanism and has been promoted by the Royal College of Psychiatrists (using the Positive Cardiometabolic Health (Lester) Algorithm 23 ) and Rethink, a campaigning mental health charity. 24,25 It is hard to see how these screening templates can be systematically implemented in paper form. Our study took the extra step of integrating a standard health screening tool into the primary care information system, so it could be automated, in the hope that this would facilitate the practice of physical health screening in SMI. Overall, we found that adherence to the NICE standard of one physical check-up per year for SMI patients was lamentably low at 32%. This could be due to low adherence to the standard for health checks or low adherence to recording them with the correct code. Cardiovascular risk assessment received a low priority, with less than 10% of patients on the SMI register getting a high-quality 'data-rich' risk calculation. If this is merely a data quality issue, then better recording would help. Our method depended on accurate data recording and could not tease out how much unrecorded activity may have taken place.
Uptake of the template was about 1 in 5 of all annual physical reviews, which is encouraging given that there was no incentive to use the template other than to improve quality of care. We did not employ any performance targets.
Use of the template was associated with more than double the rate of adherence to the NICE standards in relation to the calculation of cardiovascular risk. The template was also associated with more than double the rate of detection of significant cardiovascular risk. These findings suggest that, by making a computerised health screening tool available, GP teams were helped to carry out higher-quality physical health reviews and detect more patients at risk of early cardiovascular death. Conversely, our results also suggest that low-quality screening fails to identify cardiovascular risk. The use of automated QRisk2 calculators that fill in empty fields with average data should be discouraged with SMI patients. It is possible, but unlikely, that GPs could have biased results by selecting high-risk patients for template-based reviews.
NICE and QOF have not yet delivered universal physical health checks for people with severe mental illness in primary care and additional approaches to improve practice are needed. Although our computer-based template seems to increase quality, it may not be easy to replicate this work in future, since the standard QOF incentive for annual health checks in primary care will be removed in 2014. Instead, NHS England will write a new CQUIN (Commissioning for Quality and Innovation) incentive that will encourage mental health trusts to monitor and improve the physical health of SMI patients. 26 The problem with this secondary care approach is that mental health trusts lack the clinical skills in physical healthcare and the sophisticated information systems present in primary care. However, it should still be possible to introduce health screening templates into mental health information systems and build reports from them.
The computerised physical health screening template is a device that can facilitate high-quality practice. We found that practices that received promotion of the template were more likely to use it, so stronger promotion of a computerised physical health check template could increase the uptake. In secondary care trusts, a physical health screening template could be paired with performance targets to achieve the new CQUIN payment.
It is not yet clear whether screening for cardiovascular risk in people with SMI can lead to a reduction in early death, although structured intervention programmes based on screening have demonstrated small health gains. 27 Long-term longitudinal studies will be needed to answer this question.
Improving Slot Filling Performance with Attentive Neural Networks on Dependency Structures
Slot Filling (SF) aims to extract the values of certain types of attributes (or slots, such as person:cities_of_residence) for a given entity from a large collection of source documents. In this paper we propose an effective DNN architecture for SF with the following new strategies: (1) take a regularized dependency graph instead of a raw sentence as input to the DNN, to compress the wide contexts between query and candidate filler; (2) incorporate two attention mechanisms: local attention learned from query and candidate filler, and global attention learned from external knowledge bases, to guide the model to better select indicative contexts to determine slot type. Experiments show that this framework outperforms the state of the art on both relation extraction (16% absolute F-score gain) and slot filling validation for each individual system (up to 8.5% absolute F-score gain).
Introduction
The goal of Slot Filling (SF) is to extract pre-defined types of attributes or slots (e.g., per:cities of residence) for a given query entity from a large collection of documents. The slot filler (attribute value) can be an entity, time expression or value (e.g., per:charges). The TAC-KBP slot filling task (Ji et al., 2011a;Surdeanu and Ji, 2014) defined 41 slot types, including 25 types for person and 16 types for organization.
One critical component of slot filling is relation extraction, namely to classify the relation between a pair of query entity and candidate slot * This work was carried out during an internship at IBM Research. filler into one of the 41 types or none. Most previous studies have treated SF in the same way as within-sentence relation extraction tasks in ACE 1 or SemEval (Hendrickx et al., 2009). They created training data based on crowd-sourcing or distant supervision, and then trained a multi-class classifier or multiple binary classifiers for each slot type based on a set of hand-crafted features.
Although Deep Neural Networks (DNN) such as Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have achieved state-of-the-art results on within-sentence relation extraction (Zeng et al., 2014; Liu et al., 2015; Santos et al., 2015; Nguyen and Grishman, 2015; Yang et al., 2016), there are limited studies on SF using DNN. Adel and Schütze (2015) and Adel et al. (2016) exploited DNN for SF but did not achieve results comparable to traditional methods. In this paper we aim to answer the following questions: What is the difference between SF and the ACE/SemEval relation extraction tasks? How can we make DNN work for SF?
We argue that SF is different from and more challenging than traditional relation extraction. First, a query and its candidate filler are usually separated by much wider contexts than the entity pairs in traditional relation extraction. As Figure 1 shows, in ACE data, for 70% of relations, the two mentions are embedded in each other or separated by at most one word. In contrast, in SF, more than 46% of query, filler entity pairs are separated by at least 7 words. For example, in the following sentence: E1. "Arcandor query owns a 52-percent stake in Europe's second biggest tourism group Thomas Cook, the Karstadt chain of department stores and iconic shops such as the KaDeWe filler in what used to be the commercial heart of West Berlin.", Arcandor and KaDeWe are widely separated and it is difficult to determine the slot type as org:subsidiaries based on the raw wide contexts.
Figure 1: Comparison of the percentage by the number of words between two entity mentions in ACE05 and SemEval-2010 Task 8 relations, and between query and slot filler in KBP2013 Slot Filling.
In addition, compared with the relations defined in ACE (18 types) and SemEval (9 types), slot types are more fine-grained and rely heavily on indicative contextual words for disambiguation. Yu et al. (2015) and later work demonstrate that many slot types can be specified by contextual trigger words. Here, a trigger is defined as a word which is related to both the query and candidate filler, and can indicate the type of the target slot. Considering E1 again, owns is a trigger word between Arcandor and KaDeWe, which can indicate the slot type as org:subsidiaries. Most previous work manually constructed trigger lists for each slot type. However, for some slot types, the triggers can be implicit and ambiguous.
To address the above challenges, we propose the following new solutions: • To compress wide contexts, we model the connection between query and candidate filler using dependency structures, and feed the dependency graph to the DNN. To our knowledge, we are the first to directly take dependency graphs as input to a CNN.
• Motivated by the definition of trigger, we design two attention mechanisms: a local attention and a global attention using large external knowledge bases (KBs), to better capture implicit clues that indicate slot types.
Architecture Overview
Figure 2 illustrates the pipeline of a SF system. Given a query and a source corpus, the system retrieves related documents, identifies candidate fillers (including entities, times, values, and titles), extracts the relation between the query and each candidate filler occurring in the same sentence, and finally determines the filler for each slot. Relation extraction plays a vital role in such a SF pipeline.
In this work, we focus on the relation extraction component and design a neural architecture for it. Given a query, a candidate filler, and a sentence, we first construct a regularized dependency graph and take all ⟨governor, dependent⟩ word pairs as input to Convolutional Neural Networks (CNN).
Moreover, we design two attention mechanisms: (1) Local attention, which utilizes the concatenation of the query and candidate filler vectors to measure the relatedness of each input bigram (we set the filter width to 2) to the specific query and filler.
(2) Global attention: we use pre-learned slot type representations to measure the relatedness of each input bigram with each slot type via a transformation matrix. These two attention mechanisms guide the pooling step to select the information that is related to the query and filler and can indicate the slot type.
Regularized Dependency Graph
Dependency parsing based features, especially the shortest dependency path between two entities, have been proved effective for extracting the most important information for identifying the relation between two entities (Bunescu and Mooney, 2005; Zhao and Grishman, 2005; GuoDong et al., 2005; Jiang and Zhai, 2007). Several recent studies also explored transforming a dependency path into a sequence and applying neural networks to the sequence for relation classification (Liu et al., 2015; Cai et al., 2016; Xu et al., 2015). However, for SF, the shortest dependency path between query and candidate filler is not always sufficient to infer the slot type, for two reasons. First, the most indicative words may not be included in the path. For example, in the following sentence: E2. Survivors include two sons and daughters-in-law, Troy filler and Phyllis Perry, Kenny query and Donna Perry, all of Bluff City.
the shortest dependency path between Kenny and Troy is: "Troy ← conj Perry ← conj Kenny", which does not include the most indicative words, sons and daughters, for their per:siblings relation. In addition, the relation between query and candidate filler is also highly related to their entity types, especially for disambiguating slot types such as per:country of birth, per:state of birth and per:city of birth. Entity types can be inferred by enriching query and filler related contexts. For example, in the following sentence: E3. Merkel query died in the southern German city of Passau filler in 1967.
we can determine the slot type as city related by incorporating rich contexts (e.g., "city").
To tackle these problems, we propose to regularize the dependency graph, incorporating the shortest dependency path between query and candidate filler, as well as their rich contextual words.
Given a sentence s including a query q and candidate filler f, we first apply the Stanford Dependency Parser to generate all ⟨governor, dependent⟩ word pairs, then discover the shortest dependency path between query and candidate filler based on the Breadth-First Search (BFS) algorithm. The regularized dependency graph includes the words on the shortest dependency path, as well as words which can be connected to the query and filler within n hops. In our experiments, we set n = 1. Figure 3 shows the dependency parsing output for E1 mentioned in Section 1, and the regularized dependency graph with the bold circled nodes. We can see that the most indicative trigger owns can be found both in the shortest dependency path between Arcandor and KaDeWe and among the context words of Arcandor. In addition, context words such as shops can also infer the type of the candidate filler KaDeWe as an Organization.
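A minimal sketch of this graph construction, assuming the parse is available as (governor, dependent) token-index pairs and using networkx as a stand-in for whatever graph utilities were actually used:

```python
# Regularized dependency graph: shortest dependency path between query and
# filler, plus all words reachable from either within n hops.
import networkx as nx

def regularized_graph(dep_edges, query_idx, filler_idx, n_hops=1):
    g = nx.Graph()
    g.add_edges_from(dep_edges)
    # Shortest dependency path between query and candidate filler
    # (shortest_path on an unweighted graph is BFS-based, as in the paper).
    keep = set(nx.shortest_path(g, query_idx, filler_idx))
    # Add words connected to the query or the filler within n hops.
    for anchor in (query_idx, filler_idx):
        lengths = nx.single_source_shortest_path_length(g, anchor,
                                                        cutoff=n_hops)
        keep.update(lengths)
    return g.subgraph(keep)

# Toy parse with token indices: 0=Arcandor, 1=owns, 2=stake, 3=in, 4=KaDeWe
edges = [(1, 0), (1, 2), (2, 3), (3, 4)]
sub = regularized_graph(edges, query_idx=0, filler_idx=4)
print(sorted(sub.nodes))
```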
Graph based CNN
Previous work (Adel et al., 2016) split an input sentence into three parts based on the positions of the query and candidate filler and generated a feature vector for each part using a shared CNN. To compress the wide contexts, instead of taking the raw sentence directly as input, we split the regularized dependency graph into three parts: the query related subgraph, the candidate filler related subgraph, and the dependency path between query and filler. Each subgraph is taken as input to a CNN, as illustrated in Figure 2. We now describe the details of each part as follows.
Input layer: Each subgraph or path $G$ in the regularized dependency graph is represented as a set of dependent word pairs $G = \{\langle g_1, d_1\rangle, \langle g_2, d_2\rangle, \ldots, \langle g_n, d_n\rangle\}$. Here, $g_i$ and $d_i$ denote the governor and dependent respectively. Each word is represented as a $d$-dimensional pre-trained vector. For a word which does not exist in the pre-trained embedding model, we assign a random vector. Each word pair $\langle g_i, d_i\rangle$ is converted to an $\mathbb{R}^{2\times d}$ matrix. We concatenate the matrices of all word pairs and get the input matrix $M \in \mathbb{R}^{2n\times d}$.
Convolution layer: For each subgraph, $M \in \mathbb{R}^{2n\times d}$ is the input to the convolution layer, which is a list of linear layers with parameters shared by filtering windows of various sizes. We set the stride to 2 to obtain all word pairs from the input matrix $M$. For each word pair $p_{in} = \langle v_{g_i}, v_{d_i}\rangle$, we compute the output vector $p_{out}$ of a convolution layer as $p_{out} = \tanh(W \cdot p_{in} + b)$, where $p_{in}$ is the concatenation of the vectors for the words $v_{g_i}$ and $v_{d_i}$, $W$ denotes the convolution weights, and $b$ is the bias. In our work all three convolution layers share the same $W$ and $b$.
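A dimension-level sketch of this input construction and pair-wise convolution follows; numpy stands in for the authors' DNN framework, and the random initialisation of W and b is purely illustrative.

```python
# Each <governor, dependent> pair becomes two stacked d-dimensional word
# vectors; a width-2, stride-2 filter then yields one K-dimensional output
# column per pair, giving a feature map F in R^{K x N}.
import numpy as np

d, K = 4, 6                        # embedding size, number of filters
pairs = [("owns", "Arcandor"), ("owns", "stake")]
emb = {w: np.random.randn(d) for w in {t for p in pairs for t in p}}

W = np.random.randn(K, 2 * d)      # shared convolution weights
b = np.random.randn(K)             # shared bias

F = np.stack(
    [np.tanh(W @ np.concatenate([emb[g], emb[dep]]) + b)
     for g, dep in pairs],
    axis=1,
)
print(F.shape)  # (6, 2): K filters x N word pairs
```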
K-Max Pooling Layer: We follow Adel et al. (2016) and use K-max pooling to select the K largest values from each convolution layer. Later we will incorporate attention mechanisms into the K-max pooling.
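A minimal sketch of K-max pooling over a feature map, keeping the k largest activations per filter (whether the original positional order is preserved is an implementation detail not specified here):

```python
# K-max pooling: for each filter (row of F), keep the k largest values.
import numpy as np

def k_max_pool(F, k):
    # Sort each row in descending order and keep the first k entries.
    return -np.sort(-F, axis=1)[:, :k]

F = np.random.randn(6, 9)          # K filters x N positions
print(k_max_pool(F, k=3).shape)    # (6, 3)
```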
Fully Connected Layer: After obtaining the high-level features from the (attentive) pooling layer for each input subgraph, we flatten and concatenate the three outputs as input to a fully connected layer. This layer connects each input to every single neuron it contains, and learns non-linear combinations based on the whole input.
Output Layer: It takes the output of the fully connected layer as input to a softmax regression function to predict the type. We use negative log-likelihood as the loss function to train the parameters.
Local Attention
The basic idea of an attention mechanism is to assign a weight to each position of a lower layer when computing the representations of an upper layer, so that the model can be attentive to specific regions (Bahdanau et al., 2014). In SF, the indicative words are the most meaningful information that the model should pay attention to. Prior work applied attention from the entities directly to determine the most influential parts of the input sentence. Following the same intuition, we apply the attention from the query and candidate filler to the convolution output instead of the input, to avoid information vanishing during the convolution process (Yin et al., 2016).
For a query $q$ or filler $f$ that includes multiple words, we average the vectors of all individual words, and let $v$ be the concatenation of the query and filler vectors. For each convolution output $F$, which is a feature map in $\mathbb{R}^{K\times N}$, where $N$ is the number of word pairs from the input and $K$ is the number of filters, we define the attention similarity matrix $A \in \mathbb{R}^{N\times 1}$ as $A[i] = F[:, i]^{\top} \cdot L \cdot v$, where $L \in \mathbb{R}^{K\times 2d}$ is the transformation matrix between the concatenated vector $v$ and the convolution output, and $F[:, i]$ denotes the vector of column $i$ in $F$. Then we use the attention matrix $A$ to update each column of the feature map $F$ and generate an updated attention feature map $F^{att}$ with $F^{att}[:, i] = A[i] \cdot F[:, i]$.
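The following sketch reproduces the local attention shapes stated above ($v \in \mathbb{R}^{2d}$, $L \in \mathbb{R}^{K\times 2d}$, $F \in \mathbb{R}^{K\times N}$); the softmax normalisation over positions is an assumption, not something the text specifies.

```python
# Local attention: score each column of the feature map F against the
# concatenated query/filler vector v, then reweight the columns.
import numpy as np

d, K, N = 4, 6, 5
v = np.random.randn(2 * d)           # concat of query and filler vectors
L = np.random.randn(K, 2 * d)        # transformation matrix
F = np.random.randn(K, N)            # convolution output

A = F.T @ L @ v                      # one relatedness score per column
A = np.exp(A) / np.exp(A).sum()      # softmax over positions (assumed)
F_att = F * A[np.newaxis, :]         # reweight each column of F
print(F_att.shape)                   # (6, 5)
```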
Global Attention
Considering E1 in Section 1 again, the most discriminating word owns is not only related to the query and filler, but is also specific to the type org:subsidiaries. Local attention aims to identify the query and filler related contexts. In order to detect type-indicative parts, we design a global attention using pre-learned slot type representations. Prior work explored relation type attention with type vectors automatically learned from training data. However, in most cases the training data is not balanced, and some relation types cannot be assigned high-quality vectors with limited data. Thus, we designed two methods to generate pre-learned slot type representations.
First, we compose pre-trained lexical word embeddings of each slot type name to directly generate type representations. For example, for the type per:date of birth, we average the vectors of all single tokens (person, birth, date) within the type name as its representation.
Another new method is to take advantage of the large number of facts in an external knowledge base (KB) to represent slot types. We use DBPedia as the target KB and manually map KB relations to slot types. For example, per:alternate names can be mapped to alternativeNames, birthName and nickName in DBPedia. Thus for each slot type, we collect many triples ⟨query, slot, filler⟩ and use TransE (Bordes et al., 2013), which models slot types as translations operating on the embeddings of query and filler, to derive a representation for each slot type. Compared with the first, lexical slot type representation induction approach, TransE jointly learns entity and relation representations and can better capture the correlation and differentiation among various slot types. We will show the impact of these two types of slot type representations in Section 5.2.
Next we use the pre-learned slot type representations to guide the pooling process. Formally, let $R \in \mathbb{R}^{d\times r}$ be the matrix of all slot type vectors, where $d$ is the vector dimension and $r$ is the number of slot types. Let $F \in \mathbb{R}^{K\times N}$ be a convolution output, defined as in Section 4.1. We define the attention weight matrix $S$ as $S = F^{\top} \cdot W \cdot R$, where $W \in \mathbb{R}^{K\times d}$ is the transformation matrix between the pre-learned slot type representations and the convolution output. Given the weight matrix $S$, we generate the attention feature map $F^{att}$ by scaling each column of $F$ with its strongest slot-type score, $F^{att}[:, i] = \max_j S[i, j] \cdot F[:, i]$. We apply local attention to each convolution output of each subgraph, then feed the concatenation of the three flattened attentive pooling outputs to a fully connected layer to generate a robust feature representation. Similarly, another feature representation is generated based on global attention. We concatenate these two features and feed them to the softmax layer to obtain the predicted types.
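An analogous sketch for the global attention; reducing the position-by-type score matrix S to one weight per position with a max over slot types is an assumption consistent with the stated shapes.

```python
# Global attention: score every input position against every slot type
# via S = F^T W R, then reweight the columns of F.
import numpy as np

d, K, N, r = 4, 6, 5, 41
R = np.random.randn(d, r)            # pre-learned slot type vectors
W = np.random.randn(K, d)            # transformation matrix
F = np.random.randn(K, N)            # convolution output

S = F.T @ W @ R                      # R^{N x r}: position-by-type scores
weights = S.max(axis=1)              # strongest type match per position
F_att = F * weights[np.newaxis, :]
print(F_att.shape)                   # (6, 5)
```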
Data
For model training, Angeli et al. (2014) created some high-quality clean annotations for SF based on crowd-sourcing 2 . In addition, Adel et al. (2016) automatically created a larger size of noisy training data based on distant supervision, including about 1,725,891 positive training instances for 41 slot types. We manually assessed the correctness of candidate filler identification and their slot type annotation, and extracted a subset of their noisy annotations and combined it with the clean annotations. Ultimately, we obtain 23,993 positive and 3,000 negative training instances for all slot types.
We evaluate our approach in two settings: (1) relation extraction for all slot types, given the boundaries of query and candidate fillers, for which we use a script 3 to generate a test set (4892 instances) from the KBP 2012/2013 slot filling evaluation data sets with manual assessment; and (2) re-classifying and validating the results of slot filling systems, using the data from the KBP 2013 Slot Filling Validation (SFV) shared task, which consists of merged responses returned by 52 runs from 18 teams submitted to the Slot Filling task.
We used the May-2014 English Wikipedia dump to learn word embeddings based on the Continuous Skip-gram model (Mikolov et al., 2013).
Table 1: Hyper-parameters.
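As a sketch of this embedding step, a Continuous Skip-gram model can be trained with gensim; the toy corpus, vector size and window below are placeholders rather than the settings used in the paper.

```python
# Training skip-gram word embeddings on a tokenised corpus with gensim.
from gensim.models import Word2Vec

sentences = [["arcandor", "owns", "a", "stake", "in", "thomas", "cook"],
             ["merkel", "died", "in", "passau", "in", "1967"]]

model = Word2Vec(sentences, vector_size=100, window=5,
                 sg=1,            # sg=1 selects the skip-gram architecture
                 min_count=1)
print(model.wv["owns"].shape)     # (100,)
```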
Relation Extraction
We compare with several existing state-of-the-art slot filling and relation extraction methods on slot filling data sets. In addition, we design several variants to demonstrate the effectiveness of each component of our approach. Table 2 presents the detailed approaches and the features used by these methods. We report scores with Macro $F_1$ and Micro $F_1$. Macro $F_1$ is computed from the average precision and recall over all types, while Micro $F_1$ is computed from the overall precision and recall, which is more useful when the size of each category varies. Table 3 shows the comparison results on relation extraction.
We can see that by incorporating the shortest dependency path or the regularized dependency graph into neural networks, the model achieves more than 13% micro F-score gain over the methods previously adopted by state-of-the-art systems for SemEval relation classification. This confirms our claim that SF is a different and more challenging task than traditional relation classification, and also demonstrates the effectiveness of dependency knowledge for SF.
In addition, by incorporating local or global attention mechanism into the GraphCNN, the performance can be further improved, which proves the effectiveness of these two attention mechanisms. Our method finally achieves absolute 16% F-score gain by incorporating the regularized dependency graph and two attention mechanisms.
To better quantify the contribution of different attention mechanisms on each slot type, we further compared the performances on each single slot type. Table 4 shows the gain/loss percentage of the Micro F1 by adding local attention or global attention into GraphCNN for each slot type. We can see that both attentions yield improvement for most slot types.
Slot Filling Validation
In the TAC-KBP 2013 Slot Filling Validation (SFV) (Ji et al., 2011b) task, there are 100 queries. We first retrieve the sentences from the source corpus (about 2,099,319 documents) and identify the query and candidate filler using the offsets generated by each response, then apply our approach to re-predict the slot type. Figure 6 shows the F-scores based on our approach and the original systems. For a system which has multiple runs, we select one for comparison. We can see that our approach consistently improves the performance of almost all SF systems, with absolute gains in the range [-0.18%, 8.48%]. From the analysis of each system run, we find that our approach provides more gains to the SF systems which have lower precision. Previous studies (Tamang and Ji, 2011; Rodriguez et al., 2015; Viswanathan et al., 2015; Rajani and Mooney, 2016a; Yu et al., 2014a; Rajani and Mooney, 2016b) on SFV trained supervised classifiers based on features such as the confidence score of each response and system credibility. For comparison, we developed a new SFV approach: an SVM classifier based on a set of features (docId, filler string, original predicted slot type and confidence score, and the new predicted slot type and confidence score based on our neural architecture) for each response, to take advantage of the redundant information from the various system runs. Table 5 compares our SFV performance against previously reported scores on judging each response as true or false. We can see that our approach advances the state-of-the-art methods.
Detailed Analysis
Significance Test: Table 3 shows the results of multiple variants of our approach. To demonstrate that the differences between the results of these approaches are not random, we randomly sample 10 subsets (each containing 500 instances) from the testing dataset, and conduct a paired t-test between each pair of approaches over these 10 data sets to check whether the average difference in their performance is significant. Table 6 shows the two-tailed P values. The differences are all considered statistically significant, as all p-values are less than 0.05.
Table 2 (fragment): applying a pairwise ranking loss function over CNNs (features: word embedding, word position embedding); Context-CNN (Adel et al., 2016): splitting each sentence into three parts based on query and filler positions and applying a CNN to each part (features: word embedding); Our Methods: DepCNN, applying CNNs to the shortest dependency path between query and filler.
Impact of Training Data Size:
We examine the impact of the size of training data on the performance for each slot type. Table 4 shows the distribution of training data and the F-score of each type. We can see that for some slot types, such as per:date of birth and per:age, the entity types of their candidate fillers are easy to learn and differentiate from other slot types, and their indicative words are usually explicit; thus our approach can achieve a high F-score with limited training data (fewer than 507 instances). In contrast, for some slots, such as org:location of headquarters, the clues are implicit and the entity types of the candidate fillers are difficult to infer. Although the size of the training data is larger (more than 1,433 instances), the F-score remains quite low. One possible solution is to incorporate fine-grained entity types from existing tools into the neural architecture.
Impact of Wide Context Distribution:
We further compared the performance and distribution of instances with wide contexts across all slot types.
A context is considered wide if the query and candidate filler are separated by more than 7 words. The last column of Table 4 shows the performance gained by incorporating the regularized dependency graph (ContextCNN vs. GraphCNN). We can see that for most slot types with wide contexts, such as per:states of residence and per:employee of, the F-scores are improved significantly, while for some slots such as per:date of birth the F-scores decrease, because most date phrases do not exist in our pre-trained embedding model.
Error Analysis: Both the relation extraction and SFV results showed that more than 58% of the classification errors are spurious. Besides, we also observed many misclassifications that are caused by conflicting clues. There may be several indicative words within the contexts, but only one slot type is labeled, especially between per:location of death and per:location of residence. For example, in the following sentence: E4. Billy Mays query, a beloved and parodied pitchman who became a pop-culture figure through his commercials for cleaning prod-
In addition, as we mentioned before, slot typing heavily relies on the fine-grained entity type of the candidate filler, especially for the location (including city, state, country) related slot types. When the context is not specific enough, we can only rely on the pre-trained embeddings of candidate fillers, which may not be as informative as we hope. Such cases would benefit from introducing additional gazetteers such as Geonames 4.
Related Work
One major challenge of SF is the lack of labeled data to generalize a wide range of features and patterns, especially for slot types that are in the long tail of the quite skewed distribution of slot fills (Ji et al., 2011a). Previous work has mostly focused on compensating for the data needs by constructing patterns (Sun et al., 2011; Roth et al., 2014b), automatic annotation by distant supervision (Surdeanu et al., 2011; Roth et al., 2014a; Adel et al., 2016), and constructing trigger lists for unsupervised dependency graph mining. Some work (Rodriguez et al., 2015; Viswanathan et al., 2015; Hong et al., 2015; Rajani and Mooney, 2016a; Yu et al., 2014a; Rajani and Mooney, 2016b; Ma et al., 2015) also attempted to validate slot types by combining results from multiple systems. Our work is also related to dependency path based relation extraction. The effectiveness of dependency features for relation classification has been reported in previous work (Bunescu and Mooney, 2005; Zhao and Grishman, 2005; GuoDong et al., 2005; Jiang and Zhai, 2007; Neville and Jensen, 2003; Ebrahimi and Dou, 2015; Xu et al., 2015). Liu et al. (2015), Cai et al. (2016) and Xu et al. (2015) applied CNN, bidirectional recurrent CNN and LSTM to CONLL relation extraction and demonstrated that the most important information is included within the shortest paths between entities. Considering that the indicative words may not be included in the shortest dependency path between query and candidate filler, we enrich it to a regularized dependency graph by adding more contexts.
Conclusions and Future Work
In this work, we discussed the unique challenges of slot filling compared with traditional relation extraction tasks. We designed a regularized dependency graph based neural architecture for slot filling. By incorporating local and global attention mechanisms, this approach can better capture indicative contexts. Experiments on relation extraction and Slot Filling Validation data sets demonstrate the effectiveness of our neural architecture. In the future, we will combine additional rules, patterns, and constraints with DNN techniques to further improve slot filling.
Bipartite atlas in a collegiate football player - Not necessarily a contraindication for return-to-play: A case report and review of the literature
Background: Congenital malformations of the posterior arch of the atlas are rare, occurring in 4% of the population. Anterior arch aplasia is extremely rare and usually coexists with posterior arch anomalies, resulting in a split or bipartite atlas. This congenital anomaly is believed to be present in only 0.1% of the population. Case Description: A 19-year-old male collegiate football player presented with neck pain and upper extremity paresthesias after sustaining a tackle that forced neck hyperextension. Computed tomography revealed significant congenital bony anomalies of the cervical spine, with incomplete fusion of the anterior and posterior arches of the atlas; however, there was no evidence of any acute traumatic injury or fracture. Magnetic resonance imaging revealed increased edema in the pre-vertebral soft tissues around C1–C2, with a possible increase in signal within the fibrous ring of the anterior C1 ring. Flexion and extension imaging confirmed reduced range of motion and no instability. The patient was treated non-operatively, resumed normal activity and training regimens, and continued to do well clinically. Conclusion: We describe a rare case of split or bipartite atlas in a collegiate football athlete who sustained a neck injury during a tackle. The patient had no atlanto-axial instability or other clinical contraindications and was managed non-operatively, resuming full participation shortly thereafter with full resolution of symptoms.
INTRODUCTION
The cranio-vertebral junction is a common site for congenital anomalies and can include malformations of the atlas. Clefts or aplasia of the anterior and posterior arches of the atlas are well documented, but occur rarely. [1,3-8,10,11,14,15,19,21,22,28] The prevalence of congenital malformations of the posterior arch of the atlas can range from 4 to 5% of the population. [19,21] Anterior arch aplasia is extremely rare, occurring in approximately 0.1% of the population, and often only coexists with posterior arch anomalies. [19,21] Also referred to as "bipartite atlas," the prevalence of combined anterior and posterior defects is uncertain. [11] Cases of combined anterior and posterior clefts of the atlas are either asymptomatic or have minimal symptoms, with most found incidentally on imaging studies performed for other indications. [4,10] Injuries to the cervical spine constitute a common occurrence in those participating in athletic events and can have devastating consequences. These injuries happen primarily to athletes involved in the contact sports of football, wrestling, rugby, and ice hockey, with football injuries constituting the largest number of cases. [2,20] While guidelines for return-to-participation are more evident for those with spinal cord injury or cervical spine instability, [2,12,23-27] there are no clear data to guide the sports medicine practitioner in making such decisions for those with a congenital cleft of the atlas.
In this report, we describe a collegiate football player with a bipartite atlas, which was discovered during the work-up of a neck injury at practice, who ultimately returned to play. We review our patient's case, work-up, and treatment course, as well as the pertinent literature.
CASE REPORT
A 19-year-old male collegiate football player presented to the emergency department (ED) with persistent neck pain following an injury at practice. The contact occurred during a blocking drill and reportedly forced his neck into hyperextension. Upon impact, he immediately experienced high cervical neck pain localized to the left side of his neck, although over time this pain became more generalized. The patient was removed from play, and on the sidelines, no weakness or sensory abnormalities were noted on his examination. The pain was exacerbated with palpation at the base of the skull and did not radiate. Cervical range of motion with rotation, flexion, extension, and lateral flexion was limited secondary to pain. The patient had persistent pain later in the evening, not relieved with analgesics and rest, and was ultimately instructed to go to the ED for further work-up and evaluation.
In the ED, he continued to complain of neck pain; however, he remained neurologically intact and denied numbness, paresthesias, or weakness on initial evaluation. Of note, later that evening in the ED, the patient transiently experienced subjective numbness in his left lateral forearm in an ulnar distribution that later spontaneously resolved. He was an otherwise healthy individual with a medical history significant for one "stinger" in the past that completely resolved. Plain films revealed no acute fracture or dislocation [Figure 1]. A computed tomography (CT) scan of his cervical spine demonstrated absent bony fusion of the anterior midline synchondrosis, as well as the midline posterior arch of the C1 bony ring [Figure 2a-c]. The osseous components had well-developed cortical margins, strongly suggesting that the midline discontinuities were not the result of trauma. The patient additionally underwent magnetic resonance imaging (MRI) of the cervical spine that demonstrated prominent edema in the pre-vertebral soft tissues extending from the level of the clivus to the vertebral body of C5 [Figure 3a]. Images at the level of the C1 bony ring demonstrated edema of the paramedian ventral soft tissues at, above, and below the level of the unfused anterior midline synchondrosis [Figure 3b and c]. No ligamentous disruption was identified.
Figure 1: Lateral cervical spine X-ray. Lateral plain film obtained on admission revealed some pre-vertebral soft tissue swelling; however, no acute fracture or dislocation was appreciated.
The patient's overall clinical picture and imaging findings were consistent with a cervical sprain/strain with an incidental finding of a congenital bipartite atlas. Given his persistent neck pain, tenderness, and findings of pre-vertebral soft tissue swelling, the patient was advised to remain in a rigid cervical collar until seen in follow-up 1 week later. He was treated symptomatically with non-steroidal anti-inflammatory drugs (NSAIDs) and a muscle relaxant. He was temporarily removed from contact drills, but was cleared to ride a stationary bike and to do isometric neck strengthening with the athletic training staff, while remaining in the collar. Follow-up 1 week later revealed diminished tenderness and increased range of motion. Flexion/extension radiographs were obtained and demonstrated no evidence of instability or misalignment. The patient was allowed to resume usual activity and his cardiovascular training regimen with symptomatic restrictions. We opted to wait until his MRI findings resolved before resuming contact activity. Repeat MRI of his cervical spine approximately 1.5 months following the injury demonstrated complete resolution of the pre-vertebral swelling and hyperintensity, and an unchanged appearance of the unfused anterior midline synchondrosis. Even after a discussion regarding the uncertain future risk of cervical spine injury, and having demonstrated understanding of the potential consequences, the patient desired to continue playing. Given the absence of any signs of instability, the patient was cleared to fully return to play. He resumed contact activity, both practice and game-time play, without further issues and continues to remain asymptomatic.
Development and incidence
The embryological development of the atlas is complex and various developmental anomalies have been reported.
Three ossification centers are responsible for its structural formation: an anterior ossification center which gives rise to the anterior tubercle, and two lateral masses which form the corresponding lateral masses and posterior arch. [5,7,15] A fourth ossification center has been cited on occasion and appears to form the posterior tubercle. [21] During gestation, the two lateral ossification centers extend posteriorly toward the midline, and form early portions of the posterior arch. [21] Throughout early development, these primitive arches continue to advance, eventually fusing around the fourth year of life. [4] Incomplete fusion of the posterior atlas is estimated to occur in 4% of the population, and is believed to be a result of a failure in chondrogenesis rather than a failure in ossification. [9,14,21] Due to the relative heterogeneity of posterior malformations, a classification scheme was developed by Currarino et al. to categorize them based upon the extent of absence of the posterior tubercle. [6] Of the five categories, two general types exist: median clefts (Type A) and various degrees of posterior arch dysplasia (Types B-E), [6,28] although the former appears to be present most often. [14,21] Fusion defects involving the anterior arch of the atlas are much less common, occurring in approximately 0.1% of patients. [3,8,17,28] Numerous studies have estimated anterior arch ossification and synchondrosis fusion to occur somewhat later than for the posterior arch, typically between ages 6 and 8. [16,18] Recent studies, however, have suggested that ossification may take even longer in some individuals. [13,19] Interestingly, unlike anomalies of the posterior arch, anterior clefts rarely occur in isolation and often coexist with posterior arch defects. [10,21] This results in a split, or bipartite atlas, [9,22] as observed in our patient.
Clinical implications
A review of the current literature yielded a limited number of previously reported bipartite atlas cases. In general, these anomalies are considered relatively benign and most cases are found incidentally in asymptomatic patients. A few other reports have been described in the setting of athletic participation. [4,11] One of the reports made no mention of return-to-play recommendations, but noted that the football player was treated symptomatically and ultimately remained asymptomatic. [4] Jans et al. chose to recommend that their patient refrain from further participation in contact sports and "adjust his recreational activities;" however, the authors acknowledged a lack of hard evidence on which this decision was based. [11] Of note, the patient in that case report had no studies demonstrating any evidence of instability. [11] It is well established that there is a significant risk of cervical spine injury in athletes participating in American football. [20] The rare nature of this cervical anomaly certainly accounts for the lack of literature available to guide management in these situations. The patient lacked any absolute or relative contraindications to return-to-play as designated by accepted treatment guidelines. [2,12,24] Our patient was asymptomatic up until this point in his life, including many years of playing contact sports, and dynamic imaging with flexion-extension radiographs at the time of his cervical strain/sprain revealed no signs of injury or instability. As such, the patient was treated with conservative supportive therapy and we decided to withhold the patient from contact sports until the MRI hyperintensities resolved. The patient was extensively counseled regarding the unclear potential risk of injury from further participation in football and the uncertainty of whether his risk was any greater than that of players without his cervical spine anomaly. He demonstrated understanding of the inability to quantify any actual risk and afterward stated unequivocally that he was willing to accept all risks and potential consequences, and wanted to continue in football without restriction. The patient was eventually allowed to return to play and participated in both football practice and game-time play without issue.
CONCLUSION
Our report presents an interesting case of an athlete who suffered a cervical sprain/strain and was incidentally found to have a bipartite atlas on subsequent work-up. Most patients with this congenital anomaly are asymptomatic and only rarely are there issues with cervical spine stability. There is little in the way of medical literature to guide management regarding return-to-play in this situation; however, our case provides an example in which the patient was allowed to return to participation in American football. If the player is asymptomatic and there is no evidence of instability on imaging, a bipartite atlas should not necessarily represent a contraindication for return-to-play. Further research is needed to help define the most appropriate management recommendations for patients with congenital atlas anomalies, particularly bipartite atlas.
The MUCHFUSS project - Searching for the most massive companions to hot subdwarf stars in close binaries and finding the least massive ones
The project Massive Unseen Companions to Hot Faint Underluminous Stars from SDSS (MUCHFUSS) aims at finding hot subdwarf stars with massive compact companions (massive white dwarfs M>1.0 Msun, neutron stars or stellar mass black holes). The existence of such systems is predicted by binary evolution theory and some candidate systems have been found. We classified about 1400 hot subdwarf stars from the Sloan Digital Sky Survey (SDSS) by colour selection and visual inspection of their spectra. Stars with high velocities have been reobserved and individual SDSS spectra have been analysed. In total 201 radial velocity variable subdwarfs have been discovered and about 140 of them have been selected as good candidates for follow-up time resolved spectroscopy to derive their orbital parameters and photometric follow-up to search for features like eclipses in the light curves. Up to now we found seven close binary sdBs with short orbital periods ranging from 0.21 d to 1.5 d and two eclipsing binaries with companions that are most likely of substellar nature. A new pulsating sdB in a close binary system has been discovered as well.
Introduction
A large fraction of the sdB stars (≃ 50%) are short period binaries (Maxted et al. 2001; Napiwotzki et al. 2004) with periods ranging from only 0.07 d to more than 10 d. Close binary sdBs are most likely formed by common envelope (CE) ejection (Han et al. 2002, 2003). However, it is difficult to determine the nature of the close companions in sdB binaries. Because most of them are single-lined, only lower mass limits have been derived from the binary mass functions, which are in general compatible with late main sequence stars of spectral type M or compact objects like white dwarfs. Only in rare cases (e.g. eclipsing systems) is it possible to distinguish between these two options.
Subdwarf binaries with massive WD companions turned out to be candidates for supernova type Ia (SN Ia) progenitors because these systems lose angular momentum due to the emission of gravitational waves and shrink. Mass transfer or the subsequent merger of the system may cause the WD to reach the Chandrasekhar limit and explode as a SN Ia. One of the best known candidate systems for the double degenerate merger scenario is the sdB+WD binary KPD 1930+2752 (Maxted et al. 2000; Geier et al. 2007). Geier et al. (2010a,b) analysed high resolution spectra of single-lined sdB binaries. Because the inclinations of these systems are unknown, additional information is required to derive masses. Geier et al. (2010a,b) measured the surface gravities and projected rotational velocities. Assuming synchronised orbits, the masses and the nature of the unseen companions were constrained. Surprisingly, some companions may be either massive white dwarfs, neutron stars (NS) or stellar mass black holes (BH). However, the assumption of orbital synchronisation in close sdB binaries was shown to be not always justified, and the analysis suffers from selection effects (Geier et al. 2010b). The existence of sdB+NS/BH systems is predicted by binary evolution theory (Podsiadlowski et al. 2002; Pfahl et al. 2003; Yungelson & Tutukov 2005; Nelemans 2010). The formation channel includes two phases of unstable mass transfer and one supernova explosion, and the fraction of those systems is consistently predicted to be about 1-2%. If the companion were a neutron star, it could be detectable by radio observations as a pulsar. Coenen et al. (2011) searched for pulsed radio emission at the positions of four candidate systems from Geier et al. (2010b) using the Green Bank radio telescope, but did not detect any signals.
We started a radial velocity (RV) survey (Massive Unseen Companions to Hot Faint Underluminous Stars from SDSS 1, MUCHFUSS) to find sdBs with compact companions like supermassive white dwarfs (M > 1.0 M ⊙), neutron stars or black holes (Geier et al. 2011a,b). The same selection criteria that we applied to find such binaries are also well suited to single out hot subdwarf stars with constant high radial velocities in the Galactic halo, like extreme population II and hypervelocity stars (see Heber et al., these proceedings; Tillich et al. 2011).
1 Sloan Digital Sky Survey
Colour and RV selection
For the MUCHFUSS project the target selection is optimised to find massive compact companions in close orbits around sdB stars (for details see Geier et al. 2011a). The SDSS spectroscopic database is the starting point for our survey. While the target selection presented in Geier et al. (2011a) includes SDSS Data Release 6, we have now extended the selection to Data Release 7. Hot subdwarf candidates were selected by applying a colour cut to SDSS photometry. All point source spectra within the colours u − g < 0.4 and g − r < 0.1 were selected and downloaded from the SDSS Data Archive Server 2. By visual inspection we selected and classified ≃ 10 000 hot stars. The sample contains 1369 hot subdwarfs.
We excluded sdBs with radial velocities (RVs) lower than ±100 km s −1 to filter out binaries with normal disc kinematics, by far the majority of the sample. Another selection criterion is the brightness of the stars. Most objects much fainter than g = 19 mag have been excluded.
Survey for RV variable stars
Second epoch medium resolution spectroscopy (R = 1800 − 4000) was obtained using ESO-VLT/FORS1, WHT/ISIS, CAHA-3.5m/TWIN and ESO-NTT/EFOSC2. Up to now we have reobserved 88 stars. Second epoch observations by SDSS have been used as well. We discovered 58 RV variable systems in this way.
The SDSS spectra are co-added from at least three individual "sub-spectra" with typical exposure times of 15 min taken consecutively. Hence, SDSS spectroscopy can be used to probe for radial velocity variations on short timescales. We have obtained the sub-spectra for all sdBs brighter than g = 18.5 mag. From the inspection of these data, we discovered 143 new sdB binaries with radial velocity variations on short time scales (≃ 0.03 d). In total we found 201 new RV variable hot subdwarf stars (see Fig. 1).
In addition, 30 He-sdOs show signs of RV variability. This fraction was unexpected, since in the SPY sample only 4% of these stars turned out to be RV variable (Napiwotzki 2008). However, it is not yet clear what causes this RV variability. Up to now we have not been able to derive the orbital parameters of any such object and prove that it is a close binary star.
Selection of candidates with massive companions
In order to select the most promising targets for follow-up, we carried out numerical simulations and estimated the probability for a subdwarf binary with known RV shift to host a massive compact companion. We created a mock sample of sdBs with a close binary fraction of 50% and adopted the distribution of orbital periods of the known sdB binaries. Two RVs were taken from the model RV curves at random times and the RV difference was calculated for each of the 10^6 binaries in the simulation sample. Since the individual SDSS spectra were taken within short timespans, another simulation was carried out, where the first RV was taken at a random time, but the second one just 0.03 d later. Our simulation gives a quantitative estimate based on our current knowledge of the sdB binary populations (for details see Geier et al. 2011a). The extended sample of promising targets including SDSS DR7 consists of 140 objects in total. These objects either show significant RV shifts (> 30 km s −1) within 0.03 d (114 stars) or high RV shifts (100 − 300 km s −1) within more than one day (26 stars).
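A simplified version of this mock-sample simulation can be sketched as follows; the period and semi-amplitude distributions used here are placeholders, not the ones adopted from the known sdB binary population.

```python
# Monte Carlo estimate of RV differences for mock sdB binaries on circular
# orbits: two RVs are drawn from each sine curve, either at random epochs
# or with the second epoch only 0.03 d after the first.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

P = 10 ** rng.uniform(-1.2, 1.0, n)   # periods in days (assumed distribution)
K = rng.uniform(20.0, 300.0, n)       # RV semi-amplitudes in km/s (assumed)
phi0 = rng.uniform(0.0, 1.0, n)       # random orbital phase at first epoch

t1, t2 = rng.uniform(0, 100, n), rng.uniform(0, 100, n)
drv_random = np.abs(K * (np.sin(2 * np.pi * (phi0 + t1 / P))
                         - np.sin(2 * np.pi * (phi0 + t2 / P))))
drv_short = np.abs(K * (np.sin(2 * np.pi * phi0)
                        - np.sin(2 * np.pi * (phi0 + 0.03 / P))))

# Fractions exceeding the two selection thresholds used in the survey.
print((drv_short > 30).mean(), (drv_random > 100).mean())
```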
Sample statistics
The classification of the hot subdwarf sample is based on the existence, width, and depth of helium and hydrogen absorption lines as well as the flux distribution between 4000 and 6000 Å. Subdwarf B stars show broadened hydrogen Balmer and He I lines, sdOB stars He II lines in addition, while the spectra of sdO stars are dominated by weak Balmer and strong He II lines depending on the He abundance. A flux excess in the red compared to the reference spectrum, as well as the presence of spectral features such as the Mg I triplet at 5170 Å or the Ca II triplet at 8650 Å, were taken as indications of a late-type companion.
In total we found 1369 hot subdwarfs, consistent with the preliminary number of hot subdwarfs (1409) found by Kleinman (2010) in SDSS-DR7. Of these, 983 belong to the class of single-lined sdBs and sdOBs. Features indicative of a cool companion were found for 98 of the sdBs and sdOBs. 9 sdOs have main-sequence companions, while 262 sdOs, most of which show helium enrichment, are single-lined.
The fraction of close binaries among the hot subdwarf stars in SDSS can be estimated by taking a look at the objects with more than one epoch of spectroscopy. 52 stars (34 sdB/sdOB, 7 He-sdO, 11 sdB+MS) from our sample have at least two epochs of observations. 53% of the sdBs and sdOBs are RV variable, while only one He-sdO (≃ 14%) and one sdB with a visible companion (≃ 9%) show variability. Due to the small sample size the last two numbers should be regarded as upper limits at most. The binary fraction of the sdB stars is closer to the one found in the SPY project (≃ 40%; Napiwotzki et al. 2004) than to the higher fraction of ≃ 70% reported by Maxted et al. (2001).
Spectroscopy follow-up
Follow-up medium-resolution (R = 1200−4000) spectra were taken during dedicated follow-up runs with ESO-NTT/EFOSC2, WHT/ISIS, CAHA-3.5m/TWIN, INT/IDS, SOAR/Goodman and Gemini-N/GMOS. Orbital parameters of eight sdB binaries discovered in the course of the MUCHFUSS project have been determined so far (Geier et al. 2011b,c).
Since the programme stars are single-lined spectroscopic binaries, only their mass functions f_m = M_comp^3 sin^3(i) / (M_comp + M_sdB)^2 = P K^3 / (2πG) can be calculated. Although the RV semi-amplitude K and the period P can be derived from the RV curve, the sdB mass M_sdB, the companion mass M_comp and the inclination angle i remain free parameters. Adopting the canonical mass for core helium-burning stars, M_sdB = 0.47 M⊙, and i < 90°, we derive a lower limit for the companion mass.
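As a worked example, the minimum companion mass follows from setting i = 90° and solving the resulting cubic numerically; the P and K values below are illustrative.

```python
# Minimum companion mass from the mass function quoted above:
# f_m = M_comp^3 sin^3(i) / (M_comp + M_sdB)^2 = P K^3 / (2 pi G),
# evaluated at sin(i) = 1 and M_sdB = 0.47 Msun.
import numpy as np
from scipy.optimize import brentq

G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
M_sun = 1.989e30       # solar mass [kg]

def min_companion_mass(P_days, K_kms, M_sdB=0.47):
    P = P_days * 86400.0               # period in seconds
    K = K_kms * 1e3                    # semi-amplitude in m/s
    f_m = P * K**3 / (2*np.pi*G) / M_sun          # mass function in Msun
    g = lambda M: M**3 / (M + M_sdB)**2 - f_m     # root gives the minimum mass
    return brentq(g, 1e-6, 100.0)

# A short-period system with K = 200 km/s and P = 0.3 d:
print(f"M_comp,min = {min_companion_mass(0.3, 200.0):.2f} Msun")  # ~0.70 Msun
```

With these illustrative values the minimum mass already exceeds 0.45 M⊙, which is exactly the regime discussed in the following paragraph.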
Depending on this minimum mass, a qualitative classification of the companions' nature is possible in certain cases. For minimum companion masses lower than 0.45 M⊙ a main-sequence companion cannot be excluded, because its luminosity would be too low to be detectable in the optical spectra (Lisker et al. 2005). In this case the companion can be a compact object like a WD or a late main-sequence star. If the minimum companion mass exceeds 0.45 M⊙ and no spectral signatures of the companion are visible, it must be a compact object. If this mass limit exceeds 1.00 M⊙, or even the Chandrasekhar limit (1.40 M⊙), the existence of a supermassive WD or even an NS or BH companion is proven.
The minimum companion masses of seven binaries are similar (0.32−0.41 M⊙). From these minimum masses alone the nature of the companions cannot be constrained unambiguously. However, the fact that all seven objects belong to the sdB binary population with the highest minimum masses illustrates that our target selection is efficient and singles out sdB binaries with massive companions (Geier et al. 2011b).
Photometry follow-up
Photometric follow-up allows us to clarify the nature of the companions. Short-period sdB binaries with late main-sequence or substellar companions show variability in their light curves caused by the irradiated surfaces of the cool companions facing the hot subdwarf stars. If this so-called reflection effect is present, the companion is most likely a main-sequence star. If not, the companion is most likely a compact object. In the case of the short-period system J1138−0035, a light curve taken by the SuperWASP project shows no variation exceeding ≃ 1%. The companion is therefore most likely a white dwarf (Geier et al. 2011b). We obtained follow-up photometry with the Mercator telescope and the BUSCA instrument mounted on the CAHA-2.2m telescope. In this way we discovered the first eclipsing sdB binary hosting a brown dwarf companion, J082053.53+000843.4, with a companion mass ranging from 0.045 to 0.068 M⊙ (Geier et al. 2011c).
The very similar eclipsing system J162256.66+473051.1 was discovered serendipitously (see Fig. 2). A preliminary analysis shows that the orbital period is very short (≃ 0.07 d) and the RV semi-amplitude quite low (≃ 47 km s−1). The companion is most likely a substellar object as well. The high success rate in finding these objects shows that our target selection not only singles out sdB binaries with high RV amplitudes, but also systems with very short orbital periods. Low-mass stellar and substellar companions may yet play an underestimated role in the formation of sdB stars (see Geier et al., these proceedings).
Most recently, we detected p-mode pulsations in the sdB J012022.94+395059.4 (FBS 0117+396; Geier et al. 2011a) as well as a longer trend indicative of a reflection effect in a light curve taken with BUSCA. Only a few of the known short-period sdB pulsators (sdBVr) are in close binary systems. More observations are needed to determine the orbital parameters of this system.
Summary
The MUCHFUSS project aims at finding hot subdwarf stars with massive compact companions. We identified 1369 hot subdwarfs by colour selection and visual inspection of the SDSS-DR7 spectra. The best candidates for massive compact companions are followed up with time-resolved medium-resolution spectroscopy. Up to now, orbital solutions have been found for eight single-lined binaries. Seven of them have large minimum companion masses compared to the sample of known close binaries, which shows that our target selection works quite well. However, it turns out that our selection strategy also allows us to detect low-mass companions to sdBs in very close orbits. We discovered an eclipsing sdB with a brown dwarf companion and a very similar candidate system in the course of our photometric follow-up campaign. These early results encourage us to go on, because they demonstrate that MUCHFUSS will find both massive and substellar companions to sdB stars.
Figure 1. Highest radial velocity shift between individual spectra (ΔRV) plotted against time difference between the corresponding observing epochs (ΔT). The dashed horizontal line marks the selection criterion ΔRV > 100 km s−1, the dotted vertical line the selection criterion ΔT < 0.1 d. All objects fulfilling at least one of these criteria lie outside the shaded area and belong to the top candidate list for the follow-up campaign. The filled diamonds mark sdBs, while the open squares mark He-sdOs.
Figure 2. Phased light curves of J162256.66+473051.1 taken with BUSCA (UV, B, R, IR bands). Primary and secondary eclipses can be clearly seen, as well as the sinusoidal shape caused by the reflection effect.
"year": 2011,
"sha1": "f09d48ce9d12a7264c1dc7347f757ed8591e8c8b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "f09d48ce9d12a7264c1dc7347f757ed8591e8c8b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Back to the Surplus: An Unorthodox Neoclassical Model of Growth, Distribution and Unemployment with Technical Change
The article examines how institutions, automation, unemployment and income distribution interact in the context of a neoclassical growth model where profits are interpreted as a surplus over costs of production. Adjusting the model to the experience of the US economy, I show that joint variations in labor institutions and technology are required to provide reasonable explanations for the behavior of income shares, capital returns, unemployment, and the big ratios in macroeconomics. The model offers new perspectives on recent trends by showing that they can be analyzed by the interrelation between the profit-making capacity of capitalist economies and the political environment determining labor institutions.
Introduction
Over the past 50 years, the US economy experienced a large decline in the labor share, a considerable rise in capital returns, and a surge in income and wage inequality. Setting aside short-run fluctuations, this all occurred almost simultaneously with a fall in the rate of unemployment, an increase in the intensive use of capital, and a low and stable rate of inflation. It also coincided with institutional changes which affected the balance of power between workers and firms, reflected, among other things, in a reduction in the incidence of unions, a fall in the real value of minimum wages, and smaller benefits per unemployed relative to average labor productivity.
These empirical regularities have attracted a great deal of attention but, given the complexity of the phenomena, have been studied for the most part in isolation. A case in point is the study of the falling labor share, which has been explained by four main hypotheses: technological change, represented in the declining relative price of capital (Karabarbounis & Neiman, 2014; Piketty & Zucman, 2014); automation and offshoring, normally described by a task-based formalism (Acemoglu & Restrepo, 2018; Grossman & Rossi-Hansberg, 2008); enhanced market power of large firms, usually measured by rising markups (Autor, Dorn, Katz, Patterson, & Van Reenen, 2020; De Loecker, Eeckhout, & Unger, 2020); and the eroding bargaining power of workers (DiNardo, Fortin, & Lemieux, 1996; Stansbury & Summers, 2020; Farber, Herbst, Kuziemko, & Naidu, 2021).
While each one of these hypotheses has been successful in explaining part of the story behind the declining labor share, they are inconsistent with or silent about other key empirical regularities. Narratives based on technological change, for instance, depend on the existence of gross substitution between labor and capital, which is at odds with the findings of numerous studies; see Chirinko (2008) for a survey.
The automation hypothesis, though it provides a general equilibrium rationale for the decline in the labor share, cannot account for the rising return of capital.[1] The market power hypothesis, as noted by Stansbury and Summers (2020), is hard to reconcile with the falling rates of unemployment and inflation, given that increasing monopoly or monopsony power is likely associated with less hiring and a higher pass-through of profits to prices. Lastly, the hypothesis of the eroding bargaining power of workers, though it offers a unified explanation for the rise in profitability, the decreasing wage share, the falling rate of unemployment, and the low and stable inflation, cannot account for the increase in the intensive use of capital and provides no endogenous theory for the determination of the rate of return of capital.
The main contribution of this paper is to offer a framework for an endogenous theory of aggregate profits in a competitive general equilibrium environment with which to understand the aforementioned empirical regularities. This approach builds upon Sraffa's (1960) critique of the mainstream interpretation of "costs of production," which is known to be logically inconsistent with the presence of equilibrium prices with uniform rates of return (Garegnani, 1990; Eatwell, 2019).[2] By rejecting the neoclassical interpretation of costs, the paper restores the view of aggregate profits as a surplus, and with it the need to refer to political and institutional factors as key determinants of income distribution. In essence, though the model accepts the premise of competitive-rational behavior of neoclassical economics, it rejects the view that commonly accepted concepts like "aggregate capital," "marginal productivity," and "costs of production" have a coherent economic interpretation independently of how the aggregate surplus is divided in society.[3]

Framework. The argument of the paper is divided in three parts. First, I present an environment with a chain of intermediate and final good producers who hire labor services from workers and buy new machines from capital good firms for the production of the final good (net aggregate output). Here the final good can be used for consumption and as a means of production of new capital goods, which helps highlight the principle that capital is generally a produced commodity and that the costs of production of the final good cannot be defined independently of the rate of return of capital. This leads to an indeterminacy problem analogous to that described by Sraffa (1960), in the sense that the price equations of the system hold a degree of freedom that cannot be determined by the technology of production.

The second part of the model presents a solution to the indeterminacy problem by introducing a "closure" to the price equations based on the dynamic interaction between unemployment, technical change (described by the mechanization and creation of tasks) and income distribution. This is represented by merging the task-based formalism of Zeira (1998) and Acemoglu and Restrepo (2018), the equilibrium unemployment literature, and the capital adjustment cost theory of investment. The task-based formalism provides a microfoundation for the cost structure of the final good and helps examine how the mechanization and creation of tasks is interrelated with the dynamics of unemployment and aggregate profits. The equilibrium unemployment formalism presents a rationale explaining how the labor market interacts with income distribution and establishes clear economic principles, based on bargaining processes, for the determination of the rate of return of capital. Lastly, the theory of capital adjustment costs is used to highlight the fact that capital is generally owned by firms and that profit maximization problems can be best understood by explicitly acknowledging that production takes place in time (Kydland & Prescott, 1982; Lucca, 2007).

Third, in order to show how labor market institutions affect the rate of unemployment and the rate of return, I follow Smith (1976) and Becker and Mulligan (1997) and characterize the relative bargaining power of labor in terms of heterogeneous discount factors for capitalists and workers.[4] This is formalized using the principle that bargaining processes can take the form of alternating offers models (Binmore, Rubinstein, & Wolinsky, 1986), which has the desirable property of portraying labor power in terms of endogenous discount factors determined by current institutional and political settings. The model then shows that the relative welfare condition of social classes is central for the determination of bargaining strengths and these are, in turn, key determinants of aggregate profits.

[1] Hubmer and Restrepo (2021) extend the task-based framework to include positive markups. In doing so, however, they maintain similar problems as those encountered by the market power hypothesis.

[2] It is commonly thought that the criticisms towards the logical consistency of the marginalist approach are confined to the theories of distribution using aggregate production functions (see, for example, Hahn, 1982), when in fact the critiques that followed Sraffa's (1960) contribution show that neoclassical theories which attempt to explain equilibrium prices and income distribution based on supply and demand equations alone are generally indeterminate in competitive environments with a uniform rate of return (Eatwell, 1990; Garegnani, 1990). This conclusion also covers the Arrow-Debreu model, which only provides a coherent solution to the price equations by abandoning the effort of determining a long-run equilibrium with a single rate of profit.

[3] In this paper, instead of attempting to construct multi-sectoral microfoundations to aggregate production functions like, for example, Baqaee and Farhi (2019), I reject from the start the principle that income distribution can be determined by the neoclassical theory of value based on the equilibrium of demand and supply equations. This does not state that neoclassical models using a single capital good are necessarily inconsistent, but given that the conclusions derived from a single-good economy cannot be extended to disaggregated systems, it seems appropriate to start from a foundation which recognizes the logical inconsistencies that arise in general when economic analysis is conducted in isolation of the institutions of society.

[4] In discussing the determination of average wages as a result of the negotiation between capitalists and workers, Smith (1976, p. 84) summarized the importance of time preferences noting that "in all such disputes the masters can hold out much longer. A landlord, a farmer, a master manufacturer, or merchant, though they did not employ a single workman, could generally live a year or two upon the stocks which they have already acquired. Many workmen could not subsist a week, few could subsist a month, and scarce any a year without employment. In the long-run the workman may be as necessary to his master as his master is to him; but the necessity is not so immediate."
Contributions. Building on this unorthodox neoclassical synthesis, the model characterizes the restrictions under which the economy reaches a balanced growth path with automation and creation of new tasks (Acemoglu & Restrepo, 2018), equilibrium unemployment, positive rates of return, and falling investment-good prices with less-than-unitary elasticity of substitution (Grossman, Helpman, Oberfield, & Sampson, 2017). A key feature of this characterization of balanced growth paths is that the economy is represented as a circular flow of values sustained by its capacity of generating aggregate profits. In this respect, it is shown that specific forms of labor institutions and technology are required to guarantee sustainable growth, i.e., it cannot be taken for granted that the economy will always and forever generate sufficiently high profits to reproduce itself in time.
The steady-state general equilibrium offers a rich but tractable framework that illustrates how automation and varying institutional support to workers shape the asymptotic aggregate outcomes of the economy. Under suitable institutional settings, automation reduces the stationary values of employment, wages and the wage share, while it raises the steady-state values of the capital-output ratio and the share of investment expenditures in aggregate output. In turn, rising institutional support to workers (expressed by higher unemployment benefits or other related factors which increase the outside options to employment) results in a long-run decline in employment and profitability, but it increases stationary wages, the labor share, the capital-output ratio and the share of investment expenditures in aggregate output.
The theoretical framework also establishes a three-way interaction between labor institutions, technology and income distribution. It highlights, for example, how varying support to workers may indirectly affect income distribution by impacting the assignment of tasks between labor and capital. This provides an endogenous theory showing that if, for instance, labor is "overpriced" because of institution-related factors, firms can respond by investing in new technologies that can effectively replace workers (Acemoglu & Autor, 2011). Additionally, it provides explicit bounds defining the extent to which labor institutions can support workers before they become a threat to the reproduction of capital, suggesting there may exist a dichotomy between the social and economic sphere of citizenship (recognized by elements of the welfare state) and the profit-making capacity of capitalist societies. Lastly, it highlights the key role that institutions can have in protecting workers from the impact of unregulated market forces by relating the potential negative effects of automation to increasing rates of unemployment, lower real wages, and greater income inequality.
From an empirical perspective, the model shows that a combination of institutional and technological factors is required to provide reasonable explanations for the behavior of income distribution, profitability, unemployment and the big ratios in macroeconomics in the postwar US economy. The technology hypothesis, expressed by a reduction in the measure of tasks performed by labor, can account for a large bulk of the decrease in the labor share following the 2000s, but is inconsistent with the surge in profitability and the reduction in the rate of unemployment in the wake of the 1980s. Conversely, the hypothesis based on labor institutions provides a plausible story for the fall in the profit share following the policies of the Great Society in the 1960s, by relating the rise in the welfare state with an increase in workers' outside options to employment, and for the reduction in the rate of unemployment and the rise in profitability since the early 1980s, by relating these economic outcomes with the conservative retrenchment that followed (Pierson, 1994). The labor institutions hypothesis, however, falls short in explaining the variations of the capital-output ratio, suggesting it is only part of the story describing the main changes in the US economy.
To evaluate the plausibility of each hypothesis, I compare the inferred changes in technology and labor institutions derived from the model and show that they are consistent with the history of welfare and technical change described by Pierson (1994), Noble (1997), Frey (2019), Dechezleprêtre, Hémous, Olsen, and Zanella (2019) and Mann and Püttmann (2021). This not only shows that the model can provide a basis for interpreting some of the key empirical regularities of the US economy, but also that a thorough understanding of macroeconomic trends can strongly benefit from a careful examination of social policies associated with the rules of liberal democracy defining the accord between capital and labor.
Interpretation of the Contributions. The logic behind this article is based on the principle that capitalist economies are "open" systems which cannot be detached from the specific institutional context of society. Formally, this is captured by interpreting profits as surplus over costs of production, since it implies that: (i) firms cannot take profits as given when initiating production; (ii) profits cannot be determined by the technology of firms; and (iii) costs of production cannot be determined independently of the class distribution of income.
Altogether, (i)-(iii) create an analogy of Sraffa's (1960) critique of the neoclassical theory of distribution. It is from this perspective that the model is perceived as unorthodox, even though it maintains the use of production functions and rational agents as tools for counterfactual analysis. In this interpretation, however, the aggregate production function resulting from the task-based formalism is merely expressing an accounting identity in an economy with time-varying wage shares and capital-output ratios;[5] meaning that the inherent social element characterizing the creation of profits is not hidden under the shadow of the production function, but is rather treated explicitly as a social outcome resulting from bargaining processes between capitalists and workers. Ultimately, this structure entails that no single contribution of the model can be derived independently of the historically specific power relations defining the accord between capital and labor, all of which may depend on the rate of unemployment, technical change, minimum wages, union density, austerity policies, etc.
Related Literature. This paper contributes to different areas of the literature. First, the theoretical framework builds on the task models of Zeira (1998), Acemoglu and Autor (2011), Acemoglu and Restrepo (2018) and Nakamura and Zeira (2018). Relative to this literature, this paper proposes a bridge to reconcile the equilibrium unemployment literature with the economic decision of task automation. As part of this contribution, the model treats the return of capital as an endogenous variable determined by bargaining processes between capitalists and workers, similar to Shimer (2005), Hall and Milgrom (2008), Pissarides (2009), Petrosky-Nadeau, Zhang, and Kuehn (2018), and others. The combination of these two lines of research provides a basis for understanding how unemployment, automation, and income distribution are jointly determined in a general equilibrium setting.
Second, this work extends the literature attempting to explain the trends of key macroeconomic variables over the past 50 years (Karabarbounis & Neiman, 2014; Piketty & Zucman, 2014; Farhi & Gourio, 2018; Autor et al., 2020; Barkai, 2020; De Loecker et al., 2020; Stansbury & Summers, 2020; Eggertsson, Robbins, & Wold, 2021). Most closely related to this paper is Stansbury and Summers (2020), who also identify the changes in the bargaining power of labor as a leading cause of the changes in the labor share of income, unemployment and profitability. The main difference with the current literature is that by incorporating automation technologies, the analysis can identify the extent to which the changing trends in macroeconomic variables are caused by technology or by institution-related factors.[6]

Third, the empirical narrative of this work builds on the literature highlighting the central role of institutions in market-related outcomes. Particularly, extending the works of DiNardo, Hallock, and Pischke (2000), Piketty (2014, 2020), Ahlquist (2017), and Farber et al. (2021), among others, the model emphasizes the potential impact that changes in the support to workers can have on income distribution, profitability and employment. Additionally, the paper shows that the impact of automation on the economy can be best understood in the context of specific institutional settings which either enforce or attenuate the displacing effects of technology on labor (Lemieux, 2008; Levy & Temin, 2011).
Outline. Section 2 presents the theoretical structure of the model. In Section 3, I show that the task-based formalism can be linked to the traditional search and matching model, and that simple bargaining models can be used to determine the rate of return of capital and the class distribution of income in a general equilibrium environment. Section 4 presents the conditions for steady-state growth and the analysis on comparative statics. Next, in Sections 5 and 6, I present the main empirical results of the model and the historical investigations associated with institutional and technical change in the postwar US economy. Finally, I offer some concluding remarks in Section 7.
Notation. The partial derivative of any function g̃_{t+h}(x_{1t}, ..., x_{nt}) with respect to any x_{it} (i = 1, ..., n) is denoted as g̃_{x_{it}, t+h}. If a function g(x) depends on a single variable, g' denotes its derivative with respect to x.

[6] In this paper, I omit referring to the market power hypothesis for two reasons: (a) it is hard to distinguish rising markups from a falling bargaining power of labor (Stansbury & Summers, 2020, p. 6); and (b) a rising markup is the subject matter one is interested in understanding, not the assumption that one should be imposing to justify the changes in the macroeconomy.
Model setup
The model follows a task-based framework along the lines of Zeira (1998) and Acemoglu and Restrepo (2018). There is a chain of intermediate and final good firms producing a final good, which can be consumed by households or used as an input for the production of capital goods. Time is discrete and is indexed by t ∈ N.
2.A Final and Intermediate Goods Production
I consider a closed economy with a single final good produced using the technology

$$ Y_t = \left( \int_{M_t-1}^{M_t} y_t(j)^{\frac{\sigma-1}{\sigma}} \, dj \right)^{\frac{\sigma}{\sigma-1}}, \qquad (1) $$

where σ > 0 is the elasticity of substitution and y_t(j) is an intermediate output produced with task j. Similar to Acemoglu and Restrepo (2018), the measure of tasks used in production is always equal to 1, meaning that newly created tasks represent higher-productivity versions of the existing ones.
Intermediate outputs are produced with labor or capital using a linear production function

$$ y_t(j) = \begin{cases} \Gamma^K_t(j)\, k_t(j) & \text{if } j \in [M_t-1,\, J_t], \\ \Gamma^N_t(j)\, h_t\, l_t(j) & \text{if } j \in (J_t,\, M_t]. \end{cases} \qquad (2) $$

Here J_t represents the available number of mechanized tasks, l_t(j) is the employed labor, h_t represents the number of hours per worker of any type, Γ^K_t(j) and Γ^N_t(j) are capital- and labor-augmenting technologies, and k_t(j) are the units of capital needed in the production of task j.
The unit cost of each task is represented by a linear system: tasks produced with capital cost δP^k_t/Γ^K_t(j) per unit of output, and tasks produced with labor cost w_t/Γ^N_t(j). The rate of depreciation is δ ∈ (0, 1), P^k_t is the price of capital units and w_t is the wage rate. Throughout, I will use the following assumption on the technical coefficients.
Assumption 2.1(i) says that labor has a comparative advantage in higher-indexed tasks, meaning there is a threshold J̃_t such that e^{αJ̃_t}/Γ^K_t = w_t/(δP^k_t). At J̃_t, intermediate good producers are indifferent between producing with capital or labor. In particular, for all j ≤ J̃_t, tasks will be produced with capital, since δP^k_t/Γ^K_t < w_t e^{−αj}. However, if J̃_t > J_t, intermediate good producers are bounded by the existing technology and will only be able to mechanize tasks up to J_t. The unique threshold is consequently given by J*_t = min{J_t, J̃_t}, such that all tasks in [M_t − 1, J*_t] are produced with capital and the remaining are produced with labor. Assumption 2.1(ii), in turn, implies that an increase in the number of tasks will increase aggregate output (Acemoglu & Restrepo, 2018).
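A quick numerical reading of this threshold logic, under the cost comparison reconstructed above and purely illustrative parameter values:

```python
# Mechanization threshold: capital is used whenever
# delta*P_k/Gamma_K < w*exp(-alpha*j), bounded by the technology frontier J.
import numpy as np

def J_star(w, delta, P_k, Gamma_K, alpha, J_frontier):
    J_tilde = np.log(w * Gamma_K / (delta * P_k)) / alpha   # cost-based threshold
    return min(J_frontier, J_tilde)                          # bounded by existing technology

# With these (made-up) values the technology frontier binds: J* = J_frontier.
print(J_star(w=1.0, delta=0.008, P_k=10.0, Gamma_K=0.5, alpha=0.1, J_frontier=8.0))
```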
As usual, the demand function for task j is given by y_t(j) = (p_t(j)/P^c_t)^{−σ} Y_t, such that the aggregate demands for capital and labor, (4a)-(4b), follow from aggregating the task-level demands over [M_t − 1, J*_t] and (J*_t, M_t], respectively. Here m_t ≡ M_t − J*_t is the measure of tasks produced by labor, m*_t is the equilibrium technology measure, and P^c_t is the price index of costs of production satisfying

$$ P^c_t = \left( \int_{M_t-1}^{M_t} p_t(j)^{1-\sigma} \, dj \right)^{\frac{1}{1-\sigma}}. \qquad (5) $$

Replacing the solution of w_t and δP^k_t from (4a)-(4b) in (5), the aggregate production function takes a simple CES form,

$$ Y_t = \left( \omega^k_t K_t^{\frac{\sigma-1}{\sigma}} + \omega^n_t \left( \Gamma^N_t N_t \right)^{\frac{\sigma-1}{\sigma}} \right)^{\frac{\sigma}{\sigma-1}}, \qquad (6) $$

where N_t ≡ h_t L_t denotes total hours of work, ω^n_t ≡ (m*_t)^{1/σ}/ω_t is the labor distribution parameter, and ω^k_t ≡ (1 − m*_t)^{1/σ}/ω_t is the capital distribution parameter.
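A minimal numerical sketch of the CES form in (6) as reconstructed above, normalizing ω_t = 1 and Γ^N = 1 so that the distribution weights reduce to (1 − m*)^{1/σ} and (m*)^{1/σ}; all values are illustrative.

```python
# Aggregate output under the reconstructed CES form (6), with the task
# measure m* acting as the labor distribution weight (omega_t = 1).
def ces_output(K, N, m_star, sigma, Gamma_N=1.0):
    e = (sigma - 1.0) / sigma                      # CES exponent
    return ((1.0 - m_star)**(1.0/sigma) * K**e
            + m_star**(1.0/sigma) * (Gamma_N * N)**e) ** (1.0/e)

# A lower m* (more mechanized tasks) shifts weight from labor to capital:
for m in (0.7, 0.6, 0.5):
    print(m, round(ces_output(K=3.0, N=1.0, m_star=m, sigma=0.6), 3))
```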
2.B Capital Good Producers
Similar to Lucca (2007), I assume that the capital stock increases with the maturity of a large number of symmetric and complementary investment projects. Each investment project of type i at time t is denoted as I_t(i), and it reaches maturity if i ∈ H_t ⊆ [0, 1]. Firms choose the desired scale of investment when initiating each project and cannot modify it until the period of maturity, in which case they can start a new project with a new scale of investment the following period.
The time to maturity of investment projects is described by a Poisson process with arrival rate π_I ∈ (0, 1), and the production of investment goods is described by a technology aggregating the maturing projects (equation (7)). The final good is used in the production of investment projects using a linear technology in which one unit of Y_t is transformed into Ψ_t units of I_t(i) for all i ∈ [0, 1]. Denoting P_t as the selling price of the final good and working in a competitive economy with a uniform rate of return, the price of investment projects satisfies P^I_t/P_t = Ψ^{−1}_t, where Ψ_t = Ψ_{t−1} e^{z_Ψ} is a non-stationary investment-specific technology with growth rate z_Ψ.
Assuming that investment firms minimize the expenditure on investment projects X_t ≡ P^I_t ∫_0^1 I_t(i) di subject to (7), the resulting expenditure function (8) satisfies Ω_{I_t,t} ≥ 0 and Ω_{I_{t−1},t} ≤ 0 (see Appendix A).
In the limit when π_I → 1, the investment expenditure is X_t = P^I_t I_t, which can be interpreted as the limiting case where the time-to-build period approaches zero. On the contrary, if π_I → 0, then X_t → ∞, since υ > 1. Intuitively, this represents a scenario with an infinite time-to-build period.
Given the value of investment expenditures, capital producers sell new capital goods to the chain of intermediate and final good firms at a price P^k_t so as to maximize the discounted value of profits, subject to (8) and X̃_t ≡ X_t/P^I_t. Here β̃^c_t is the discount factor derived from a utility maximization problem of capitalist households. As usual, P^k_t = P^I_t when π_I → 1 for all t ≥ 0, which resembles the case of no investment adjustment costs.
2.C Price System
Similar to Sraffa (1960, p. 8), the model presents an interrelation between capital and final good producers showing that costs of production cannot be defined independently of the price of capital, and consequently of the price of the final good itself. This link is established using the notion of own-rates of return, described here as

$$ P_t = (1 + \mu_t)\, P^c_t. \qquad (10) $$

In this case, even if we normalize the system by fixing P^c_t = 1, we still have four unknowns {P^k_t, P^I_t, P_t, µ_t} and three independent equations, since P_t and µ_t both depend on (10).[7] This indeterminacy issue is well captured by the equations of the marginal productivity of capital and labor, which satisfy

$$ \frac{\partial Y_t}{\partial K_t} = (1 + \mu_t)\, \frac{\delta P^k_t}{P_t}, \qquad \frac{\partial Y_t}{\partial N_t} = (1 + \mu_t)\, \frac{w_t}{P_t}. \qquad (11) $$

Equation (11) helps highlight the principle that, in general, cost and marginal productivity equations cannot be measured independently of, and prior to, the determination of the rate of return of capital (Sraffa, 1960, p. 9). In this respect, though I use marginal productivity theory as a by-product of the CES aggregator, it cannot be considered the determinant of profitability and the distribution of income without being trapped in a circular argument.
Ultimately, the indeterminacy of the system brings back to the surface the interpretation of the rate of return as a surplus, and with it the importance of referring to political and institutional factors in order to determine how the surplus is divided in society. In the following section I illustrate this principle by describing the rate of return of capital as an endogenous outcome of the dynamic interaction between unemployment and income distribution using an extended version of the search-and-matching model.
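The degree of freedom can be made concrete numerically. The sketch below uses the own-rate relation (10) as reconstructed above and the limiting case π_I → 1 (so that P^k = P^I), with illustrative numbers: once P^c is normalized, every µ in a whole range delivers an internally consistent price system, so nothing in the technology pins down the rate of return.

```python
# Price-system indeterminacy: each candidate mu yields consistent prices.
Psi = 2.0        # investment-specific technology level (illustrative)
P_c = 1.0        # normalization of the cost price index

for mu in (0.02, 0.05, 0.10):
    P = (1.0 + mu) * P_c     # final-good price: unit cost marked up by mu
    P_I = P / Psi            # competitive price of investment projects
    P_k = P_I                # limiting case without time-to-build (pi_I -> 1)
    print(f"mu={mu:.2f}: P={P:.3f}, P_I={P_I:.3f}, P_k={P_k:.3f}")
```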
Model dynamics
I consider a general equilibrium model with unemployment and three distinctive features. First, profits are a surplus over costs of production. Second, employment and capital dynamics are integrated with a technological unemployment component resulting from the automation of tasks. Third, the relative bargaining power of workers is an endogenous outcome determined by the discount factors of workers and capitalists.
3.A Search, Matching and State Dynamics
In each period a measure 1 of economically active agents are either employed or unemployed workers, and can be hired by capitalists for the purpose of creating profits in exchange for wages. Employed workers are represented by L_t and the remaining U_t = 1 − L_t represent the unemployed. Vacancies are filled via the Den Haan, Ramey, and Watson (2000) matching function G(U_t, V_t) = (U_t V_t)/(U_t^ι + V_t^ι)^{1/ι}, with ι > 0. Define θ_t ≡ V_t/U_t as the vacancy-unemployment ratio (labor market tightness). The job finding rate and the vacancy filling rate are then f(θ_t) = G/U_t = θ_t(1 + θ_t^ι)^{−1/ι} and q(θ_t) = G/V_t = (1 + θ_t^ι)^{−1/ι}. The real unit cost per vacancy is modeled as κ_t = κ_0/q(θ_t) + κ_1. Like Petrosky-Nadeau et al. (2018, p. 2215), unit costs per vacancy contain fixed costs (κ_1), which capture training and administrative costs of adding workers to the payroll, and proportional costs (κ_0), which increase in relation to the expected duration of vacancies, 1/q(θ).
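A small sketch of this matching-rate algebra: the f and q expressions follow directly from G, while the vacancy-cost form is the reconstruction given above; parameter values are illustrative.

```python
# Den Haan-Ramey-Watson matching rates and per-vacancy hiring cost.
def f_rate(theta, iota):
    """Job finding rate f(theta) = G/U = theta * (1 + theta**iota)**(-1/iota)."""
    return theta * (1.0 + theta**iota) ** (-1.0 / iota)

def q_rate(theta, iota):
    """Vacancy filling rate q(theta) = G/V = (1 + theta**iota)**(-1/iota)."""
    return (1.0 + theta**iota) ** (-1.0 / iota)

def vacancy_cost(theta, iota, kappa0, kappa1):
    # kappa0 scales with the expected vacancy duration 1/q; kappa1 is fixed
    return kappa0 / q_rate(theta, iota) + kappa1

theta, iota = 0.6, 1.25
print(f_rate(theta, iota), q_rate(theta, iota), vacancy_cost(theta, iota, 0.5, 0.3))
```

Both rates are bounded above by one for any θ > 0, which is the usual reason for preferring this matching function over the Cobb-Douglas form.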
Introducing changes in the mechanization and the creation of new tasks, and assuming that workers can only transit within the working population, the evolution of employment can be described as[8]

$$ L_{t+1} = (1 - \lambda)\, L_t + f(\theta_t)\, U_t - U^A_{L_t,t}\, L_t. \qquad (12) $$

As usual, λ is an exogenous Poisson rate defining the probability that an employed worker becomes unemployed per unit of time. The displacement and reinstatement effects of mechanization and the creation of new tasks are represented in the last term on the right-hand side of the equation. As noted by Acemoglu and Restrepo (2018), mechanizing existing tasks creates, on one hand, a displacement effect by replacing labor with machines. On the other hand, the creation of new tasks generates a reinstatement effect by expanding the demand for labor, consequently reducing unemployment.

Correspondingly, the dynamics of the value of the capital stock are affected by the same forces. Using equations (4a)-(4b) and (11), technological unemployment U^A_{L_t,t} is defined as the net sum of the displacement and reinstatement effects (equation (13)), and A_{K_t,t} in (14) measures the addition of capital resulting from the mechanization and creation of tasks. One of the insights of (13) and (14) is that technological unemployment can be interpreted as the net result of the mechanization and creation of additional tasks. Altogether, the dynamics of aggregate employment and aggregate capital can be expressed concisely as

$$ L_{t+1} = (1 - \tilde\lambda_t)\, L_t + f(\theta_t)\, U_t, \qquad P^k_{t+1} K_{t+1} = (1 - \tilde\delta_t)\, P^k_t K_t + X_t. \qquad (15) $$

The addition of technological unemployment can be interpreted as introducing an endogenous separation rate λ̃_t = λ + U^A_{L_t,t}, which in equilibrium is related to the costs of labor relative to capital. On the capital side, δ̃_t = δ − A_{K_t,t} can be interpreted as the effective depreciation rate, since it balances the value of fixed capital lost from wear and tear and the value acquired from the automation and creation of tasks.
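The employment half of (15), as reconstructed above, can be iterated to a stationary employment rate. A minimal sketch with illustrative (not calibrated) monthly parameters:

```python
# Iterating L_{t+1} = (1 - lambda_tilde) L_t + f(theta) U_t to its fixed point,
# with lambda_tilde = lam + U_A (exogenous separations plus technological
# unemployment). All parameter values are made up for illustration.
lam, U_A, theta, iota = 0.02, 0.005, 0.6, 1.25
f = theta * (1.0 + theta**iota) ** (-1.0 / iota)   # job finding rate

L = 0.90                                           # initial employment
for t in range(600):                               # monthly periods
    L = (1.0 - (lam + U_A)) * L + f * (1.0 - L)
print(f"stationary employment ~= {L:.3f}, unemployment ~= {1-L:.3%}")
```

The fixed point is L* = f/(λ̃ + f), so anything that raises technological unemployment U_A lowers stationary employment for a given labor market tightness.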
Workers. I assume a continuum of ex ante identical workers of measure 1 who are perfectly insured against variations in labor income. Employed workers collectively receive w_t N_t and unemployed workers receive U_t b_t, where b_t is the nominal value of public benefits that the unemployed forgo upon employment (Chodorow-Reich & Karabarbounis, 2016).
The worker chooses the consumption of the employed and unemployed, C^we_t and C^wu_t, to maximize the expected sum of discounted utility flows subject to (15), U_{t+1} = 1 − L_{t+1}, and the household budget constraint. The flow utilities of the employed and unemployed are represented by U^we(C^we_t, h_t) and U^wu(C^wu_t, 0), respectively. Expressing the present-discounted value of income streams for an employed and unemployed worker in consumption units by dividing by the marginal utility of consumption yields the standard first-order conditions. These optimality conditions are standard in all respects but the treatment of the discount factor, which is interpreted as a time-varying function of a vector of institutional, political and economic variables Γ_t, with β^w_{Γ_{it}} > 0 if Γ_{it} is a variable favoring the relative welfare condition of workers, and β^w_{Γ_{it}} < 0 if the contrary is the case. For example, an increase in labor unions, a higher real value of minimum wages, or higher unemployment benefits are variables in Γ_t that probably have a positive effect on β^w_t. In contrast, variables related to globalization, labor outsourcing, automation, lower top-income tax rates, or the increasing hiring of managers with business education may have a negative effect on the discount factor by improving the relative welfare condition of capital relative to labor.
Capitalists. A representative capitalist chooses the amount of vacancies V_t and consumption C^c_t that maximize the discounted utility flows, subject to (15) and the financial constraint.[9] As usual, T_t = U_t b_t are lump-sum taxes used to finance unemployment benefits. Expressing the first-order conditions in consumption units by dividing by the marginal utility of consumption yields the consumption condition (20) and the job creation ("zero-profit") condition (21). Equation (21) introduces a time-varying separation rate and shows that technological unemployment lowers the marginal value of an employed worker, since it increases the probability of unemployment per unit of time.

[9] It is worth noting that all financial constraints are derived from the flows of value of Marx's circuit of capital as formalized by Foley (1986). The connection between Marx's accounting structure and the general equilibrium model in this article is presented in Appendix B.
Functional Forms. Similar to Chodorow-Reich and Karabarbounis (2016), the preferences of workers and capitalists are described by flow utilities U^j(C^j_t, h^j_t), where j ∈ {we, wu, c}: we represents employed workers, wu the unemployed, and c the capitalists. By assumption h^j_t = 0 for j ∈ {wu, c}, since the labor of employed productive workers is the only one directly relevant for the creation of profits.
3.B Wage Bargaining, the Rate of Return and the Class Distribution of Income
Following the approach of the Classical economists, the rate of return is interpreted as a social outcome resulting from wage-bargaining processes between capitalists and workers. This can be formalized using a variety of game-theoretic models of bargaining; see, e.g., Hall and Milgrom (2008), Gertler and Trigari (2009), and Christiano, Eichenbaum, and Trabandt (2016). Here, however, I use the Nash bargaining solution for its simplicity and interpret the solution of the model as the limiting case of an alternating offers model with heterogeneous discount factors, assuming no side payments per round of negotiation (equation (23)), with η_{w,t} denoting the relative bargaining power of workers.

Rate of return. Considering the alternating offers bargaining model as a benchmark for an interpretation of the Nash solution, the rate of return of capital is solved using (10), (11), (22), and the first-order conditions of (23) with respect to w_t/P_t, which yield equation (24). As usual, Z_t is the opportunity cost of employment: it represents the sum of real unemployment benefits and the utility differential from nonworking time of unemployed and employed workers.
The first term in (24) depicts the share that capitalists can extract in the production process if they were to pay workers the opportunity cost of employment. The second term shows that the rate of return decreases with a tighter labor market, in relation to the average hiring costs of each unemployed worker, and with a rise in the power of workers. From this term it is also clear that a decline in β̃^w relative to β̃^c generally lowers the negative impact of employment on profitability, since it will tend to reduce the relative bargaining power of workers and will consequently reduce the variations of real wages to changes in θ_t. An interpretation of this result is that a tighter labor market is required when the relative bargaining power of workers is low in order to support real wage growth.[12] Equation (24) also shows that µ_t is negatively affected by a rise in the relative bargaining power of workers and by an increase in the opportunity costs of employment. This juxtaposition between the economic sphere of citizenship (represented by welfare-related factors raising the outside options to employment) and the profitability of capital is used in the following definition.
Definition 3.1 The corridor of economic and political stability is defined by values of µ_t ∈ [µ^min_t, µ^max_t] (equation (25)), where the bounds depend on the labor share on costs of production, on τ_t ≡ T_t/(P^k_t K_t), the value of taxes over the capital stock, and on ζ_t ≡ κ_t V_t/(P^k_t K_t), the ratio of vacancy costs to capital.

[12] ... capital will systematically have an upper hand over labor. This problem was recently noted by Stansbury and Summers (2020) as an explanation of the falling NAIRU.
The lower bound µ^min_t is set to satisfy the condition that C^c_t ≥ 0 for all t ≥ 0 in equation (20). Intuitively, if µ_t > µ^min_t, capitalists can use a share of net aggregate profits for consumption and new additions to productive capital. However, if µ_t = µ^min_t, capitalist consumption becomes zero, because all retained profits must be used to finance capital outlays, taxes, and vacancy costs. In the opposite extreme, µ^max_t offers a clear view of the principle that a reduction in the opportunity cost of employment tends to raise the profitability of capital. Equation (25) also shows that in the limit when η_{w,t} = 0, the rate of return does not depend on the conditions of the labor market, meaning that a tighter labor market will generally have a lower negative pressure on µ_t as η_{w,t} → 0.
The corridor of economic and political stability has major implications from a political economy perspective. It shows, on one hand, that policies raising the support to workers and promoting full employment conditions may reduce the return of capital to the point that µ_t ≤ µ^min_t, making the economy unsustainable. On the other hand, policies that severely harm the bargaining power of workers may lead to politically fragile societies, which may manifest in different forms such as democracies favoring populist movements (Frey, Berger, & Chen, 2017; Frey, 2019, p. 130), social instability and political unrest (Dal Bó & Dal Bó, 2011; Caprettini & Voth, 2020), or acts of desperation reflected in a rise of alcoholism, suicide and drug addiction (Hobsbawm, 1996, p. 204; Case & Deaton, 2021).[13]

Class Distribution of Income. Using (11) and (24), the labor share on gross and net aggregate income can be expressed as a function of automation and the rate of return.[14] All factors contributing to an increase in the rate of return or to an increase in the mechanization of tasks will, holding everything else equal, decrease the labor share on aggregate income. This result generalizes the commonly posited explanations of the declining share of wages based on technological change and rising monopoly power (see, e.g., Acemoglu & Restrepo, 2018; Autor et al., 2020; De Loecker et al., 2020) by explicitly considering the role of institutional and political factors in the distribution of income. In this respect, the model is capable of reconciling the evidence of an eroding bargaining power of workers found by Ahlquist (2017) and Stansbury and Summers (2020), among others, as an additional explanation of the declining labor share.
3.C Aggregation and Equilibrium
Aggregate consumption is defined as a weighted average of the corresponding variables for capitalists and workers, C_t = L_t C^we_t + U_t C^wu_t + C^c_t.

[13] The rise of populist movements and social unrest can be interpreted as specific political choices where workers exercise their "voice" to make a change in society. The rise of alcoholism, suicide and drug addiction, in turn, can be seen as an "exit" from the precarious conditions exerted by society (Hirschman, 1970).
Summing over the financial restrictions of workers and capitalists, together with the profits of capital good producers, the aggregate resource constraint satisfies P_t Y_t = P_t C_t + X_t + κ_t V_t, since it is assumed that unemployment insurance benefits are entirely financed by lump-sum taxes on capitalists.
Equilibrium: Definition and Characterization. The equilibrium properties of the economy are described by the following definition.
Definition 3.2 A recursive equilibrium is a solution for (a) a list of functions {Φ ...}.

To further illustrate the properties of the system, in the next section I provide a complete characterization of the steady-state equilibrium and some key results of comparative statics.
Steady-State Growth Analysis and Comparative Statics
The analysis in this section builds on Uzawa's (1961) seminal paper and the more recent works of Grossman et al. (2017) and Acemoglu and Restrepo (2018), showing that the economy can reach an equilibrium growth path while allowing for falling investment-good prices and less-than-unitary elasticity of substitution between capital and labor.
4.A Steady-State Growth
To clarify terms and set the basis for the analysis, it is convenient to start with the following definition.

Definition 4.1 A balanced growth path for the economy is a path along which aggregate quantities grow at a constant rate (possibly zero), and the capitalists' savings rate satisfies s_t ∈ (0, 1) for all t ≥ 0.

This definition of balanced growth paths is intended to describe an economy capable of producing sufficiently large profits, so that capitalists can finance their consumption and expenses and leave a positive remnant for the continuous expansion of capital.
To ensure balanced growth, I impose some additional structure on the model using the following assumption.

Assumption 4.2 (i) The capital-augmenting technology satisfies ..., where m*_t is the equilibrium measure of automation.

Assumption 4.2 (i) is meant to satisfy the condition that the value of capital measured in units of the final output is constant in equilibrium. Combining Assumptions 4.2 (i)-(ii) yields purely labor-augmenting technological change, which is necessary for balanced growth paths (Uzawa, 1961). Lastly, Assumption 4.2 (iii) imposes the condition that the creation of tasks evolves in time at the same rate as the equilibrium mechanization of tasks.
Before proceeding to the key results of the section, it is useful to introduce a modified version of Lemma A2 of Acemoglu and Restrepo (2018) to characterize the effects of automation as a function of the rate of return of capital.
Lemma 4.3 Suppose that Assumption 4.2 (i) holds. Setting γ_k = B^{−1} δP^k_t(0), ...

The initial statement setting γ_k = B^{−1} δP^k_t(0) is used to guarantee that m̃(µ) ≥ 0 for all µ ≥ 0.[15] Lemma 4.3 has the intuitive appeal of linking the effects of automation to the rate of return of capital, and consequently to the institutional variables which may affect µ_t. This is well portrayed in Figure 1, where it can be deduced that policy measures leading to a reduction in the rate of return of capital may have the unintended consequence of making automation a viable option for the reduction of unit labor costs.
Ultimately, Lemma 4.3 shows that the effects of automation on the economy always depend on the specific institutional arrangements of society and cannot be properly understood independently of the rate of return of capital.
The minimum and maximum values of m are defined by the corridor of economic and political stability in equation (25). These bounds rule out the possibility of an equilibrium where m = 1, which is reasonable given that it is meaningless to refer to capitalist societies without capital. Correspondingly, the viable values of µ also rule out an equilibrium where m = 0, since without the institution of wage labor there is no basis for determining aggregate profits in capitalist societies.
Theorem 4.4 Suppose that Assumption 4.2 holds. Given an initial value of capital assets K_0 P^k_0 and a bargaining power η_{w,t} ∈ (0, η^U_{w,t}), the economy admits a balanced growth path with:

(A) An equilibrium growth rate equal to

$$ g = s_t\, r_t, \qquad (29) $$

where r_t ≡ Π_t/(P^k_t K_t) is the aggregate rate of profit, s_t is the share of retained profits recommitted as capital outlays, and P^k_t K_t is the money value of capital assets.

(B) A steady-state equilibrium satisfying conditions (30a)-(30c).

Figure 1: Automation regions. Notes: The capital-augmenting technology is expressed using Assumption 4.2 (i).

Equation (29) characterizes the equilibrium rate of growth under the assumption that all savings are made by capitalists, which is a first-order approximation of reality intended to identify the sources of income by the role that individuals play in the production process of commodities. The decomposition in equation (29) shows that changes in the equilibrium rate of growth must act through changes in the equilibrium rate of return and the share of retained profits recommitted in the form of capital outlays (Foley, 1986). That is, the sources of growth are found in the expansion of the value of capital outlays in the process of production and in how much of this value is recommitted as productive capital.
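A one-line worked example of (29) as reconstructed above, with purely illustrative numbers:

```python
# Growth decomposition g = s * r: a 12% annual rate of profit combined
# with a 25% retention rate of profits gives 3% annual growth.
r = 0.12   # aggregate rate of profit, Pi / (P_k K)
s = 0.25   # share of retained profits recommitted as capital outlays
g = s * r
print(f"equilibrium growth rate g = {g:.1%}")   # 3.0%
```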
A distinctive feature of Theorem 4.4 (A), which is not always explicit in balanced growth path analyses, is that the existence of an equilibrium rate of growth depends on specific institutional settings allowing the reproduction of sufficiently large profits. Particularly, it is necessary that η_{w,t} ∈ (0, η^U_{w,t}) to obtain equilibrium aggregate profits that surpass the value of capitalist expenses. The key matter in this respect is that the relative bargaining power of workers is not determined by technology or preferences, but rather by institutional and political factors. The steady-state equations in (30a)-(30c), in turn, provide valuable information for understanding the results on comparative statics in subsection 4.B and the empirical findings in subsection 5.B.
Labor Market Equilibrium. The "closure" for determining the steady-state equations in Theorem 4.4 is obtained using the equilibrium of the labor market. In this setting, the Nash solution in (24) replaces the usual labor supply equation since it draws a negative relation between profitability and the vacancyunemployment ratio. Correspondingly, the first order conditions of capitalists in (21) can be used as the labor demand equation since it presents a positive relation between labor market tightness and the rate of return of capital. Expressing both equations in steady-state form, it follows that: Under fairly general conditions the intersection of µ D and µ S defines a unique equilibrium of µ and θ which can then be used to identify all other variables in the economy.
4.B Comparative Statics
We now study the long-run implications of permanent changes in technology and labor institutions. For this purpose I will consider the effects of a decline in m, which represents a situation where automation runs ahead of the creation of new tasks; a permanent reduction in b̄ ≡ (b_t/P_t)e^{−αJ*_t}, which corresponds to lower public benefits that the unemployed forgo upon employment relative to labor productivity; and a decline in β^w, representing permanent institutional changes worsening the relative welfare conditions of workers.
The next proposition characterizes the long-run impacts of technological and institutional changes on employment, profitability, income distribution and the big ratios in the economy.

(i) (Automation) For m > m̃(µ), a decrease in m lowers the asymptotic stationary values of w_t, θ_t, L_t and Ω^c_t. Correspondingly, a permanent reduction in m raises the asymptotic stationary values of X_t/(P_t Y_t) and K_t/Y_t. The effects on the rate of return of capital µ_t depend on the model parameters.

(ii) (Unemployment benefits and relative welfare of workers) A reduction in b̄ or β^w raises the asymptotic values of µ_t, θ_t, L_t, and Y_t/K_t. Correspondingly, lower values of b̄ or β^w reduce the asymptotic ...
Effects of automation. Starting with the effects of automation in Figure 2, it is clear that if m > m̃(µ), a permanent reduction in m creates a negative effect on employment on two fronts: it lowers the labor supply equation since, by Lemma 4.3, wages decrease relative to the long-run expansion of the economy; and it raises the demand for labor because, for any given vacancy-to-unemployment ratio, firms will be able to extract a greater surplus over wages. Graphically, the steady state travels from (a) to (b), with the resulting equilibrium of µ depending on the model parameters, but ultimately leading to an increasing rate of unemployment in an amount which depends on the form of the Beveridge curve.
The equilibrium in the capital market is also affected by changes in automation. Drawing on the results of Theorem 4.4, Figure 3 shows how the demand for capital changes in relation to variations in m. Particularly, for a given value of µ, a higher automation rate increases the asymptotic stationary capital-output ratio, since new tasks are produced using capital when m* = m. The increase in capital intensity (K̂/Ŷ) leaves the equilibrium marginal productivity of capital unaltered, so the ultimate effects of automation on the capital market are a reduction in the equilibrium rate of profit, an increase in the capital cost share and a rise in the investment expenditure to output ratio; see the transition from (a) to (d) in Panels A and B of Figure 3.
The contrasting behavior between the marginal productivity of capital and the rate of profit is a matter of significant importance and deserves special attention. The difference between these two variables is well represented in Panel B of Figure 3, where it can be observed that the marginal productivity of capital generally differs from the rate of profit. Additionally, Figure 3 shows that whereas an increase in the automation of tasks reduces the rate of profit, because it increases the value of capital outlays relative to the cost of final output, the marginal productivity of capital stays the same, because the asymptotic stationary relative price of capital does not depend on m. In this respect, the argument not only shows that the rate of profit is generally not a proxy for the marginal productivity of capital, but also that their behavior may differ depending on the factors causing their changes in time. The bottom line is that changes in institutions can act as powerful methods that alter the balance of power between capital and labor. In some cases, as shown by the reductions in unemployment benefits, governments can increase the profit-making capacity of the economy by worsening the relative condition of workers.[16] The extent to which each one of the technology and institutional factors has altered the structure of the US economy is an empirical matter that I explore in the following section.
Empirical Analysis
This section presents an empirical exercise to measure the effects of technological and institutional changes on the steady-state equilibrium of income shares, capital returns, the rate of unemployment, and the big ratios in macroeconomics previously explored by Farhi and Gourio (2018) and Eggertsson et al. (2021). The empirical analysis is divided into two parts. First, I employ a baseline calibration of some model parameters using US data and the related literature. Second, I show that the data call for specific changes in institutions and technology in order to match the time averages of some key variables in the postwar US economy.
5.A Parameterization
All parameters are calibrated at monthly frequency. The growth parameters δ, g, and z Ψ are set to match a 10 percent annual depreciation rate, a 2 percent annual growth rate of labor productivity, and a 2 percent annual decline in the relative price of investment. Following the empirical findings summarized in Chirinko (2008) and Grossman and Oberfield (2021), the elasticity of substitution is set equal to 0.6. Similar to Altug (1989) and Lucca (2007), I set π I = 0.12, implying that the average time required for completing investment projects is close to three quarters. The complementarity of investment projects is set to 5.84, which is not only the elasticity of substitution across products estimated by Christiano, Eichenbaum, and Evans (2005), but also implies a steady-state marginal productivity of capital of about 0.14, in line with the estimates of Caselli and Feyrer (2007) (see Figure 12 in Appendix E.1). The capital-augmenting parameter γ k is obtained according to Lemma 4.3, and α is calibrated so that real wages are close to 1. Finally, I follow Hall (2009) in calibrating the unemployment-benefits equation in (32).
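As a quick arithmetic check on the monthly-frequency targets above, the following sketch converts the annual rates quoted in the text into monthly parameters. The geometric-compounding convention is my assumption, not necessarily the paper's exact one.

```python
# Convert the annual calibration targets quoted above into monthly-frequency
# parameters. Geometric compounding is assumed here (an illustration, not
# necessarily the paper's exact convention).

annual_depreciation = 0.10  # 10 percent annual depreciation rate
annual_lp_growth = 0.02     # 2 percent annual growth of labor productivity
annual_psi_decline = 0.02   # 2 percent annual decline in the relative price of investment

delta = 1.0 - (1.0 - annual_depreciation) ** (1.0 / 12.0)  # monthly depreciation
g = (1.0 + annual_lp_growth) ** (1.0 / 12.0) - 1.0         # monthly productivity growth
z_psi = (1.0 + annual_psi_decline) ** (1.0 / 12.0) - 1.0   # monthly rate for z_Psi

print(f"delta = {delta:.5f}, g = {g:.5f}, z_psi = {z_psi:.5f}")
# -> delta ≈ 0.00874, g ≈ 0.00165, z_psi ≈ 0.00165
```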
The time-varying elements of the equation capture changes in policy behavior related to real UI extensions. These changes may occur as a consequence of recessions (Chodorow-Reich et al., 2019), periods of high inflation (Pierson, 1994, p. 118), or active policies seeking to reduce the burden of welfare costs (Pierson, 1994, p. 116; Noble, 1997, p. 120). 17 The calibration of κ 0 is significantly higher than the values normally used in the literature. However, as shown in Appendix E.3.1, this is necessary in order to satisfy the conditions for steady-state growth paths in Theorem 4.4. Table 2 shows that the rate of return of capital obtained from the usual calibrations in the literature is too low in relation to the data (e.g., Panel A of Figure 5) and does not even satisfy the minimum requirement for steady-state growth, which is that µ t > µ min t . 18 The regression in (32) uses the same data as Chodorow-Reich and Karabarbounis (2016), but unlike them the dependent variable is normalized with respect to the current value of labor productivity. The results of (32) are reported in Figure 4 using the reduced-form parameters, which contain the information relevant for the steady-state analysis below; in particular, (32) is transformed to a reduced form suitable for that analysis.
5.B Quantitative Results
In the remaining part of the section I evaluate the extent to which automation, measured by changes in m t , and labor institutions, represented by variations in the discount factor of workers β w t , explain the changes in the steady-state equilibrium of income shares, capital returns, unemployment and the investment-output ratio in the postwar US economy.
Data. The data used for the analysis come from the BEA-BLS integrated industry-level production account (Eldridge et al., 2020), the Fixed Assets Accounts Tables, and the Bureau of Labor Statistics (BLS). 19 To keep the empirical exercise consistent with the structure of the theoretical model, I exclude all "imputed" outputs that are not actually marketed and realized as money revenue; essentially, I follow the value-adding-sector approach described in Appendix E.1. The model is calibrated to the time averages of periods centered around 1950, 1963, 1980, 1998, and 2010. Correspondingly, to test if changes in automation or in labor institutions can individually describe the behavior of the postwar US economy, I vary the parameters associated with each hypothesis and leave the remaining ones unaltered. For example, to evaluate the hypothesis that automation explains the decline of the labor share following the 1980s, I set m to its adjusted value from 1996-2001 and leave the remaining parameters equal to the calibrated values of the period 1978-1983.
The results associated with the institutions hypothesis in Figure 5 (Panels A, B, and E) are broadly aligned with the empirical findings of Stansbury and Summers (2020), who show that changes in the bargaining power of workers offer a unified explanation for the behavior of the labor share, capital returns, and the rate of unemployment. In Appendix E.1, I show that the adjusted equilibrium values of the model are also consistent with the changes in the vacancy-unemployment ratio. The clear differences between the blue and green lines in Figure 5 (Panels C and D), however, weaken the predictive power of the labor institutions hypothesis. Contrary to what is reported by the data on the capital cost share and the investment-output ratio, the model with a constant m undervalues the rise of investment, since periods of declining labor shares should generally reduce, not raise, the participation of capital in aggregate output (see Proposition 4.5 (ii)).

20 The BEA, for example, calculates the value added of the banking sector from interest rate spreads between lending and deposit rates, which has no direct relation with the production of goods and services in the economy. Parts of the payments to professional and business services can also be regarded as costs of reproduction of society, rather than direct contributions to net output. Ultimately, the inclusion of FIRE and other related sectors in GDP is the result of convention, not of clear and uncontroversial economic reasoning.
The rise in the rate of automation presents a plausible explanation for the increase in the capital cost share and the investment-output ratio, together with the fall in the labor share since the 1980s. A simple inspection of Panels C and D in Figure 5 shows that the red and green lines are almost perfectly aligned, meaning that the main variations in technology are captured by changes in automation. Appendix E.1 presents the data on the net investment-output ratio, the capital-output ratio, and the rate of profit, and shows that an adequate representation of these variables requires a growing rate of automation consistent with the data in Figure 8 below. 21 Moll, Rachel, and Restrepo (2022) reach similar conclusions in an exercise of transitional dynamics, though they attribute most of the increase in the capital share to a surge in the return to wealth rather than to a rise in the capital-output ratio. Like most models using a task-based framework (e.g., Acemoglu & Restrepo, 2018; Hémous & Olsen, 2022), Moll et al. (2022) work with the assumptions that labor is inelastically supplied at full employment and that factor prices are determined by their corresponding marginal products. This is diametrically opposed to the approach of this paper, and it leads to predictions which are at odds with the data on the rate of unemployment, on one hand, and with the data on the labor share and capital returns before the 1980s, on the other.
The behavior of the savings rate in Figure 5 (Panel F) can be understood using Theorem 4.4 (A). 22 Essentially, if the long-run rate of growth of the economy is approximately constant, the savings rate will tend to move in the opposite direction to the rate of return of capital. This presents a simple mechanism that can explain the puzzle of a rising wealth-to-GDP ratio and a decreasing private savings rate since the 1980s found by authors like Eggertsson et al. (2021).
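To see the mechanism numerically, the sketch below evaluates the steady-state approximation quoted in Appendix E.1, s ≈ g(1 − Ω c)/(δµ − (1 − Ω c)(τ + ζ)). All parameter values are illustrative placeholders rather than the paper's calibration.

```python
# Evaluate the steady-state savings-rate approximation from Appendix E.1:
#   s ≈ g (1 - Omega_c) / (delta * mu - (1 - Omega_c) * (tau + zeta)).
# Holding the growth rate g fixed, a higher rate of return mu implies a
# lower savings rate. All numbers below are illustrative placeholders.

def savings_rate(mu, g=0.00165, delta=0.00874, omega_c=0.62, tau=0.002, zeta=0.002):
    surplus_share = 1.0 - omega_c
    return g * surplus_share / (delta * mu - surplus_share * (tau + zeta))

for mu in (0.4, 0.5, 0.6, 0.7):
    print(f"mu = {mu:.1f} -> s = {savings_rate(mu):.3f}")
# s falls monotonically in mu, matching the rising pre-1980s / falling
# post-1980s pattern of the savings rate described in the text.
```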
The changes in the capitalist savings rate have major implications from a political economy perspective. In particular, they reveal that, because the growth rate of capitalist economies is bounded by the rate of profit, 23 the system will probably encounter bottlenecks which impede its reproduction when the two rates get closer together. This means that, though there may not exist a negative relation between growth and, say, widening institutional support for workers (see, e.g., Figures 11.12 and 11.13 in Piketty (2020)), if these changes lower the profitability of capital it is likely that the system will try to adjust itself through economic crises or political manifestations that end up favoring capital over labor. The next section explores how this conclusion and the main empirical results outlined in Figure 5 are consistent with, and provide an analytical foundation for, some of the historical events of the postwar US economy.

21 The model slightly underestimates the capital-output ratio, which implies that the depreciation rate chosen in the calibration is probably too high. However, given that reducing δ also implies a higher rate of savings in equilibrium (see Theorem 4.4), I preferred not to change the calibration and simply point out that the model can improve its predictive power with respect to technical change by lowering δ. A more detailed explanation of this problem can be found in Appendix E.1.

22 Let us remember that the model works with the assumption that capitalists finance all capital investment from retained earnings, so it is not a surprise that the savings rate in the model is generally higher than in the data. The important point here is that, regardless of the level, all empirical measures of the savings rate show a positive trend before the 1980s, when the rate of return of capital was falling, and a negative trend after the 1980s, when capital returns were rising.

23 This is a well-known result that can be traced back to Neumann (1945).

Figure 5 notes: Data from the BEA-BLS integrated industry-level production account (Eldridge et al., 2020). Panel E uses the non-farm unemployment rate data of Petrosky-Nadeau and Zhang (2021). Each savings rate expresses the ratio of undistributed corporate profits after taxes to corporate profits after taxes.
A Brief Historical Analysis of the US Economy
This section focuses on two key questions. First, I explore how the history and data of welfare and labor institutions are associated with changes in capital returns and the reproductive capacity of the system, and whether this historical evidence can be reconciled with the predictions of bargaining power derived from the model. Second, I evaluate to what extent the data and history of technical change are consistent with the model predictions of automation based on equation (30a), and how some of these changes may have been prompted by institutional changes in the US economy.

The narrative of these events can be traced back to the Great Depression and the legacy of New Deal institutions, which paved the way for the introduction of federal cash and work relief programs in 1933, social insurance in 1935, a legal framework for collective bargaining in 1935, minimum wages in 1938, federal regulation of working conditions in 1938, and tax hikes on high income earners throughout the 1930s (Noble, 1997, p. 54; Piketty, 2020). 25 By and large, these policies continued in the postwar era. The Employment Act of 1946, for example, challenged the idea that the economy should be regulated by competitive forces alone and assigned direct responsibility to the state in determining the level of employment (Bowles & Gintis, 1982, p. 66). In doing so, however, the government drew limits on the capacity of labor to form strikes or lockouts, or to organize to disrupt production and investment decisions, most notably through the Taft-Hartley Act of 1947.

The combination of wage adjustments in relation to inflation and labor productivity, together with the rise of fringe benefits resulting from wage bargaining agreements related to the Treaty of Detroit, provides a basis for understanding why the relative bargaining power of workers increased during the 1950s, as shown in Figure 6. 24 This is all the more convincing when noting that the COLA principle was incorporated in more than 50 percent of union contracts by the early 1960s (Lichtenstein, 2002, p. 123), and that in this period union density was close to its historical high (see Panel B in Figure 7). 26

24 The average value of η w in Figure 6 is close to the calibrations of Hagedorn and Manovskii (2008) and Petrosky-Nadeau et al. (2018), but it deviates considerably from the values chosen by authors like Shimer (2005), Pissarides (2009), and Gertler and Trigari (2009), who fix η w ≥ 0.5. Theorem 4.4 casts some doubts on calibrations setting high values of η w , given that capitalist economies cannot grow without positive net aggregate profits. For instance, even if we set ζ = 0 in (25) and use the model calibration of 1978-1982, which corresponds to the period with the highest value of η w , the maximum power of workers η U w is about 0.12, way below 0.5. The bottom line in this respect is that high values of η w are implausible, not because they cannot match specific features of the data, but because they fail to satisfy the minimum requirement of capitalist economies, which is that the system can reproduce itself at an increasing scale.

25 The crash of 1929 also opened a window for reforming the financial sector. It gave way, e.g., to the Glass-Steagall Act, which separated commercial banking from investment banking, and to the Securities and Exchange Commission, which was intended to rein in financial excesses (Eichengreen, 2014, p. 11).
6.A Historical analysis of bargaining power and profitability

The notion of citizen wages holds a close connection with the broad definition of UI and non-UI benefits of Chodorow-Reich and Karabarbounis (2016) reported in Panel C of Figure 7. The consistency of the two definitions not only establishes a bridge between the policies reported in Figure 6 and the data on unemployment benefits relative to labor productivity, but also provides a credible story explaining why the bargaining power of workers probably improved from the early 1960s to the late 1970s, as predicted by the model.

26 Recently, Taschereau-Dumouchel (2020) presented a clear theoretical argument showing how the possibility of unionization distorts the behavior of non-union firms and can have significant effects in the economy even if the actual density of unions is not outstanding.

27 As a political compromise, Kennedy passed the Public Welfare Amendments of 1962, allowing southern Democrats some flexibility in the implementation of welfare in their states and enforcing stricter restrictions on benefits and eligibility (Noble, 1997, p. 92).

28 It is important not to overstate the role of welfare in the US economy in spite of the important results in poverty reduction during the 60s and 70s. By international standards, the US assigned a relatively small share of GDP to social programs, such as unemployment, sickness, and maternity benefits (Rose, 1989).
The extension of the welfare state and of the economic sphere of citizenship from the late 1940s to the mid 1970s ultimately coincided with a decrease in the profitability of capital and a rising rate of unemployment, revealing that there probably exists a tension between the principles of liberal democratic societies and those of capitalism. The basic problem, as shown formally in Theorem 4.4 and represented graphically in Figure 6 with the corridor of political and economic stability, is that capitalism requires specific institutional arrangements allowing the expansion of capital at an increasing scale. Liberal democracy, by attaching rights to people rather than to property (Bowles & Gintis, 1982), may confront the requirements of capital reproduction by improving the bargaining power of workers and by raising average wages through an increase in the citizen wage. In this respect, societies may be incapable of reproducing both the social relations forged by liberal policies and the profits that sustain the accumulation of capital.
This contradiction between the welfare state and capitalism was well understood by the conservative movement that finally materialized in the late 1970s. The extension of the COLA principles was filibustered under Carter's watch in 1978, making it harder for workers to join unions and easier for employers to resist them (Noble, 1997, p. 108). Reagan took these initiatives to a different level and proposed severe budget cuts in means-tested assistance and social service programs. Though many of these initiatives did not materialize to the degree intended, the Omnibus Budget Reconciliation Act (OBRA) of 1981 managed to make significant reductions in food-stamp spending, AFDC assistance, and UI extensions by tightening the criteria for benefits eligibility (Pierson, 1994, pp. 116-119). 29 The political difficulty of a frontal reduction of welfare meant that the government had to search for indirect measures to cut social expenditures. The pinnacle of Reagan's reforms crystallized in the Economic Recovery Tax Act (ERTA) of 1981, which introduced substantial tax breaks for business and regressive cuts in personal income tax rates (see Panel D of Figure 7). By combining income-tax cuts with increasing military spending, Reagan drove deficits to record highs and managed to shift the policy debate from social provision for the poor and unemployed to one based on the need to achieve balanced budgets (Noble, 1997, p. 123). A second indirect policy of social reform came with Volcker's tight-money shock. The high real interest rates of the early 1980s increased the global demand for US securities, which ultimately crippled the recovery of employment in production sectors like manufacturing and boosted the expansion of the financial sector following the recession of 1981 (Levy & Temin, 2011).
In addition to the aforementioned policies of the first conservative retrenchment, it should be noted, as shown in Figure 7 (Panels A and B), that the reduction of federal real minimum wages and union density accelerated considerably in the wake of the 1980s. DiNardo et al. (1996), Card and DiNardo (2002), and Lemieux (2008) present compelling evidence showing that much of the increase in wage inequality in the 1980s can be attributed to the fall in minimum wages. The continuous fall in union density also helps explain the rising wage inequality after the 1990s, since unions not only protect the income of low-paying jobs but, as shown by DiNardo et al. (2000), Rosenfeld (2006), and Lemieux (2008), also reduce the rents of managers, executives, and capital owners.

29 Reagan's initial cut proposal on social spending was roughly twice as large as the cuts ultimately accepted by Congress. Yet, according to Patterson (2000, p. 206), in two years OBRA increased poverty by 2 percent, restricted eligibility for approximately 408,000 families, and eliminated benefits for another 300,000.
It should come as no surprise, then, that Figure 6 depicts a considerable decline in the bargaining power of labor following the 1980s. This decline was partly offset in the late 1990s by the Clinton government, which, in spite of replacing AFDC with the more restrictive Temporary Assistance for Needy Families (TANF) program, also took redistributive measures by expanding the Earned Income Tax Credit (EITC), increasing the minimum wage, and raising the top income-tax rate (Levy & Temin, 2011, p. 376). The retrenchment resumed, however, under the administration that followed, which adopted a strategy of tax reduction for business and the wealthy (see Panel D of Figure 7).
The consequences of the conservative retrenchment just described are widespread, and they are evident in the growing distrust of democratic institutions (Diamond, 2015), the rise of deaths of despair (Case & Deaton, 2021), and the challenge against free trade (Autor, Dorn, Hanson, & Majlesi, 2020), among other related manifestations. Though it would be far-fetched to put all the weight of these political reactions on the decline of the bargaining power of labor, it is a basis from which to understand why people, especially those who have been negatively affected by the turns of the economy over the past four decades, have reasons to demand radical changes in society.

Figure 8 notes: The steady-state measure of automation is obtained from (30b) by using the calibration in Table 1 and the data of the capital cost share and the rate of return of capital in Figure 5. The cost-minimizing automation measure follows from Lemma 4.3.
6.B Historical analysis of automation
One of the hypotheses of this paper is that changes in technology and their lasting effects in the economy are best understood in the context of historically specific institutional environments. This view is well represented by the events in the US after the 1950s, when renewed concern over the adverse effects of automation in the workplace resurfaced in the national debate. The government's approach at the time, in agreement with Taft-Hartley and McCarran, was that it should provide public assistance when needed for the social dislocations generated by technological unemployment, but should not restrict the use of machines or even dispute the desirability of automation (Frey, 2019, p. 180). By these standards, the computer revolution and similar technological improvements found fertile ground for creating potentially disruptive effects on the labor market after the 1980s. According to the calibration of the model, by 1980, 1 − m̄ t was about twice as large as 1 − m t , meaning that the system lay far above the boundary dividing regions 1 and 2 in Figure 1 and that even small changes in automation could have generated significantly negative effects on employment and real wages. The expansion of labor-replacing technologies throughout the two decades from the early 1980s to the late 1990s was, however, relatively small according to all the measures in Figure 8. One of the reasons for this may have been the conservative retrenchment that followed the late 1970s, which reduced the expansion of labor costs relative to the costs of capital.
The 2000s saw a second revival of labor-replacing technologies. Consistent with the findings of Autor and Salomons (2018), Figure 8 depicts a strong increase in the steady-state automation measure, which moves in line with the data of Dechezleprêtre et al. (2019) and Mann and Püttmann (2021). A peculiar characteristic of this period is that it coincided with a fall in the labor share, suggesting that the rise in automation was probably not prompted by rising labor costs but rather by technology improvements or tax cuts for capital. Acemoglu, Manera, and Restrepo (2020) present evidence suggesting that the adoption of labor-replacing technologies may have resulted from changes in the US tax system favoring the adoption of capital over labor. If Acemoglu et al. (2020) are correct, the fall in the cost-minimizing automation measure was likely smaller than what is reported in Figure 8, meaning that the disruptive effects of automation probably continued to be important even after 2010. 31

31 It is likely that COVID-19 exacerbated the effects of automation on the labor market, since among the defining characteristics of the pandemic were the unprecedented increase in unemployment benefits and the tight labor markets following the lockdowns. In this scenario it can be expected that labor-replacing technologies will be potentially effective in reducing firms' production costs, which may help explain the growing concerns in the media over the effects of automation.
Conclusions
This paper provides a framework that helps understand how institutions, automation, unemployment, and profitability interact in the dynamic setting of a general equilibrium analysis. At the center of the model is the notion that profits are a surplus over costs of production, which brings back to light the importance of political and institutional factors in the determination of income distribution.
Among the attractive features of the paper is that it creates a link between task-based models, the surplus approach of the Classical economists, and the literature on equilibrium unemployment. The merger of these approaches introduces three important theoretical contributions to the literature. First, it establishes an endogenous theory of profits based on the relations of power between capitalists and workers. Second, it formalizes how unemployment and wage dynamics are directly affected by the assignment of tasks between capital and labor. Third, it establishes how institutions can directly affect the task allocation of factors by intervening in the relative prices of labor and capital.
The empirical strength of the model is assessed using two complementary strategies. First, it is shown that the behavior of income shares, capital returns, the rate of unemployment, and the big ratios of macroeconomics in the American economy can be explained by specific changes in labor institutions and technology. The second part supports this conclusion by showing that the implied changes in the equilibrium measures of worker power and automation are consistent with the history of welfare and technical change in the US. Ultimately, this strategy highlights that there is much to be gained by locating the abstract reasoning of economic theory in the historical context of a specific society.
From a political economy perspective, the article presents a framework for understanding how capitalism interacts with the institutions of liberal democracy. For instance, one of the conclusions drawn from the analysis is that there may exist a conflict between the social and economic spheres of citizenship and the profit-making capacity of the capitalist system. The empowerment of workers can increase the outside option to employment and lead to a reduction in the rate of return, which may ultimately threaten the reproduction of capital at an increasing scale. In this respect, one of the key policy implications of the paper is that the search for more equitable societies must consider the constraint imposed by profitability, since a steady-state growth path cannot be sustained with declining rates of profit. It is equally important to note that governments play an active role in protecting workers from the disruptive effects of unregulated markets, some of which are portrayed by the negative consequences that automation can have on employment and wages. The bottom line in this respect is that the balance between more progressive societies and a more capital-friendly system is, to a large extent, a political choice resulting from the particular time and place in the process of history.
The paper sets the stage for at least four avenues of future research. The first is a study of the dynamic behavior of the model by introducing stochastic variables and evaluating how the economy reacts to shocks at the business-cycle time scale. A second avenue is to introduce credit in order to understand how interest payments are determined when, consistent with business accounting, interest rates are not counted as part of the costs of production but are rather paid from the surplus extracted from the process of production. This could serve the purpose of showing how "capital costs" are separated from "pure profits," and could consequently help in understanding the changes in the capital, labor, and profit shares over the past 50 years in the US; see, e.g., Karabarbounis and Neiman (2019) and Barkai (2020). The model can also benefit from the introduction of bargaining models offering a more accurate representation of the conflicting interests between capitalists and workers in wage negotiation processes. Lastly, it is important to introduce the government sector as an active agent, so that institutional and political factors are themselves an endogenous response to the economic outcomes of the system.
A.2 First-order Conditions of Workers and Capitalists
The Lagrangian associated with the optimization problem of workers can be written with Lagrange multipliers φ w i,t attached to each constraint. Using the first-order conditions, and expressing all variables in consumption units by dividing by the marginal utility of consumption, the marginal values of an employed and an unemployed worker satisfy the expressions that complete the result in (19).
The Lagrangian associated with the capitalists' optimization problem is formulated analogously, with Lagrange multipliers φ c i,t . Expressing the first-order conditions in consumption units using φ c 0,t = U c C c t ,t /P t , it follows that the value of an additional employed worker for the capitalist is Λ L t+1,t+1 = φ c 1,t /β c t+1 . The shadow cost of capital in consumption units can be obtained from the maximization problem of capital good producers. Combining the first-order conditions of that problem with the marginal productivity equation of capital, it follows that the demand for capital satisfies (A6). An important implication of (A6) is that, because P I t = (1 + µ t )P c t Ψ −1 t , capital demand is a positive function of current and future rates of return. If, for instance, capitalists receive news that in the future labor institutions will provide further support to workers, it can be expected that they will reduce the current demand for capital.
B Accounting Structure of the Model
The analysis in the text is carried out on the basis of an accounting structure where capitalists finance all capital investments and labor costs using retained earnings. This assumption is made in order to highlight the nature of profits as a surplus before complicating the model with the introduction of interest payments and rents. Essentially, given that in the Classical tradition interests and rents are paid from the aggregate surplus of society in the circulation process of capital, the first logical step is to understand how profits are reproduced before introducing exogenous sources of liquidity to the model.
The underlying accounting structure of the model is well represented in Figure 9 using Marx's circuits of capital as formalized by Foley (1986). At stage (a), the chain of intermediate and final good producers uses a share of their money funds to hire labor services from workers and to pay capital good producers for new units of capital.
The aggregate flow of capital outlays is committed to a production process which results in a flow value of unsold finished output (P c t Y t ). As noted by numerous authors (e.g., Haavelmo, 1960), not all components of capital outlays are transferred as finished output after the same length of time in the production process, so the relation between C t = w t N t + P k t I t and P c t Y t can be accounted for by a convolution (Foley, 1982):

P c t Y t = Σ^t_{t′=−∞} A(t − t′; t′) C t′ , (B1)

where A(t − t′; t′) ≥ 0 for t − t′ ≥ 0 represents the distributed shares in the value of capital outlays and Σ^t_{t′=−∞} A(t − t′; t′) = 1. Intuitively, (B1) says that the value of finished goods at cost (since this is unsold output) equals the weighted sum of the value of all previous capital outlays committed to the process of production.
To simplify the mathematical analysis, I assume that each unit of capital stays in the production process for a time period T P t and then emerges all at once as a finished good. This is represented, similar to Foley (1986, p. 70), as P c t Y t = C t−T P t . That is, the value of the final good measured at cost prices must be equal to the value of capital outlays entering the production process in period t − T P t . Given Assumption 4.2, the current value of capital outlays is discounted exponentially with a discount rate g t , denoting the growth rate of the value of the capital stock, such that P c t Y t = C t e^{−g t T P t}. The value of the final good valued at costs (P c t Y t ) subtracts from the value of the stock of productive capital in the transition from (a) to (b). The value of sales results from the sum of the transition flows (c) and (d), such that P t Y t = P c t Y t + Π t . In the transition from (d) to (f), capitalists pay taxes and vacancy expenses out of their realized profits. From this, final good producers decide how much to retain in the circuit of capital, with a value of s t Π̄ t , s t being the rate of savings, and how much to consume, with a value equal to (1 − s t )Π̄ t .
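A minimal numerical check of this lag accounting, under the stated assumptions of a constant growth rate of outlays and a fixed production period:

```python
import numpy as np

# Check that, when capital outlays C_t grow at a constant rate g and each
# unit of capital emerges as finished output after T_P periods, the two
# expressions for output valued at cost coincide:
#   P^c_t Y_t = C_{t - T_P} = C_t * exp(-g * T_P).

g, T_P = 0.00165, 9                  # illustrative monthly growth rate and lag
t = np.arange(120)
C = 100.0 * np.exp(g * t)            # value of capital outlays each period

lagged_outlays = C[:-T_P]            # C_{t - T_P} for t = T_P, ..., 119
discounted_current = C[T_P:] * np.exp(-g * T_P)  # C_t e^{-g T_P}
assert np.allclose(lagged_outlays, discounted_current)
```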
The two main assumptions in the description of the circuit of capital in Figure 9 are that (i) commodities are sold immediately after they are produced, and (ii) there is a zero time lag between the moment the final good is sold and the moment firms demand new capital outlays. Formally, assumption (i) implies that there is no variation in commercial capital (i.e., there are no variations in final good inventories). Similarly, assumption (ii) states that the variations of financial capital (K F t ) are equal to zero, that is,

s t Π̄ t = P k t (I t − δK t ).

This last equation states that savings (s t Π̄ t ) are equal to net investment (P k t (I t − δK t )). This relation is also used to derive the financial constraint of capitalist households (20). The remaining flow identities follow from flows (f), (g), and (h) in Figure 9. Using the financial constraint of workers and the definition of profits of capital good producers in (9), we can derive aggregate consumption in (28) and the aggregate resource constraint. Ultimately, though I preferred to omit all discussions about the circuit of capital in the main text, it is clear that all the relevant accounting relations and financial constraints are obtained from Figure 9.

Figure 9 labels: financial capital, productive capital (K t = P k t K t ), commercial capital, final good, capitalist consumption, expenses.
C Aggregate Production Function and the Accounting Identity
The aggregate production function in the main text offers two alternatives for interpreting a fall in the wage share. First, as noted by Felipe and McCombie (2013, p. 85), it may decline if capital and labor are gross substitutes and K t grows faster than N t . This is the approach used by Piketty and Zucman (2014) and others to "explain" the fall in the wage share. Second, the share of labor may fall if m t declines, regardless of the value of σ. This is one of the innovations of the production function with automation, since it presents an alternative that can account for the falling wage share without assuming gross substitution between labor and capital.
One of the problems here is that g Ω c may decrease as a result of, say, a lower bargaining power of workers, but by interpreting this fall using marginal productivity equations one may wrongly associate it with, say, increasing automation. Furthermore, one may confuse the direction of causality by thinking that changes in the distribution of income are caused by the components of the production function when these may go the other way: the fall in the wage share may explain the changes in the parameters of the production function.
The bottom line here is that it is generally questionable to assign claims of causality to aggregate production functions when these may just be representing an accounting relation. The model in the paper partly addresses this issue in two ways. First, by creating a relation between the aggregate production function and the aggregate costs of production, rather than aggregate income, the model is capable of introducing institutional and political factors as determinants of the labor share. Secondly, by restricting σ < 1, the model can identify changes in technology with automation if these are also associated with increasing unemployment, lower steady-state real wages, and a higher capital-output ratio.
Starting with the sign of ∂(1 + µ S )/∂θ, it is useful to note that Ẑ = b̄ + (ε 1 /(1 + ε 1 ))hŶ N and that b̄ = b β 0 + b β 1 (1 − L). Thus, the Nash solution in steady state can be written using these expressions. If b β 1 = 0, we would get an equation close to the usual Nash solution, which is known to have a negative slope. However, if b β 1 > 0, we can have ∂(1 + µ S )/∂θ > 0, especially for small values of θ, given that L θ increases as θ → 0. This explains why the slope of µ S is initially positive in Figure 2.
Correspondingly, the partial derivative of µ D with respect to θ is positive, given the assumption of decreasing marginal returns and q′(θ) < 0. Thus, a stable equilibrium in the labor market can be obtained if b β 1 is sufficiently low.
Part B
Marginal productivity of capital. Using (11), we know that the marginal productivity of capital satisfies the corresponding first-order condition. Expressing this equation in its stationary form, it follows that Ŷ K t = Y K t Ψ t = δP k t Ψ t /P c t . Combining this with the first-order conditions of capitalists in Appendix A, where I assume (without any loss of generality) that β c = 1, and taking the limit, we obtain the result in (30a).
Rate of profit.
The rate of profit, unlike the rate of return of capital, is the ratio of a flow to a stock. By definition, r t ≡ Π t /(P k t K t ). Making use of equation (B2) in Appendix B, the rate of profit can be expressed in terms of τ t ≡ T t /K t , the share of taxes in capital value, ζ t = κ t V t /K t , the share of vacancy costs relative to capital, and T P t , the average production time of the final good defined in Appendix B. Given Assumption 4.2 (ii) and the assumption that the final good is produced after an average time lag T P t , the value of capital outlays entering production can be discounted back T P t periods. Now, since Î t ≈ (δ + g t )K̂ t , the last equation reduces to the steady-state expression for the rate of profit used in the text.
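Since the intermediate equations were lost in extraction, the following is only a reconstruction of the flow-over-stock logic from relations stated elsewhere in the text (Π t = µ t P c t Y t from the definition of the rate of return, and P c t Y t = C t e^{−g t T P t} from Appendix B), abstracting from the tax and vacancy-cost shares τ t and ζ t ; it is not the paper's exact derivation:

```latex
r_t \;\equiv\; \frac{\Pi_t}{P^k_t K_t}
    \;=\; \frac{\mu_t\, P^c_t Y_t}{P^k_t K_t}
    \;=\; \frac{\mu_t\, C_t\, e^{-g_t T^P_t}}{P^k_t K_t},
\qquad C_t = w_t N_t + P^k_t I_t .
```

With Î t ≈ (δ + g t )K̂ t , the outlay-to-capital ratio, and hence r t , inherits the dependence on µ t , T P t , and the automation-driven capital-output ratio discussed in the text.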
Investment-output ratio
The function Ω t can be written in terms of the parameters π I and υ. Given that X t = X̂ t Ψ t e αJ * t , the equilibrium investment-expenditure-to-output ratio follows directly; solving K̂/Ŷ from (11), we obtain (30c).
D.2 Proof of Lemma 4.3
This part of the proof follows from Lemma A2 of Acemoglu and Restrepo (2018). The main difference here is that the automation measure function depends on the rate of return of capital and is bounded in regions strictly inside (0, 1).
We can begin by noting that at the boundary of region 2 in Figure 1, it must be true that δP k t /γ k = w t e −αJ * t = ŵ t . Using this condition in the ideal price index yields an expression for m̄(µ t ), whose sign can be deduced using a Taylor expansion of the exponential term. The assumption that γ k = δP k t (0) guarantees that m̄(µ t ) > 0, since γ k < δP k t (µ t ) for all µ t > 0. Furthermore, given that Ŷ k is an increasing function of µ, m̄(µ) is also an increasing function of the rate of return.
D.3 Proof of Proposition 4.5

D.3.1 Automation
Wages. The first part is analogous to the proof of Lemma A2 of Acemoglu and Restrepo (2018). Defining ŵ t = w t e −αJ * t and using the ideal price index condition, implicit differentiation gives the response of ŵ t to m. 32 Repeating the same exercise with the labor demand equation, it follows that ∂µ D /∂m| θ=θ * < 0 if ŵ t > 0. Given that µ S is a decreasing function and µ D is increasing close to the initial equilibrium, the labor market reaches a new equilibrium θ ** > θ * following an increase in m if m > m̄(µ). From the increase in the vacancy-to-unemployment ratio it follows that L t rises with a higher m.

32 Using the calibration in Table 1, the errors of this approximation are at most 5%.
The final effect on µ t depends on the model parameters and cannot be determined a priori. It is most likely, however, that ∂µ/∂m ≈ 0 given that the labor supply and demand equations move in opposite directions. In what follows I will work with this assumption.
Labor share on costs of production. Using (30a), the response of the share of labor on costs of production follows directly. The sign of ∂Ω c /∂m| µ=µ * is positive if m * = m, which is the case when m > m̄(µ).

Investment expenditure to output ratio. Starting with the steady-state investment-output ratio in (30c), it readily follows that ∂(X̂ t /Ŷ t )/∂m| µ=µ * is negative if m * = m.
D.3.2 Unemployment Benefits
This part of the proof is equivalent for unemployment benefits and the discount factor of workers. 33 Similar to the previous part of the proof, it is convenient to start with the steady-state equation of wages.
Wages. Using the ideal price condition, we can express the steady-state value of wages ŵ as a function of Ŷ k . The partial derivative of ŵ with respect to b̄ is then determined by ∂µ/∂b̄, given that ∂Ŷ k /∂µ > 0.
Hours. Using again the Nash solution with SEP preferences, and proceeding as in the previous part, the effects on hours follow. Joining the effects on the labor supply and labor demand equations, it follows that µ ↓, θ ↓, L ↓, ŵ ↑, and h ↑.

Labor share on costs of production. Using (11), the corresponding result for the labor share on costs of production follows, given that ∂Ŷ k /∂µ > 0 and σ < 1.
E.1 Data Description
I use data from the BEA-BLS integrated industry-level production account from 1947 to 2016 with the intention of creating a mapping between the model in the paper and the sectors of the economy that it is meant to represent. Particularly, given that the paper focuses on the profit-making capacity of the economy, it makes sense to concentrate on the specific sectors that contribute to the direct creation of aggregate profits. Though this is a controversial topic that was largely submerged with the rise of neoclassical economics, which interpreted all potentially marketable activities as production activities, it is instrumental for understanding the conditions allowing the reproduction of capital at an increasing scale. Here I take the view that new wealth results from the creation of aggregate profits, and that these are the outcome of a social relation joining capitalists and workers in the production process of tangible output (goods or services). Taking a practical approach to this problem, I concentrate exclusively on what Basu and Foley (2013) denoted as "value-adding sectors". 34 Basing the analysis on these sectors and the theoretical argument of the model, the rate of return of capital is measured as the ratio of aggregate profits to total costs of production, where P t Y t is nominal gross output minus nominal intermediate input.

34 Sectors not included in the list, like the financial industry (Finance, Insurance, and Real Estate; FIRE), Education and Health Services, and Professional and Business Services, share the characteristic that national accounts impute value added onto them to make it equal to the incomes generated. The BEA, for example, calculates the value added of the banking sector from interest rate spreads between lending and deposit rates, which has no direct relation with the production of goods and services in the economy.
To obtain the value of depreciation I use the BEA Fixed Assets Accounts Tables.
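As an illustration of how this measurement could be implemented, the sketch below computes the rate of return as profits over costs of production, with the cost definition C t = w t N t + P k t I t taken from Appendix B. The file and column names are hypothetical placeholders, not the actual BEA-BLS layout.

```python
import pandas as pd

# Hypothetical implementation of the rate-of-return measurement described
# above: mu_t = Pi_t / C_t, with P_t Y_t = gross output - intermediate inputs
# and costs of production C_t = labor compensation + investment expenditure.
# File and column names are placeholders, not the actual BEA-BLS layout.

df = pd.read_csv("value_adding_sectors.csv")                      # hypothetical input
net_output = df["gross_output"] - df["intermediate_inputs"]       # P_t Y_t
costs = df["labor_compensation"] + df["investment_expenditure"]   # C_t
df["rate_of_return"] = (net_output - costs) / costs               # mu_t = Pi_t / C_t
```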
E.2 Additional Data and Figures
This subsection compares the labor shares by sector in the US economy using the BEA-BLS integrated industry-level production account (Eldridge et al., 2020). In the first row of Figure 10, I compare the labor share of the total non-farm economy with the proposed measure based on productive sectors. As noted above, the sectors excluded from the analysis are those with questionable assignments of value added.
By employing this modification we find two important changes: the labor share is generally higher than in the total non-farm sector, and the fall in the labor share is much clearer in the productive sectors right after the early 1980s. This is an important difference given that a variety of measures of the labor share only exhibit a clear fall after the 2000s.
The data in Figures 10 and 11 break down the labor shares by sector. Even without counting manufacturing, which shows the steepest decline in the wage share since the late 1970s, we see that sectors like productive services, utilities and information, retail, transportation and warehousing, and wholesale trade all exhibit a considerable decline in the wage share since the 1980s.
These differences in the sectors of the economy are of great importance for understanding the changes in income distribution. Particularly, one of the conclusions that can be drawn from these data is that different forces may be shaping the behavior of income distribution in each sector depending on the role it plays in the production of wealth in the economy.

Figure 10: Labor shares. Notes: The productive service sector is the sum of Administrative and waste management, Arts, entertainment, and recreation, Accommodation and food services, and Other services, except government. All the data are from the BEA-BLS integrated industry-level production account.

Figure 11: Labor shares. Notes: The unproductive service sector is the sum of Professional, scientific, and technical services, Management of companies and enterprises, Educational services, and Health care and social assistance. All the data are from the BEA-BLS integrated industry-level production account.
Additional empirical results. Figure 12 shows some results which complement the empirical results in the main text. Panel A corroborates the results of Panel E in Figure 5, showing that the model does capture the main movements of the labor market through time. The results for the net investment-output ratio, the rate of profit, and the capital-output ratio show a similar pattern as that of the investment-output ratio in Figure 5. Essentially, these results indicate that the calibration of the depreciation rate was probably too high, given that with a lower δ, the fit of the model in Panel C of Figure 5 would have required a higher capital-output ratio, and this would have implied higher values of P k K /P Y and (P I X − δP k K )/P Y (Panel B in Figure 12), on one hand, and a lower value for the rate of profit, on the other. A lower δ, however, would have increased the steady-state values of the savings rate. The reason for this is that, according to Theorem 4.4, in the steady-state equilibrium s ≈ g (1 − Ω c )/(δµ − (1 − Ω c )(τ + ζ)). In the paper I made the decision to underpredict the value of P k K /P Y to maintain relatively low rates of savings, even though I am aware that the model should overpredict the value of s given the assumption that all investments are financed from retained earnings.
Having clarified why the model underpredicts the capital (investment)-output ratio, we may turn our attention to Panel C, which displays the rate of profit (measured as profits over the capital stock), the rate of return (measured as profits over costs of production), and the implied equilibrium rate of profit from equation (30b). One of the key results here is that, unlike the rate of return, the rate of profit remained relatively constant after the 1980s. This is explained by the increase in the rate of automation which, in accordance with Proposition 4.5 and Figure 3, reduces the rate of profit because it increases the value of capital relative to the final good. This negative effect on r is balanced by the increase in µ caused by the deterioration of the bargaining power of labor.
The data on the marginal productivity of capital are obtained using (30a), estimating its stationary components from the data. From this perspective, the marginal productivity of capital is directly obtained from the rate of return of capital, which, in turn, is determined as a social outcome resulting from the bargaining process over wages between capitalists and workers. Stated differently, the marginal productivity of capital is here a meaningless concept if it is detached from the social elements determining the return of capital. Note, in addition, that the marginal productivity of capital, unlike the rate of profit, increases considerably after the 1980s, which is precisely what is expected from Proposition 4.5 and Figure 3.
Finally, Panel F shows a clear positive correlation between the wage premium and the rate of return of capital. Though the wage premium is not a topic directly treated in the paper, the data in Figure 12 provide some evidence for the hypothesis that the increase in the marginal productivity of skilled workers responds to external factors like rising sales and profits, all of which may be the result of political changes favoring top income earners (see, e.g., Piketty 2014). The role that, e.g., managers and CEOs play in the creation of profits is a topic well worth studying in future research.
E.3 Equation (32)
The public policy equation in (32) is estimated using a Kalman filter and the priors on the model described below.

Figure notes: Data from the BEA-BLS integrated industry-level production account (Eldridge et al., 2020). Panel A uses the non-farming vacancy data of Petrosky-Nadeau and Zhang (2021). The wage premium is normalized to the 1981 value of the rate of return. As in Figure 5, the green diamonds are the result of adjusting m and β w to the time averages of the capital cost share and the labor share. The minimum rate of return is obtained by setting aggregate profits equal to the sum of vacancy costs plus taxes, as in equation (26).
Time-varying models often require informative priors to narrow down the uncertainty of the parameters. Here I set β b 0 = 0 (p+1)×1 to avoid including any bias in the sign of the parameters, and set M 0 = 0.5 × I (p+1)×(p+1) to convey the message that the initial values are probably close to zero. Working with the assumption that the time-varying parameters are not too far from the fixed-parameter solution, I set s β 0 = (ν β /2 − 1) × [0.1 2 , 0.01 2 , 0.01 2 , 0.05 2 ] and ν β = 5. That is, the expected value of the variance of the parameters is equal to two times the vector on the right-hand side of s β 0 . Similarly, given that the sample variance of UI extensions over labor productivity is about 0.00018, I set s 0 = (ν 0 /2 − 1) × 0.01 2 with ν 0 = 5.
Given these priors it is quite simple to estimate the posterior distribution of the parameters using a Kalman filter and a Gibbs sampler. Further computational details can be found in Prado and West (2010).
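For concreteness, the following sketch implements the forward filtering step for a random-walk time-varying-parameter regression of the generic form y t = x t ′β t + ε t , β t = β t−1 + η t ; a full estimation along the lines of Prado and West (2010) would add backward sampling and Gibbs steps for the variances. The exact specification of (32) differs from this generic form, so treat the code as illustrative.

```python
import numpy as np

def tvp_kalman_filter(y, X, sigma2_eps, Sigma_eta, beta0, P0):
    """Forward Kalman filter for the time-varying-parameter regression
    y_t = x_t' beta_t + eps_t,  beta_t = beta_{t-1} + eta_t.
    Returns the filtered means and covariances of beta_t."""
    T, p = X.shape
    beta_filt = np.zeros((T, p))
    P_filt = np.zeros((T, p, p))
    beta, P = beta0.copy(), P0.copy()
    for t in range(T):
        P_pred = P + Sigma_eta                 # predict: random-walk state noise
        x = X[t]
        S = x @ P_pred @ x + sigma2_eps        # innovation variance (scalar y_t)
        K = P_pred @ x / S                     # Kalman gain
        beta = beta + K * (y[t] - x @ beta)    # filtered mean
        P = P_pred - np.outer(K, x @ P_pred)   # filtered covariance
        beta_filt[t], P_filt[t] = beta, P
    return beta_filt, P_filt

# Priors in the spirit of the text: zero initial mean and M_0 = 0.5 * I.
p = 3
beta0, P0 = np.zeros(p + 1), 0.5 * np.eye(p + 1)
```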
E.3.1 Analysis of vacancy costs
It is important to note that the data on rates of return and on labor market tightness require vacancy costs which are significantly higher than those commonly reported in the literature. Using the calibration in Table 1 and the results in Figure 5, the proportional costs of hiring are on average 4.7 times greater than the average productivity of labor. This may be an implausibly large value in light of the empirical studies of Silva and Toledo (2009), who show that recruiting costs are about 14 percent of quarterly pay per hire, and the estimates of Merz and Yashiv (2007), who show that the marginal costs of hiring are about 1.5 times the average productivity of labor; it is necessary, however, if the model is to satisfy the conditions for steady-state growth paths in Theorem 4.4. Table 2 reports some commonly used calibrations for vacancy costs relative to average labor productivity and shows that under no circumstance can any of the models of the equilibrium unemployment literature satisfy the condition that capitalists are capable of paying taxes and vacancy costs and still have a remnant with which to finance their own consumption. Search and matching models generally avoid this problem by working under the assumption that all households share the ownership of capital, so that it makes no difference whether consumption is financed from wages or profits. This, however, merely hides the problem, since it does not address the key issue of showing that, given the assumptions of the model, the economy can expand at an increasing scale through time.
"year": 2022,
"sha1": "311664561d13b48e5c3a4659cf20e6e5c6b9520c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "311664561d13b48e5c3a4659cf20e6e5c6b9520c",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
The main aim is to derive reliable mass-loss rates and circumstellar SiO abundances for a sample of 40 S-type AGB stars based on new multi-transitional CO and SiO radio line observations. In addition, the results are compared to previous results for M-type AGB stars and carbon stars to look for trends with chemical type. The circumstellar envelopes are assumed to be spherically symmetric and formed by a constant mass-loss rate. The mass-loss rates are estimated from fitting the CO observations using a non-local, non-LTE radiative transfer code. Once the physical properties of the circumstellar envelopes are determined, the same radiative transfer code is used to model the observed SiO lines in order to derive circumstellar abundances and the sizes of the SiO line-emitting regions. We have estimated mass-loss rates of 40 S-type AGB stars and find that the derived mass-loss rates have a distribution that resembles those previously derived for similar samples of M-type AGB stars and carbon stars. The estimated mass-loss rates also correlate well with the corresponding expansion velocity. In all, this indicates that the mass loss is driven by the same mechanism in all three chemical types of AGB stars. In addition, we have estimated the circumstellar fractional abundance of SiO relative to H2 in 26 of the sample S-type AGB stars. The derived SiO abundances are, on average, about an order of magnitude higher than predicted by stellar atmosphere thermal equilibrium chemistry, indicating that non-equilibrium chemical processes determines the abundance of SiO in the circumstellar envelope. Moreover, a comparison with the results for M-type AGB stars and carbon stars show that for a certain mass-loss rate, the circumstellar SiO abundance seems independent (although with a large scatter) of the C/O-ratio.
Introduction
The final evolutionary stage of low- to intermediate-mass stars, as they ascend the asymptotic giant branch (AGB), is characterized by an intense mass loss. The stellar wind builds up gradually and creates a circumstellar envelope (CSE), carrying gas and dust from the star into the interstellar medium. The molecular setup and grain types in CSEs are to a large extent determined by the C/O-ratio in the photosphere of the central star. AGB stars are normally divided into two distinct spectral types: M-type stars, with C/O < 1, and carbon stars, with C/O > 1. Abundance analysis has also revealed stars with photospheric C/O-ratios close to unity (within approximately 5%), so-called S-type AGB stars (Scalo & Ross 1976). The spectra of S-type AGB stars are dominated by ZrO bands (as opposed to spectra of M-type AGB stars, which are dominated by TiO bands), indicating that the S-type AGB stars have an enhancement in elements formed through the slow neutron capture process.
Due to having a C/O-ratio close to unity, it is tempting to identify S-type AGB stars with a brief transitional phase as the star evolves from an oxygen-rich M-type AGB star into a carbon star. Dredge-up of carbon from He-shell burning would change the spectral type of the star sequentially: M-MS-S-SC-C. As possible transition objects, S-type AGB stars might very well help to achieve a deeper understanding of the chemical evolution as a star ascends the AGB, as well as shed light on the mass-loss mechanism(s), which is (are) not yet fully understood in detail. Several surveys of CO emission from their CSEs have been performed (Bieging & Latter 1994; Sahai & Liechti 1995; Bieging et al. 1998; Groenewegen & de Jong 1998; Ramstedt et al. 2006). Circumstellar molecular line emission from molecules other than CO has previously been searched for, and detected, only in a handful of objects and only for HCN and SiO (Bieging & Latter 1994; Bieging et al. 1998).
The aim of this work is to thoroughly investigate the circumstellar physical and chemical properties of a sample of S-type AGB stars. The physical properties of the CSEs, in particular the mass-loss rates that produced them, are determined from the CO data using detailed, non-LTE radiative transfer modelling which self-consistently calculates also the gas kinetic temperature. Detailed studies of the mass-loss properties of samples of carbon stars (Schöier & Olofsson 2001; Schöier et al. 2002), M-type AGB stars (Olofsson et al. 2002; González Delgado et al. 2003), and S-type AGB stars (Ramstedt et al. 2006) have previously been performed.
Lately, the mechanisms driving mass loss in M-type AGB stars have been much debated (Woitke 2006; Höfner & Andersen 2007; Höfner 2008), and for the S-type AGB stars this is a matter of long-standing debate (e.g., Willems & de Jong 1988; Sahai & Liechti 1995). The supposed lack of any free oxygen and/or free carbon to form dust (due to a C/O-ratio close to unity) has led to the suggestion that the S-type AGB stars would not be able to drive a wind as efficiently as the M-type AGB stars and the carbon stars. However, the results of Ramstedt et al. (2006) show no apparent differences between the mass-loss rate distribution of the S-type AGB stars and those found for carbon and M-type AGB stars.
Once the physical properties are known from the CO analysis, they can be used to estimate abundances of other molecules in the CSE. The analysis of the S-type AGB stars in Ramstedt et al. (2006) was based on observations of CO(J = 3→2) data gathered at the APEX telescope, supplemented with data collected from the literature. When examining the published data, Ramstedt et al. (2006) found a rather large scatter in the reported line intensities [especially for CO(J = 2→1) data] for individual objects, even when observed with the same telescope. Since then, we have obtained more observational data, which warrants a re-analysis of the Ramstedt et al. (2006) work, and the results are reported here.
We have, as a first step after the CO observations, searched for circumstellar SiO radio line emission in several rotational transitions. Using the physical structure of the CSEs derived from the CO modelling, we estimate the abundance of SiO in the same sample of S-type AGB stars based on a detailed excitation analysis. There exists strong evidence that the circumstellar SiO emission carries information on the region where the mass loss is initiated and where the dust formation takes place (Reid & Moran 1981;Reid & Menten 1997;Schöier et al. 2006a) making it a particularly interesting molecule to study. Schöier et al. (2006b) modelled circumstellar SiO line observations from a sample of carbon stars and when comparing this to a similar analysis performed by González Delgado et al. (2003) for a large sample of M-type AGB stars, they found no apparent distinction between their circumstellar SiO abundance distributions. For the carbon stars, the derived abundances are several orders of magnitude higher than expected from thermal equilibrium stellar atmosphere chemistry. A possible explanation for the high SiO abundances derived for the carbon stars is the influence of a shock chemistry in the inner part of the wind (Cherchneff 2006). With the analysis performed in this work, the S-type AGB stars can be added to this comparison (see Sect. 6.3).
The sample of S-type AGB stars and the observational data are presented in Sects. 2 and 3. The radiative transfer modelling is described in Sect. 4. The results are given and discussed in Sects. 5 and 6, respectively. Finally, the conclusions are given in Sect. 7.

The sample

Jorissen et al. (1998) present a sample of 124 S-type stars from the list of Chen et al. (1995), which provides cross identifications between the General Catalogue of Galactic S stars, the IRAS Point Source Catalogue (PSC), and the Guide Star Catalogue. Only stars having flux densities of good quality in the 12, 25, and 60 µm bands in the IRAS PSC were retained in their sample. We have chosen the stars in the sample of Jorissen et al. (1998) that have previously been detected in circumstellar CO emission. Five stars were added from the samples of Groenewegen & de Jong (1998) and Sahai & Liechti (1995) [selected from the S-type catalogs of Stephenson (1984, 1990)], and one star was added from the sample of Bieging & Latter (1994) [selected from the list of Jura (1988)]. These six stars have also been detected previously in circumstellar CO emission. Only S-type stars with Tc lines detected in their spectra and with detectable infrared excess are safely identified as intrinsic thermally pulsing AGB stars. S-type stars with no Tc lines and no infrared excess are most likely extrinsic S-stars, i.e., they are part of binary systems and their chemical peculiarities are due to mass transfer across the system (Jorissen et al. 1998).

For the Miras (M), bolometric luminosities are estimated using the period-luminosity relation of Whitelock et al. (1994), and periods are taken from the General Catalogue of Variable Stars (Kholopov et al. 1999). For the semiregular (SR, SRa, and SRb) and irregular variables (Lb), a luminosity of 4000 L⊙ was assumed, in accordance with Olofsson et al. (2002). Corrections for the interstellar extinction, based on the galactic latitude of each star, were applied (Groenewegen et al. 1992). Distances are derived by fitting the spectral energy distribution (SED) calculated with DUSTY (Sect. 4.2) to observed fluxes (Sect. 3.4). The derived distances, and the adopted periods and luminosities, are presented in Table 1, together with the variability type. The derived distances from the SED modelling agree well with distances from Hipparcos parallaxes when available (for 7 stars). Fig. 1 shows the distance distribution of our sample in 150 pc bins. The distance distribution is overlaid with a distribution assuming that the stars are evenly distributed, that the surface density of carbon stars is 40 kpc⁻² with a scale height of 200 pc, and that there are 1/3 as many S-type stars as carbon stars in the solar neighbourhood (Wing & Yorka 1977; Jura 1990). A direct fit to our sample within 600 pc gives a surface density of 20 kpc⁻² with a scale height of approximately 150 pc (dashed line, Fig. 1). Considering the uncertainties involved in the distance estimates, classification, etc., we find the results to be consistent and believe that our sample is representative of mass-losing S-type AGB stars and complete to about 600 pc. The sample of carbon stars studied by Schöier & Olofsson (2001) is representative of C-type AGB stars and complete to about 500 pc, while the completeness of the M-type AGB star samples of Olofsson et al. (2002) and González Delgado et al. (2003) has not been thoroughly investigated.
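To make the comparison concrete, the overlaid distribution can be computed directly from the stated disk parameters. Below is a minimal Python sketch, assuming the Sun sits in the mid-plane of a disk with constant surface density and an exponential vertical profile; the function name and the numerical integration scheme are illustrative choices, not the fitting procedure actually used here.

```python
import numpy as np

def expected_counts(surface_density_kpc2=20.0, scale_height_pc=150.0,
                    bin_edges_pc=np.arange(0.0, 1050.0, 150.0)):
    """Expected number of stars per distance bin for a uniform disk
    population with an exponential vertical profile, observed from the
    mid-plane (illustrative sketch only)."""
    sigma = surface_density_kpc2 * 1e-6        # stars per pc^2
    n0 = sigma / (2.0 * scale_height_pc)       # mid-plane density, stars per pc^3

    def n_within(d):
        # stars inside a sphere of radius d centred on the Sun:
        # integrate n(z) over horizontal slices of the sphere
        z = np.linspace(-d, d, 2001)
        slice_area = np.pi * (d**2 - z**2)
        return np.trapz(n0 * np.exp(-np.abs(z) / scale_height_pc) * slice_area, z)

    cumulative = np.array([n_within(d) for d in bin_edges_pc])
    return np.diff(cumulative)

print(expected_counts())   # stars expected in each 150 pc bin
```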
Observational data
When modelling circumstellar molecular line emission it is essential to observe as many different transitions as possible of the molecule under study. The derived mass-loss rate (when assuming a spherically symmetric, smooth wind) should be considered as the average mass-loss rate that created the CSE probed by the observed line emission. Different transitions probe slightly different regions in the CSE depending on their excitation requirement and thus the average is taken over a larger part of the CSE if more transitions are obtained. The same is true when estimating the radial abundance distribution and, in particular, when estimating the size of the emitting region of a specific molecule. In this work, we have tried to obtain observations of at least three transitions of CO and SiO for as many of the sample sources as possible (Tables A.1-B.2).
Observations of circumstellar CO and SiO radio line emission
The first analysis of CO radio line emission from this sample was published in Ramstedt et al. (2006). There, data from the literature (mainly J = 1→0 and 2→1) was analysed together with J = 1→0 data collected with the Onsala Space Observatory (OSO) 20 m telescope¹ (January 2006), and J = 3→2 data of 18 sources collected with the APEX 12 m telescope² (from August to October 2005). The CO data base has since then been substantially extended. New observations of J = 1→0 and 2→1 line emission were performed at the IRAM 30 m telescope³ in August 2006, using the AB SIS receiver combination and observing both polarizations simultaneously. Further observations of J = 3→2 line emission were performed at APEX using APEX-2A during the autumn of 2006, and at the James Clerk Maxwell Telescope⁴ (JCMT) in April 2007. At the JCMT we used HARP due to problems with the B receiver. As spectrometers, the VESPA autocorrelator at IRAM, the FFTS at APEX, and the ACSIS digital autocorrelator at the JCMT were used.

¹ The Onsala 20 m telescope is operated by the Swedish National Facility for Radio Astronomy, Onsala Space Observatory at Chalmers University of Technology, with support from the Swedish Research Council.
² The Atacama Pathfinder Experiment (APEX) is a collaboration between the Max-Planck-Institut für Radioastronomie, the European Southern Observatory, and the Onsala Space Observatory.
³ IRAM is supported by CNRS/INSU (France), the MPG (Germany), and the IGN (Spain).
New SiO observations of J = 2→1 line emission were performed at OSO using an SIS receiver and the autocorrelator spectrometer in the low-resolution mode in January 2006. Observations of J = 2→1 and 5→4 line emission were performed at the IRAM 30 m telescope in August 2006, and further observations of J = 6→5 and 8→7 line emission were performed at the JCMT during the autumn of 2006 and the spring of 2007. The observations at IRAM and the JCMT were obtained simultaneously with the CO observations, using the same receiver and spectrometer setups. During 2006 and 2007, observations of J = 8→7 line emission were performed at APEX using the same setup as for the CO observations (all these lines are in the v = 0 state). Simultaneously with the J = 2→1, v = 0 observations at OSO, the J = 2→1, v = 1 line (which is of maser origin) was observed. We observed SiO(J = 2→1, v=1) line emission in 24 of our sample stars. For four of the stars, WY Cas, V386 Cep, TV Dra, and R Gem, this is the first detection of SiO maser emission.
At OSO the observations were performed using the dual beam-switch mode. The observations at IRAM were performed using wobbler switching and a dual-polarization mode. At APEX position-switching was used with the reference position located at +2 ′ in azimuth. The JCMT observations were also performed using position-switching with a 2 ′ throw. Regular pointing checks were performed during all observations and typically found to be consistent within ≈3 ′′ of the pointing model.
The data was reduced using CLASS, Starlink, and XS⁵ by subtracting a first-order polynomial baseline fitted to the emission-free channels; the spectra were then binned to improve the signal-to-noise ratio. The typical spectral resolution of the reduced data is 1 km s⁻¹. The raw spectra are stored on the T_A* scale, where T_A* is the antenna temperature corrected for the atmospheric attenuation using the chopper-wheel method. The intensity scale is subsequently given in main-beam brightness temperature, T_mb = T_A*/η_mb, where η_mb is the main-beam efficiency. The adopted main-beam efficiencies are given in Table 2, together with the frequency, the energy of the upper level, and the main-beam FWHM for the respective transitions. The uncertainty in the absolute intensity scale is estimated to be about ±20%. Velocities are given with respect to the Local Standard of Rest (LSR).
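The reduction steps described above (first-order baseline subtraction, rebinning to ≈1 km s⁻¹, and conversion to the main-beam scale) are simple to express in code. The sketch below is illustrative only; the line-window definition and binning scheme are assumptions, not the exact CLASS/Starlink/XS procedures.

```python
import numpy as np

def reduce_spectrum(velocity, t_a_star, eta_mb, line_window, bin_width=1.0):
    """Baseline-subtract, rescale to T_mb = T_A*/eta_mb, and rebin.

    velocity    : channel velocities [km/s]
    t_a_star    : antenna temperatures T_A* [K]
    line_window : (v_min, v_max) bracketing the line emission; channels
                  outside it are treated as emission-free
    """
    emission_free = (velocity < line_window[0]) | (velocity > line_window[1])
    # first-order polynomial baseline fitted to the emission-free channels
    coeffs = np.polyfit(velocity[emission_free], t_a_star[emission_free], deg=1)
    t_mb = (t_a_star - np.polyval(coeffs, velocity)) / eta_mb

    # rebin to ~1 km/s to improve the signal-to-noise ratio
    edges = np.arange(velocity.min(), velocity.max() + bin_width, bin_width)
    idx = np.digitize(velocity, edges)
    v_binned = np.array([velocity[idx == i].mean() for i in np.unique(idx)])
    t_binned = np.array([t_mb[idx == i].mean() for i in np.unique(idx)])
    return v_binned, t_binned
```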
CO line profiles
All new spectra are presented in Appendix A, and the integrated intensities and peak main-beam brightness temperatures are given in Tables A.1-A.3. Our previous analysis of CO emission lines from S-stars (Ramstedt et al. 2006) was based on new APEX observations in combination with data from the literature. When examining the previously published data, we found a large scatter in the reported CO(J = 2→1) intensities, and consequently these data were not used in the final analysis. The new CO(J = 2→1) data is generally stronger than the data discarded in Ramstedt et al. (2006). As an example, the new CO(J = 2→1) line intensities obtained at IRAM are, on average, a factor of about three stronger than those reported in Sahai & Liechti (1995).

⁴ The James Clerk Maxwell Telescope is operated by the Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the United Kingdom, the Netherlands Organisation for Scientific Research, and the National Research Council of Canada.
⁵ XS is a package developed by P. Bergman to reduce and analyze a large number of single-dish spectra. It is publicly available from ftp://yggdrasil.oso.chalmers.se
The observed line profiles are in general of good quality (S/N ∼ 5 or larger) and can, to a first approximation, be reconciled with those expected from a smooth, spherically symmetric outflow. There is no evidence for any "detached-shell" sources in the sample (see e.g. the carbon star sample results of Olofsson et al. 1993). A closer inspection shows that some of the stars (e.g., W And, W Aql, DY Gem, and R Gem) have line profiles with weak wings extending beyond the parabolic line profile. This might be indicative of recent mass-loss-rate modulations in these sources or of asymmetric outflows. The six observed lines toward T Cet all show an asymmetry with stronger emission on the red-shifted side, similar to what is observed due to strong self-absorption (Huggins & Healy 1986). However, the star has a derived mass-loss rate of only 4×10⁻⁸ M⊙ yr⁻¹, and all lines are found to be optically thin in the model. Consequently, the model does not reproduce the observed line profiles, since strong self-absorption requires optically thick lines. Most likely, this is an indication that T Cet has an asymmetric outflow.
SiO line profiles
A total of 26 stars were detected in circumstellar SiO line emission. All new spectra are presented in Appendix B, and the integrated intensities and peak main-beam brightness temperatures are given in Tables B.1-B.3. The observed spectra are generally of good quality (S/N ∼ 3 or larger). As previously discussed by González Delgado et al. (2003), the SiO line profiles are often found to be narrower than the CO line profiles. However, many of the SiO lines show weak wings extending beyond the main emission feature, and the total velocity widths are similar to those of the CO lines. It should be kept in mind that for poorer quality data, the uncertainty in the line width can be several km s⁻¹.
Dust continuum emission
The SEDs were constructed using J, H, and K band data (2MASS) and IRAS fluxes.
The circumstellar model
The CSE is assumed to be spherically symmetric and formed by a constant mass-loss rate. It is assumed to be expanding at a constant velocity, derived from fitting the CO line widths, and to have a micro-turbulent velocity distribution with a Doppler width of 1.0 km s −1 . In addition, a thermal contribution to the local line width is added, based on the derived kinetic temperature of the gas. The density structure is obtained from the conservation of mass.
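For reference, the density law implied by the conservation of mass, and the local line width combining the micro-turbulent and thermal contributions, can be written as a short sketch; combining the two width contributions in quadrature is an assumption here.

```python
import numpy as np

M_SUN, YEAR = 1.989e33, 3.156e7     # g, s
M_H2, K_B = 3.35e-24, 1.381e-16     # g, erg/K

def h2_number_density(r_cm, mdot_msun_yr, v_exp_kms):
    """n_H2(r) for a smooth, spherical wind with a constant mass-loss
    rate: conservation of mass gives n = Mdot / (4 pi r^2 v m_H2)."""
    mdot = mdot_msun_yr * M_SUN / YEAR
    return mdot / (4.0 * np.pi * r_cm**2 * (v_exp_kms * 1e5) * M_H2)

def local_line_width(t_kin_k, v_turb_kms=1.0):
    """Local Doppler width: 1.0 km/s micro-turbulence plus a thermal
    contribution based on the gas kinetic temperature (quadrature sum
    is an assumed combination)."""
    v_th = np.sqrt(2.0 * K_B * t_kin_k / M_H2) / 1e5   # km/s
    return np.sqrt(v_turb_kms**2 + v_th**2)
```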
Dust emission modelling
The dust radiative transfer was solved using the publicly available code DUSTY⁶. Amorphous carbon grains (Suh 2000) or amorphous silicate grains (Justtanont & Tielens 1992) were assumed, based on the IRAS low-resolution spectra (LRS) classification according to Volk & Cohen (1989). Three parameters are adjustable when fitting the model SEDs to the observed flux densities: the dust optical depth at 10 µm, τ₁₀, the dust temperature at the inner radius of the dust envelope, T_d(r_i), and the stellar temperature, T⋆. One large grid per dust type was calculated. The optical depth at 10 µm ranges from 0.01 to 3.0 in steps of 10 %; the inner dust temperature is varied from 500 to 1500 K and the stellar temperature from 1800 to 2400 K, both in steps of 100 K. Once the grid is calculated, the solution can be scaled to the luminosity and distance of any star, and a fit to the observed flux densities can be found (Ivezic & Elitzur 1997). For simplicity, the dust grains are assumed to be of the same size, with a radius of 0.1 µm, and to have a density of 2 g cm⁻³ (carbon grains) or 3 g cm⁻³ (silicate grains).
Physical properties of the gas content in the CSEs
The adopted method for the radiative transfer analysis of the circumstellar CO radio line emission has been described in detail in previous articles [Schöier & Olofsson (2001) for carbon stars, and Olofsson et al. (2002) for M-type stars] and therefore only a short description is given here.
The non-LTE radiative transfer code is based on the Monte Carlo method. The code has been benchmarked to high accuracy against a wide variety of molecular-line radiative transfer codes (van Zadelhoff et al. 2002; van der Tak et al. 2007). In the excitation analysis of the CO molecule, 41 rotational levels are included in each of the two lowest vibrational states (v=0 and v=1). Energy levels and radiative transition probabilities, as well as collisional rate coefficients for collisions between CO and H₂, are taken from Schöier et al. (2005)⁷. When weighting together collisional rate coefficients for CO in collisions with ortho- and para-H₂, an ortho-to-para ratio of 3 is adopted. Radiation from the central source (assumed to be a blackbody) and the cosmic microwave background is included. When the dust optical depth could be constrained in the dust radiative transfer analysis (see Sect. 5.1), i.e., when there is a significant amount of thermal dust grains present in the wind, the corresponding modification to the radiation field is also included in the molecular excitation analysis.
The energy balance equation for the gas is solved self-consistently. Cooling is due to line cooling from CO and H₂ and to the adiabatic expansion of the gas. The dominant gas-heating mechanism is collisions between the H₂ molecules and the dust grains. Photoelectric heating is also included; it is mostly important in the outer parts of the CSE. The free parameters describing the dust (the dust-to-gas mass-loss-rate ratio, Ψ, the average density of an individual dust grain, ρ_g, and the dust grain radius, a_g) are combined in the parameter h [see Schöier & Olofsson (2001) for a definition].
The h-parameter thus gives a measure of how efficiently the gas is heated through collisions with dust grains. Following Schöier & Olofsson (2001), we have adopted an average efficiency factor for momentum transfer, which is constant throughout the CSE, Q_rp = 3 × 10⁻².

⁶ http://www.pa.uky.edu/~moshe/dusty/
⁷ http://www.strw.leidenuniv.nl/~moldata

The radial CO abundance distribution is estimated using the model presented in Mamon et al. (1988). The initial abundance of CO relative to H₂ is assumed to be 6 × 10⁻⁴. The inner radius of the CSE was taken from the SED fitting when the model could be constrained, and is otherwise assumed to be 5 r⋆. A change in the inner radius by a factor of two will affect the resulting line intensities by less than 15% (Schöier & Olofsson 2001).
The mass-loss rate and the h-parameter are the remaining free parameters in the CO line modelling.
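The CO photodissociation profile of Mamon et al. (1988) is commonly represented by a simple analytic parametrization; the sketch below uses that common form, with placeholder values for the half-abundance radius and shape parameter (in practice both depend on Ṁ and v_e and are not taken from this paper).

```python
import numpy as np

def co_abundance(r_cm, f0=6e-4, r_half_cm=2e17, alpha=2.5):
    """Radial CO/H2 abundance in the analytic form commonly fitted to
    the Mamon et al. (1988) photodissociation results:

        f(r) = f0 * exp(-ln 2 * (r / r_half)**alpha)

    r_half (the half-abundance radius) and alpha depend on the wind
    density; the defaults here are illustrative placeholders only."""
    return f0 * np.exp(-np.log(2.0) * (r_cm / r_half_cm) ** alpha)
```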
SiO line radiative transfer modelling
The same non-LTE Monte Carlo radiative transfer code is used for the analysis of the SiO radio line emission, where 41 rotational levels are included in each of the two lowest vibrational states (v=0 and v=1). Energy levels and radiative transition probabilities, as well as collisional rate coefficients for collisions between SiO and H₂, are taken from Schöier et al. (2005). The physical properties of the CSEs (mass-loss rates and radial gas temperature distributions) are taken from the results of the CO modelling. All other assumptions are the same. Also here, a decrease in the inner radius by a factor of two will result in a small change (≤10%) of the model line intensities. The only free parameters when fitting the SiO line emission are the SiO abundance at the inner radius and, when several lines were observed, the outer radius of the SiO emitting region (Sect. 4.5). These two parameters can be used to test chemical models of the inner CSE and photodissociation models for the outer envelope, respectively.
The radial SiO abundance distribution
The radial SiO abundance distribution, i.e., the ratio of the number densities of SiO molecules to H₂ molecules, f = n(SiO)/n(H₂), is assumed to be described by a Gaussian, f(r) = f₀ exp[−(r/r_e)²] (Eq. 1). For 18 out of 26 detected stars, too few transitions are observed to constrain both the abundance and the size of the emitting region. For these stars the extent of the SiO envelope is assumed to scale with the density of the CSE, Ṁ/v_e, according to the scaling law found by González Delgado et al. (2003) (Eq. 2), where Ṁ is the mass-loss rate and v_e the gas expansion velocity of the wind, both found from the CO line modelling. For a discussion about the validity of Eq. 2 see González Delgado et al. (2003), Schöier et al. (2006b), and Sect. 5.3.
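The two ingredients of the SiO abundance description, the Gaussian profile and the envelope-size scaling law, can be sketched as follows; the numerical coefficients in the scaling law are an assumption here and should be checked against Eq. 2 of González Delgado et al. (2003).

```python
import numpy as np

def sio_abundance(r_cm, f0, r_e_cm):
    """Gaussian radial SiO abundance profile (Eq. 1):
    f(r) = f0 * exp(-(r/r_e)**2)."""
    return f0 * np.exp(-(r_cm / r_e_cm) ** 2)

def sio_envelope_size(mdot_msun_yr, v_exp_kms):
    """SiO envelope size from the density scaling law (Eq. 2).  The
    coefficients below only approximate the Gonzalez Delgado et al.
    (2003) relation and are an assumption:
        log10(r_e [cm]) ~ 19.2 + 0.48 * log10(Mdot / v_e)
    with Mdot in M_sun/yr and v_e in km/s."""
    return 10.0 ** (19.2 + 0.48 * np.log10(mdot_msun_yr / v_exp_kms))
```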
Dust optical depth and radial dust temperature distribution
As known from previous studies of S-stars on the AGB (e.g., Sahai & Liechti 1995), the dust content of S-star CSEs is low. The dust radiative transfer analysis performed here shows that the dust optical depth can be constrained for only 12 out of 40 sample stars. For the other stars the SED is dominated by the emission from the central star, and DUSTY has been used to determine only the stellar effective temperature (in addition to the distance/luminosity of the source). The best-fit model is found by minimizing

χ² = Σ_N [(F_λ,mod − F_λ,obs)/σ_λ]²,   (3)

where F_λ is the flux density and σ_λ the uncertainty in the measured flux density at wavelength λ. The summation is done over all N independent observations. The reduced χ² for the best-fit model is given by

χ²_red = χ²/(N − p),   (4)

where p (the number of adjustable parameters) is 3 in our case. Table 3 shows the LRS classification, the assumed dust type, the stellar radius, r⋆, the inner radius of the CSE, r_i, and the stellar temperature, T⋆. The inner radius is assumed to be 5 r⋆ for all stars where the dust optical depth could not be constrained. Table 3 also gives the temperature at the inner radius of the dust shell, T_d(r_i), the dust optical depth at 10 µm, τ₁₀, the dust-to-gas mass-loss-rate ratio, Ψ, and the dust mass-loss rate, Ṁ_d, for the stars where this could be calculated. The dust mass-loss rate is given by

Ṁ_d = 4π r_i v_d,∞ τ_dν / χ_gr,ν,   (5)

where χ_gr,ν is the grain cross section per unit mass at a given frequency ν, v_d,∞ is the dust expansion velocity, and τ_dν is the dust optical depth at a given frequency. A χ_gr,10µm of 1.1×10³ and 3.5×10³ cm² g⁻¹ has been used for the amorphous carbon and silicate grains, respectively. The dust expansion velocity is calculated from the gas expansion velocity derived in the CO line modelling and the drift velocity between the dust and gas particles, v_dr, given by

v_dr = [v_e L Q_rp/(Ṁ c)]^(1/2)   (6)

(Kwok 1975).
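A short sketch of the dust-mass-loss-rate calculation described above; the expression for Ṁ_d (Eq. 5) follows from integrating an r⁻² dust density profile outward from r_i, and the drift velocity is the momentum-coupling result of Kwok (1975) (Eq. 6). The default χ_gr value is the carbon-grain value quoted in the text.

```python
import numpy as np

M_SUN, L_SUN = 1.989e33, 3.839e33   # g, erg/s
YEAR, C_LIGHT = 3.156e7, 2.998e10   # s, cm/s

def drift_velocity_kms(l_lsun, v_gas_kms, mdot_msun_yr, q_rp=3e-2):
    """Dust-gas drift velocity (Eq. 6): v_dr = sqrt(v_e L Q_rp / (Mdot c))."""
    mdot = mdot_msun_yr * M_SUN / YEAR
    v_dr = np.sqrt(v_gas_kms * 1e5 * l_lsun * L_SUN * q_rp / (mdot * C_LIGHT))
    return v_dr / 1e5                                   # km/s

def dust_mass_loss_rate(r_i_cm, v_gas_kms, l_lsun, mdot_msun_yr,
                        tau_10, chi_gr=1.1e3):
    """Mdot_d = 4 pi r_i v_d tau / chi_gr (Eq. 5), returned in M_sun/yr;
    chi_gr in cm^2/g (1.1e3 for carbon grains, 3.5e3 for silicates)."""
    v_d = (v_gas_kms + drift_velocity_kms(l_lsun, v_gas_kms, mdot_msun_yr)) * 1e5
    return 4.0 * np.pi * r_i_cm * v_d * tau_10 / chi_gr * YEAR / M_SUN
```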
Mass-loss rates, radial gas temperature distribution and kinematics
All results from the CO radiative transfer analysis, mass-loss rates, h-parameters, gas expansion velocities, stellar velocities, v LSR , χ 2 red for the best-fit models, and the number of observational constraints, N, are given in Table 4.
For slightly more than half of the stars, enough data (N ≥ 3) is available to constrain both h and Ṁ. Models are calculated for a large number of mass-loss rates and h-parameters, and the best-fit model is found by minimizing

χ² = Σ_N [(I_mod − I_obs)/σ]²,   (7)

where I_mod and I_obs are the integrated intensities of the modelled and observed lines, respectively. σ is in most cases dominated by the calibration uncertainties, assumed to be 20%. For observations with poorer S/N, σ is set higher according to the quality of the data (although never above 30%). The reduced χ² is given by Eq. 4, where p = 2. For the remaining stars, we have assumed the value of h depending on the stellar luminosity: h = 0.2 for L⋆ < 5000 L⊙ and h = 0.5 for L⋆ ≥ 5000 L⊙ [in accordance with Schöier & Olofsson (2001) and Olofsson et al. (2002)]. This is indicated by a colon in Table 4, and the reduced χ² is then given by Eq. 4 with p = 1.
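Schematically, the fitting procedure amounts to a grid search. In the sketch below, `model_grid` stands in for the integrated line intensities produced by full Monte Carlo radiative transfer runs at each (Ṁ, h) grid point, which are of course far more expensive than this illustration suggests.

```python
import numpy as np

def best_fit(obs, sigma, model_grid, mdot_grid, h_grid, p=2):
    """Grid search over (Mdot, h).

    model_grid : shape (n_mdot, n_h, n_lines), modelled integrated
                 intensities for each grid point
    obs, sigma : observed integrated intensities and uncertainties
                 (>= 20 per cent calibration uncertainty)
    """
    chi2 = (((model_grid - obs) / sigma) ** 2).sum(axis=-1)   # Eq. 7
    i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
    chi2_red = chi2[i, j] / (len(obs) - p)                    # Eq. 4
    return mdot_grid[i], h_grid[j], chi2_red
```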
Generally, the mass-loss rates are well constrained, and the observed CO lines are well reproduced by the model. For many of the stars, it is not possible to find an upper limit to the h-parameter. This is not surprising, since the excitation in these low-mass-loss-rate stars is dominated by the radiation from the central star and is not very sensitive to the temperature structure of the gas and the adopted dust parameters. For some stars the χ²_red of the best-fit model is rather large. The derived χ² is very sensitive to the line ratios, and any calibration errors in one line (larger than our estimated uncertainties) might result in a large χ²_red [see Fig. 2, CO(J = 2→1)]. Therefore it is difficult to determine whether inadequacies in the model or calibration errors are responsible for the poor fit in some cases. We find no systematic trends with particular lines or telescopes. Figure 2 shows the best-fit model from the radiative transfer analysis overlaid on the observed line profiles for χ Cyg. The χ²-map is also shown, with the innermost contour representing the 1σ level. The fit is good, but not perfect, e.g., in the details of the line shapes, and the main reason for this is not easily identified.
Compared to the mass-loss rates found in Ramstedt et al. (2006), the larger quantity of data makes the new mass-loss-rate estimates more reliable. In particular, it is possible to determine the h-parameter for a larger number of stars. A comparison shows that the inclusion of the new data in the analysis has changed the estimated mass-loss rates by less than a factor of two. For seven stars the change is slightly larger. There is no systematic change upwards or downwards.
SiO abundances and radial distribution
All results from the SiO radiative transfer analysis, the SiO fractional abundances, f₀, the sizes of the SiO emitting regions (as derived from Eq. 2), r_e, the SiO gas expansion velocities, v_e(SiO), χ²_red for the best-fit models, and the number of observational constraints, N, are given in Table 5. Again, the best-fit model is found by minimizing Eq. 7, and χ²_red is defined by Eq. 4, where p = 1 when r_e is given by Eq. 2. The derived abundances range from 4 × 10⁻⁷ to 1.4 × 10⁻⁴. The median value is 6 × 10⁻⁶. Figure 3 shows a histogram of the circumstellar SiO abundances for the S-type stars compared to those found for M-type stars (González Delgado et al. 2003) and carbon stars (Schöier et al. 2006b). The spread of the SiO abundance distribution for the S-type stars [as measured by the ratio between the 90th and the 10th percentile] is about a factor of 25.
The SiO gas expansion velocity is found to be, on average, 20% smaller than the CO gas expansion velocity (for 19 out of 26 stars). González Delgado et al. (2003) suggested that the SiO emission probes the very outer parts of the gas acceleration zone and that this would explain the discrepancy in the line width. SiO typically probes regions closer to the star by about a factor of 5-10 compared to CO. A trend of narrowing of the SiO lines with higher frequency might then also be expected, since rotational transitions involving higher energy levels (Table 2) tend to probe warmer and denser regions closer to the star. In addition, the line width can be reduced due to strong self-absorption on the blue-shifted side of SiO lines that are optically thick. We do not find a trend of narrowing for the higher-frequency lines, nor with the mass-loss rate [as would be expected if self-absorption was the predominant effect], and can therefore not firmly conclude that the SiO emission probes the acceleration zone. Table 6 shows the results (with 1σ-errors) of the best-fit models when r_e is left as a free parameter. In this case, the modelling generally gives a good fit to the observed SiO lines (see Fig. 4), with χ²_red on the order of unity. From the χ²-analysis we find that the derived SiO abundances are generally determined within a factor of ≈4 (except for R And, Fig. 4). S Cas has a large χ²_red-value, and the same can be observed for the result from the CO model (Table 4). This might indicate that S Cas is not well reproduced by a spherically symmetric wind model. The same can be noted for the SiO model of χ Cyg. However, both models are based on rather noisy SiO(J = 2→1) lines from OSO, and the line ratios between this line and the other available lines are not reproduced. Thus, we cannot decide whether the bad fit is due to the model being incorrect for these stars, or to observational uncertainties. For T Cet, only the SiO(J = 8→7) line was observed (at APEX). A fit to the width of the line gives v_e(SiO) = 16.0 km s⁻¹. Since only one line is available and the data is rather noisy (see Fig. B.1), we have chosen to model the star with the expansion velocity derived from the CO model (N = 6), v_e = 5.5 km s⁻¹. For IRC-10401 enough lines are available to try to fit the size of the SiO envelope and the abundance simultaneously. However, we are not able to constrain r_e for this object and have chosen to use r_e from Eq. 2.
To be able to compare our results to what has previously been found for M-type stars (González Delgado et al. 2003) and carbon stars (Schöier et al. 2006b), we have decided to derive all SiO abundances using Eq. 2 for the size of the SiO emitting region, in order to be consistent. Figure 5 shows a comparison between the radii derived when r_e is left as a free parameter and those using Eq. 2. The error bars correspond to the 1σ limits. Only results from models with χ²_red < 2.5 and where the SiO(J = 8→7) line was available are shown in Fig. 5. We conclude that our results are consistent with Eq. 2 and that the size of the SiO emitting region does not differ significantly between the different chemical types. This is in agreement with the results of Schöier et al. (2006b).

Figure 6a shows that the mass-loss-rate distributions for the three different samples are very similar. The median value for the S-type stars using the new data is 2.7×10⁻⁷ M⊙ yr⁻¹, and the M-type and carbon stars both have a median of 3.0×10⁻⁷ M⊙ yr⁻¹. There are possibly fewer S-type stars with high mass-loss rates, and due to our sample selection criteria (see Sect. 2) S-type stars with no or very little mass loss will be missed. The derived mass-loss rates depend on the adopted CO fractional abundance, which differs for the three chemical types in accordance with stellar atmosphere models. Figure 6b shows the gas expansion velocity distribution derived from the CO line widths for the three samples. The median gas expansion velocity for the S-type stars is 8 km s⁻¹, and 7.5 and 11 km s⁻¹ for the M-type and carbon stars, respectively, indicating that the carbon stars have CSEs with higher gas expansion velocities. Finally, Fig. 6c shows the relation between the mass-loss rates and the expansion velocities for the three samples. We find no apparent difference depending on chemistry (apart from the higher gas expansion velocities in the carbon star sample) and suggest that this points to a mass loss that is driven by the same mechanism(s) in the S-type, M-type, and carbon AGB stars. Another indication of this can be seen in Fig. 7, where the mass-loss rate (a and b) and expansion velocity (c and d) are plotted against the stellar variability period. As already discussed by Schöier & Olofsson (2001) for the carbon star sample, the mass-loss rate increases with the period of the star for all chemistries. The same trend can be observed for the expansion velocity.

Table 6. Circumstellar SiO fractional abundance, f₀, and the radius of the SiO line-emitting region, r_e, with 1σ-errors, from the best-fit model. The reduced χ² for the best-fit model and the number of observational constraints, N, are also given.

If the efficiency for driving a wind by radiation
pressure on dust was weaker in the S-type stars, due to, e.g., a low dust-to-gas ratio or a dust type incapable of driving a wind, the S-type stars would occupy a different parameter space than the M-type stars and the carbon stars in these plots. However, it is not possible to distinguish between the three different types.
Dust-to-gas ratio
The average of the derived dust-to-gas mass-loss-rate ratios is about 2.8 × 10⁻³ for the S-type AGB stars in Table 3 [V386 Cep excluded; this is likely the same type of object as GX Mon, discussed in Ramstedt et al. (2008)]. Groenewegen et al. (1999) derived dust-to-gas ratios of 48 M-type stars and found an average of 5.8 × 10⁻³, and Ramstedt et al. (2008) derived dust-to-gas ratios of four high-mass-loss-rate M-type stars, resulting in, on average, 2.8 × 10⁻³. For the carbon stars the average dust-to-gas ratios might be slightly lower [2.5 × 10⁻³ and 2 × 10⁻³; the latter from Ramstedt et al. (2008)]. We conclude that the apparently low dust content in S-type AGB stars reflects the fact that they, for the most part, are low-mass-loss-rate objects. Their dust-to-gas ratios seem to be in agreement with what is derived for AGB stars of other chemical types, implying that the dust formation efficiency is similar in all three chemical types.
Circumstellar SiO abundances and constraints on chemical models
The SiO abundances derived for the sample of S-type AGB stars range between 4 × 10⁻⁷ and 1.4 × 10⁻⁴. All stars (except RZ Sgr, which has the highest wind density) have abundances above or well above the expected thermal equilibrium value for C/O = 1 (6.4 × 10⁻⁷; Cherchneff 2006). The median value of 6 × 10⁻⁶ is almost an order of magnitude larger than the equilibrium value. Cherchneff (2006) derives an SiO abundance at 5 r⋆ of 3.3×10⁻⁵ for C/O = 1 in a non-equilibrium shock-chemistry model. The SiO abundance derived in a shock-chemistry model is sensitive to the specific parameters of the star, such as the pulsational period and the shock velocity. The periods of the S-type AGB stars in our sample range between about 100 and 600 days, but we find no obvious correlation with the derived SiO abundance. However, in a real, dynamic stellar atmosphere the chemistry is most likely much more complicated than in the present chemical models, and simple dependencies on single parameters, like the period, may not exist. We find no correlation between the LRS class and the derived SiO abundance for the S-type stars.
The derived circumstellar SiO abundances for the sample of S-type AGB stars are compared to those obtained for carbon stars (Schöier et al. 2006b) and M-type AGB stars (González Delgado et al. 2003) in Fig. 3. The three distributions are very similar, suggesting that the abundance of SiO in a circumstellar chemistry is not very sensitive to the C/O ratio. In Fig. 8 the estimated abundances are plotted against a measure of the density of the wind (Ṁ/v_e). For a specific density the SiO abundance can vary by up to two orders of magnitude. From the χ²-analysis we find that the abundances derived when also the size of the emitting region is a free parameter are determined within a factor of ≈4. Eq. 2 is derived using a relatively large number of stars with a large range in mass-loss rates and might give a statistically more accurate abundance. Similar investigations of abundances of other circumstellar molecules, for which the results are more in line with what would be expected in equilibrium chemistry (e.g., SiS and HCN), show a smaller range in the derived abundances for different stars.
For AGB stars with low density envelopes, such as the majority of the stars discussed here, condensation of SiO molecules onto dust grains is not very effective and is most probably not the cause of the observed scatter in the derived SiO abundances (see Sect. 6.5). At chemical equilibrium a spread in the abundances is also to be expected, depending on the C/O ratio and the temperature of the star. Around C/O = 1, the change in the amount of SiO formed is rather drastic, and between 0.95 and 1.05 the amount of SiO can change by up to two orders of magnitude (Markwick 2000). If the temperature is varied between 2200 and 2600 K for a given C/O-ratio, the SiO abundance will change by less than an order of magnitude⁸. For the S-type stars alone it is not possible to conclude whether the spread in the derived abundances is indicative of a non-equilibrium chemistry or due to a spread in the C/O-ratio around 1. However, given the results for all three chemical types, we conclude that the spread in the derived circumstellar SiO abundances most probably is real and indicative of a shock chemistry in the formation zone of SiO.

⁸ astrochemistry.net, SiO/H₂, 19 Jan 2009.
The derived SiO envelope sizes (see Table 6 and Fig. 5) can be used to test photochemical models of the circumstellar envelope. Using a simple photochemical model (Lindqvist et al. 2000;González Delgado et al. 2003, and references therein) with typical stellar, circumstellar and dust parameters for our sample of S-type AGB stars, and adopting an unshielded photodissociation rate of 2.5 × 10 −10 s −1 (González Delgado et al. 2003), we obtain the relation between SiO envelope size (r e ) and density measure (Ṁ/v e ) shown in Fig. 5 (dashed line). These predictions agree well with the observed values and the relation described by Eq. 2 (solid line in Fig. 5) used in the abundance estimates.
SiO maser emission
There are several indications that SiO maser emission originates from close to the stellar photosphere (Reid & Moran 1981). The energy of the masing state is about 1800 K (v=1) or higher, and VLBA observations confirm that the masers are formed within a few stellar radii of the star (see for instance Cotton et al. 2006, and references therein). This corresponds to the region where the dust is formed (Salpeter 1974; Danchi et al. 1994; Reid & Menten 1997). Observations of SiO masers can therefore provide information on whether there is SiO present in the dust formation region, and hence whether it can be one of the constituents of the dust formed. The non-detection of SiO maser emission toward carbon stars is thought to indicate that the SiO molecules are formed further out in the wind than in M-type AGB stars.
SiO maser emission has been searched for in 30 of our sample stars, and has previously been detected in nine. The results of our SiO(J = 2→1, v=1) observations are given in Table B.3, and the detected lines are shown in Fig. B.3. In four stars the SiO maser had not previously been detected, and in another four stars we cannot confirm previous detections (R Cyg, R Lyn, Y Lyn, and EP Vul), most likely as a result of time variability. In Table B.3 we give distance-independent upper limits for the sources observed. The reason for the individual non-detections cannot be identified, but time variability may play a role.
Out of the 12 S-type stars in our sample classified as showing silicate emission in their LRS spectra, eight were observed and six were detected. Nine stars are classified as being featureless, and out of these, six were observed and one was detected. Out of the three stars classified as not showing any IR excess, two were observed and none was detected, and finally, out of the 13 unclassified stars, seven were observed and two were detected in the SiO(J = 2→1, v=1) maser line. These results are in line with a situation where the silicate stars have a C/O-ratio slightly less than one, and hence have enough SiO in their photospheres to produce SiO masers, while the others, in general, have C/O-ratios slightly larger than one, and hence too low photospheric SiO abundances to produce SiO masers.
SiO chemistry and implications for dust formation
The formation of SiO in AGB winds with shocks is discussed by Cherchneff (2006) for different C/O-ratios. It is described how the formation of SiO is linked to the presence of OH, and how the lack of OH in the inner wind of carbon stars explains their inability to form silicate dust despite their high SiO abundances further out in the CSE.
For the S-type AGB stars, the situation is more complicated. The exact C/O-ratio is not known for the stars in our sample, but there is most likely a spread around unity. The C/O-ratio of course has implications for both the chemistry and the dust formed. Cherchneff (2006) finds that the chemistry of the inner winds of stars with C/O = 0.98 is very similar to the M-type chemistry, while the chemistry of stars with C/O = 1.01 is very similar to that of carbon stars. In our comparison (Figs 3 and 8) it is not possible to distinguish between the different chemical types in terms of their circumstellar SiO abundances. Given also the similarity of the mass-loss-rate distributions for the different chemical types, the most likely explanation is that the S-type stars in our sample have a spread in their C/O-ratio, where some stars are more M-type like (forming silicate dust) and some are more carbon-star like (forming carbon dust). However, we are not able to exclude a scenario where two components of dust have formed in stars with a C/O-ratio close to unity.
For M-type AGB stars (González Delgado et al. 2003) and carbon stars (Schöier et al. 2006b) there is a clear trend that the circumstellar SiO abundance gets lower as the density of the wind increases (Fig. 8), indicative of adsorption of SiO onto dust grains. For the sample of S-type AGB stars only an indication of such a trend may be seen, possibly due to the lack of high-mass-loss-rate objects. The dashed line in Fig. 8 shows a depletion curve based on a simple model presented in González Delgado et al. (2003). A relatively large scatter around the curve is expected, since the condensation is sensitive to stellar, circumstellar, and dust characteristics. Further support for a depletion scenario comes from recent interferometric observations. Schöier et al. (2006a) modelled interferometric observations of SiO(J = 5→4) emission and infrared observations of rovibrational transitions of the extreme carbon star IRC+10216, and found that a two-component radial abundance distribution (a compact high-abundance, pre-condensation component combined with a more extended low-abundance, post-condensation component) is needed in order to explain the observations. Models of interferometric observations of M-type AGB stars (Schöier et al. 2004) also show that a two-component radial abundance distribution gives a better fit to the observed data. The observations at hand for the S-type AGB stars do not allow a proper investigation of a scenario in which SiO molecules are incorporated into grains [see the discussion in Schöier et al. (2007)]. Instead, the abundances derived here should be considered as post-condensation abundances.

Fig. 8. Derived circumstellar SiO abundances as a function of wind density (Ṁ/v_e) for the S-type stars, the M-type stars (González Delgado et al. 2003), and the carbon stars (red triangles) (Schöier et al. 2006b). The horizontal lines mark the abundances predicted from equilibrium chemistries (Cherchneff 2006). The dashed line shows the expected post-condensation abundance, f(∞) [scaled to 3.0×10⁻⁵, roughly the expected fractional abundance at 5 r⋆ for low mass-loss rates when C/O = 1], from a model including adsorption of SiO onto dust grains (see González Delgado et al. 2003, for details).
Conclusions
We have modelled multi-transitional CO radio line data from 40 S-type AGB stars. We have also modelled multi-transitional SiO data of 26 stars from the same sample. The results are compared to those for similar samples of M-type AGB stars and carbon stars (Schöier & Olofsson 2001; González Delgado et al. 2003; Schöier et al. 2006b). We arrive at the following conclusions:

- We find that the mass-loss rate distributions are very similar for the three chemical types (Fig. 6a), and so are the relations between mass-loss rate and expansion velocity of the stellar wind (Fig. 6c). Further, it is not possible to distinguish between the different chemical types when examining how the mass-loss rate correlates with the pulsation period of the star (Fig. 7). The most likely explanation for the observed trends is a mass loss that is driven by the same mechanism(s) in all three chemical types. The only apparent difference is that the carbon star CSEs have higher gas expansion velocities (on average), most likely an effect of a higher acceleration efficiency of the carbon grains.
- The derived dust-to-gas mass-loss-rate ratios for 11 of the sample stars are in agreement with what has previously been found for M-type AGB stars and carbon stars, implying that the dust formation efficiency is similar in all three chemical types.
- The median value of the estimated circumstellar SiO fractional abundances in S-type AGB stars is almost an order of magnitude higher than predicted by thermal equilibrium chemistry, suggesting that shock-induced non-equilibrium chemical processes are important in regulating the chemistry in the inner wind.
- The derived circumstellar SiO abundances for the S-type AGB stars range from 4 × 10⁻⁷ to 1.4 × 10⁻⁴. For a specific wind density (Ṁ/v_e) there is a scatter of almost two orders of magnitude, in accordance with what has previously been found for the other chemical types. In terms of their distributions of circumstellar SiO abundances, it is not possible to distinguish between the three different chemical types, and we propose that, although there is a large scatter, the circumstellar SiO abundance is independent of the C/O-ratio for a given mass-loss rate and expansion velocity. The formation efficiency of SiO in a shock chemistry will be sensitive to other specific parameters of the star, like the pulsational period and the shock velocity. We believe that the scatter obtained in the SiO abundance estimates is real and indicative of a shock chemistry in the outer atmosphere.
- Previous analyses of M-type AGB stars (González Delgado et al. 2003) and carbon stars (Schöier et al. 2006b) show a clear trend in that the circumstellar SiO abundance decreases as the density of the stellar wind increases. The trend is indicative of adsorption of SiO onto dust grains. The same trend can be suspected for the sample of S-type AGB stars; however, the number of high-mass-loss-rate S-type AGB stars is low and no firm conclusion can be drawn.
"year": 2009,
"sha1": "aa570c0060670b72803514fcf6d262ec0e763695",
"oa_license": null,
"oa_url": "https://www.aanda.org/articles/aa/pdf/2009/20/aa11730-09.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "0d86ecdcd22fbc7b539dfccfbef820f6b5ad1ba4",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
A global satellite-assisted precipitation climatology
Accurate representations of mean climate conditions, especially in areas of complex terrain, are an important part of environmental monitoring systems. As high-resolution satellite monitoring information accumulates with the passage of time, it can be increasingly useful in efforts to better characterize the earth's mean climatology. Current state-of-the-science products rely on complex and sometimes unreliable relationships between elevation and station-based precipitation records, which can result in poor performance in food- and water-insecure regions with sparse observation networks. These vulnerable areas (like Ethiopia, Afghanistan, or Haiti) are often the critical regions for humanitarian drought monitoring. Here, we show that long-period-of-record geosynchronous and polar-orbiting satellite observations provide a unique new resource for producing high-resolution (0.05°) global precipitation climatologies that perform reasonably well in data-sparse regions. Traditionally, global climatologies have been produced by combining station observations and physiographic predictors like latitude, longitude, elevation, and slope. While such approaches can work well, especially in areas with reasonably dense observation networks, the fundamental relationship between physiographic variables and the target climate variables can often be indirect and spatially complex. Infrared and microwave satellite observations, on the other hand, directly monitor the earth's energy emissions. These emissions often correspond physically with the location and intensity of precipitation. We show that these relationships provide a good basis for building global climatologies. We also introduce a new geospatial modeling approach based on moving window regressions and inverse distance weighting interpolation. This approach combines satellite fields, gridded physiographic indicators, and in situ climate normals. The resulting global 0.05° monthly precipitation climatology, the Climate Hazards Group's Precipitation Climatology version 1 (CHPclim v.1.0, doi:10.15780/G2159X), is shown to compare favorably with similar global climatology products, especially in areas with complex terrain and low station densities.
Introduction
Systematic spatial variations in climate have been studied since at least the first century AD, when Ptolemy's Geographia identified the earth's polar, temperate, and equatorial temperature zones. Analysis of these climatological surfaces continues to be an important aspect of environmental monitoring and modeling. In the 1960s, computers enabled the automatic interpolation of point data, and several important algorithms, such as Shepard's modified inverse distance weighting function (Shepard, 1968) and optimal surface fitting via kriging (Krige, 1951; Matheron, 1963), were developed. The value of spatially continuous ancillary data, such as elevation, was soon recognized (Willmott and Robeson, 1995), and the current state-of-the-science climatologies all use background physiographic indicators combined with in situ observations. The most widely used current global climatologies, such as those produced by the University of East Anglia's Climatological Research Unit (CRU) (New et al., 1999, 2002) and the Worldclim (Hijmans et al., 2005) global climate layers, typically base their estimates on elevation, latitude, and longitude. Daly et al. (1994) used locally varying regressions fit to topographic facets, while the CRU
and Worldclim climatologies use thin-plate splines (Hutchinson, 1995) to minimize the roughness of the interpolated field, with the degree of smoothing determined by generalized cross validation. The Global Precipitation Climatology Centre (GPCC) generates its climatology products based on the interpolation of a very large database of precipitation normals (Becker et al., 2013; Schneider et al., 2014).
In Africa, Climate Hazards Group (CHG) scientists have demonstrated the utility of satellite fields as a source of ancillary data for climatological precipitation and air temperature surfaces (Funk et al., 2012; Knapp et al., 2011). This new approach combines satellite fields, gridded physiographic indicators, and in situ climate normals using local moving window regressions and inverse distance weighting interpolation. Expanding from our work in Africa, we have produced a global 0.05° monthly precipitation climatology, the Climate Hazards Group Precipitation Climatology version 1 (CHPclim v.1.0, http://dx.doi.org/10.15780/G2159X). This paper summarizes our statistical approach and modeling results, and presents a validation of the resulting data set. The CHPclim version 1, Worldclim version 1.4 release 3 (Hijmans et al., 2005), CRU CL 2.0 (New et al., 1999, 2002), and GPCC CLIM M V2015 (doi:10.5676/DWD_GPCC/CLIM_M_V2015_025; Becker et al., 2013; Schneider et al., 2014) climatologies are compared with independent sets of station normals for Colombia, Afghanistan, Ethiopia, the Sahel, and Mexico. The climatologies are also compared with each other, and with a gridded validation data set in Ethiopia.
Precipitation normals
Two sets of monthly precipitation normals (long-term averages) were used to create the CHPclim. The first set was a collection of 27 453 monthly station averages obtained from the Agromet Group of the Food and Agriculture Organization of the United Nations (FAO). This extensive collection has a fairly detailed level of representation in many typically data-sparse regions, but suffers from a limitation: the FAO database does not provide the period of record used to calculate the long-term averages, although most observations roughly correspond to averages over the 1950s through the 1980s. This data set, therefore, was augmented with 20 591 station climate normals taken from version two of the Global Historical Climate Network (GHCN) (Peterson and Vose, 1997). We compensated for the FAO database's varied coverage in time by supplementing it with averages from a less dense but more temporally consistent information source, the GHCN. The more extensive FAO normals were used to build the preliminary climate surfaces (as described below in Sect. 3). The differences between this surface and GHCN 1980-2009 averages were then estimated and interpolated, and then used to adjust the final monthly surfaces to a 1980-2009 time period.
Satellite fields

Monthly means of four satellite products were evaluated as potential background climate surfaces: Tropical Rainfall Measuring Mission (TRMM) 2B31 microwave precipitation estimates (Huffman et al., 2007), Climate Prediction Center morphing method (CMORPH) microwave-plus-infrared based precipitation estimates (Joyce et al., 2004), monthly mean geostationary infrared (IR) brightness temperatures (Janowiak et al., 2001), and Land Surface Temperature (LST) estimates (Wan, 2008). The TRMM and CMORPH precipitation estimates are based primarily on passive microwave observations from meteorological satellites in asynchronous orbits. The monthly mean infrared brightness temperatures, on the other hand, are derived from a combination of multiple geostationary weather satellites. The LST estimates are derived from multispectral observations from the Moderate Resolution Imaging Spectroradiometers (MODIS) aboard the Terra and Aqua satellites. The LST fields are global, while the CMORPH, TRMM, and IR brightness temperatures span 60° N/S. For each month, for all available years (typically ∼ 2001-2010), the satellite data were averaged. All four products were convolved to a common 0.05° grid. A fifth predictor was created based on the average of the CMORPH and TRMM precipitation fields.
Topographic and physiographic surfaces
Mean 0.05° elevation, compound topographic index, flow accumulation, aspect, and slope were calculated from global 30 arcsecond GTOPO30 elevation grids following the methodology developed for HYDRO1K (Verdin and Greenlee, 1996). While the utility of all the topographic fields was explored, only elevation and slope were used in the final analysis because they proved to be the most robust predictors. Latitude and longitude were also included as potential predictor variables.
Methods -the CHG climatology modeling process
The modeling methodology involved three main steps that were repeated for each month for a set of 56 modeling regions. The extent of the regions was based on (a) station density, (b) homogeneity of predictor response, and (c) availability of the predictor fields. The first step used a series of moving window regressions (MWR) to create an initial prediction of a 0.05° precipitation grid. The second step calculated the at-station residuals from step 1 (station observations minus regression estimates), and then interpolated these values using a modified inverse distance weighting (IDW) interpolation scheme to create grids of MWR model residuals. The gridded MWR estimates and gridded residuals were combined to create an initial set of climatological surfaces based on the FAO normals. In the third step, these surfaces were then adjusted using the 1980-2009 GHCN station averages.
The differences (ratios) from 1980-2009 GHCN climate normals were computed and used to produce final surfaces corresponding to a 1980-2009 baseline period.
Localized correlation estimates
Our process relies heavily on local regressions between our target variable and a background field. We begin by explaining the bivariate standardized case of this process, which corresponds to a localized correlation. At a given location we can sample the station points and background-field values that fall within a certain distance (d_max) and calculate their distance-weighted (localized) correlation. The localized correlation process finds a set of n neighboring points (within d_max) and estimates their weighted correlation. The weights are based on a cubic function of the distance (d) and a user-defined, regionally variable, maximum distance (d_max) (Eq. 1).
These weights are then used to estimate a localized correlation (Eq. 2). The localized correlation (r_{x,y}) at some location (x, y) corresponds to the standardized cross-product of the neighboring points, weighted by their distance. This process can be used to generate correlation maps (Fig. 1). Typically, the direct physical relationship between the station normals and a satellite field, such as TRMM/CMORPH precipitation, results in a stronger correlation pattern than that produced by an indirect physiographic indicator such as elevation. Figure 1 provides an example of this by contrasting the local correlations between station precipitation, elevation, and TRMM/CMORPH precipitation.
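A minimal sketch of the localized correlation calculation follows. The tricube-style taper used for the weights is one plausible reading of "a cubic function of the distance"; the exact form is given by Eq. (1) and should not be taken from this sketch.

```python
import numpy as np

def cubic_weights(d, d_max):
    """Distance weights (cf. Eq. 1): a cubic taper that reaches zero at
    d_max.  This tricube form is an assumption, not the exact Eq. 1."""
    return np.where(d < d_max, (1.0 - (d / d_max) ** 3) ** 3, 0.0)

def local_correlation(x, y, d, d_max):
    """Weighted correlation (cf. Eq. 2): the standardized, weighted
    cross-product of station values x and background-field values y
    for points at distances d from the center location."""
    w = cubic_weights(d, d_max)
    w = w / w.sum()
    mx, my = np.sum(w * x), np.sum(w * y)
    sx = np.sqrt(np.sum(w * (x - mx) ** 2))
    sy = np.sqrt(np.sum(w * (y - my) ** 2))
    return np.sum(w * (x - mx) * (y - my)) / (sx * sy)
```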
Localized moving window regressions
The core of the CHG climatology modeling process is a series of local regressions between in situ observations and spatially continuous predictor fields. For each location, a set of neighboring observations is obtained, and a regression model is constructed using weighted least squares, with the weight of each observation determined by its distance from the regression centroid (Eq. 1). For each region and month, a grid of center points is defined on a regular 1° grid over land-only locations. Figure 2 shows the modeling regions. At each center point, station values within the radius (d_max) are collected, and a regression model is fit based on weights determined by Eq. (1). The d_max values are defined individually for each model region, varying from 650 km for the larger or data-sparse regions (e.g., Australia, northwest Asia) to 300 km for Central America and the Galapagos.
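The fit at a single center point can be sketched as a weighted least-squares regression; the function and variable names are illustrative, and the weight function repeats the assumed form from the previous sketch.

```python
import numpy as np

def mwr_fit(station_xy, center_xy, predictors, normals, d_max):
    """One moving-window regression at a single center point.

    station_xy : (n, 2) station coordinates (same units as d_max)
    predictors : (n, k) predictor values sampled at the stations
                 (e.g., satellite mean field, elevation, slope)
    normals    : (n,) station precipitation normals
    Returns the fitted coefficients, intercept first.
    """
    d = np.hypot(*(station_xy - center_xy).T)
    w = np.where(d < d_max, (1.0 - (d / d_max) ** 3) ** 3, 0.0)  # assumed Eq. 1
    keep = w > 0
    X = np.column_stack([np.ones(keep.sum()), predictors[keep]])
    sw = np.sqrt(w[keep])
    beta, *_ = np.linalg.lstsq(X * sw[:, None], normals[keep] * sw, rcond=None)
    return beta
```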
Model fitting
For each modeling region and month, regression models were determined through a combination of automated regression subset selection and visual inspection of the output. In some cases, visual inspection indicated that a combination of statistically powerful predictors produced obvious artifacts. In these cases, the selection pool was reduced by hand. Based on the boundaries of the interpolation window, certain predictors (TRMM, CMORPH, IR) were omitted because the satellite coverage did not extend far enough north or south for these areas.
Interpolation of model residuals
Following the MWR modeling procedure, at-station anomalies (the arithmetic difference between the FAO station normals and the nearest 0.05° regression estimate) are calculated and interpolated using a modified IDW interpolation procedure. For each 0.05° grid cell, the cube of the inverse distances is used to produce a weighted average of the surrounding station residuals, r. This value is then modified based on a local interpolation radius, d_IDW, and the distance to the closest neighboring station, d_min (Eq. 3).
This simple thresholding procedure forces the interpolated residual field to relax towards zero, based on the distance to the closest station. The d_min values were defined by modeling region and ranged from 350 to 100 km, based on station density. All tiles were allowed to overlap with their neighbors, and locations within these areas of overlap were blended based on weights that were linear functions of the distances from tile edges. This helped to produce smooth transitions from tile to tile.
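The residual-interpolation step can be sketched as follows. The linear relax-toward-zero factor below is an assumption standing in for the exact thresholding of Eq. (3):

```python
import numpy as np

def interpolate_residual(residuals, dist, d_idw):
    # Inverse-cube distance weighting of surrounding station residuals.
    w = 1.0 / np.maximum(dist, 1e-6) ** 3
    r = np.sum(w * residuals) / np.sum(w)
    # Relax toward zero as the closest station gets farther away
    # (assumed linear form; Eq. 3 of the paper defines the exact shape).
    relax = max(0.0, 1.0 - dist.min() / d_idw)
    return r * relax
```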
Rescaling by GHCN ratios
In the final stage, for each month, the regional tiles are composited on a global 0.05° grid and compared with 1980-2009 GHCN climate normals. The ratio of the GHCN normal to the gridded climatology is calculated at each station location. These ratios are capped between 0.3 and 3.0 and interpolated to a 0.05° grid for each month; the values are capped to limit the potential influence of poor station data. A modified IDW procedure, similar to Eq. (3), is used, but instead of relaxing to zero, the interpolation is forced to a ratio of 1 (no change) as the distance to the nearest neighbor reaches d_IDW. This ratio grid is multiplied against the sum of the MWR and interpolated residuals, producing the final CHG Climatology field.
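A sketch of the ratio-rescaling step, again assuming a linear relaxation (here toward a ratio of 1); the 0.3 and 3.0 caps are from the text:

```python
import numpy as np

def ratio_at_cell(ghcn_normals, clim_at_stations, dist, d_idw):
    # Station-wise GHCN/climatology ratios, capped to limit the
    # influence of poor station data.
    ratios = np.clip(ghcn_normals / clim_at_stations, 0.3, 3.0)
    w = 1.0 / np.maximum(dist, 1e-6) ** 3
    r = np.sum(w * ratios) / np.sum(w)
    # Relax toward a ratio of 1 (no change) far from stations
    # (assumed linear relaxation form).
    relax = max(0.0, 1.0 - dist.min() / d_idw)
    return 1.0 + (r - 1.0) * relax

# Final field: (MWR estimate + interpolated residual) * ratio grid.
```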
Cross-validation
Selection bias can inflate the estimated accuracy of statistical estimation procedures, producing artificial skill (Michaelsen, 1987). To limit such inflation, this study uses cross-validation. This technique removes 10 % of the station data, fits the model using the remaining 90 % of the values, and evaluates the accuracy at the withheld locations. This process is repeated ten times, eventually withholding all of the data, to produce a robust estimate of the model accuracy.
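The withholding scheme amounts to 10-fold cross-validation. A generic sketch, with `fit` and `predict` standing in for the full MWR-plus-residual procedure:

```python
import numpy as np

def tenfold_cv_residuals(stations, values, fit, predict, seed=0):
    # Withhold 10 % of stations per fold; every station is held out once.
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(values))
    residuals = np.empty(len(values))
    for fold in np.array_split(order, 10):
        train = np.setdiff1d(order, fold)
        model = fit(stations[train], values[train])
        residuals[fold] = predict(model, stations[fold]) - values[fold]
    return residuals   # cross-validated errors at every station
```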
Independent validation studies
As additional validation, high-quality climatology data sets were obtained for five focus regions: Afghanistan, Colombia, Ethiopia, Mexico, and the Sahel region of western Africa (Senegal, Burkina Faso, Mali, Niger and Chad). Means, spatial R² values, mean bias errors (MBE [mm]), mean absolute errors (MAE [mm]), percent MBE, and percent MAE statistics were evaluated. These regions (as opposed to the continental United States or Europe) were chosen to represent challenging estimation domains.
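These summary statistics are straightforward to compute from paired observed and estimated values; a sketch for one region and month:

```python
import numpy as np

def validation_stats(obs, est):
    diff = est - obs
    mbe = diff.mean()                  # mean bias error [mm]
    mae = np.abs(diff).mean()          # mean absolute error [mm]
    r = np.corrcoef(obs, est)[0, 1]    # spatial correlation
    return {"MBE": mbe, "MAE": mae,
            "%MBE": 100.0 * mbe / obs.mean(),
            "%MAE": 100.0 * mae / obs.mean(),
            "R2": r ** 2}
```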
Model fitting results
Figure 2 shows the best predictor for each individual modeling region and the FAO station locations. For regions between 60° N and 60° S, the combined CMORPH and TRMM field tended to be the most useful predictor. The TRMM-only precipitation was selected, however, for southern Africa. Regions beyond 60° N and 60° S could not be modeled with the TRMM or CMORPH means; these regions were generally best fit with LST, slope, or elevations from a digital elevation model (DEM). Figures 3 and 4 show the proportion of modeled cross-validated variance for the MWR and interpolated-residual components for each of the modeling regions. These results are averaged across the 12 months. For most regions, the MWR accounted for over 80 % of the total variance, and the interpolated residuals typically accounted for another 10-25 %. Most regions of the globe had average monthly percent errors of between 15 and 25 % (Fig. 5). Figure 6 shows monthly mean CHPclim precipitation fields. As discussed later, these seem generally quite similar, in most places, to the GPCC M V2015, the CRU CL v2.0, and the Worldclim version 1.4 release 3 products. The blending of the overlapping tiles creates generally smooth transitions from tile to tile. These products will be compared more closely later in this paper.
Validation studies
We next present results from our validation studies for Afghanistan, Colombia, Ethiopia, Mexico, and the Sahel (Senegal, Burkina Faso, Mali, Niger, and Chad). In each case, additional high-quality gauge data were obtained from national meteorological agencies (Table 1). These data were screened, and only values not in the FAO or GHCN archive were retained. Table 1 summarizes the number of independent stations and presents the monthly validation statistics, averaged across all 12 months. For each validation station, the closest CHPclim, CRU, or Worldclim grid cell was extracted. The CHPclim percent biases were substantially smaller in magnitude than the CRU or Worldclim biases, ranging between −2 and +5 %, as compared to −28 to +16 % (CRU), −16 to 0 % (Worldclim), or −1 to −17 % (GPCC). Note that the GPCC gauge observations were corrected for systematic under-catch errors (Becker et al., 2013; Schneider et al., 2014). While all the climatologies did well in regions with a large number of stations (e.g. Mexico and Colombia), CHPclim's performance was substantially better in data-sparse areas like the Sahel, Ethiopia, and Afghanistan. Averaged across these study regions, the CHPclim/CRU/Worldclim/GPCC data sets had overall mean absolute error (MAE) values of 16, 26, 20 and 20 mm month⁻¹, respectively. The average spatial R² values for the four climatologies were 0.77 (CHPclim), 0.58 (CRU), 0.67 (Worldclim), and 0.51 (GPCC). Overall, the CHPclim compared favorably to the CRU, Worldclim and GPCC data sets.
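For reference, extracting the closest grid cell for each validation station can be sketched as below, assuming a regular 0.05° grid with a known origin; edge handling is omitted:

```python
def nearest_cell(grid, lat, lon, lat0, lon0, res=0.05):
    # Row/column of the 0.05-degree cell whose center is closest to the
    # station; lat0/lon0 are the center coordinates of grid[0, 0], and
    # latitude is assumed to decrease down the rows.
    i = int(round((lat0 - lat) / res))
    j = int(round((lon - lon0) / res))
    return grid[i, j]
```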
Plotting the monthly validation statistics provides more temporal information. Figure 7 shows monthly time series of the MAE values for each region and for each set of climatological estimates. In Afghanistan, data were only obtained for the rainy season. The low spatial correlations of the CRU and Worldclim estimates (Table 1) translate into high MAE scores (Fig. 7). In Colombia, the spatial R² (Table 1) and MAE time series of the CHPclim and Worldclim are similar; both perform well. In Ethiopia, the Worldclim and CRU MAE peak in concert with the seasonal rainfall maxima, while the CHPclim values remain substantially lower. This pattern is recreated for the Sahel and, to a lesser extent, for Mexico. We postulate that the CHPclim performance benefits from the fact that satellite precipitation estimates do a good job of representing heavy convection in these countries during the heart of the precipitation season. Conversely, the thin-plate-spline fitting procedure, combined with low gauge density in Ethiopia and the Sahel, may make it difficult to statistically represent precipitation gradients in these countries, degrading the performance of the CRU and Worldclim climatologies. Thin plate splines fit polynomial surfaces through point data, creating a generalized surface fit to latitude, longitude, and elevation. This fitting process may be problematic when the density of the gauge data is very low. Later in our paper we compare different climate products over Ethiopia.
Figure 8 shows similar time series for the spatial R² statistics. In Afghanistan, Ethiopia, and the Sahel, the CHPclim appears substantially better at representing spatial gradient information. In Colombia and Mexico, CHPclim and Worldclim performance is similar. This may relate to the number of climate normals available in each region (cf. Fig. 2). In Colombia and Mexico, relatively dense gauge networks result in similar Worldclim and CHPclim performance. In regions with fewer stations, the correlation structure of the satellite precipitation data (Fig. 1) probably helps boost the relative performance of CHPclim.
Product comparisons
Here we briefly examine differences between quasi-global total annual precipitation from the CHPclim, GPCC M V2015, CRU CL 2.0 and Worldclim version 1.4 release 3 (Fig. 9) and their global and continental averages (Table 2). The left-hand panels show differences between the CHPclim and the three other products. The largest differences appear over the northern half of South America, where annual precipitation is very high (Fig. 6). These differences may arise from the local influence of the satellite rainfall fields, which are well correlated with station observations in this region (Fig. 1). Note that the GPCC, CRU, and Worldclim also vary substantially amongst themselves in this area. In Europe, northern Asia, North America, and Australia, the differences are fairly limited, most likely due to the high station density in these regions. There are large differences near the Himalayas. The CHPclim appears to be producing more precipitation across the Himalayan plateau and less precipitation on the south-facing mountain slopes. More research will be required to evaluate whether this is appropriate. CHPclim also appears to be substantially drier over some parts of Africa. A recent study in Mozambique (Toté et al., 2015) of the Climate Hazards group Infrared Precipitation with Stations (CHIRPS; Funk et al., 2014b), which is based on the CHPclim, found low bias over that country. Stations in Africa tend to be biased towards wet locations, and the use of satellite fields as guides to interpolation may help limit this bias.
We explore this idea in more detail in the next section, which focuses on an Ethiopia test case. Before proceeding to that analysis, we note that the global (excluding Antarctica) and continental averages from our four products are in quite close agreement (Table 2), even in Africa. The two outliers appear to be the GPCC M V2015 averages for Asia (688 mm) and for the globe (880 mm). The global GPCC M V2015 value of 880 mm is close to the 850 mm figure reported in Schneider et al. (2014). The discrepancy between the GPCC results and the other prod- [...] 2007) agree quite well with that study's reported values (110 units CRU; 112 units GPCP); the CHPclim precipitation resulted in 120 units. This difference may relate to CHPclim's interpolation procedure: in northern South America and the Maritime Continent the CHPclim is wetter (Fig. 9, Table 2), perhaps because of guidance provided by satellite observations (Fig. 1).
An Ethiopian validation study
In February of 2015 one of the co-authors led a rainfall gridding workshop in Addis Ababa, in collaboration with lead scientists from the Ethiopian National Meteorological Agency (NMA). This workshop used the GeoCLIM tool to blend CHIRPS satellite rainfall estimates with 208 quality-controlled gauge observations (Figs. 10 and 11, top left) to generate monthly 1981-2014 grids of precipitation. In this section we compare the 1981-2014 average of these blended CHIRPS/NMA station data to the CHPclim, GPCC, CRU and Worldclim data sets. We acknowledge that since the CHPclim is used in the CHIRPS as a background climatology, the NMA and CHPclim data sets are not completely independent. Nonetheless, the 35 years of 208 NMA rain gauge observations have not been included in the CHPclim, and hence provide a valuable validation data set, especially within the areas with good gauge density. Figure 10 shows the mean 1981-2014 annual rainfall totals based on the gridded NMA data, and similar maps from the CHPclim, GPCC M V2015, CRU CL 2.0, and Worldclim version 1.4 release 3. Also shown are elevation, annual totals of CMORPH/TRMM precipitation and annual average MODIS LST. These fields were used in the CHPclim modeling process. Annual mean MODIS Normalized Difference Vegetation Index (NDVI) values are also shown as an independent proxy for moisture availability. All the precipitation products and the NDVI agree on the broad patterns of spatial rainfall variability, which are extreme. The wettest regions receive more than 2 m of rainfall each year while the driest receive less than 200 mm. The CMORPH/TRMM satellite observations seem to capture these dry areas well, with no ground data at all; i.e., the brown areas in the CMORPH/TRMM agree quite closely with the NMA validation data. The CMORPH/TRMM fields delineate dry areas effectively. Within wet areas, the discriminatory power of the satellite observations seems to diminish, indicating (incorrectly) that northwest Ethiopia is as wet as southwest Ethiopia. The similarity between the completely independent NDVI and NMA/CHPclim fields is quite compelling. Many subtle features, such as the humid highlands in north-central, east-central, and southeastern Ethiopia, appear well demarcated by these precipitation fields. These seem fairly well captured by the Worldclim and CRU as well.
Note that there are important differences between, on one hand, the elevation and similar LST field and, on the other, the NMA/CHPclim precipitation and NDVI mean fields. While there are certainly some important correspondences, there are also critical differences, such as in north-central Ethiopia, which is high and cool, but dry. Conversely, northwest Ethiopia is relatively wet, but relatively low. There are times and locations when elevation is a poor indicator of mean precipitation. Figure 11 shows the differences from the NMA validation data. Also shown, to support analysis, are the NMA mean precipitation and elevation data. Purple lines have been drawn showing the transects plotted in Fig. 12. The CHPclim follows the NMA climatology closely. The GPCC, CRU, and Worldclim all exhibit substantial (> |300 mm|) deviations, with the Worldclim performing substantially better than the GPCC and CRU. This helps to confirm the visual impression from Fig. 10 that the Worldclim data follows the NMA data quite closely. The GPCC, CRU and Worldclim all underestimate precipitation in the blue regions in the northwest and southwest of these maps, which are relatively low areas. The CMORPH/TRMM finds rainfall in these areas (Fig. 10), and the CHPclim MBE in these areas is quite modest (Fig. 11). Conversely, dark brown areas in the bottom panels of Fig. 11 denote areas where rainfall is substantially overestimated in the GPCC, CRU, and Worldclim. This appears to be of gravest concern in the center and center-east of the country, which has high elevations and extremely steep rainfall gradients. While not perfect, the CMORPH/TRMM (Fig. 10) seems to capture these gradients with reasonable fidelity, and building on these gradients produces a CHPclim with low bias in these areas.
We explore this topic more fully in Fig. 12, which shows transects of our data sets at 10 and 7° N. We have multiplied the NDVI data by 1500 and divided the elevation data by 5 to facilitate visualization. Begin by noting in the top two panels the similarities between the mean NMA data, the CMORPH/TRMM, and the NDVI. This reinforces the utility of the TRMM/CMORPH, and suggests that the NMA fields are an effective representation of the "true" climatology. The CRU and Worldclim seem to follow the NMA transect quite well, with some substantial deviations shown in the bottom panels. Some of these errors appear to coincide with areas having extreme elevation changes, such as 36.5° E, 37.5° E and 40° E at 10° N. At 37° E, 7° N, the CRU, GPCC and Worldclim substantially underestimate rainfall. The CHPclim, assisted by the CMORPH/TRMM, which is quite wet in this region, captures the rainfall well. In the eastern part of the country, where we find the largest percent discrepancies, we find overestimates at 41° E, 10° N and 41.5° E, 7° N. Estimates of rainfall gradients in these poorly instrumented regions are very difficult based on station data alone. The CMORPH/TRMM, however, seems to capture these gradients well, and the CHPclim builds on this local gradient information.
Discussion
This paper has introduced a new climatology modeling process developed by the CHG to support international drought early warning and hydrologic modeling. While this process has been applied to African rainfall and temperatures (Funk et al., 2012; Knapp et al., 2011), we report here for the first time global results and evaluate the relative accuracy of the CHPclim v1.0 (http://dx.doi.org/10.15780/G2159X). The CHPclim is one part of the CHG's overall strategy to provide improved drought early warning information (Fig. 13). Working closely with early warning scientists from the US Geological Survey's Center for Earth Resources Observation and Science (EROS), the CHG develops improved earth science tools to support food security and disaster relief for the US Agency for International Development's Famine Early Warning System Network (FEWS NET).
These activities fall into two main categories: analytic studies focused on understanding the relationship between local climate variations and large-scale climate drivers (Funk et al., 2008, 2014a; Hoell and Funk, 2013a, b; Liebmann et al., 2014), and the development of integrated data sets and tools supporting agro-climatic monitoring in the developing world. While early precipitation efforts focused on the use of a model (Funk and Michaelsen, 2004) to represent orographic precipitation (Funk et al., 2003), the potential issues produced by spurious model-based trends led us to focus on the use of high-resolution climatologies as proxies for orographic precipitation enhancement (Funk et al., 2007). The global 0.05° CHPclim presented here is the global expansion of that work.
CHPclim provides the first component of our global precipitation monitoring system, which is built on the Climate Hazards Group Infrared Precipitation with Stations (CHIRPS; Fig. 13). The monthly CHPclim fields, described and evaluated here, have been temporally disaggregated to pentadal (5-day) means. These pentadal mean fields are then combined with 1981 to near-present 0.05°, 60° S-60° N IR brightness (Janowiak et al., 2001; Knapp et al., 2011) precipitation estimates to produce the Climate Hazards Group Infrared Precipitation fields (CHIRP). A modified inverse distance weighting procedure is then used to blend these fields with global precipitation gauge station data to produce the CHIRPS (Funk et al., 2014b). These data, which benefit from the high-resolution CHPclim climatology, can be used to drive a gridded crop Water Requirement Satisfaction Index (WRSI) model (Verdin and Klaver, 2002), force a special Land Data Assimilation System developed for the US Agency for International Development's FEWS NET (the FLDAS), or populate interactive early warning displays like the Early Warning eXplorer (EWX, http://earlywarning.usgs.gov:8080/EWX/index.html). Improved background climatologies can enhance the efficacy of crop models, increasing their drought monitoring capacity.
Ongoing efforts are being directed towards linking seasonal forecast information with historical CHIRPS archives (Shukla et al., 2014a, b). In East Africa, for example, daily 0.05° rainfall values are used to force a hydrologic model. These results can then be combined with precipitation forecasts that translate large-scale climate conditions into region-specific predictions of CHIRPS rainfall. These rainfall forecasts can be used to drive crop and hydrologic models. In this way, for some high-priority regions like East Africa, CHG scientists hope to combine the climatological constraints described by high-resolution climatologies like the CHPclim, historic precipitation distributions (Husak et al., 2013), the latent information contained in the land surface state as represented by land surface models (Shukla et al., 2013, 2014b), and the foreshadowing of future weather provided by climate forecasts (Funk et al., 2014a; Shukla et al., 2014a, b). The CHPclim, described here, has been designed to provide a good foundation for this, and similar, hydrologic modeling and monitoring systems. The CHPclim and CHIRPS data sets are available at http://dx.doi.org/10.15780/G2159X, http://dx.doi.org/10.15780/G2RP4Q and http://chg.geog.ucsb.edu.
Figure 2. Best predictor, by model region, with station locations.
Figure 3. Percent of variance explained by cross-validated moving window regression.
Figure 4. Percent of variance explained by cross-validated inverse distance weighting.
Figure 5. Percent standard error explained by cross-validation.
Figure 10. Total annual rainfall, elevation, NDVI and LST for Ethiopia. Rainfall totals are from the Ethiopian National Meteorological Agency (NMA), CHPclim, the GPCC M V2015 climatology, the CRU CL v2.0, the version 1.4 release 3 Worldclim climatology, and the blended CMORPH/TRMM data used in the CHPclim modeling process.
Figure 11. Total annual NMA rainfall, elevation and MBE maps based on the NMA minus CHPclim, the NMA minus GPCC, the NMA minus CRU and the NMA minus Worldclim.
Figure 12. The top panels show transects of total annual rainfall at 7 and 10° N. Also shown are transects of elevation in meters divided by 5 and annual mean NDVI multiplied by 1500. The bottom panels show MBE transects based on CHPclim, GPCC, CRU and Worldclim minus the NMA data. These bottom panels also show elevation in meters divided by 5.
Figure 13. Schema of CHG analysis and prediction activities.
Figure 6. CHPclim monthly means for January, April, July and October. While CHPclim is global, we show 50° S-50° N images to facilitate visualization.
Table 2. Comparison of annual total precipitation [mm] for different regions.
"year": 2015,
"sha1": "0c93bbe85288e5734162209ee7982348fad89177",
"oa_license": "CCBY",
"oa_url": "https://www.earth-syst-sci-data.net/7/275/2015/essd-7-275-2015.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0c93bbe85288e5734162209ee7982348fad89177",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Elevated Expression Levels of Lung Complement Anaphylatoxin, Neutrophil Chemoattractant Chemokine IL-8, and RANTES in MERS-CoV-Infected Patients: Predictive Biomarkers for Disease Severity and Mortality
The complement system, a network of highly-regulated proteins, represents a vital part of the innate immune response. Over-activation of the complement system plays an important role in inflammation, tissue damage, and infectious disease severity. The prevalence of MERS-CoV in Saudi Arabia remains significant and cases are still being reported. The role of complement in Middle East Respiratory Syndrome coronavirus (MERS-CoV) pathogenesis and complement-modulating treatment strategies has received limited attention, and studies involving MERS-CoV-infected patients have not been reported. This study offers the first insight into the pulmonary expression profile, including seven complement proteins, complement regulatory factors, IL-8, and RANTES, in MERS-CoV-infected patients without underlying chronic medical conditions. Our results indicate significantly high expression levels of complement anaphylatoxins (C3a and C5a), IL-8, and RANTES in the lungs of MERS-CoV-infected patients. The upregulation of the lung complement anaphylatoxins C5a and C3a was positively correlated with IL-8, RANTES, and the fatality rate. Our results also showed upregulation of the positive regulatory complement factor P, suggesting positive regulation of the complement during MERS-CoV infection. High levels of lung C5a, C3a, factor P, IL-8, and RANTES may contribute to the immunopathology, disease severity, ARDS development, and a higher fatality rate in MERS-CoV-infected patients. These findings highlight the potential prognostic utility of C5a, C3a, IL-8, and RANTES as biomarkers for MERS-CoV disease severity and mortality. To further explore the predicted functional partners (proteins) of the highly expressed proteins (C5a, C3a, factor P, IL-8, and RANTES), a computational protein-protein interaction (PPI) network was constructed, and six proteins (hub nodes) were identified. Supplementary Information: The online version contains supplementary material available at 10.1007/s10875-021-01061-z.
Introduction
The COVID-19 pandemic caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has gained significant attention in the medical and scientific communities. The Middle East Respiratory Syndrome coronavirus (MERS-CoV) has caused considerable medical and health issues in many countries, particularly in Saudi Arabia. The prevalence of MERS-CoV in the Kingdom of Saudi Arabia (KSA) remains significant. MERS-CoV cases are still being reported in Saudi Arabia, and a high prevalence of MERS-CoV in dromedary camels and direct contact with infected camels have been linked to human infections [1-5]. MERS-CoV is a single-stranded RNA virus of the Betacoronavirus genus. It was first reported in the KSA (Jeddah City) in 2012. As of December 27, 2020, 2564 laboratory-confirmed cases and 881 associated deaths (case-fatality ratio, 34.4%) were reported in 27 countries worldwide, of which 2121 cases (82.7%) were reported in Saudi Arabia. The majority of the fatalities (37.1%, 788 deaths) also occurred in Saudi Arabia [4]. An excessive inflammatory response is a prominent phenotype associated with MERS-CoV infection, which leads to lung immunopathology, disease progression, and poor clinical outcome. MERS-CoV infections are characterized by dysregulation in both the innate and adaptive immune systems [5,6]. Several inflammatory cytokines and chemokines (IL-1β, IL-2, IL-6, IL-7, IL-8, IL-10, G-CSF, GM-CSF, IP-10, MCP-1, MIP-1α, IFN-γ, TNF-α, CCL2, and CCL3) are significantly associated with severe MERS-CoV infection and higher fatality rates [5,7,8]. Elevated inflammatory cytokine and chemokine levels during SARS-CoV-1, MERS-CoV, and SARS-CoV-2 infections are significantly associated with massive infiltration of immune cells into the lungs and poor disease outcome [5,8-10]. The complement system consists of a multiprotein network belonging to both the innate and adaptive immune systems [11]. Depending on the manner of activation, the complement cascade operates by three pathways: the classical pathway, the lectin pathway, and the alternative pathway [12,13]. Cross-talk between the complement and coagulation systems plays a crucial role in vascular endothelial damage and thromboinflammation [14]. A number of viral infections are associated with complement activation and coagulation dysfunction [11,15,16]. Over-activation of pulmonary and systemic complement plays a key role in inflammation, endothelial cell damage, thrombus formation, and intravascular coagulation, which results in multiple organ failure and eventually death [12,15,17]. This over-stimulation leads to the formation of the complement anaphylatoxins, C3a and C5a. C5a is a chemoattractant for neutrophils, monocytes, eosinophils, and T cells [15,18].
The role of complement in MERS-CoV disease immunopathology and complement-modulating treatment strategies during MERS-CoV infection has received limited attention. There are many important unanswered questions regarding complement, MERS-CoV interactions, and disease outcome. In addition, little is known regarding pulmonary complement activation during MERS-CoV infection, the manner in which complement activation affects disease severity or the association of complement response with viral load and mortality.
In this study, we performed a comprehensive investigation of the pulmonary complement proteins, IL-8 (CXCL8) and RANTES (CCL5) expression in MERS-CoV-infected patients in addition to viral load determination. We also assessed the correlation between these factors and the fatality rate. To our knowledge, this is the first study demonstrating a relationship between lung complement proteins and complement regulatory factors in MERS-CoV-infected patients.
Patient Selection, Sample Collection, Preparation, and Analysis
A total of 31 MERS-CoV-positive patients and 15 MERS-CoV non-infected individuals were enrolled in this study. Lower respiratory samples (bronchoalveolar lavage (BAL) or tracheal aspirate (TA)) were collected. The mean time from the onset of symptoms to seeking medical attention and hospital admission was 4.3 days. All respiratory samples were collected less than one week after symptom onset (early phase of infection), within 24 h of hospital admission. Although the control individuals in this study presented with respiratory symptoms, their MERS-CoV RT-PCR tests were negative, and they were considered the non-infected control group. In this study, we used the remaining volume of MERS-CoV non-infected subject samples that had been collected for clinical diagnosis. To exclude the effects of antiviral therapy on the expression of complement proteins, IL-8, and RANTES, all samples were collected before the administration of any antiviral treatment. Samples were centrifuged at 1000 rpm at 4 °C for 5-10 min. The cell-free supernatants were used for the analysis of complement proteins, inflammatory chemokines, RANTES, and MERS-CoV viral load. The exclusion criteria were as follows: (1) patients coinfected with other respiratory pathogens, (2) immunocompromised patients, (3) patients under treatment with anti-inflammatory and/or immunosuppressive drugs, (4) patients with chronic diseases, and (5) patients with preexisting autoimmune diseases. These exclusion criteria were selected to exclude any possible effects on the expression of the clinicopathological factors listed above. This study was reviewed and approved by the Institutional Review Board at King Fahad Medical City (IRB register number 019-053).
MERS-CoV Viral Load Detection
The MERS-CoV viral loads of the lower respiratory samples were detected by real-time RT-PCR after viral RNA was extracted using the QIAamp Mini kit (Qiagen) according to the manufacturer's instructions. The viral open reading frame 1a (orf1a) gene was detected with a commercial kit according to the manufacturer's instructions (RealStar® MERS-CoV RT-PCR Kit 1.0). The RT-PCR analysis was performed using an ABI Prism® 7500 (Applied Biosystems).
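The study reports raw Ct values as viral load, and a lower Ct implies more viral RNA. A purely illustrative conversion, assuming ideal doubling per PCR cycle (not a method used in this study):

```python
def relative_viral_load(ct_sample, ct_reference):
    # Each Ct unit below the reference implies roughly twice as much
    # template, assuming ideal PCR efficiency (doubling per cycle).
    return 2.0 ** (ct_reference - ct_sample)

# Example: a sample at Ct 24.5 versus one at Ct 28.7 implies about
# 2 ** 4.2, i.e. roughly 18-fold more viral RNA in the first sample.
```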
Quantification of Lung Regulatory Complement Component (Factor) Levels
The concentrations of four human complement regulatory factors, factors P (properdin), I, C4-binding protein (C4-BP), and H, were quantified in the lung using ELISA kits (ab222864, Abcam, Cambridge, UK; ab195460, Abcam, Cambridge, UK; ab222866, Abcam, Cambridge, UK; HK342 Hycult Biotech, Uden, Netherlands). The ELISAs were done following the manufacturer's instructions. The concentrations of each factor were calculated using standard curves.
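ELISA readouts are typically converted to concentrations by inverting a fitted standard curve. The sketch below uses a four-parameter logistic model with hypothetical standard-series values; the actual curve forms supplied with these kits may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, d, c, b):
    # a: response at zero dose; d: response at infinite dose;
    # c: inflection point (EC50); b: slope factor.
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical standard series (pg/ml) and OD450 readings.
std_conc = np.array([31.25, 62.5, 125.0, 250.0, 500.0, 1000.0, 2000.0])
std_od = np.array([0.08, 0.15, 0.28, 0.52, 0.90, 1.40, 1.85])
params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[0.05, 2.2, 400.0, 1.0], maxfev=10000)

def od_to_conc(od, p):
    # Invert the fitted curve to read a sample concentration from its OD.
    a, d, c, b = p
    return c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)
```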
Quantification of Lung Chemokine RANTES (CCL5) Levels
We determined the local RANTES chemokine levels in MERS-CoV-infected patients (n = 30) and MERS-CoV non-infected individuals (n = 18). Lung RANTES levels were quantified using the human RANTES ELISA Kit (R&D Systems, Minneapolis, USA) following the manufacturer's protocol. RANTES concentrations were calculated using standard curves, and the results were expressed as pg/ml.
Measurement of Pulmonary Pro-inflammatory Cytokine and Chemokine Profiles using ELISArray
The concentrations of major human pro-inflammatory cytokines and chemokines were measured in the respiratory samples of 30 MERS-CoV-infected patients and 18 MERS-CoV non-infected individuals using the multi-analyte ELISArray (Qiagen, Germantown, MD, USA) following the manufacturer's protocol. The absorbance of the ELISArray was measured at 450 nm. The concentrations were calculated using a standard curve. Cytokine/chemokine levels were expressed as pg/ml.
Protein-protein Interaction (PPI) Network Construction and Identification of Hub Proteins
To further explore the potential interplay among the differentially expressed proteins (C3a, C5a, factor P, IL-8, and RANTES) in the lungs of MERS-CoV-infected patients and their potential interactors, protein-protein interaction networks were constructed using the Search Tool for the Retrieval of Interacting Genes/Proteins database (STRING, version 11.0). This bioinformatics resource provides known and predicted protein-protein interaction networks. We used multiple proteins (C3a, C5a, factor P, IL-8, and RANTES) as input (seed proteins). Active interaction sources included databases, gene co-occurrence, protein homology, experiments (biochemical/genetic data), gene co-expression, and text mining; the species was limited to "Homo sapiens", with a confidence score > 0.4 and a maximum of 20 interactors used to construct the STRING networks. The STRING resource is available online at https://string-db.org/. The constructed STRING PPI network was exported to Cytoscape software (version 3.8.2, http://apps.cytoscape.org/apps/mcode) for visualization and additional analysis of functional protein-protein interactions.
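The hub-node ranking performed by cytoHubba can be approximated with standard centrality measures. The sketch below uses a small hypothetical edge list (not the actual STRING export) to illustrate the idea:

```python
import networkx as nx

# Hypothetical edge list standing in for the exported STRING network
# (seed proteins plus a few predicted interactors).
edges = [("C3", "C5"), ("C3", "CR1"), ("C3", "CFP"), ("C3", "CXCL8"),
         ("C5", "CFP"), ("C5", "C5AR1"), ("C5", "CXCL8"),
         ("CXCL8", "CXCR1"), ("CXCL8", "IL10"), ("CCL5", "CCR5"),
         ("CCL5", "CXCL8"), ("IL4", "IL10")]
g = nx.Graph(edges)

# Rank candidate hubs by degree and betweenness, two of the
# centrality measures cytoHubba computes.
degree = dict(g.degree())
betweenness = nx.betweenness_centrality(g)
hubs = sorted(g.nodes, key=lambda n: (degree[n], betweenness[n]),
              reverse=True)
print(hubs[:6])  # top 6 hub nodes
```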
Statistical Analysis
Statistical analyses were performed using the GraphPad 5.0 software (GraphPad Software, San Diego, CA, USA). Data were assessed using a t-test. The correlations between complement proteins, inflammatory factors, and chemokines were assessed using Pearson's correlation test. The results were presented as means ± standard deviation (SD) unless otherwise specified. A P-value of < 0.05 was considered statistically significant.
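For reference, the two tests named above can be reproduced with SciPy rather than GraphPad; the measurement values below are hypothetical placeholders, not study data:

```python
from scipy import stats

# Hypothetical pg/ml measurements (placeholders, not study data).
c5a_infected = [310.0, 285.5, 402.1, 351.9, 298.4]
c5a_control = [120.3, 98.7, 143.2, 110.5, 131.8]
t_stat, p_ttest = stats.ttest_ind(c5a_infected, c5a_control)

# Pearson correlation between two mediators in the same patients.
il8_infected = [850.0, 640.2, 1020.4, 910.7, 700.1]
r, p_corr = stats.pearsonr(c5a_infected, il8_infected)

print(p_ttest < 0.05, p_corr < 0.05)  # significance at the study's threshold
```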
Basic Patient Characteristics
A total of 31 MERS-CoV-infected patients (22 males and 9 females) and 15 MERS-CoV non-infected individuals without underlying diseases/pre-existing conditions (9 males and 6 females) were included in this study (Table 1). The ages ranged from 30 to 100 years. The majority of MERS-CoV-infected patients were 60-75 years old. The overall mean and median ages were 68.26 ± 16.04 and 73 years, respectively. The mean ages of the non-surviving and surviving patients were 74.5 ± 11.3 and 53 ± 15.8 years, respectively. Twenty-two (70.97%) patients died, 9 recovered (29.03%), and 25 (80.6%) required intensive care unit (ICU) admission (Fig. 1). Age is one of the factors that cause physiologic changes within the immune system and may affect both the innate and adaptive arms of the immune system, particularly T cell immunity [19,20]. However, we did not observe any significant modulation or differences in complement inflammatory mediator and cytokine/chemokine expression levels between aged and younger MERS-CoV-infected patients; this finding requires further evaluation, as the exact picture of immune physiologic changes with aging is still emerging. In a study of small numbers of MERS-CoV-infected patients, a similar MERS-CoV-specific cellular immune response was observed among all age groups [21].
In this study, most deaths occurred in elderly patients aged > 70 years (68%), followed by 60-70 years (22%), 50-60 years (4.5%), and 40-50 years (4.5%). We found that increased age was associated with mortality in MERS-CoV-infected patients. As with COVID-19, we observed that with increased age (60-70 and > 70 years old), case fatality increased from 22 to 68%, suggesting that older people are at high risk for death. The mean ± SD (median) MERS-CoV viral load (Ct) was 25.7 ± 5 (26) (Table 1). We also found that the mean viral load was significantly higher in male patients (Ct 24.5) than in female patients (Ct 28.7) (P = 0.032) (Table 1).
Complement Anaphylatoxin Expression Levels are Significantly Increased in MERS-CoV-infected Patients Compared with MERS-CoV Non-infected Group
Over-activation of the complement system is associated with immunopathology and tissue damage. Therefore, we hypothesized that elevated lung complement protein levels occurring during MERS-CoV infection are involved in the massive infiltration of immune cells into the lungs and result in poor disease outcomes. To assess whether lung complement anaphylatoxins (C3a and C5a) were increased in MERS-CoV-infected patients, we examined complement components in lower respiratory lung samples using ELISA. As shown in Fig. 2, the levels of complement anaphylatoxins (C3a and C5a) and C1q were significantly higher in the lungs of MERS-CoV-infected patients compared with the MERS-CoV non-infected group (Fig. 2A, B, and G). Together, these results show that the levels of complement anaphylatoxins in the lungs of MERS-CoV-infected patients are significantly increased, suggesting significant pulmonary complement activation. In addition, the elevated levels of lung complement anaphylatoxins suggest a role for these factors in lung tissue damage, immunopathology, ARDS development, and mortality of MERS-CoV-infected patients.
MERS-CoV Infection is Associated with Positive Regulation of the Complement Response
Elevated levels of complement anaphylatoxins prompted us to evaluate the changes in other immunoregulatory proteins. Complement regulatory proteins function by inhibiting complement over-activation to avoid inflammation and tissue damage. We quantified factor P (properdin) levels in the lungs of MERS-CoV-infected patients. As shown in Fig. 2C, the levels of positive regulatory factor P were significantly higher in the MERS-CoV-infected patients compared with the MERS-CoV non-infected group, suggesting that factor P may enhance complement activation during MERS-CoV infection. Factor P measurement may provide evidence for the involvement of the alternative complement pathway, since factor P represents an important factor in the activation of the alternative pathway.
MERS-CoV Infection is Associated with Distinct Levels of Negative Regulatory Complement Proteins
The measurement of negative regulatory proteins in the lungs of MERS-CoV-infected patients may provide evidence for complement system regulation. The quantification of several complement negative regulatory factors (factor I, C4-BP, and factor H) revealed that the levels of factors I and H were increased in the lower lung respiratory samples (Fig. 2D and E). These negative regulatory proteins play an important role in complement system regulation. In contrast, the levels of C4-BP were unchanged, with no statistically significant difference between the MERS-CoV-infected patients and the MERS-CoV non-infected group (Fig. 2F). This result indicates that MERS-CoV may suppress and inhibit the function of C4-BP.
Pulmonary Neutrophil Chemoattractant Chemokine IL-8 (CXCL8) and RANTES (CCL5) Levels are Elevated in MERS-CoV-infected Patients
To examine the levels of chemokines in the lung during MERS-CoV infection, we quantified IL-8 and RANTES in the lower respiratory tract of MERS-CoV-infected patients and the MERS-CoV non-infected group. As shown in Fig. 2, IL-8 and RANTES levels were significantly higher in the lungs of MERS-CoV-infected patients compared with the MERS-CoV non-infected group (Fig. 2H and I).
Lung Complement Anaphylatoxins (C3a and C5a), IL-8, and RANTES Levels are Associated with ARDS Development and the Fatality Rate
The Pearson correlation analysis revealed a significant association of C3a, C5a, IL-8, and RANTES with ARDS development and a higher fatality rate (Table 2). Higher levels of these mediators were significantly positively correlated with ARDS and a higher fatality rate. These immune mediators increase the risk of developing ARDS and of mortality in MERS-CoV-infected patients. Elevated lung C3a, C5a, IL-8, and RANTES may therefore represent biomarkers for ARDS development and predictors of in-hospital mortality.
Lung Complement Anaphylatoxins (C3a and C5a) and their Correlation with Factor P, IL-8, and RANTES
We further examined whether there were correlations between lung complement anaphylatoxins (C3a and C5a), factor P, IL-8, and RANTES. The results indicated that lung complement C5a was positively correlated with factor P, IL-8, and RANTES, whereas complement C3a was positively correlated with IL-8 and RANTES (Table 2). A positive correlation between RANTES, IL-8, and complement factor P was also observed (Table 2). Similarly, IL-8 was positively correlated with complement factor P (Table 3).
Protein-protein Interaction (PPI) Network Analysis and Hub Proteins
To understand the protein-protein interactions between the complement proteins, cytokines/chemokines, and predicted interactors, we performed data mining and curation to construct and map the network of protein-protein interactions using the STRING database. C3a, C5a, factor P, IL-8, and RANTES were used as seeds for the network analysis. The PPI network showed the 20 proteins with the highest interaction scores with C3a, C5a, factor P, IL-8, and RANTES (Supplemental Fig. 4). For C3a, C5a, IL-8, and RANTES, individual protein-protein interaction networks were also created (Supplemental Figs. 5, 6, and 7). The names of the proteins, predicted functional partners, and actions are shown in Supplementary Table 1. This network comprises several types of interactions, including gene neighborhood, gene fusions, co-occurrence, co-expression, text mining, protein homology, and experiments (biochemical/genetic data). Physical (direct) and functional (indirect) interactions are also shown. The constructed network contained 25 nodes and 149 edges, with an average node degree of 11.9 and a PPI enrichment P-value < 1.0e-16. The Cytoscape analysis is shown in Supplementary Table 2. CytoHubba analysis identified the top 6 proteins (hub nodes) based on the Betweenness, Closeness, Degree, EcCentricity, EPC, MCC, and MNC calculation methods (Supplementary Tables 3 and 4). These hub nodes are involved in the inflammatory response and molecular binding interactions, as well as ligand-receptor interactions. These results indicate that the predicted proteins (interactors) and hub proteins may also have a role in MERS-CoV disease severity.
Discussion
An excessive inflammatory response is a major characteristic of MERS-CoV infections and results in disease progression and poor clinical outcomes. MERS-CoV infections are characterized by dysregulation of the innate and adaptive immune systems. Several inflammatory cytokines/chemokines and over-activation of complement proteins are significantly associated with severe MERS-CoV infections and a higher fatality rate [5,6,8]. In this study, we sought to determine the levels of pulmonary complement proteins, the neutrophil chemoattractant chemokine IL-8, and RANTES, as well as viral load, in MERS-CoV-infected patients and assessed their association with mortality. This study is the first to evaluate the pulmonary complement protein expression profile in MERS-CoV-infected patients without chronic diseases. The results showed high levels of complement anaphylatoxins (C3a and C5a), positive complement regulatory proteins, the neutrophil chemoattractant chemokine IL-8, RANTES, and viral load in MERS-CoV-infected patients.
A number of studies have found that high viral load, duration of viral shedding, and direct cytopathic effects were associated with severe complications during SARS-CoV-1,
SARS-CoV-2, and MERS-CoV infections [22-25]. MERS-CoV infection is characterized by persistent viral load, and in severe cases of MERS-CoV infection, viral shedding is detected beyond 21 days [26]. Our results showed that all MERS-CoV-infected patients exhibited high viral loads (Ct values). Further studies revealed that MERS-CoV-infected patients requiring ICU admission had high MERS-CoV
RNA levels [27]. In this study, 70% of the MERS-CoV-infected patients developed pneumonia and required ICU admission.
Over-activation of the immune response, including the complement system, is believed to be an important factor in the high fatality rate of the 1918 influenza pandemic; H1N1, H5N1, and H7N9; SARS-CoV-1 and MERS-CoV; Ebola infection; and, most recently, COVID-19 (SARS-CoV-2) infection [5,10,28-35]. Complement activation leads to the production of several effector pro-inflammatory molecules, including the anaphylatoxins C3a and C5a. Overstimulation of the complement system or inadequate inhibition causes tissue damage [36] and the formation of high levels of anaphylatoxins. C5a and C3a also play important roles in inflammatory cascades [37,38]. Following infection, complement anaphylatoxins stimulate phagocytic cells and the production of high levels of inflammatory cytokines (cytokine storm), granular enzymes, and free radicals. These mediators may eventually contribute to vascular dysfunction, fibrinolysis, microvascular thrombosis formation, or tissue damage [11,12,14,15,17,18]. In this study, the expression levels of the pulmonary neutrophil chemoattractant chemokine IL-8, C5a, and C3a were increased in MERS-CoV-infected patients and associated with a higher fatality rate. Our results are consistent with previous findings [39,40] showing excessive complement activation in mouse models, particularly of the anaphylatoxins C3a, C5a, and C5b-9, during MERS-CoV and SARS-CoV-1 infection, which, in turn, contributes to lung tissue damage, a hyper-inflammatory response, and severe complications from the infection. Furthermore, several studies have revealed that C5a was associated with acute lung diseases, severe pneumonia, and immunopathology during highly pathogenic viral infection with H1N1, H5N1, H7N9, and SARS-CoV-1 [34,35,39,41]. Similar to SARS-CoV-1 and MERS-CoV, patients with COVID-19 (SARS-CoV-2) are characterized by complement activation with high levels of complement proteins [11,29,42,43]. A rapid elevation of C3 activation products (C3a, C3b, iC3b, C3c, C3d) was observed in mice infected with SARS-CoV-1, which contributed to the systemic inflammatory response and lung injury [39]. C3 activation has also been shown to be elevated in cells infected with SARS-CoV-2 [44]. In addition, in vitro and in vivo experiments on human respiratory syncytial virus infections showed complement-mediated lung damage induced by high levels of C3a [45]. These findings clearly demonstrate that C5a and C3a recruit and activate inflammatory immune cells and play a central role in lung injury and immunopathology during respiratory viral infection. Moreover, complement C5a and C3a are involved in ARDS pathogenesis, neutrophil recruitment and activation, and lung endothelial and epithelial injuries [38,46]. Recent data from COVID-19 patients showed that systemic complement activation is associated with inflammation and respiratory failure, as well as increased odds for oxygen therapy [47]. Moreover, MERS-CoV infection is associated with more severe pneumonia than SARS-CoV-1 infection [48]. ARDS was shown to be the main cause of mortality among patients infected with MERS-CoV, SARS-CoV-1, and highly pathogenic influenza virus [10,49,50]. In this study, we observed that almost all of the MERS-CoV-infected patients rapidly developed ARDS and required ICU admission. Therefore, we hypothesize that complement-induced ARDS may contribute to the disease severity and high mortality rate of MERS-CoV-infected patients observed in this study.
We observed that the expression levels of pulmonary IL-8 (CXCL8) and C5a were significantly higher in MERS-CoV-infected patients compared with the MERS-CoV non-infected group. C5a contributes to the formation of neutrophil extracellular traps (NETosis) and inflammatory cytokine induction. IL-8 and C5a are important chemoattractants for neutrophil recruitment, activation, and accumulation, and both induce NETosis. Excess NETosis contributes to inflammation, pathological cellular damage, and acute lung injury in mice infected by the influenza virus [51-53]. Similarly, high numbers of neutrophils and NETs were induced by C5a and contributed to the immunopathology, alveolar damage, and acute lung injury during influenza A H1N1 infection [51]. A number of in vitro and in vivo studies have found that C5a was associated with macrophage and endothelial cell activation, endothelial damage, NETosis, and increases in alveolar-capillary barrier permeability, as well as vascular leakage [37,54]. Complement and tissue factors contribute to NETosis in COVID-19 immunothrombosis, suggesting a role of complement activation in the development of coagulopathy in COVID-19 patients [55]. NETosis is also associated with more severe COVID-19 disease [56]. Furthermore, SARS-CoV-2 infection induced high levels of IL-8 and markers linked with neutrophil activation, which are associated with higher case fatality among COVID-19 patients [57,58]. Thus, we hypothesize that high C5a, C3a, and IL-8 levels may play a vital role in lung tissue damage, immunopathology, ARDS development, ICU admission, and mortality in MERS-CoV-infected patients.
During SARS-CoV-1 infection, C5a induced several pro-inflammatory cytokines and chemokines, including IL-8 [59]. C5a can also establish a positive feedback loop of IL-8 induction, which results in further IL-8 production. Thus, a loop of continued IL-8 production may result in further inflammatory cell recruitment to the lung and subsequently contribute to lung damage and pathological changes [60]. We found that high levels of the lung complement anaphylatoxins C5a and C3a were closely associated with the overexpression of pulmonary IL-8 in MERS-CoV-infected patients; there was a significant correlation between C5a, C3a, and IL-8. The overproduction of C5a and IL-8 may be responsible for more damage to the host tissue than MERS-CoV itself. Previous studies have also shown that the levels of IL-8 significantly correlate with neutrophil numbers and airway inflammation, as well as lung dysfunction, in patients with chronic obstructive pulmonary disease and asthma [18,61]. Therefore, we conclude that high levels of C5a might act as a direct mediator of neutrophil chemoattraction or an indirect inducer of IL-8 production during MERS-CoV infection.
C5a is also known to activate inflammatory cells to release reactive oxygen species (ROS). ROS overproduction leads to oxidative stress, which subsequently contributes to airway and lung damage [62-64]. A previous in vivo study demonstrated that ROS are strongly associated with lung damage and pneumonia in mice infected with the influenza virus [65]. Treating mice with an antioxidant significantly reduced mortality, lung damage, and pathogenesis following influenza virus infection [66]. Several studies have observed that anti-C5a and a C5aR antagonist significantly blocked and inhibited neutrophil oxidative burst formation [18,67,68]. These studies suggest a critical role of the C5a/C5aR/ROS axis in lung pathology and virus-induced immunopathology. We hypothesize that high levels of C5a induce ROS, resulting in a more severe MERS-CoV infection, ARDS, and immunopathology.
Several studies have shown that therapy targeting C5a and C5aR represents a significant anti-inflammatory approach to control inflammatory cell recruitment, immunopathology, and acute lung damage induced by highly pathogenic viral infections [34,69,70]. One study showed that blockade of the C5a-C5aR axis in mice infected with MERS-CoV markedly reduced pathological changes in lung and spleen tissues, viral load, lymphopenia, and systemic and pulmonary inflammatory responses [40]. Another study showed that complement-deficient mice infected with SARS-CoV-1 exhibited decreased lung neutrophilia, pro-inflammatory responses, and viral replication [39]. Blocking and targeting C5a/C5aR and C3a/C3aR during respiratory viral infections significantly reduced and controlled pathology, lung damage, and mortality and inhibited inflammatory cell infiltration in the lung, T-lymphocyte apoptosis, and NETosis [34,71].
A recent study demonstrated that severe COVID-19 cases are associated with high levels of plasma C5a and soluble membrane attack complex (sC5b-9), signifying C5a blockade as a potential intervention strategy [72,73]. Several randomized controlled trials showed that anti-C5 therapy increased survival in severe COVID-19 cases and significantly reduced inflammatory marker production [72,74-76]. These studies confirmed that the inhibition of an over-activated complement response and its signaling pathways significantly affected immunopathology, respiratory disease severity, and viral replication. Accordingly, it is reasonable to speculate that targeting the C5a/C5aR and C3a/C3aR axes may be effective at controlling MERS-CoV infection-induced immunopathology, as MERS-CoV infection similarly leads to acute lung injury.
Over-activation of the complement system is controlled by a series of proteins known as the regulators of complement activation. These proteins play a key role in preventing complement-mediated tissue and cell damage; dysregulation of one or more of the complement regulatory proteins results in tissue injury, immunopathology, and inflammation-associated disease [77,78]. The complement system is negatively regulated by several complement proteins, including factor I, C1-inhibitor (C1-INH), factor H, and C4-BP [79-82]. In contrast, factor P is a positive regulatory complement protein. In this study, the levels of positive regulatory factor P were significantly higher in the lungs of MERS-CoV-infected patients compared with the MERS-CoV non-infected group, suggesting that factor P may positively regulate complement activation during MERS-CoV infection [80]. Factor P has been associated with complement-mediated organ injuries in various human diseases, and therapy targeting factor P showed beneficial outcomes and prevented complement-mediated tissue damage [83-89]. The measurement of factor P may provide evidence for the involvement of the alternative complement pathway, since factor P is an important factor in alternative pathway activation [90]. A recent study demonstrated that SARS-CoV-2 can directly activate the alternative complement pathway [73]. In contrast, the levels of the negative regulatory proteins factor I and C4-BP were decreased in the lungs of MERS-CoV-infected patients when compared with the MERS-CoV non-infected group. These negative regulatory proteins play an important role in regulating the complement system. The low levels of factor I and C4-BP suggest that patients with MERS-CoV have a reduced capacity to control and regulate complement activation. These results indicate that MERS-CoV somehow suppressed and inhibited negative regulatory complement proteins during infection [47,79,80,91]. A number of viral infection-mediated chronic inflammatory and autoimmune diseases are associated with complement regulatory protein dysfunction [78,80,92,93]. Over-activation of pulmonary and systemic complement plays a key role in inflammation, endothelial cell damage, thrombus formation, and intravascular coagulation and ultimately leads to multiple-organ failure and death [12,18]. In this study, high levels of pulmonary complement mediators, disease severity, and increased mortality appear to be linked to the degree of complement activation against MERS-CoV. Complement C3a and C5a may be independent risk factors for death in MERS-CoV-infected patients.
RANTES is a key pro-inflammatory chemokine produced during respiratory viral infection. Virus-infected lung and epithelial cells secrete high amounts of RANTES. RANTES has a critical role in platelet activation and initiation of the coagulation cascade [94]. In this study, we detected high levels of RANTES, and this inflammatory chemokine was significantly correlated with death and ARDS among MERS-CoV-infected patients. In an in vitro experiment, infection of human monocyte-derived macrophages (MDMs) and dendritic cells (MDDCs) with MERS-CoV showed high levels and upregulation of RANTES and other inflammatory chemokines such as MIP-1α, IP-10, and IL-8 [95]. Recent studies showed that both severe and mild COVID-19 patients had elevated levels of RANTES [95-97]. Recent data from SARS-CoV-2 studies suggested that targeting CCR5 could be a therapeutic strategy for COVID-19 [97-99]. Previous studies also showed that increased levels of RANTES contributed to the exacerbation of allergic airway inflammation and severe human respiratory syncytial virus infection [100-102].
Studying protein-protein interaction (PPI) networks and the predicted protein interactors provides significant data for exploring the functions of proteins [103,104]. In PPI networks, proteins that interact directly with several other proteins are called hub proteins (hub nodes). Proteins with more interaction partners may become targets for follow-up investigation [105]. In this study, the constructed PPI network (25 nodes and 149 edges) demonstrated that C3, C5, CCL5, CR1, CXCL8, IL-10, IL-4, and CXCR1 were the hub proteins (nodes). Most of these hub proteins are involved in the inflammatory response and molecular binding as well as receptor-ligand interactions. These results indicate that the predicted interactors may also have a role in MERS-CoV disease severity. The function and exact role of the hub proteins identified in our PPI network in MERS-CoV infection require further investigation.
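As a minimal sketch of how such hub nodes can be ranked once a PPI network is in hand, the snippet below scores proteins by degree (number of interaction partners) using networkx. The edge list is a small hypothetical subset chosen for illustration only; it is not the 25-node, 149-edge network analyzed in this study.

```python
# Degree-based hub identification in a small PPI network.
# The edges below are a hypothetical illustration, not the study's data.
import networkx as nx

edges = [
    ("C3", "C5"), ("C3", "CR1"), ("C3", "CXCL8"),
    ("C5", "CCL5"), ("C5", "CXCL8"), ("CCL5", "CXCR1"),
    ("CXCL8", "CXCR1"), ("IL10", "IL4"), ("IL10", "C3"),
]

g = nx.Graph(edges)

# Rank proteins by degree; the top-ranked nodes are the hubs.
for protein, k in sorted(g.degree, key=lambda kv: kv[1], reverse=True):
    print(f"{protein}: {k} interaction partners")
```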
Here, we propose a mechanism to explain the role of pulmonary complement anaphylatoxins (C3a and C5a), IL-8, and RANTES in severe MERS-CoV infection. High levels of the complement anaphylatoxin C5a and of IL-8 recruit and activate neutrophils. Activated neutrophils then undergo NETosis and ROS production, which results in oxidative stress. Increased levels of C5a and IL-8 may establish an inflammatory loop that contributes to extensive cellular damage and pathological changes. In parallel, RANTES recruits and activates macrophages, and the elevation of the complement anaphylatoxins C5a and C3a may induce a cytokine storm and inflammatory cascades. C3a also recruits, activates, and degranulates mast cells and eosinophils, which results in airway smooth muscle contraction. These mediators eventually contribute to airway and lung damage, more severe MERS-CoV infection, ARDS, and immunopathology. The overexpression of lung C5a during MERS-CoV infection may establish an amplification loop of IL-8 induction; continued IL-8 production then leads to increased inflammatory cell activation and recruitment to the lung, subsequently contributing to the pathological features of the lung. Our results and hypothesis may explain the events of complement activation and their causal relation to lung tissue damage during MERS-CoV infection.
The main limitation of this study is that we measured complement proteins, IL-8, RANTES levels, and viral load at a single time point; measuring the expression levels of these mediators and the viral load at different time points to create a kinetic profile might provide additional information. Also, we did not analyze the cellular components of the bronchoalveolar lavage samples. Future studies are needed to close these gaps.
We conclude that the high levels of complement anaphylatoxins, C5a and C3a; positive regulatory complement protein (factor P); IL-8; and CCL5 in the lower respiratory tracts of MERS-CoV-infected patients are associated with immunopathology, higher fatality rates, more severe disease, and ARDS development. High levels of complement mediators, disease severity, and increased mortality appear to be linked to the degree of complement activation against MERS-CoV. Furthermore, the levels of C3a, C5a, IL-8, and CCL5 in the lung may be useful biomarkers to predict a more severe MERS-CoV infection and mortality. Targeting any of these mediators may offer an effective treatment for MERS-CoV infection. As such, future large studies characterizing components of the complement system at different stages of MERS-CoV infection may offer an effective immunotherapeutic strategy.
"year": 2021,
"sha1": "5ebbcdb807ea0abc6bca78c36385009e85969c42",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10875-021-01061-z.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "5eef8250fc6db32019e9254c45130d6e9a1401d9",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Effect of the Secondary Structure in the Euglena gracilis Chloroplast Ribulose-bisphosphate Carboxylase/Oxygenase Messenger RNA on Translational Initiation
The results reported in the previous paper indicate that the translational start site of the Euglena gracilis chloroplast mRNA for the large subunit of ribulose-bisphosphate carboxylase/oxygenase (rbcL) is not defined by primary sequence elements (Koo, J. S., and Spremulli, L. L. (1994) J. Biol. Chem. 269, 7494-7500). In the work presented here, the effects of secondary structure in the 5'-untranslated leader of the rbcL mRNA have been examined. Only weak secondary structure can be detected in the 5'-untranslated leader of the rbcL message by enzymatic and computer analysis. Further reduction of the weak secondary structure of this message by site-directed mutagenesis does not significantly affect the ability of this message to participate in initiation complex formation. The secondary structure near the translational start site was increased by the introduction of an inverted repeat sequence and by site-directed mutagenesis. Messages with increased secondary structure are much less active in initiation complex formation if the structural element introduced is within ~10 nucleotides of the start codon. These results suggest that the translational start site in this chloroplast mRNA is specified by the presence of an AUG codon in an unstructured or weakly structured region of the mRNA. No specific sequences around the start codon, either upstream or downstream, appear to be required.
In the previous paper, the role of the 5'-untranslated leader of the mRNA encoding the large subunit of ribulose-bisphosphate carboxylase/oxygenase (rbcL) in Euglena gracilis in the activity of the mRNA in initiation was studied. The results obtained indicate that the full 55-nucleotide leader is required for maximal efficiency in initiation complex formation. This message does not have a Shine-Dalgarno sequence 5' to the start codon, and no clear primary sequence elements appear to be present that specify a particular AUG codon as the start site. The previous results led to the suggestion that weakly structured regions of the mRNA having an AUG codon allow 30 S ribosomal subunits to have access to the start region and subsequently lead to the formation of an initiation complex at this position.
The secondary structure of an mRNA is believed to be one of the most important elements regulating the efficiency of initiation in prokaryotes (1-5). Messenger RNAs having relatively unstructured translational initiation regions appear to be expressed efficiently, while those having stable secondary structures in or around the ribosome-binding site are generally less efficient (1-3, 6-8). In the prokaryotic system, a number of factors play a role in determining the specificity and efficiency of a translational start site (9)(10)(11)(12)(13). These include the start codon, the Shine-Dalgarno sequence (14), and the exact sequence of the ribosome-binding site (15)(16)(17)(18)(19). Although primary sequence elements appear to be the major determinants for specifying initiation in prokaryotes, these elements must be accessible to the ribosome (6, 7) in order to function efficiently. Hence, secondary structure in the mRNA is also crucial. For example, the destruction of stem-loop structures containing the initiation codon or the Shine-Dalgarno sequence has been shown to improve the efficiency of translation of mRNAs (4, 21-23). A careful analysis of translational efficiency as a function of secondary structure has been carried out with the coat protein cistron of bacteriophage MS2 as a model system (4).
The results of this analysis clearly show an inverse correlation between translational efficiency and the stability of the secondary structure of the mRNA in the initiation region. The chloroplast translational system resembles the prokaryotic system in several general ways (24); however, almost half of the messages in chloroplasts do not have a Shine-Dalgarno sequence in the ribosome-binding site. In this report, the effects of RNA structure in the initiation region on the efficiency of initiation complex formation with one of these mRNAs have been examined.
EXPERIMENTAL PROCEDURES

Materials-General chemicals and 5-bromouridine 5'-triphosphate were purchased from Sigma. RNase T1, RNase V1, calf intestinal phosphatase, and avian myeloblastosis virus reverse transcriptase were obtained from United States Biochemical Corp. RNase T2 was purchased from Life Technologies, Inc. Escherichia coli tRNA was from Boehringer Mannheim. ATP, dNTP, and ddNTP were purchased from Pharmacia LKB Biotechnology Inc. [γ-32P]ATP (3000 Ci/mmol) was obtained from Dupont NEN.
Construction of Plasmids and Preparation of Corresponding mRNAs-In previous work, the complete 5'-untranslated leader and exon 1 of the E. gracilis rbcL gene were fused in frame with a portion of the neomycin resistance gene, providing a construct designated pRbcN (25). Wild-type plasmid pRbcN was prepared as described previously (25). The plasmid template pRbcN X5IR (see Fig. 1) was prepared by inserting a 48-base pair oligonucleotide in the inverse orientation at the XbaI site of the vector pRbcN X5 using standard methods. The starting plasmid for this construction has a unique XbaI recognition sequence between positions -55 and -50, but is otherwise identical to pRbcN (26). Other mutants were created by oligonucleotide-directed mutagenesis basically as described by Kunkel (27) and McClary et al. (28). Oligonucleotides for the mutagenic reactions are listed in Table I. The mRNAs were prepared from the corresponding plasmids (e.g., message mRbcN from plasmid pRbcN) by in vitro transcription using T7 RNA polymerase basically as described (25, 29). RNA containing 5-bromouridine (30) in place of uridine was prepared by the substitution of UTP with 5-BrUTP during the in vitro transcription reaction. The mRNAs were stored at -20°C at a concentration of 1 pmol/µl and were incubated for 10 min at room temperature before use.

Primer Extension Analysis of Secondary Structure of mRNA-All transcripts to be analyzed were incubated at room temperature for 20 min in 40 µl of initiation complex formation assay buffer (50 mM Tris-HCl, pH 7.8, 40 mM NH4Cl, and 10 mM MgCl2) prior to structural analysis. The mRNAs (1 µg, 2.9 pmol) were then digested with the respective RNases. Digestion with RNase T1 (1 unit) or RNase T2 (3.0 units) was for 5 min at 37°C. Digestion with RNase V1 (0.06 unit) was carried out for 10 min at 37°C. The concentration of each RNase was titrated to ensure that only primary cleavage products were being detected. RNA digestions were terminated by phenol extraction. The RNA was precipitated with ethanol in the presence of 5 µg of E. coli tRNA as a carrier. The digested RNA fragments were dissolved completely in 5 µl of H2O, and 1 µl was generally used for primer extension analysis (31, 32) as described below.
The oligonucleotide 5'-CGCTGCCTCGTCCTGC-3' (16 nucleotides), which was used for primer extension analysis, hybridizes to a site 130 nucleotides from the 5'-end of mRbcN. This oligonucleotide was labeled before use with 32P at the 5'-end using polynucleotide kinase (33). Annealing of the digested RNA (~0.5 pmol) and the labeled primer (0.5 pmol) was carried out by heating the mixture (10 µl) to 90°C for 3 min in Buffer A (50 mM Tris-HCl, pH 8.0, 10 mM MgCl2, 50 mM NaCl, and 1 mM dithiothreitol), followed by incubation on ice for 1 min. The primer extension reaction (10 µl) was performed in Buffer A containing 0.5 mM dNTP and 2 units of avian myeloblastosis virus reverse transcriptase. Incubation was at 42°C for 30 min. The reaction was terminated by the addition of 4 µl of gel loading buffer (95% formamide, 20 mM EDTA, 0.05% bromphenol blue, and 0.05% xylene cyanol), and the reactions were heated at 90°C for 2 min prior to application to the gel. Primer extension cDNA bands were analyzed by denaturing 6% polyacrylamide gel electrophoresis (33).
For mRNA sequencing, 1 pmol of mRbcN and 1 pmol of 32P-labeled primer were annealed in water (10 µl) by heating for 3 min at 90°C, followed by slow cooling to room temperature over a 30-min period. Alternatively, the heated mixture was placed directly on ice. Following this step, 8 µl of buffer (250 mM Tris-HCl, pH 8.3, 40 mM MgCl2, 250 mM NaCl, and 5 mM dithiothreitol) and 1 µl of 5 mM dNTP were added. This mixture was divided into four aliquots of 4.5 µl each, and 4 µl of the ddNTP for chain termination were added to the appropriate aliquot, giving final concentrations of 0.21 mM ddATP, 0.17 mM ddTTP, 0.11 mM ddGTP, and 0.15 mM ddCTP. The RNA sequencing reaction was started by the addition of 2 units of avian myeloblastosis virus reverse transcriptase in 1 µl and was incubated for 30 min at 42°C. The reactions were analyzed on 6% polyacrylamide gels containing 7 M urea (33).
Computer Analysis of Secondary Structures of mRNAs-Computer predictions of RNA secondary structures were done using the MFold program (version 7.0) of the University of Wisconsin Genetics Computer Group software running on a VAX computer system. This program uses the algorithm of Zuker (34) for structural predictions. The data obtained from the primer extension reactions were used as parameters to impose restrictions on the folding patterns when needed.
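For readers unfamiliar with such folding programs, the sketch below illustrates the dynamic-programming idea behind RNA secondary structure prediction. It implements the Nussinov algorithm, which simply maximizes the number of nested base pairs; MFold itself minimizes free energy with Zuker's algorithm (34), so this is a simplified stand-in, and the example sequence is hypothetical.

```python
# Simplified dynamic-programming RNA folding (Nussinov base-pair
# maximization); shares the recursive structure over subsequences with
# Zuker-style free-energy minimization, but uses a much cruder score.

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov_pairs(seq, min_loop=3):
    """Return the maximum number of nested base pairs in seq."""
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):          # subsequence end - start
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                  # residue i left unpaired
            if (seq[i], seq[j]) in PAIRS:        # residue i pairs with j
                best = max(best, dp[i + 1][j - 1] + 1)
            for k in range(i + 1, j):            # bifurcation into two folds
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1]

# A hypothetical A/U-rich leader, like that of mRbcN, supports few pairs.
print(nussinov_pairs("AUUAAUAUAAAUUAUUUAAAUG"))
```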
Initiation Complex Formation Assays-The efficiency of initiation complex formation with various mRNAs was determined by the nitrocellulose filter binding assay as described previously (25).
RESULTS
Secondary Structure Analysis of Initiation Region of mRbcN-The 5'-untranslated leader of mRbcN is ~90% A and U residues (Fig. 1). Such an A/U-rich region would be expected to have little, or at most only weak, secondary structure. To examine this possibility, RNA structural analysis was carried out on mRbcN using biochemical probing and computer predictions. Primer extension analysis was performed on intact mRNA (Fig. 2A, Ct lane) and on RNA that had been digested with RNases T1, T2, and V1. Each RNase digestion was optimized either by titrating the amount of enzyme or by varying the incubation time.

Primer extension analysis of mRbcN in the absence of prior nuclease digestion indicated the presence of consistent reverse transcriptase stops at positions -37 to -39 of mRbcN (Fig. 2A). This stop signal suggests the presence of the 3'-edge of a stem-loop structure in the mRNA at about this position (32, 35). RNase T1 probes for the presence of G residues in single-stranded regions of RNA (36, 37). Since mRbcN does not have many G residues in the target region, RNase T1 digestion had limited usefulness. A faint signal from the G residue at position +16 was obtained from mRbcN. The signal from cleavage at this residue migrated at the A residue at position +17 in the mRbcN sequencing ladder. Note that RNase T1 cuts on the 3'-side of G residues and that reverse transcription results in a 1-nucleotide difference from the corresponding sequencing ladder.
RNase T2 cleaves single-stranded regions of RNA without any sequence specificity. This enzyme cuts most efficiently at single-stranded residues in loops. It is much less effective in cleaving unstructured single-stranded regions (38). As shown in Fig. 2A, primer extension analyses of mRbcN cleaved by RNase T2 showed a series of cleavages between positions -22 and -30 in the untranslated leader and between positions +6 and +25 downstream of the AUG start codon. This observation suggests that there is considerable single-stranded character to the 5'-untranslated leader of mRbcN.
Finally, the message was digested with RNase V1, which preferentially cleaves helical regions of RNA (39). Very few signals were observed from primer extension analysis of the RNase V1-treated mRNA (Fig. 2A). This observation indicates that the leader region has very little secondary structure and that the mRNA may spend a significant fraction of time in a fully single-stranded conformation (Fig. 2B, first structure). The relatively weak reverse transcriptase stop observed at positions -37 to -39 suggests the presence, at least transiently, of a stem-loop located near the 5'-end in at least a portion of the mRNA molecules (second structure). The lack of strong signals from either RNase T2 or V1 digestion in the region between the AUG start codon and position -20 suggests that this region is highly unstructured (38). The potential secondary structure of mRbcN was also investigated using the MFold program (34). This program predicts two weak stem-loop structures in the 5'-untranslated leader (Fig. 2B, third structure). One of the predicted stem-loop structures is located near the 5'-end (between positions -32 and -52) and represents the structure detected by the enzymatic probing (second structure). The second hairpin structure (between positions +1 and -17) was found only by computer analysis. The free energies (ΔG°) of the predicted hairpin structures were calculated by the method of

This strategy allowed us to investigate two questions: first, whether the weak secondary structures detected played a positive role in initiation complex formation and second, whether further reduction of the secondary structure of this mRNA could actually enhance its activity in initiation. The first construct substituted the sequences immediately upstream of the start codon (positions -6 to -21) with adenines (mRbcN As; Figs. 1 and 3A). This mRNA should not be able to form the stem-loop structure predicted by the MFold program just upstream of the AUG initiation codon. Although this stem-loop structure was not detectable by primer extension analysis, it could be present as one of several conformations present in the RNA at equilibrium. This mutated mRNA has changes both in the primary sequence adjacent to the start codon and in the potential secondary structure near the start site. It should therefore also be useful for the further clarification of the importance of the primary sequences upstream of the AUG codon.
To confirm that this mRNA had the predicted secondary structure near the translational initiation region, it was subjected to primer extension analysis. This analysis (data not shown) suggested that the 5'-untranslated leader was largely single-stranded and that no unexpected structure had been introduced (Fig. 3A). The activity of mRbcN As in initiation complex formation was tested as a function of the concentration of the mRNA and was compared to the activity of the wild-type mRNA. As shown in Fig. 3B, this mRNA has essentially the same activity as mRbcN. These results have two implications. First, a defined primary sequence between positions -1 and -20 is not required for the activity of mRbcN in initiation. This result is in agreement with the data presented in the previous paper (48), which argue against a major role for a defined primary sequence in specifying the correct translational start site for this chloroplast mRNA. Second, the data argue that the putative weak stem-loop structure just upstream of the start codon does not play a positive role in initiation and that the 5'-untranslated leader of the wild-type mRNA is already sufficiently unstructured to allow maximal efficiency in initiation.

The second construct prepared in this series replaced the stem-loop structure near the 5'-end of mRbcN with a series of U residues (mRbcN Us; Figs. 1 and 3A). A similar uridine-rich sequence (U5AU4) is located just upstream of the start codon in mRbcN. Uridine-rich sequences have been suggested to enhance translational efficiency with E. coli mRNAs (16), and ribosomal protein S1 appears to have a high affinity for uridine-rich regions (17). It should be noted that the stem-loop structure being replaced here (ΔG° = -3.1 kcal/mol) is present in at least a population of mRbcN conformers that can be detected by primer extension analysis (Fig. 2). This stem-loop structure may be partially stabilized by its GUAA tetraloop.
RNA loops with the sequence GNRA (where N = A, C, G, or U and R = A or G) appear to be significantly more stable than other loop sequences (41-43).
Primer extension analysis of mRbcN Us was carried out and indicated that the stem-loop predicted to be present at the 5'-end of mRbcN had indeed been eliminated (data not shown). The activity of mRbcN Us in initiation complex formation was determined as a function of the mRNA concentration and was compared to the activity of the wild-type mRNA (Fig. 3B). The mRNA was ~70% as efficient as mRbcN, suggesting that it does not differ significantly from the wild-type RNA in its ability to direct the initiation of translation. This result indicates that the weak hairpin structure near the 5'-end of mRbcN is not necessary for the activity of this mRNA in translation. In addition, the data indicate that the specific primary sequence from positions -36 to -50 is not critical for ribosome recognition. This observation is in agreement with the results obtained with several of the mutants described in the previous paper (48).
Effect of Enhancing Stability of Secondary Structure of mRbcN on Its Activity in Initiation Complex Formation-The results outlined above and in the accompanying paper (48) lead to the suggestion that the 5'-untranslated leader of mRbcN does not provide any unique sequence information that is directly responsible for specifying a specific AUG as the start codon.
Rather, it appears that the low degree of secondary structure in this region of the mRNA might lead to the selection of a particular AUG codon for initiation.
An initial investigation into the role of secondary structure in the initiation of mRbcN was carried out by substituting 5-bromouridine for uridine in the RNA. This change was easily accomplished by replacing UTP with 5-BrUTP during the in vitro synthesis of mRbcN (producing mRbcN UBr). RNAs having 5-bromouridine in place of uridine have stronger secondary structures due to an increase in base stacking interactions (30). When mRbcN UBr was tested for the ability to participate in 30 S initiation complex formation, it was only ~20-30% as active as mRbcN in this assay (data not shown). This result suggests that enhanced secondary structure in the message may reduce ribosome access to the start codon and inhibit initiation.
Effect of Inverted Repeat Sequences Creating Secondary Structures in 5'-Untranslated Leader on Initiation-A significant increase in the secondary structure of the 5'-untranslated leader region of mRbcN was accomplished by creating a message that had a duplication of 48 bases in the leader region present as a 48-base pair inverted repeat (mRbcN X5IR; Fig. 1).
Primer extension analysis was carried out to determine the secondary structures of this mRNA (Fig. 4A). In this analysis, the mRNA was mapped for intrinsic reverse transcriptase stops. Strong reverse transcriptase stop signals were observed throughout the region between positions -13 and -45. These stops reflect the stem-loop that is predicted to form due to the inverted repeat in this leader (Fig. 4B). The observation that there are numerous stops along the 3'-edge of this stem suggests that reverse transcriptase can partially melt this structure or that this stem is partially opened due to thermal energy. The calculated total free energy value (ΔG° = -49.1 kcal/mol) for the stem shown here suggests that it is a strong stem. However, the bottom of the stem is largely composed of A:U base pairs, which may be partially opened at the temperature used for the initiation complex assays (37°C).
The mRNA containing the inverted repeat structure was tested for the ability to participate in initiation complex formation (Fig. 5) and had ~20% of the activity observed with the wild-type mRNA. The significant reduction in activity observed with this mRNA is most likely due to the presence of the secondary structure created by the inverted repeat, which would interfere with the binding of 30 S subunits to the start site on the mRNA.

Effect of Stem-Loop Structures in 5'-Untranslated Leader-The experiments described above have consistently indicated that the information for the selection of the start site on mRbcN by the chloroplast 30 S subunit resides in the unstructured single-stranded nature of the leader region of this mRNA. If this idea is correct, it is logical to predict that the introduction of stronger secondary structures would inhibit initiation complex formation. To test this prediction, three additional mutants were constructed that had strong stem-loop structures at different positions within the leader region (Fig. 1). The first mutant mRNA has a strong stem-loop structure composed of G and C residues immediately upstream of the AUG start codon (mRbcN GC2). The second has the strong stem-loop located 10 residues 5' to the start AUG codon (mRbcN GC10), while the third construct has the strong stem beginning 33 residues 5' to the start codon (mRbcN GC33).
The presence of these stem-loop structures was confirmed by primer extension analysis (Fig. 6A), and the predicted secondary structures of these mRNAs are shown in Fig. 6B. Mapping mRbcN GC2 gave strong reverse transcriptase stop signals at positions -1 to -7. These stop signals presumably arise from the strong stem introduced at this position. Cleavage of mRbcN GC2 with RNase T2 resulted in strong signals from positions -10 to -14. Since RNase T2 preferentially cuts residues in single-stranded loops, these results agree with the introduction of the predicted stem-loop structure. RNase V1 provided evidence for the 5'-side of the stem-loop by giving a signal from the C residue at position -17. The hairpin structure in mRbcN GC2 is immediately upstream of the AUG codon and may actually encompass the start codon. As expected, this message is practically inactive in initiation complex formation (Fig. 7). Presumably, the strong secondary structure present prevents 30 S ribosomal subunits from gaining access to the start codon.
The second construct places the same strong stem-loop structure 10 nucleotides 5' to the AUG codon (mRbcN GC10). The secondary structure of this mRNA was probed by primer extension analysis (Fig. 6A), and the results of this analysis combined with the analysis by the MFold program are shown in Fig. 6B. Reverse transcription of mRbcN GC10 gave a strong stop signal at position -10. This stop appears to dominate the analysis of the mRNA regardless of cleavage with RNase. This stop signal presumably arises from the base of the G/C-rich stem-loop at position -10. The activity of mRbcN GC10 in initiation complex formation is significantly reduced (Fig. 7), and mRbcN GC10 has only ~20% of the activity seen with the wild-type mRbcN message. This reduced activity is presumably due to steric hindrance created by the stem-loop structure on the mRNA, which would interfere with the access of the 30 S subunit to the start codon. The stem-loop in this mRNA would encompass a significant portion of the normal ribosome-binding site, which usually spans positions -20 to +15.
Finally, a mutant of mRbcN was prepared in which the hairpin structure was moved between positions -33 and -52, which should lie outside of the ribosome-binding site (mRbcN GC33). Primer extension analysis of mRbcN GC33 (Fig. 6A) gave a strong stop signal for reverse transcription at position -33, presumably due to the stem-loop that had been introduced into the RNA. RNase T2 probing gave signals from residues -41 and -42, which are expected to be part of the loop region. Finally, RNase V1 gave a signal at position -50 on the 5'-side of the predicted stem. The ability of mRbcN GC33 to participate in initiation complex formation was examined (Fig. 7). Interestingly, the mRbcN GC33 mRNA has essentially the same activity as the wild-type mRbcN. This observation indicates that a stem-loop >30 residues 5' to the start codon is too far away to interfere with its accessibility to the small subunit. Data provided in the accompanying paper (48) clearly indicate the full-length 55-base leader is important for maximal activity in initiation complex formation. However, only ~30 nucleotides upstream of the AUG start codon need to be single-stranded. The remainder of the mRNA can be weakly structured (wild-type RNA), unstructured (Fig. 3), or in a strong secondary structure (Fig. 7).
Effects of Downstream Sequences on Initiation Complex Formation with mRbcN-In prokaryotic systems, the presence of an additional sequence element located immediately downstream of the AUG codon has been suggested to enhance the efficiency of translational initiation (44, 45). This downstream sequence is believed to hydrogen-bond to the 16 S rRNA present in the 30 S ribosomal subunit. Computer analysis of the nucleotide sequence of mRbcN just downstream of the start codon indicates the presence of possible base pairing of residues 459-465 of the 16 S rRNA and nucleotides +7 to +13 (CCUCAAA) of mRbcN. In addition, residues +9 to +15 (UCAAACU) of mRbcN can potentially hydrogen-bond to the sequence from positions 12 to 18 or from positions 971 to 977 of the small subunit ribosomal RNA. A mutant in which this hypothetical hydrogen bonding was disrupted (mRbcN DB; Fig. 1) was prepared. The activity of mRbcN DB in initiation complex formation is essentially the same as that observed with mRbcN (data not shown). This observation indicates that no specific primary sequence information resides within the ribosome-binding site downstream of the start codon.

DISCUSSION

The results presented here and in the previous paper (48) suggest that there is no direct primary sequence information used to specify the translational start site on the rbcL mRNA. The translational start site of this mRNA has the AUG start codon positioned in an unstructured or very weakly structured region of the message, making the AUG codon accessible to the 30 S subunit. Introduction of strong secondary structure elements close to the AUG codon reduces its activity in initiation complex formation significantly. These results argue that the only major determinant defining a particular AUG as the start codon in this message is its presence in an accessible region of the RNA. The rbcL mRNA is representative of one class of E. gracilis chloroplast mRNAs that lack Shine-Dalgarno sequences and that appear to have little structure in the translational start site. The results observed here can thus probably be extrapolated to the initiation regions of about half of the mRNAs in the chloroplasts of E. gracilis.
Numerous observations from prokaryotic systems also argue that secondary structure present in the translational initiation region blocks access to the start site and reduces the efficiency of the message in translation (3, 22, 46). In contrast, the effect of RNA secondary structure is more complex in the eukaryotic cytoplasmic system. In the yeast Saccharomyces cerevisiae, a stem-loop structure with a predicted stability greater than -28 kcal/mol inhibited in vivo translation significantly, and a leader with a stem having a ΔG° of -14 kcal/mol showed only about one-third the normal activity (35). Creation of a hairpin structure (ΔG° = -30 kcal/mol) involving the AUG codon in the mammalian preproinsulin coding sequence did not reduce the yield of preproinsulin; however, a more stable stem-loop structure (ΔG° = -50 kcal/mol) reduced the yield significantly. Downstream stem-loop structures may actually improve the recognition of an AUG initiation codon in eukaryotes by reducing the speed with which the 40 S subunit scans the mRNA and by providing more time for recognition of the AUG codon.
The data presented here and in the previous paper (48) argue strongly that primary sequence information does not play a direct role in specifying the start site of mRbcN. Rather, the basic information required to direct the chloroplast 30 S ribosomal subunit to the start site is an AUG codon present in a highly unstructured region of the RNA. The 30 S subunit may recognize the single-stranded region of the mRNA by using a single-strand RNA-binding protein such as ribosomal protein S1 (15). Recently, a chloroplast S1-like protein has been identified and characterized from spinach chloroplasts (20, 47). In E. gracilis ribosomes, an S1-like protein has been detected by Western blotting using E. coli anti-S1 antibody (data not shown). Interestingly, the chloroplast S1-like protein from spinach preferentially binds poly(A), while the E. coli S1 protein binds most strongly to poly(U), although it will also bind poly(A) and poly(C) (15, 47). The 30 S subunit interacting with the selected single-stranded region of the mRNA via an S1-like protein might then locate the nearby AUG codon. In the presence of fMet-tRNA and initiation factors, the initiation complex will form at this position, and the interaction between the mRNA and the 30 S subunit will be stabilized by the codon-anticodon interaction between the mRNA and the initiator tRNA bound to the small subunit.
"year": 1994,
"sha1": "3019b44d7979f3de4ce98468a6bb4de226c1a919",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/s0021-9258(17)37314-3",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7cb34749fae1e1ff8c7f26c802b2574b88eb231a",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Effect of viral load on T-lymphocyte failure in patients with chronic hepatitis B
AIM: To investigate the peripheral T-lymphocyte subpopulation profile and its correlation with hepatitis B virus (HBV) replication in patients with chronic hepatitis B (CHB). METHODS: Distribution of T-lymphocyte subpopulations in peripheral blood was measured by flow cytometry in 206 CHB patients. HBV markers were detected with ELISA. Serum HBV DNA load was assessed with quantitative real-time polymerase chain reaction (PCR). The relationship between HBV replication and variation in peripheral T-cell subsets was analyzed. T-lymphocyte failure was significantly associated with viral replication level. The substantial linear dose-response relationship and strong independent predictive effect of viral load on T-lymphocyte subpopulations suggests the possibility of a causal relationship between them, and indicates the importance of viral load in the pathogenesis of T cell hyporesponsiveness in these patients.
METHODS:
Distribution of T-lymphocyte subpopulations in peripheral blood was measured by flow cytometry in 206 CHB patients. HBV markers were detected with ELISA. Serum HBV DNA load was assessed with quantitative real-time polymerase chain reaction (PCR). The relationship between HBV replication and variation in peripheral T-cell subsets was analyzed. CHB patients had significantly decreased CD3+ and CD4+ subpopulations and CD4+/CD8+ ratio and increased CD8+ subsets compared with uninfected controls; a similar pattern of these parameters was significantly associated with high viral load, presence of serum hepatitis B e antigen (HBeAg) expression, liver disease severity, history of maternal HBV infection, and young age at HBV infection, all with P < 0.01. There was a significant linear relationship between viral load and these parameters of T-lymphocyte subpopulations (linear trend test P < 0.001). There was a negative correlation between the levels of CD3+ and CD4+ cells and CD4+/CD8+ ratio and the serum level of viral load in CHB patients (r = -0.68, -0.65 and -0.75, all P < 0.0001), and a positive correlation between CD8+ cells and viral load (r = 0.70, P < 0.0001). There was a significant decreasing trend in CD3+ and CD4+ cells and CD4+/CD8+ ratio with increasing severity of hepatocyte damage and decreasing age at HBV infection (linear trend test P < 0.01). In multiple regression (after adjustment for age at HBV infection, maternal HBV infection status and hepatocyte damage severity), log copies of HBV DNA maintained a highly significant predictive coefficient on T-lymphocyte subpopulations, and was the strongest predictor of variation in CD3+, CD4+, CD8+ cells and CD4+/CD8+ ratio. However, the effect of HBeAg was not significant.
INTRODUCTION
Hepatitis B virus (HBV) is one of the most prevalent viral pathogens in humans, with almost a third of the world population having evidence of infection, and about 350 million chronically infected patients [1]. Chronic hepatitis B (CHB) is characterized by inflammatory liver disease of variable severity and is associated with a significantly increased risk of cirrhosis, liver failure and hepatocellular carcinoma [1,2]. Seventy-five percent of patients with CHB are Asian [3]. In China, > 120 million people are chronic carriers of HBV, and 40%-60% acquire HBV infection from their mothers [1,2]. For neonates and children younger than 1 year who acquire HBV infection perinatally, the risk of the infection becoming chronic is 90% [1,2].
The pathogenesis of persistent viral infection and hepatitis B is very complex. Both viral factors and the host immune response have been implicated in the pathogenesis and clinical outcome of HBV infection [4,5]. Apart from direct biological effects of viral variants, there is a growing consensus that the host immune response, especially the virus-specific T cell response, is the key determinant influencing the course of disease and the onset of liver disease [5,6]. Many investigators suggest the chronicity of HBV infection is caused by deficient cellular immune function, but the mechanism has not been defined [7][8][9]. For a non-cytopathic virus like HBV to persist, it must either overwhelm or not induce an effective antiviral immune response, or it must be able to evade it. Hepatitis B e antigen (HBeAg) may play an important role in the interaction of the virus with the immune system. Data from transgenic mice indicate neonatal tolerance to HBeAg is a crucial mechanism responsible for the lack of an antiviral immune response following mother-to-infant transmission [10,11]. Milich et al [12] have further demonstrated an immunomodulatory role of HBeAg in antigen presentation and recognition by CD4+ cells.
The relationship between HBV-specific T-cell response, HBV viral load and HBeAg expression in CHB is complicated by their close correlation and remains unclear. In China, where vertical perinatal transmission is the main route of transmission, most patients with CHB have become infected in the early years of life. The influence of age at infection and maternal HBV infection status on T-cell immune status and HBV replication is still not settled. The aim of the present study was to evaluate the peripheral blood T lymphocyte subpopulation profile and its correlation with HBV replication [13,14].
Enrollment of study subjects
The following criteria were fulfilled by all patients: (1) steady positivity for hepatitis B surface antigen (HBsAg) in the serum for at least 12 mo, to establish CHB; and (2) exclusion of other concomitant causes of liver disease (hepatitis C, D and HIV infection and alcohol consumption > 60 g/d), relatively rare liver diseases (autoimmune hepatitis and metabolic liver disease), and treatment with immunosuppressive or antiviral therapy for HBV infection within the 12 mo before entry. None of the patients was a drug user or had been exposed to hepatotoxins. Those who had liver cirrhosis were also excluded, since their long history of treatment and terminal disease state may have complicated the interpretation of the results.
One hundred individuals who were free of HBsAg were identified from those attending the outpatient service for a health check-up; 61 of the participants were male and 39 were female, with a mean age of 33.24 (SD 10.28) years. These served as the control group for comparison of T-lymphocyte subpopulations with those who had HBV infection.
Quantitative measurement of HBV DNA (viremia)
Serum HBV DNA load in patients was assessed with the real-time fluorescent quantitative polymerase chain reaction method (real-time PCR) using a Lightcycler PCR system (FQD-33A, Bioer, Hangzhou, China) with a lower limit of detection of about 1000 viral genome copies/mL. The handling procedures were performed in strict accordance with the reagent kit package insert (Shenzhen PG Biotech Co., Ltd., Shenzhen, China). The primer was provided in the kit, the reaction volume was 40 µL, and the reaction conditions were 37℃ for 5 min, 94℃ for 1 min, then 40 cycles at 95℃ for 5 s and 60℃ for 30 s. Results were considered abnormal when HBV DNA was > 1000 copies/mL.
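As background on how such instruments report copy numbers, the sketch below shows the generic standard-curve calculation behind real-time PCR quantification: threshold-cycle (Ct) values of known standards are fit to a line in log10(copies), which is then inverted for unknown samples. All numbers are hypothetical and are not taken from the kit used here.

```python
# Generic real-time PCR standard curve: Ct = m*log10(N) + b, fit from
# standards of known concentration, then inverted for unknowns.
# All values below are hypothetical, for illustration only.
import numpy as np

log_copies = np.array([3, 4, 5, 6, 7])         # log10 copies/mL of standards
ct = np.array([33.1, 29.8, 26.4, 23.0, 19.7])  # measured Ct values

m, b = np.polyfit(log_copies, ct, 1)           # slope near -3.3 at ~100% efficiency

def copies_from_ct(ct_value):
    """Invert the fitted standard curve to get copies/mL."""
    return 10 ** ((ct_value - b) / m)

print(f"Ct 25.0 -> {copies_from_ct(25.0):.2e} copies/mL")
```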
Peripheral blood T lymphocyte subsets measurement
Blood samples were collected in heparinized vacutainer tubes. Samples were processed with a Multi-Q-Prep processor (Coulter, USA) and thereafter analyzed by Epics-XL flow cytometry (FCM) (Coulter). Lymphocytes were analyzed using a gate set on forward scatter versus side scatter. Anti-human monoclonal antibodies CD3-PE-CY5/CD4-FITC/CD8-PE were purchased from Immunotech (USA). For each sample, detection was carried out using CELLQuest software (Coulter). The results were expressed as the percentages of CD3+, CD4+ and CD8+ cells found to be positive for the marker antigen in the total T cell population. The handling procedures were performed in strict accordance with the manufacturer's instructions.
Maternal HBV infection status (MH)
MH was confirmed according to the maternal presence of serum HBV markers and/or HBV DNA, documented on at least two occasions, at least 3 mo apart; or documented maternal death from HBV-related liver diseases such as CHB, HBV-related liver cirrhosis and/or hepatocellular carcinoma.
Age at HBV infection
In the past three decades in China, all children have been obliged to be tested for HBV markers when they first go to kindergarten and elementary school. Subsequent obligatory tests are carried out when they apply for university or a job. The results of these tests were obtained from medical records and interviews. We classified the age of the first positive test as < 8 years, 8-20 years and > 20 years old.
Statistical analysis
The initial sample size calculation came up with 50 subjects positive for HBV DNA and the same number negative. This provided the study with a statistical power of 80% at the 0.025 level of significance to detect a difference in T-cell variation values of 33 versus 38. However, to cover the problem of being potentially confounded by other variables, and to have enough subjects for stratifying levels of HBV DNA load to examine a dose-response relationship, 206 CHB patients and 100 controls were recruited.
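A rough re-derivation of that sample size can be sketched as follows. The common standard deviation is not stated in the text, so the value of 8 assumed below is purely illustrative, chosen because it reproduces approximately 50 subjects per group.

```python
# Two-sample t-test power calculation for detecting 33 vs 38.
# The SD of 8 is an assumption, not a value reported in the study.
from statsmodels.stats.power import TTestIndPower

effect_size = (38 - 33) / 8.0  # Cohen's d with an assumed common SD of 8
n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.025, power=0.80, alternative="two-sided"
)
print(f"~{n_per_group:.0f} subjects per group")
```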
Descriptive statistics were used to examine age, gender, serum HBV load, HBeAg status, ALT, AST, total bilirubin, age at HBV infection and maternal HBV infection status. The levels of T-lymphocyte subpopulations in normal individuals (HBsAg-negative) were summarized as means ± SD, to serve as a control reference. Effects of various independent demographic, clinical and serological variables on T-cell profile were analyzed only among HBsAg-positive individuals. In univariate analysis, breakdown of these profiles by individual independent variables was carried out. An independent t test was done for two-level independent variables and one-way ANOVA for variables with more than two levels. The relationship between HBV replication and peripheral T-lymphocyte subpopulations was analyzed by correlation analysis and ANOVA linear trend test. Finally, a multiple linear regression model was employed for multivariate analysis to assess the independent effects of variables on peripheral blood T lymphocytes, as sketched below. Variables yielding P ≤ 0.2 in univariate analysis were included in the multivariate analysis, and the models were refined by backward elimination, guided by the change in log likelihood of successive models. A final P < 0.05 was considered statistically significant. Computations were carried out with the aid of R software version 2.5.1 [15].
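The study itself used R; purely as an illustration of the multivariate model described above, the sketch below fits a multiple linear regression of a T-cell parameter on log10 HBV DNA load with categorical covariates, using synthetic data and Python's statsmodels. Variable names and coefficients are hypothetical.

```python
# Illustrative multiple linear regression: predict a T-cell parameter
# from log10 HBV DNA load, adjusting for covariates. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 206
df = pd.DataFrame({
    "log_hbv_dna": rng.uniform(3, 9, n),
    "age_at_infection": rng.choice([0, 1, 2], n),  # <8, 8-20, >20 years
    "severity": rng.choice([0, 1, 2], n),
})
# Synthetic outcome: CD4+ percentage falls with viral load and severity.
df["cd4"] = 45 - 2.0 * df["log_hbv_dna"] - 1.5 * df["severity"] + rng.normal(0, 3, n)

model = smf.ols("cd4 ~ log_hbv_dna + C(age_at_infection) + C(severity)", data=df).fit()
print(model.params)  # the log_hbv_dna coefficient is the adjusted effect
```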
Demographic characteristics and clinical features of CHB patients
The demographic, virological, serological and clinical characteristics of the patients are summarized in Table 1.
Over two-fifths of the CHB patients acquired the infection before the age of 8 years. Almost three-quarters had detectable levels of HBV DNA. Among these, the majority (57.7%) had > 10^7 copies/mL. Around half of the patients' mothers were HBV-positive. A little less than half were HBeAg-positive (46.6%). Among the various courses of CHB, all severe-CHB patients had detectable levels of HBV DNA, and the majority of severe-CHB patients had high viral loads.
Peripheral T-lymphocyte subpopulation composition in CHB patients
CHB patients had significantly decreased total CD3+ and CD4+ subpopulations and CD4+/CD8+ ratio, and increased CD8+ subsets, compared with uninfected controls, all with P < 0.001. Univariate analysis showed that T-cell failure was significantly associated with higher viral load, serum HBeAg expression, severity of liver disease, history of maternal HBV infection, and lower age at HBV infection (Table 2). The linear dose-response relationship between the level of T-lymphocyte subpopulations and copies of HBV DNA was also highly significant (linear trend test P < 0.001). A negative correlation existed between the levels of CD3+ and CD4+ cells and CD4+/CD8+ ratio and serum HBV viral load, whereas a positive correlation existed between the level of CD8+ cells and viral load, all with P < 0.0001. The correlation between T lymphocytes and viral load is shown in Figures 1 and 2. Furthermore, there was a significant decreasing trend of CD3+ and CD4+ cells and CD4+/CD8+ ratio with increasing hepatocytic damage; this was inverse for CD8+ cells. A similar pattern was also seen across age at HBV infection, all with a linear trend test P value < 0.01.
Linear regression predicting peripheral blood T-lymphocyte subpopulation from relevant parameters
In Table 3, linear regression models are separately summarized for CD3+, CD4+ and CD8+ cells and the CD4+/CD8+ ratio, which are the dependent variables. After adjustment for all independent variables listed in the table, serum HBV viral load was the key predictor for T-cell profile. The severity of liver disease reduced the number of CD3+ T lymphocytes, increased the number of CD8+ T cells, and decreased the CD4+/CD8+ ratio. Those who had infection at a young age had a lower CD4+ T cell count and CD4+/CD8+ ratio than those who acquired infection later in life. Maternal infection history and serum HBeAg expression had no independent effect on the T-lymphocyte profile.
DISCUSSION
This study demonstrated a disorder of cellular immune function in CHB patients. The level of T-cell dysfunction had a linear dose-response relationship with the load of HBV DNA. Furthermore, the study also illustrated that the strong independent effect of HBV viral load seemed to eliminate and/or weaken the effects of liver disease severity, maternal carrier status, early age of infection and HBeAg positivity on the impairment of T-cell function.
Figure 2: Peripheral T-lymphocyte subpopulations by serum HBV viral load level. On the figure, the marks "<1.0e+03", "e+03-e+05", "e+05-e+07" and ">1.0e+07" denote "< 10^3", "10^3-10^5", "10^5-10^7" and "> 10^7" copies/mL, respectively.

Our findings indicate that CHB patients have T-cell failure. The same finding has also been demonstrated previously, namely, that the chronicity of HBV infection is caused by a deficiency in cellular immune function [16][17][18][19][20], and hepatocyte damage is mainly caused by immunological injury [21][22][23][24][25][26][27][28][29]. However, the mechanism has not been defined [5]. Apart from direct biological effects of viral variants, there is a growing consensus that the host immune response, especially the virus-specific T-cell response, is the key determinant influencing the course of disease and the onset of liver disease [5,6,30]. The significant decrease in total T lymphocytes (CD3+ T) revealed that there is a lack of immunologically competent cells involved in cellular immunoreactivity against HBV infection. A lack of CD4+ T cells can impair CD8+ T-cell activity and antibody production [31], while the inability to mount a virus-specific CD8+ T-cell response results in a level of circulating virus that cannot be cleared by antibodies alone [32][33][34]. Activation-induced cell death (AICD) is related to a decrease in lymphocytes and functional defects. This phenomenon can cause decreased immune clearance. This may be an important reason for persistent infection with HBV. AICD in peripheral blood T lymphocytes in CHB has been demonstrated previously [35,36]. Thus, AICD is considered an important modulator in down-regulating the "burst" of responding T cells in patients with CHB [36].
Our results revealed that T-cell failure was significantly associated with viral replication level. The substantial linear dose-response relationship and the strong independent predictive effect of HBV DNA, but not of other variables, on T-lymphocyte subpopulations suggests the possibility of a causal relationship between them. However, the cross-sectional nature of our data did not allow us to identify the temporal direction of the causal relationship between these two variables. Mizukoshi et al [37] have suggested that antiviral therapy of persistently infected patients appears to increase the frequency of HBV-specific CD4+ T-cell responses during the first year of treatment. Boni et al [38,39] have reported that antiviral treatment can overcome CD8+ T-cell hyporesponsiveness in subjects with CHB, which suggests the T cells are present but suppressed. It has been reported by Pham et al [40] in 21 CHB patients that the ratio of CD4+/CD8+ liver-derived lymphocytes, but not of peripheral blood lymphocytes, appears to be related to the level of HBV replication, revealing a positive correlation with viral load. The evidence that an efficient antiviral T-cell response can be restored by antiviral monotherapy in CHB, concurrently with reduction of viremia, indicates the importance of viral load in the pathogenesis of T-cell hyporesponsiveness.
The strong independent effect of viral load on T-cell impairment and viral factors (viral variants) might explain the disappearance of the effect of other variables in multivariate analysis. Among our patients, the majority were characterized by young age at first HBV infection, maternal carrier status, and high serum viral load, especially the severe CHB patients. In addition to HBV DNA, HBeAg is also a serological marker of viral replication, which plays a crucial role in the chronicity of HBV infection and high viral load by inducing immunological tolerance to HBV in the fetus. The tolerance-inducing effect of HBeAg has been well characterized in mice [41][42][43] and likely contributes to the low level of core-specific T-cell responses present in HBeAg-positive CHB patients [4,5]. Clinical evidence supports the tolerogenic effect of HBeAg [5,44]. Also, viral mutations that abrogate or antagonize antigen recognition by virus-specific T cells have been reported in patients with CHB [45,46]. Although the results from univariate analysis in our study showed T-cell dysfunction was significantly related to HBeAg, the association disappeared in multivariate analysis. One possible reason is that some of the subjects were infected with pre-C stop codon mutation virus (pre-C/C mutant), which results in a loss of HBeAg. In these patients, therefore, viral replication may have persisted despite elimination of HBeAg and seroconversion to anti-HBe. While the loss of HBeAg appears irrelevant to the biology of the virus, it may play an important role in the interaction of the virus with the immune system. This may weaken the independent association between HBeAg and T-cell failure, such that the sample size in our study could not detect this magnitude of association. Moreover, those who had a history of maternal carriage usually acquired infection at a younger age, and a higher HBV viral load was detected in the majority of those who had infection at a younger age. In the same way, those who had severe liver damage were usually positive for maternal HBV carrier status and acquired infection in early life; thus, a high viral load was measured in these patients. This phenomenon suggests that infection from the mother and/or at a younger age predisposes to tolerance to HBV infection and thus a higher viral load.
Our study clearly showed that the severity of the liver disease was significantly associated with functional disorder of T-lymphocytes, and the effect was independent of viral load. Hepatocyte damage may also be correlated directly with T-cell failure, rather than through the level of viral replication. Previous studies have suggested that hepatocyte damage is mainly caused by immunological injury [9,[21][22][23][24][25][26][27][28][29][30]. HBV is a typical non-cytopathic virus that can induce tissue damage of variable severity by stimulating a protective immune response that can simultaneously cause damage and protection, killing an intracellular virus through the destruction of virus-infected cells [5]. Therefore, immune elimination of infected cells can lead to the termination of infection when it is efficient, or to a persistent necroinflammatory disease when it is not [47]. Destruction of infected cells, however, is not the only mechanism implicated in the elimination of intracellular virus, as demonstrated by studies carried out in animal models of HBV infection and in human hepatitis B, which demonstrate the importance of cytokine-mediated, non-cytolytic mechanisms of anti-viral protection [20][21][22][23][24][25][26][27][28][29][30][31].
The strength of this study lies in its large sample size and the measurement of T-lymphocyte subpopulations using modern advanced FCM technology and of viral load with quantitative real-time PCR. A limitation of this study is that the specificity of T-lymphocyte subpopulations and liver-derived T-lymphocytes were not explored concurrently. Although a strong relationship between T-lymphocyte subpopulations and viral load was illustrated, further studies are needed to confirm the causal relationship between them.
Our results, which suggest high viral load contributes to functional impairment of T cells in CHB patients, have practical implications for understanding the pathogenesis and control of persistent viral infection and of disease progression and prognosis. This is because patients with CHB are at risk of persistent viral infection that leads to liver failure, cirrhosis and even hepatocellular carcinoma [1,2]. We should take into account effective intervention strategies such as anti-viral and/or immunotherapy to prevent progression and long-term consequences. Inhibition of viral replication with agents such as lamivudine may enhance the likelihood that therapeutic stimulation of the T-cell response will induce HBV antigen seroconversion, ultimately leading to recovery from disease. Further clinical studies are needed to explore this possibility in persistently HBV-infected patients.
In conclusion, we found a strong, independent predictive effect of viral load on T-lymphocyte subpopulations, which suggests a causal relationship between viral load and T-cell failure. T-cell dysfunction might contribute to viral persistence. HBV establishes persistent infection mainly by vertical transmission from HBV-infected mothers to neonates, and the immunomodulatory effects of HBeAg might play an important role in this setting. High viral load may be one important factor that contributes to T-lymphocyte failure, and is more important than HBeAg in this regard. Clearly, additional studies are required to better understand the complex host-virus interactions that determine the persistence and outcome of HBV infection.
Background
HBV infection is a global public health problem. Infection with hepatitis B virus (HBV) leads to a wide spectrum of clinical presentations, ranging from an asymptomatic carrier state to self-limited acute or fulminant hepatitis to chronic hepatitis with progression to cirrhosis and hepatocellular carcinoma. However, the pathogenesis of persistent viral infection and hepatitis B is very complex and has not yet been clarified. Generally, it is not HBV itself that damages hepatocytes directly; rather, damage results from dysfunction of cell-mediated immunity. The outcome of HBV infection depends upon the balance between the development of immunity (leading to virus elimination) and tolerance (leading to chronic viral persistence).
Research frontiers
The outcome of infection and the pathogenesis of liver disease are determined by viral and host factors, which have been difficult to fully elucidate because the host range of HBV is limited to humans and chimpanzees. The pathogenesis of liver disease and the interaction between virus and host remain research hotspots in this field.
Innovations and breakthroughs
The pathogenesis of, and the correlation between, functional disorder of cellular immunity and viral replication level remain unknown. In our study, peripheral T-lymphocyte subpopulations of a large sample of chronic hepatitis B (CHB) patients were measured using advanced flow cytometry, and viral load was measured with quantitative real-time polymerase chain reaction (PCR). The results suggest that T-lymphocyte failure was significantly associated with viral replication level. The substantial linear dose-response relationship and the strong independent predictive effect of viral load on T-lymphocyte subpopulations suggest a close causal pathway between them, and indicate the importance of viral load in the pathogenesis of T-cell hyporesponsiveness in these patients.
Applications
The results, which suggest that high viral load contributes to functional impairment of T cells in CHB patients, have practical implications, because understanding the immune response to HBV infection is useful for developing appropriate therapeutic strategies for controlling viral hepatitis and disease progression, as well as for improving current knowledge regarding the prognosis of persistent HBV infection. In addition, it may become possible to predict the variation of T-lymphocyte subpopulations in peripheral blood by measuring serum viral load in chronic HBV-infected patients.
Terminology
CD4+ T cells, classically referred to as helper T cells, are required for the efficient development of effector cytotoxic/suppressor CD8+ T cells and of B-cell antibody production; they play an important role in HBV infection by secreting Th1 cytokines that down-regulate HBV replication and by promoting CD8+ T-cell and B-cell responses. CD8+ T cells clear HBV-infected hepatocytes through cytolytic and non-cytolytic mechanisms, reducing the levels of circulating virus, while B-cell antibody production neutralizes free viral particles and can prevent (re)infection. CD3+, CD4+, and CD8+ cells are major functional subgroups of T cells and play an important role in the response to HBV infection; they reflect the state of cellular immune function and immunoregulation and are usually regarded as a valuable index for forecasting changes in patients' immunity.
"year": 2008,
"sha1": "d32db593403efeaf3632f32bf60342b74010d707",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.14.1112",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "198ed4aacd0a3b0e54657d5b02b3c8e6f3a4ece5",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
Staff sizing as a mechanism of efficiency: An application of a non-parametric method
The concept of staff sizing aims to estimate or determine the ideal or optimal number of people needed to perform organizational activities, an approach that can be considered a trend. Models for staff sizing thus constitute a fundamental part of accurately identifying staff allocation. The objective of this paper is to propose a framework for decision-making based on Data Envelopment Analysis (DEA) to estimate staff sizing in a Brazilian entity responsible for promoting and supporting the competitiveness and sustainable development of micro and small enterprises. Data collection was carried out at the headquarters of the entity, located in Brasilia. First, interviews were carried out with managers in order to assess qualitatively the staffing needs of each service unit. Second, reports from 21 units were analyzed quantitatively in order to determine their efficiency in terms of staff sizing. The results found through DEA show that only three service units can be considered efficient in terms of staff sizing; thus, there is a need to reduce the number of workers in most of the organization. In this context, the contributions for the entity lie in the discussion on the creation of quantitative indicators and the adoption of an efficiency analysis, which can be used to better estimate or determine the optimal quantity of staff. This paper innovates by proposing a quantitative and systematized approach, DEA, to estimate staff sizing.
PUBLIC INTEREST STATEMENT
Models for staff sizing constitute a fundamental part of accurately identifying staff allocation, giving relevant information to managers in the decision-making process. This paper proposes a framework based on Data Envelopment Analysis to estimate staff sizing in a Brazilian entity that promotes and supports the competitiveness and sustainable development of micro and small enterprises. Data collection was carried out in the headquarters of the entity in Brasília, through interviews with managers and documental analysis of reports of 21 units operating throughout Brazil, to determine their efficiency in terms of staff sizing. The results can be useful for practitioners and researchers from developing countries, such as the Brics, Caribbean, and Mercosul ones. In the current Brazilian scenario of crisis, there is even greater attention to the need to promote better management of personnel expenses, in line with strategic planning.
Introduction
In the early 1980s, a new vision emerged regarding the role of people management within organizations. From the United States of America, the Strategic Human Resources Management (or simply Strategic Management) model is now recognized as necessary to achieve competitive results in the short and long terms (Lacombe & Tonelli, 2001). There has been a process of recognition that planning requires a systematic process of assessing the future needs of human capital, relating the composition and profile to the definition of the actions that can make it possible to reach the needs of an institution (Devana, Fombrun, & Tichy, 1984).
Although the concept of workforce planning has existed for a long time, there are few studies on the subject, mainly because there are few tools to support decision-making. In addition, it is important to point out that these tools are usually based on intuition and experience, making them more susceptible to inaccuracies and errors (Di Francesco, Díaz-Maroto Llorente, Zanda, & Zuddas, 2016). Some papers have been published in this context, of which a few can be highlighted. Gunter (2008) researched the workforce planning policy "School Workforce Remodeling" in England. Celik, Xi, Xu, and Son (2010) proposed a framework to help project managers develop an ideal workforce with assignments that consider the short- and long-term aspects of projects in the Kuali Foundation, a nonprofit organization in the USA. Goodman, French, and Battaglio (2015) evaluated the use of workforce planning by municipalities in the USA. Di Francesco et al. (2016) proposed a general mathematical programming model for the short-term workforce planning problem. de la Torre, Lusa, and Mateo (2016) proposed a mixed-integer linear programming (MILP) model for dealing with long-term staff composition planning in public universities. Chen, Lin, and Peng (2016) proposed a two-stage method to determine an integrated medical staff allocation and staff scheduling problem in uncertain environments. Nayebi, Mohebbifar, Azimian, and Rafiei (2017) studied the number of nursing staff in an emergency department of a general training hospital in Qazvin, Iran. Kroezen, Van Hoegaerden, and Batenburg (2017) discussed the results of the Joint Action on Health Workforce Planning and Forecasting (JAHWF, 2013-2016) on all current challenges in health workforce planning. It seems relevant to note that some of the papers related to workforce planning propose a framework or methodology to help managers in decision-making, and that papers related to the medical sciences are very common. To the best of our knowledge, research focusing on the problem of staff sizing using the efficiency concept with Data Envelopment Analysis (DEA) is still scarce. Banker (1984) showed that DEA can determine the optimal scale in a production process inside hospitals. Biørn, Hagen, Iversen, and Magnussen (2003) compared hospitals' relative efficiency. Braglia, Zanoni, and Zavanella (2003) studied the productivity of systems.
Therefore, the objective of this paper is to propose a framework for decision-making based on Data Envelopment Analysis (DEA), to estimate the staff sizing in a Brazilian entity responsible for promoting and supporting the competitiveness and sustainable development of micro and small enterprises in Brazil.
The relevance of the company chosen for this study is related to its scope and segment in which it operates in the country. The researched company originated in the public sector and, since 1990, it has been a private and non-profit entity, maintained by the largest companies in the country, in accordance with the law. Considering that in Brazil, most of the formalized companies are micro and small, generating a considerable number of jobs, the role of this entity is essential.
Data collection was primarily carried out through interviews with managers from the headquarters located in Brasilia. It seems important to highlight that the company has service points located in 27 cities of the country, and that at its headquarters new strategies are planned and implemented as a pilot in one city that has 21 units, which were analyzed through DEA. In a second stage, the researchers had access to reports, which served as a basis for the quantitative analysis and the modeling process with DEA.
Although there is no consensus on how Strategic Management should be carried out, some practices already exist, among them the idea of privileging internal and more qualified recruitment (Devana et al., 1984). One of the directives of this new management is the allocation of employees who have expertise in the company and the training of those who do not have the same level of knowledge (Lacombe & Tonelli, 2001).
However, a diagnosis is necessary, taking into account the compatibility of the abilities of each individual with their functions and with the managerial skills required for strategic administration (Reis, Freitas, Martins, & Oliveira, 2015). Within organizational planning, the procedure of staff sizing, defined as the estimation of the ideal or optimal number of people needed to carry out some organizational process, has consequences for how one might compose the workforce (Ostroff & Schmitt, 1993).
In addition, planning strategies can be classified as: (a) job analysis; (b) professional profiles; (c) staff sizing; and (d) scheduling (Sinclair, 2004). In this context, several workforce planning tools can be used: succession plans (Cappelli & Keller, 2014); talent management (Cappelli & Keller, 2014); sustainable human resource management (Kramar, 2014); and staff sizing (Tachizawa, 2015). According to Tachizawa (2015), there is no complete methodology in the literature that gives directions and definitions about the size of the staff/workforce within companies.
In this paper, the strategy adopted is staff sizing, which can be categorized as a strategy of workforce planning. The results found through DEA showed that, among the 21 service units analyzed, only three can be considered efficient in terms of staff sizing. Thus, the results point to the need to consider reducing personnel in most of the organization. In this context, the contributions for the entity lie in the discussion on the creation of quantitative indicators and the adoption of an efficiency analysis, which can be used to better estimate or determine the optimal quantity of staff. The innovation of the paper lies in the adoption of a quantitative and systematized approach, DEA, to estimate staff sizing, and in the originality of the case study. The results can be useful for practitioners and researchers from developing countries, such as the Brics, Caribbean, and Mercosul ones. In the current Brazilian scenario of crisis, there is even greater attention to the need to promote better management of personnel expenses, in line with strategic planning, because control strategies allow properly adjusted, strategically aligned management.
Workforce planning and sizing of operational and strategic activities
The workforce sizing result-driven approach has transformed organizational spaces, highlighting the performance of employees and the development of their skills within the company, due to the intensification of competitiveness. Following this trend that seeks efficiency and valorization of employees, models for staff sizing constitute a fundamental part of accurately identifying personnel allocation needs (Vianna et al., 2013).
In the same way as with more common practices, such as the management of competencies, staff sizing should be structured based on the strategic planning of the organization (Carbone, Brandão, & Leite, 2009). In contrast, while competency management aims to identify the competencies needed to achieve the formulated strategy (Brandão & Bahry, 2005), sizing determines the ideal number of workers for carrying out activities and tasks within each organizational unit, so that possible relocations or layoffs can be made (Vianna et al., 2013). Therefore, an adequate staff sizing allows not only the hiring of staff for effective positions, but also, for example, an adequate hiring of temporary staff or trainees (Davis-Blake & Uzzi, 1993).
In a context in which strategic personnel planning is a central tool, it is necessary to conduct a suitable strategy for estimating the size of the staff of an organization. Thus, this process should focus on two main results: (1) subsidize decisions on movements, assignments, promotions, and dismissals; and (2) identify the potentialities and interests of employees in face of the needs of the institution's units, enabling reallocations. Ideally, in the same way that performance evaluations are carried out, the implementation of a sizing should be constantly improved and should support fluid and constant decision-making. Finally, the objective of this study is to present a structured methodology for sizing the production capacity of a private company, using a non-parametric method of efficiency analysis known as Data Envelopment Analysis (Charnes, Cooper, Lewin, & Seiford, 1997).
For the reasons presented above, it is essential for both public and private companies to prepare and guide people (Vieira, 2016). The process of sizing productive capacity must be considered together with the acquisition of technology, the restructuring of the productive process, and the structuring of new management models. Other aspects to be taken into consideration are the processes of improvement, qualification, and development of individuals, as well as hiring and firing, all of which present organizational dilemmas of planning and adequacy. Thus, this assessment allows decision-makers to know the gaps and surpluses in relation to the different career levels and the different levels of complexity of the essential processes (Reis et al., 2015).
In this context, staff sizing seeks, in the short, medium, and long term, to ensure that the workforce is suitable to the demands, goals, and reality of each institution. The basic assumptions of human resource planning will define the guidelines and decisions regarding the feasibility of planning. The identification of the number of people needed to perform a given job has been pursued since Taylor in the last century and continues to be a challenge for personnel management professionals, partly because the measured reality is complex and because uncertainties of the context, such as opportunities and threats in the labor market, influence decisions about movements within institutions (Mascarenhas, 2008).
Staff planning is an aspect normally dealt with by Human Resources or Personnel Management areas, based on mathematical modeling. Planning the staff involves three main types of flows: (i) recruitment; (ii) internal staff flows between different categories of employees (promotion flows, among others); and (iii) wastage (Guerry, 2011). Workforce planning is a process designed to ensure that an organization is prepared for its current and future needs, having the right people in the right places at the right times (Jacobson, 2010).
This concept entails a systematic evaluation of the content and composition of an organization's workforce to determine the actions that need to be taken to respond to current and future demands and to achieve organizational goals and objectives pertaining to personnel management (Jacobson, 2010). The International Personnel Management Association (IPMA, 2002) defines planning as a methodical process that analyzes the current staff/workforce and determines future staff/workforce needs, so that the organization can fulfill its mission, goals, and objectives. The five strategic areas commonly covered by workforce planning are: personnel, infrastructure, organizational design, organizational culture, and risk management (Goodman et al., 2015).
Studies related to staff/workforce planning
Some studies on staff/workforce planning have been developed in the last few years, most of them with a qualitative approach. In England, a workforce planning policy, "School Workforce Remodeling" (Gunter, 2008), was implemented, under which schools were to expand the number and change the role of what was traditionally known as support staff.
The use of workforce planning by municipalities in the USA was evaluated by Goodman et al. (2015). That research shows that certain aspects of staff/workforce planning, such as employee retirement, long-term recruitment and retention, and training and development, have been integrated into the human resources functions of various municipalities. Lewis and Yoon Jik Cho (2011) examined effects on turnover, institutional memory, diversity, and educational qualifications, using the American Community Census and surveys developed through the US Office of Personnel Management (OPM) and the US MSPB, from 1989 to 2007. They found that the retirement of civil servants, especially in leadership positions and critical occupations, has led to negative results related to these issues.
A framework to help project managers develop an ideal workforce with assignments was proposed by Celik et al. (2010), considering the short- and long-term aspects of projects that must be completed through multi-organizational social networks. The study was developed based on the case of the Kuali Foundation, a nonprofit organization in the United States, which was created to facilitate and coordinate community-based software development partnerships between major universities and colleges. A mathematical programming model for the short-term workforce planning problem was presented by Di Francesco et al. (2016), which can be used for any shift setup. Nayebi et al. (2017) determined the required number of nursing staff in an emergency department of a general training hospital in Qazvin, Iran, in 2016. The authors used the WISN method, which combines the judgment of specialists with the measurement of activity patterns to define workload standards for each category of workforce. Kroezen et al. (2017) discussed the results of the Joint Action on Health Workforce Planning and Forecasting (JAHWF, 2013-2016) on all current challenges in health workforce planning: terminology, data availability, model-based planning, and future-based planning and collaboration. Traditional workforce planning models determine the size and experience levels required to support production and are generally based on the aggregate planning model from Operational Research. In this framework, a sufficient level of workforce must be maintained to provide the services of the system under study; however, in order to maintain a balanced workforce, one should try to minimize the use of overtime, which is usually included in the objective function (Jennings & Shah, 2014). In such a framework, staff sizing can be used as a strategy to estimate or determine the quantity of workforce needed to accomplish a given task or activity; a minimal numerical sketch of this kind of objective is given below, and the staff sizing concept is detailed in the next section.
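As a minimal sketch of the aggregate-planning idea above, the following Python/scipy program chooses a regular headcount and an overtime level to cover one period's demand at minimum cost, with overtime penalized in the objective. All figures (demand, costs, caps) are invented for demonstration, headcount integrality is ignored, and nothing here reproduces the specific models in the works cited.

```python
# Illustrative one-period aggregate-planning LP (hypothetical numbers):
# choose regular headcount W and overtime hours O to cover demand_hours,
# minimizing cost with overtime penalized in the objective function.
from scipy.optimize import linprog

demand_hours = 3200.0      # task-hours required in the period (assumed)
hours_per_worker = 160.0   # regular hours per worker per period (assumed)
cost_worker = 4000.0       # cost of one regular worker per period (assumed)
cost_overtime = 40.0       # cost per overtime hour, above the ~25/h regular rate
max_overtime = 400.0       # cap on total overtime hours (assumed)

c = [cost_worker, cost_overtime]      # minimize cost_worker*W + cost_overtime*O
A_ub = [[-hours_per_worker, -1.0]]    # coverage: h*W + O >= demand  ->  -h*W - O <= -demand
b_ub = [-demand_hours]
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, max_overtime)])
print(f"workers = {res.x[0]:.1f}, overtime hours = {res.x[1]:.1f}")
```

Because the assumed overtime rate exceeds the implicit regular hourly cost, the solver covers the whole demand with regular staff (here 20 workers and no overtime), which is the balance such models seek.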
Staff/workforce sizing
Organizations that can predict the need for personnel both quantitatively and qualitatively in an environment of uncertainty gain a great competitive advantage (Marconi, 2003). The adequate staff/workforce sizing is a systematic and continuous process of assessing the current and future needs of human resources, regarding the ideal number of workers and the composition of employees' profiles (Tachizawa, 2015). The result of a sizing indicates the correct number of people with the appropriate skills, competencies and aptitude to perform the correct assignments at the right time and place (Rodrigues, Oliveira, & Lima, 2015).
The definition of personnel composition and the strategic approach to personnel management are understood to encompass the pursuit of competitive advantage, planning, coherence between policies, employment practices and business strategy, and decision-making on aspects of the employment relationship at the highest hierarchical level. Managing people through strategic corporate actions results in alignment between actions and organizational goals. In this perspective, the human resources management model focuses on elements such as the valuation of human talent, attraction and retention of people, motivation and mobility, diagnosis, information management, and integrated policies (Lacombe & Chu, 2008).
It is worth highlighting a relevant point among the advantages of staff sizing. After planning the necessary framework and attracting the professional relevant to the vacancy, it is necessary that personnel management be prepared to support their development and post-selection adaptation. From a strategic point of view, being clear about an organization's value chain is the basis for internal analysis that guides the planning process by recognizing available resources as well as vulnerabilities. In this context, the staff-sizing model is a technique to identify the skill gaps necessary to achieve strategic objectives and, thus serve as input to the strategies for the provision and development of the staff. The constant evaluation of human capital allows even greater synergy between the organizational values and its employees, insofar as it shows the expected competencies, experience and degree of commitment (Morrow, Jackson, Disch, & Mood, 2014).
Another relevant point in workforce sizing refers to the interrelations among productive capacity, the execution time of demands, employee absence, service reliability, complexity, and the percentage of productive capacity. Issues related to labor supply are indicated as preponderant factors for the calculation of an optimal number of employees. Thus, the classic problem of productive capacity planning, as described by Koutsopoulos and Wilson (1987) and by Hickman, Koutsopoulos, and Wilson (1988), concerns the unit cost of employees and absenteeism, which impact the institution's budget. The result of the problem comprises the size and composition of the staff/workforce over the planning horizon.
To address this problem of inefficiency, economic theory suggests models in which the level of work is defined at the employee's hiring. In this case, the decision variables of the model concern the regular workforce available to perform the job, which consists of employees whose responsibility is the execution and fulfillment of goals and tasks. If the total cost per hour of work were lower for a worker who works more hours than for an employee who works regular time, then the solution would be to rely heavily on overtime or to hire new employees to fill missing jobs. This solution is common in some sectors of the Brazilian economy, mainly due to the high cost of the additional benefits applied to employees who work on a regular basis.
The greater the reliability of working hours, the fewer the situations in which employees must work extra hours during the week, and the more likely the institution is to fulfill its goals. Whenever this does not occur, service reliability will be affected, which in itself is an important determinant of the results achieved by the institution. Even when workforce is available, replacement employees may not be familiar with the routines and designated processes, resulting in delays in service and lack of reliability. On the other hand, some employees may be more likely to decrease their effort after reaching a weekly wage limit, and this level can be reached after a few hours of work. Other employees may reduce their level of effort for other reasons, such as absenteeism. For this paper, we considered factors such as complexity of work, effort employed, and percentage of productive capacity, as defined by Nicholson (1977), Fichman (1984), and Dilts, Deitsch, and Paul (1985).
Firstly, the decrease in effort is an approach-avoidance behavior. This research is based on this premise (Beehr & Gupta, 1978; Gupta & Jenkins, 1982), as is most of the work based on job satisfaction (Steers & Rhodes, 1978). Occupational stress can also be included in this category. Secondly, the decrease in effort is the result of a decision process. The expectancy theory (Vroom, 1964) and some attitude models (Ajzen & Fishbein, 1977) are decision models in which the most attractive action or object is chosen. In the idealized model, the person decides, on a given day, whether or not to participate in the work, impacting productivity. According to economic theory, the utility maximization model and work-leisure trade-offs provide other useful examples for optimal labor basket composition (Allen, 1981; Chelius, 1981).
Lastly, reducing effort is a habit. The habit is implicit in the frequent observation that some workers are responsible for decreased production. Predicting on the basis of the past is consistent with the habit hypothesis but does not directly support it (Breaugh, 1981; Morgan & Herman, 1976; Waters & Roach, 1979). Thus, the management model is determined by the way in which a company organizes and guides the productive behavior of employees in order to minimize the reduction of allocated effort and thus achieve production goals following the strategic directions. It is important to point out that what distinguishes one management approach from another is its ability to propose innovations proper to its organizational maturity (Carneiro, 2000).
It is worth noting that employees maximize their utility, which consists of income and production, so that employees face both a budget constraint and a time constraint. The time constraint indicates that the total time in the period under consideration should equal the sum of the contracted number of work hours, overtime hours, leisure hours, sickness hours, and other situations described in the CLT (the consolidation of labor laws in Brazil) or in specific legislation (leisure hours include all activities that are not working time, for example family activities and duties, sleep, etc.).
Thus, products or services are the items that make up the unit's portfolio and reflect the unit's contribution to the achievement of organizational objectives and strategies. Such products or services may serve end customers, Federative Units, the Government, or internal customers. Outcomes are groupings of assignments that make up the services/products of the unit, and assignments are the set of activities necessary for an outcome to take place. An outcome can have multiple assignments in different amounts, and assignments should highlight "what to do" rather than "how to do". In this way, a macroprocess is defined as a set of processes executed in an orderly way, in one or more units, of wide scope and necessary to the achievement of organizational goals and objectives (Nicoluci, Ferreira, & de Mogi Mirim, 2012). The integration among the macroprocesses of an organization is fundamental for its competitiveness in the market. Every macroprocess defined must have a reason to exist, products and services generated, someone responsible for its execution, and customers and suppliers for its production line (Carpinetti, 2000).
Another relevant point for this research is the understanding and distinction of concepts presented in the production engineering literature, in which the distinction between the concepts of efficacy and efficiency is one of the most common (Ferreira & Gomes, 2009). The first concept deals with the simple accomplishment of a task: a work is efficacious if it reaches its objective. The concept of efficiency evolves from the concept of efficacy, adding that not only must the objective be achieved, but it also must be achieved with an optimal relation between inputs and outputs.
A widely used measure of efficiency is presented by Farrell (1957), known as "Farrell's efficiency", in which the level of efficiency of a company is measured by the distance between the observed production and what would be its optimal production. Among the many methodologies developed, DEA stands out as one of the main operational research techniques (Fethi & Pasiouras, 2010). The DEA technique is detailed as follows.
Efficiency of organizational sizing with DEA
DEA is an inferential analysis technique based on linear programming (Charnes et al., 1997). Its mechanism starts with the definition of a frontier of production possibilities and analyzes only technical efficiency, defined as the capacity to change the quantity of inputs given what is produced, or to change what is produced given what is consumed. This feature is especially convenient for services in general, given that, in this context, the distinction between input and output is not always well defined (Golany & Roll, 1989) and it is often not possible to change the quantity of inputs or outputs that are generated. In this analysis, the monetary value of the inputs and outputs is not necessarily considered.
The model requires neither functional relations between inputs and products nor single, unique measures of inputs and outputs (Ferreira & Gomes, 2009). Therefore, it can be labeled a nonparametric method of analysis, which gives it some advantages over parametric methods such as Stochastic Frontier Analysis (Afonso & St Aubyn, 2005).
DEA works through benchmarking comparisons (Cooper, Seiford, & Zhu, 2004) of individual cases against an exemplar case that lies closer to the production frontier. It is widely used in several contexts, such as comparisons of hospitals' relative efficiency (Biørn et al., 2003) and the productivity of systems (Braglia et al., 2003). Banker (1984) reported that DEA can determine the optimal scale in a production process inside hospitals, using as inputs nursing service hours, general service hours, ancillary service hours, and the number of beds. Many other studies used the number of employees as a factor related to performance efficiency (Mahajan, 1991; Yu & Lin, 2008).
From this, comparing the efficiency of procedurally similar organizational units within a company can support workforce sizing. Since the primary focus of sizing is to calculate the required number of people to perform a task, the number of people in each area was calculated; this indicator is the input for the DEA. It is then necessary to create an index that serves as the output, oriented to the organizational objective of generating more organizational value.
In order to identify and quantify the efficiency of equivalent organizational units in staff allocation, an applied, descriptive, and quantitative study was carried out. The paper is organized as follows: first, a brief introduction to the objectives is given; secondly, the theoretical framework is presented; then, the methods and techniques of the research, with its implementation stages and the applied mathematical model, are presented; finally, the results and discussion related to the methodological and theoretical procedures used are presented.
Method-Design and procedures
This research is classified as applied, descriptive, and quali-quantitative. The case study and the model were used as research strategies. In this sense, two main theoretical tools were used: the mapping of employees' technical skills (Barduchi & Miglinski, 2015); and a non-parametric method of efficiency analysis known as DEA. The case study was carried out in a private, non-profit entity under public law, maintained by resources from the largest companies in the country, proportional to the value of their payrolls.
The entity originated in the public sector, from which it was separated in 1990. It is responsible for promoting and supporting the competitiveness and sustainable development of micro and small enterprises in Brazil. The organization researched focuses on strengthening entrepreneurship and accelerating the formalization of the economy through partnerships with the public and private sectors, promoting training programs, access to credit and innovation, and encouraging partnerships.
This entity has been in operation throughout Brazil for over 45 years and has service points in the 27 federal units. In this paper, the unit researched is the national headquarters, located in Brasilia, which houses the top managers and usually implements practices that serve as a pilot for the other units of the federation. For the purposes of this paper, documents related to 21 units were judgmentally sampled and analyzed in order to determine their efficiency in terms of staff sizing in one federal unit during 2017. It is important to emphasize that this is the only entity operating in this segment with this scope in the country, and it has a total of 512 employees. The role of this entity is particularly important in Brazil, where micro and small companies represent 99% of the total number of existing establishments and account for around 40% of the paid employees in private companies.
First of all, data collection and the quantitative estimation of the efficiency of the units were carried out through interviews and documental analysis. The interviewees and documents were chosen under the criteria of accessibility and representativeness. For the collection of primary data, interview scripts were used to access information about the attributions of the products and services. The 21 managers were interviewed, and information about the work performed at the units was collected; the interviewees also described the position, competencies, and activities performed by each employee.
The interviews were aimed at identifying the specific technical competencies of each unit within the institution, in order to ensure that each unit was compared with units that have qualitatively similar goals. This yields greater precision in the evaluation process and, as a consequence, specifies the measurement model for the outcomes and expected results used in DEA.
Data analysis took place in two stages. First, a Productivity Capacity Index (PCI) was created per unit. For modeling the PCI, information was collected about the complexity of each task, the number of times the task is performed during a year, and how much dedication each employee devotes to it. Subsequently, this index was used for efficiency analysis with DEA, whose focus is the comparison between the DMUs. The whole modeling process and the relationships between variables are detailed below.
Data analysis: Productivity capacity index
As in the construction of any mathematical model or economic indicator of production, a series of assumptions was established for the construction of the PCI. The first assumption addresses the productive potential of the employees of the institution being sized: since all employees are at least at a medium educational level and more than 93% are at a higher level, the productive potential of each employee is considered constant. The second assumption refers to the outcomes used: the variability between units and occupational spaces indicates that outcomes do not all have the same nature or level of complexity (Klein, Dansereau, & Hall, 1994). In order to differentiate levels of complexity, the average PCIs of outcomes of the same complexity within the same unit were computed only after determining the overall index of the unit. Specific weights were not attributed to the complexity of each unit because complexity is not a continuous variable; for example, it would not be appropriate to multiply each outcome by a value representing complexity, since a low complexity cannot be described as double or half of a high complexity. Finally, it is assumed that PCI is on a logarithmic scale, according to production assumptions commonly found in the economics literature (e.g., Christensen, Jorgenson, & Lau, 1973; Coelli, Rao, O'Donnell, & Battese, 2005). In this way, the PCI of each unit is given by Equation (1), in which PCI is the Productivity Capacity Index, $C_{ik}$ is the amount of outcome $k$ of complexity $i$, and $P_{ik}$ is the number of individuals performing outcome $k$ of complexity $i$. With PCI used as the output and the number of people as the input, it is possible to compare the units with respect to their efficiency through input-oriented DEA and to estimate the efficient workforce quantity in each unit. It is worth mentioning that the method used makes it possible to calculate the comparative efficiency of production units, called Decision-Making Units (DMUs). Weak assumptions regarding the frontier of production technology and the axioms (Fare, Grosskopf, & Lovell, 1994) enable this approach to compare multiple inputs and outputs. The focus of the analysis depends on which organizational dimension one intends to act upon: inputs or outputs. For example, if the number of people is the input and PCI is the output, and the goal is to know which organizational unit is most efficient in the use of its personnel, then an input-oriented analysis should be carried out.
Data analysis: DEA
To use DEA, one must first establish which production frontier will be used. According to the theory, for each DMU, technology transforms non-negative inputs $x_k = (x_{k1}, \ldots, x_{kN}) \in \mathbb{R}^{N}_{+}$ into non-negative products $y_k = (y_{k1}, \ldots, y_{kM}) \in \mathbb{R}^{M}_{+}$. When the measure of technical efficiency is input-oriented, technology is represented by the set of production possibilities $T = \{(x, y): x \text{ can produce } y\}$, which includes all vectors of feasible inputs and outputs. The input correspondence of the DEA reference technology, characterized by constant returns to scale (CRS) and free disposability of inputs (strong disposability, S), defines the linear technology built from the observed combinations of inputs and outputs:

$$L(y \mid \text{CRS}, S) = \Big\{ x : \sum_{j=1}^{K} z_j \, y_{jm} \ge y_m, \; m = 1, \ldots, M; \;\; \sum_{j=1}^{K} z_j \, x_{jn} \le x_n, \; n = 1, \ldots, N; \;\; z_j \ge 0, \; j = 1, \ldots, K \Big\} \quad (2)$$

Here, the $K \times M$ output matrix collects the $M$ products observed in the $K$ DMUs, the $K \times N$ input matrix collects the $N$ inputs, and $z$ is the $1 \times K$ vector of intensity parameters. For each activity, the technical efficiency in the inputs, $F_i$, can be defined as

$$F_i(y, x) = \min \{\theta : \theta x \in L(y)\} \quad (3)$$

Consequently, this measure of radial efficiency varies between 0 and 1, and efficient production has a score equal to unity. Thus, $(1 - \theta)$ represents the proportion by which inputs can be reduced without changing production. Using the technology specified in (2), the input-oriented technical efficiency for unit $k$ can be calculated as the solution of the following linear programming problem:

$$F_k = \min_{\theta, z} \; \theta \quad \text{s.t.} \quad \sum_{j=1}^{K} z_j \, y_{jm} \ge y_{km}, \; m = 1, \ldots, M; \quad \sum_{j=1}^{K} z_j \, x_{jn} \le \theta \, x_{kn}, \; n = 1, \ldots, N; \quad z_j \ge 0 \quad (4)$$

The model presented implies strong constraints on production, namely the existence of constant returns to scale (an increase in the inputs causes a proportional increase in the outputs when a DMU is operating at its optimal capacity), and is known as the CCR model (Charnes, Cooper, & Rhodes, 1978) or CRS model. This assumption can be easily relaxed by modifying the constraints on the intensity vector $z$. Fare et al. (1994) extended this technique to include the existence of decreasing returns to scale (DRS) by adding the following restriction to problem (4):

$$\sum_{j=1}^{K} z_j \le 1 \quad (5)$$

Thus, the sum of the intensity variables cannot exceed unity, which implies that the different activities cannot be expanded infinitely. In the presence of variable returns to scale (VRS), the model proposed by Banker, Charnes, and Cooper (1984) considers that activities can neither be expanded without limit nor contracted at the origin: increasing returns (an increase in the inputs causes a disproportionally greater increase in the outputs, which occurs when a DMU operates well below its optimal capacity) hold at low production levels, and decreasing returns hold at the highest levels. In this model, called DEA-BCC (or DEA-VRS), efficiency indices are obtained by imposing equality in constraint (5). Figure 1 graphically represents the differences between the BCC and CCR frontiers for a two-dimensional DEA (1 input and 1 output): DMUs A, B, and C are BCC-efficient; DMU B is CCR-efficient; DMUs D and E are inefficient in both models.
One can understand, therefore, that the BCC model compares only DMUs that operate at a similar scale. As a consequence, the efficiency of a DMU is obtained by dividing its productivity by the highest productivity among the DMUs that have the same type of return to scale. Thus, the BCC frontier presents straight segments of varying angles, which characterizes a piecewise-linear frontier (Mello, Lins, & Gomes, 2002). Finally, the model analyzed here has an input orientation and a VRS assumption about the production frontier, in order to ensure, respectively, that the optimal number of employees needed to generate the current PCI is determined and that the DMUs are compared on the same measurement scale. A numerical sketch of this model is given below.
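To make the input-oriented DEA-BCC model concrete, the sketch below solves problem (4) with equality imposed in constraint (5), using scipy's linear programming routine. The single input (staff count) and single output (a PCI-like index) per DMU are invented for illustration; the paper's actual unit-level data are not reproduced.

```python
# Input-oriented DEA-BCC (VRS) sketch: hypothetical single-input (staff),
# single-output (PCI-like index) data for five DMUs.
import numpy as np
from scipy.optimize import linprog

X = np.array([[8.0], [12.0], [20.0], [15.0], [10.0]])   # inputs, K x N
Y = np.array([[6.0], [9.0], [12.0], [8.0], [8.5]])      # outputs, K x M

def bcc_input_efficiency(k, X, Y):
    """min theta s.t. sum_j z_j x_jn <= theta*x_kn, sum_j z_j y_jm >= y_km,
    sum_j z_j = 1, z >= 0 -- problem (4) with equality in constraint (5)."""
    K, N = X.shape
    M = Y.shape[1]
    c = np.r_[1.0, np.zeros(K)]                    # variables: [theta, z_1..z_K]
    A_in = np.hstack([-X[k].reshape(N, 1), X.T])   # sum_j z_j x_jn - theta*x_kn <= 0
    A_out = np.hstack([np.zeros((M, 1)), -Y.T])    # -sum_j z_j y_jm <= -y_km
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(N), -Y[k]]
    A_eq = np.r_[0.0, np.ones(K)].reshape(1, -1)   # VRS: sum_j z_j = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * K)
    return res.fun

for k in range(len(X)):
    theta = bcc_input_efficiency(k, X, Y)
    print(f"DMU {k}: efficiency = {theta:.3f}, staff target = {theta * X[k, 0]:.1f}")
```

For an inefficient DMU, $\theta \cdot x_k$ is the staff level the frontier suggests would sustain the current output, which mirrors the gap-versus-target reading used in the results below.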
The use of DEA allowed the estimation of the gaps between the current staff and the capacity to meet the current and future demands of each unit, based on quantitative criteria. The results, which point to a quantitative reduction relative to the current staff in 18 of the 21 units, are presented in Table 1.
In addition to identifying the efficient DMUs, DEA models allow the measurement and location of inefficiency and the estimation of a piecewise production function, which provides the benchmark for the inefficient DMUs. This benchmark is determined by the projection of inefficient DMUs onto the efficiency frontier. The way in which this projection is done determines the orientation of the model: orientation to inputs (when it is desirable to minimize inputs, keeping output values constant) or orientation to outputs (when it is desirable to maximize results without cutting resources). Table 2 shows the intervals of efficiency scores, the indication for reduction of the workforce, and the effort index in the institution studied. Regarding the efficiency indicator, 14.28% of the observations have efficiency scores equal to 1, that is, efficiency considered maximal, while 47.61% of the observations show average efficiency and 38.09% low efficiency. Regarding the indication for reduction, 85.71% of the observations indicate the need to reduce personnel, and 14.28% indicate maintenance of the labor force across the 21 units scattered throughout Brazil. Another point concerns the PCI: 52.38% of the results exceeded a score of 8, and 47.61% scored less than 8. The PCI is used to compare the units tested and, through the input-oriented non-parametric method, to estimate the efficient workforce quantity in each unit of the institution researched. The variation in production capacity was also assessed for the 21 units tested, as shown in Table 3.
The results showed that only one unit (4.76%) had an indicator higher than 1; 3 units (14.28%) indicated a need for expansion or reduction of productive capacity; and the remaining 17 units (80.95%) demonstrated the possibility of increasing productive capacity with the current staff for generating results. In total, 82.1% of the allocated individuals are able to expand their capacities if reengineering mechanisms are implemented, and only 1 unit demonstrated the need to reduce work intensity or increase productive capacity.
Discussion of the findings
The objective of this study was to present a structured methodology for the staff sizing of a private company, using two main theoretical tools: the mapping of employees' technical skills (Barduchi & Miglinski, 2015) and a non-parametric method of efficiency analysis known as DEA. The results showed that there is a need to reduce personnel in most of the organization. It is worth noting that, when using an efficiency estimation method, only in very specific contexts, such as the estimation of superefficiency (Avkiran, 2011), is it possible to observe a need to increase the staff.
The method employed works heuristically as follows: it observes the relation between inputs and outputs, defines where the optimal units should be, ranks the observed units, and defines the quantity of inputs (expressed by the number of people) to be reduced in the units that were not completely efficient. However, like every model that tries to explain reality, it presents a series of limitations and distortions in its results. Among the sources of distortion are the monthly variation that each outcome presents and the limited accuracy with which employees can express their percentage of dedication.
It is important to consider the need to enlarge or reduce the staff, since the comparison of the current situation with the desired situation through the DEA model allows decision makers to identify gaps and surpluses in relation to the staffing levels and the complexity of the essential processes. To fill these gaps, the organization may provide training, hire staff, or contract service providers (Neri, 1999). In the case of an excess contingent, the transfer of people, or even their dismissal as a mechanism of budget cuts, should be evaluated in order to maintain and consolidate the institution.
Some observations have to be made about the range of problems that can be solved using the proposed method. DEA diagnoses units that are below the possible target of efficiency because it establishes the most efficient units (on the frontier) and then compares the other units against them. This implies that it will always show only the need to reduce the size of the workforce. A solution to this problem may be the use of super-efficiency models (Seiford & Zhu, 1999), which can estimate units with efficiencies higher than 1; nevertheless, further problems can arise with the application of those models (Li, Jahanshahloo, & Khodabakhshi, 2007; Mehrabian, Alirezaee, & Jahanshahloo, 1999). The method proposed also uses only one composite indicator, which may cause information loss and possibly over- or underestimation (Smith, 1997). Future studies should draw on the sizing and production literature to build a model with several other indicators, which could support the understanding and estimation of the ideal number of workers (Rocha & Morais, 2009). Ideally, a multidimensional performance model encompassing several dimensions of performance, treated as synonymous with outcome and observable in any occupational position, would be implemented; the model proposed by Viswesvaran, Schmidt, and Ones (2005) offers subsidies for this type of measurement. The sizing model sought to simplify an extremely complex organizational reality. Despite the room for improvement, the results provided information that objectively guides decisions on allocation, dismissal, and the like.
The results led to the conclusion that there is a need to reflect on, structure, and develop methods that lead not only to quantitative outcomes but also promote discussion about the reconfiguration of the organizational processes that affect the real needs of workers. Institutions and the population they serve are within the scope of action of the categories involved; with will and support, the institutionalization of a work management policy can guide the implementation of managerial tools and, mainly, contribute to the understanding of the subjective factors behind the institution's actions.
The role of research in facing the challenges of labor management problems is thus fulfilled, as is the need to invest in the improvement and management of workforce information, both quantitatively and qualitatively. This must rely on an information system capable of providing strategic information to the employees of an institution, one that deserves full attention for its potential to facilitate and contribute to the recognition, identification, and analysis of the workforce.
Concluding remarks
Considering that thinking about human resources strategically is a trend, this research presents findings related to the importance of integrating qualitative and quantitative research in order to estimate or determine staff sizing. The main purpose is to provide relevant elements and information to managers in decision-making processes related to staff sizing. This is done here considering the particularities of the entity studied, which promotes competitiveness and supports micro and small companies in Brazil, a country whose economy and employment generation are concentrated in small businesses.
Nevertheless, it is important to point out some limitations of this study. First of all, the case study does not allow generalizations; although some results may be similar for micro and small companies in other developing countries, the particularities of Brazil must be considered. The application of the study only in the headquarters of the entity can also be considered a limitation, even though this unit is representative, since the strategies to be adopted in the other units throughout the country are first analyzed there. The use of DEA as the single quantitative approach to estimate staff sizing is a further limitation. Other limitations are the distortions caused by the monthly variation that each outcome presents and the limited accuracy with which employees can express their percentage of dedication. The method proposed also uses only one composite indicator, which may cause information loss and possibly over- or underestimation (Smith, 1997).
Future studies can conduct surveys with managers from all 27 service points of the studied entity throughout the country, in order to test the results of this research. Qualitative research with managers from other units can also be conducted in order to compare results with those obtained here. Further research can also investigate the reality of micro and small companies in terms of staff sizing in emerging economies and other developing and developed countries; in this sense, comparative studies can be developed to analyze the practices carried out in developing versus developed countries. Other approaches to estimate or determine staff sizing can also be used in this context, such as multicriteria decision-aid approaches, Markov chains, linear programming, and stochastic models. Finally, future studies should draw on the sizing and production literature to build a model with several other indicators, which could support the understanding and estimation of the ideal quantity of personnel (Rocha & Morais, 2009).
To the best of our knowledge, this is the first paper using DEA in the context of staff sizing. Thus, the main contribution of this paper lies in the adoption of a quantitative and systematized approach, DEA, to the staff sizing problem. Another contribution lies in the study of a large entity, the only one operating in this segment, supporting micro and small companies throughout Brazil. In terms of the case study, considering that Brazil has a significant role in Mercosul - Mercado Comum do Sul (the South Common Market, comprising Argentina, Brazil, Paraguay, and Uruguay), representing 75% of its economy and population, the results can be relevant to these countries by providing insights related to the promotion of micro and small businesses.
On the other hand, it is important to highlight that, when compared to the Brics countries (Brazil, Russia, India, China, and South Africa), Brazil presents more differences than similarities, mainly related to common interests, incentives to trade, infrastructure, and other factors. Nevertheless, the Brazilian reality in terms of small businesses can be compared with that of other developing countries, such as the Caribbean and Mercosul ones, so some practices related to staff sizing can also be useful for these countries. For practitioners, the framework presented in this paper can be useful as a staff sizing model for micro and small companies, considering the particularities of the case studied. For researchers, this paper can provide insights for further studies on the use of systematized staff sizing models. Other frameworks or approaches can be compared with the DEA approach in order to analyze its strengths and weaknesses, and policies regarding staff sizing aimed at micro and small companies in developing countries can also be a focus of investigation.
"year": 2018,
"sha1": "027154b567c54695da8debf437722b21bb289a4e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1080/23311975.2018.1463835",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "027154b567c54695da8debf437722b21bb289a4e",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Risk Factors of Chronic Rhinosinusitis After Functional Endoscopic Sinus Surgery
Background: Clinical data of 288 chronic rhinosinusitis patients were retrospectively analyzed to investigate the risk factors for clinical prognosis, aiming to provide clinical evidence for the diagnosis and treatment of chronic rhinosinusitis. Material/Methods: A total of 288 patients diagnosed with chronic rhinosinusitis in the Department of Otolaryngology of the First Affiliated Hospital of Xinjiang Medical University were recruited. Among all participants, 177 were male and 111 were female, aged from 22 to 83 years, (52±14) years on average. Subsequent follow-up was conducted to evaluate surgical efficacy. Influencing factors of clinical prognosis were analyzed by univariate and multivariate logistic regression analyses. Results: After functional endoscopic sinus surgery by the Messerklinger technique, 187 (64.9%) patients fully recovered, 72 (25.0%) presented with improvement, and 28 (10.1%) showed no improvement. Univariate logistic regression analysis revealed that 11 variables were correlated with the clinical prognosis of chronic rhinosinusitis. Multivariate logistic regression analysis demonstrated that age, history of allergic rhinitis, severity of dysosmia, history of nasosinusitis surgery, and long-term use of nasal decongestant were risk factors, whereas comprehensive therapy after surgery was a protective factor. Conclusions: More emphasis should be placed upon the factors associated with the clinical prognosis of patients with chronic rhinosinusitis following endoscopic sinus surgery, offering consolidated evidence for the prevention and treatment of chronic rhinosinusitis.
Background
Chronic rhinosinusitis describes a variety of chronic inflammatory conditions of the nasal mucosa and paranasal sinuses accompanied by nasal obstruction, rhinorrhea, dizziness, headache, and other symptoms [1]. Chronic rhinosinusitis is not only regarded as one of the most common chronic diseases in developed countries, but also imposes a substantial negative effect upon patient quality of life, daily work, and healthcare expenditure [2][3][4]. Along with the widespread application of endoscopic sinus surgery, the success rate of chronic rhinosinusitis surgery has been dramatically improved [5]. However, the risk factors affecting the clinical prognosis of chronic rhinosinusitis patients remain elusive, and maintaining and improving the clinical prognosis after endoscopic sinus surgery remains a focus for physicians. To address this challenge, clinical and epidemiological data of 288 patients diagnosed with chronic rhinosinusitis were retrospectively analyzed in the present study, aiming to identify risk factors affecting the clinical prognosis of chronic rhinosinusitis patients undergoing endoscopic sinus surgery.
Baseline data
A total of 288 patients with chronic rhinosinusitis who underwent endoscopic sinus surgery in the Department of Otolaryngology of the First Affiliated Hospital of Xinjiang Medical University were recruited as study subjects according to strict diagnostic criteria [6]. Exclusion criteria were nasal papilloma, acute nasosinusitis, chronic paranasal sinus fungal disease, acute episode of chronic rhinosinusitis, and paranasal sinus malignant tumors. Among all participants, 177 were male and 111 were female, aged 22 to 83 years (mean, 52±14 years). The duration of disease ranged from 1 to 22 years, with a mean duration of 6.2 (±1.9) years.
Surgical procedures
Prior to formal surgery, all patients received a computed tomography (CT) scan of the nose and nasal endoscopic examination to observe the intranasal morphology. Anti-inflammatory and hormonal medications were administered before surgery. All surgical procedures were performed by chief physicians under general anesthesia, using the Messerklinger technique in all cases. For patients with deviation of the nasal septum, septoplasty was performed simultaneously with endoscopic sinus surgery to straighten the nasal septum. Postoperatively, all patients were administered nasal decongestant, antibiotics, and glucocorticoid. Subsequent follow-up was conducted for 10 months to 3 years. Informed consent was obtained from all patients prior to functional endoscopic sinus surgery.
Relevant variables
The factors potentially affecting clinical prognosis included age; sex; smoking; alcohol consumption; history of asthma, allergic rhinitis, and nasosinusitis; presence of nasal polyps; nasal septum deviation; disease severity; severity of dysosmia; duration of nasal hormone use; duration of comprehensive treatment after surgery; and long-term use of nasal decongestants.
Efficacy evaluation
Healing was defined as alleviation of all clinical symptoms, a patent sinus ostium, and epithelization of the sinus cavity mucosa without purulent secretion. Treatment was regarded as effective when relevant symptoms were significantly mitigated; thickening, edema, and granulation tissue formation were documented in the sinus cavity mucosa; and a small quantity of purulent secretion was noted. Treatment was regarded as ineffective when clinical symptoms were not evidently alleviated; sinus cavity adhesion occurred after surgery; the ostium was narrowed or even closed; and signs of nasal polyps and purulent secretion were documented. Fully healed patients were assigned to the recovery group, and those with only improvement or ineffective treatment were allocated to the non-recovery group.
Statistical analysis
All data were analyzed using SPSS 19.0 statistical software (SPSS Inc., Chicago, IL). Univariate and multivariate logistic regression analyses were used to identify risk factors affecting the clinical prognosis of patients with chronic rhinosinusitis. The association between each factor and prognosis was assessed with the Wald test. A P value of less than 0.05 was considered statistically significant.
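For readers who want to reproduce this two-stage screening outside SPSS, a minimal sketch in Python/statsmodels is given below. The file name, column names, and candidate list are hypothetical stand-ins for illustration, not the study's actual coding scheme; predictors are assumed to be numerically coded.

```python
# Illustrative sketch (not the authors' SPSS procedure) of univariate
# screening followed by a multivariate logistic regression model.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("crs_cohort.csv")   # hypothetical file: one row per patient
y = df["recovered"]                  # 1 = recovery, 0 = non-recovery
candidates = ["age", "sex", "smoking", "allergic_rhinitis", "dysosmia_severity",
              "prior_sinus_surgery", "decongestant_longterm", "comprehensive_tx"]

# Stage 1: univariate screening, keep predictors with Wald p < 0.05
kept = []
for var in candidates:
    X = sm.add_constant(df[[var]])
    fit = sm.Logit(y, X).fit(disp=0)
    if fit.pvalues[var] < 0.05:
        kept.append(var)

# Stage 2: multivariate model on the screened variables
X = sm.add_constant(df[kept])
final = sm.Logit(y, X).fit(disp=0)
print(final.summary())               # Wald z-tests; odds ratios via exp(coef)
```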
CT scan findings
We used the Lund-Mackay CT scoring system [7] to evaluate the degree of opacification of the sinuses and ostiomeatal complex, scoring each site as 2, 1, or 0 for complete, partial, or no opacification, respectively. Scoring of the CT findings based on the Lund-Mackay system revealed that most cases obtained scores in the ranges 5 to 8 and 9 to 12.
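As commonly described, the Lund-Mackay scheme scores five paired sinus groups 0/1/2 per side and the ostiomeatal complex 0 or 2 per side, for a total of 0-24. A minimal sketch of the tally follows; the input format is an assumption for illustration.

```python
# Minimal sketch of Lund-Mackay CT score tallying (total range 0-24).
SINUSES = ("maxillary", "anterior_ethmoid", "posterior_ethmoid", "sphenoid", "frontal")

def lund_mackay(scores: dict) -> int:
    """scores maps (site, side) -> score, e.g. ("maxillary", "left") -> 1."""
    total = 0
    for side in ("left", "right"):
        for sinus in SINUSES:
            s = scores[(sinus, side)]
            assert s in (0, 1, 2), "sinuses: 0 clear, 1 partial, 2 complete opacification"
            total += s
        omc = scores[("ostiomeatal_complex", side)]
        assert omc in (0, 2), "the ostiomeatal complex is scored 0 or 2 only"
        total += omc
    return total

# Toy usage: one partially opacified maxillary sinus plus one occluded OMC.
scores = {(s, side): 0 for side in ("left", "right") for s in SINUSES}
scores.update({("ostiomeatal_complex", "left"): 2, ("ostiomeatal_complex", "right"): 0})
scores[("maxillary", "left")] = 1
print(lund_mackay(scores))  # -> 3
```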
Surgical efficacy
A total of 288 patients with chronic rhinosinusitis underwent endoscopic sinus surgery and were followed up for 10 months to 3 years, with a mean duration of 12 months. Among all participants, 187 (64.9%) patients fully recovered, 72 (25.0%) presented with some improvement, and 28 (10.1%) had ineffective treatment.
Univariate regression analysis
A univariate logistic regression model was used to screen variables related to clinical prognosis. In total, 11 variables were associated with clinical prognosis, as shown in Table 1.
Multivariate regression analysis
Subsequently, these 11 variables were entered into a multivariate regression model. As shown in Table 2, age, history of nasosinusitis surgery, history of allergic rhinitis, severity of dysosmia, and long-term use of nasal decongestants were identified as risk factors for poor clinical prognosis, whereas comprehensive treatment after surgery was a protective factor.
Discussion
Chronic rhinosinusitis consists of a variety of inflammatory and infectious diseases involving the nose and paranasal sinuses. It exerts a more severe impact on quality of life than other chronic illnesses, including hypertension, diabetes mellitus, and cardiac failure [8][9][10][11]. Moreover, patients with chronic rhinosinusitis differ significantly in etiology, pathology, clinical manifestations, disease severity, and clinical prognosis. Currently, functional endoscopic sinus surgery, comprising several techniques, has become a well-established strategy for the treatment of chronic rhinosinusitis refractory to medical therapy. Management of the ostiomeatal complex plays a pivotal role in endoscopic sinus surgery: through surgical management, the ostiomeatal complex can be re-established and restored to create ventilation and drainage channels for the paranasal sinuses and to steadily restore the function of the nasal mucosa. In addition, endoscopic sinus surgery has multiple advantages, such as a clear visual field, preservation of the physiological function of the nasal cavity, minimal trauma, rapid recovery, and a low recurrence rate [12,13]. However, due to the complex and elusive influence of varying factors, it remains a challenge to predict the clinical prognosis of patients with chronic rhinosinusitis after endoscopic sinus surgery, and the long-term clinical efficacy and prognosis remain to be elucidated. Consequently, multiple potential factors likely affecting the clinical prognosis of chronic rhinosinusitis patients were retrospectively analyzed in this study, aiming to provide evidence-based data for the prevention, treatment, and prediction of clinical prognosis of chronic rhinosinusitis.
Our findings demonstrate that elderly patients with chronic rhinosinusitis had a higher risk of poor prognosis than younger individuals. The main causes may include poor physical condition, low immunity, and a high prevalence of severe comorbidities, such as diabetes mellitus, cardiovascular disease, kidney disease, and malignant tumors, which contribute to low efficacy and poor prognosis. Logistic regression analysis revealed that a history of allergic rhinitis was another influencing factor. Allergic rhinitis keeps the nasal mucosa in a hyperreactive state for long periods, leading to mucosal swelling, marked exudate, and ostium stenosis and closure. All these conditions create a favorable environment for the proliferation of bacteria, viruses, and fungi.
Patients with chronic rhinosinusitis who had a history of nasosinusitis surgery tended to have a poor prognosis. Previous studies have indicated that chronic rhinosinusitis patients have a high risk of pathological changes in the bone, which aggravate the progression of nasosinusitis. In addition, the severity of lesions in the ethmoid bone is significantly correlated with the clinical prognosis following endoscopic sinus surgery. Surgical procedures may cause trauma and injury to nasal structure and function, often resulting in bone exposure, fibrous scarring, and ostium stenosis. All these sequelae are likely to induce recurrence of chronic rhinosinusitis [14][15][16][17].
In this study, multivariate logistic regression analysis revealed that the severity of dysosmia was associated with more severe pathological changes of the nasal mucosa. Patients with chronic rhinosinusitis frequently had irreversible injuries of the nasal mucosa that were difficult to eliminate completely, thereby impairing recovery of olfactory function, consistent with previous findings [18][19][20].
Our investigation also demonstrated that long-term use of nasal decongestants may cause exfoliation of mucosal cilia, epithelial cell necrosis and distortion, enlarged intercellular spaces, stromal edema, inflammatory cell infiltration, thickening and fibrosis of epithelial tissue, and even squamous metaplasia. In a rabbit study, researchers likewise found that long-term use of ephedrine caused evident damage to the nasal mucosa, suggesting that nasal decongestants should be used very cautiously [21].
Previous investigations have indicated that comprehensive treatment includes clearance of lesions in the nasal cavity as well as topical hormones, antibiotics, and nasal irrigation [22,23]. In this study, the patients were required to receive comprehensive therapies during follow-up, and relatively high efficacy was obtained, indicating the protective effect and clinical significance of comprehensive treatment after functional endoscopic sinus surgery.
Conclusions
We found that 11 variables were correlated with the clinical prognosis of chronic rhinosinusitis patients. Multivariate logistic regression analysis demonstrated that age, history of allergic rhinitis, severity of dysosmia, history of nasosinusitis surgery, and long-term use of nasal decongestants were risk factors affecting clinical prognosis. Comprehensive therapy after surgery was a protective factor.
"year": 2017,
"sha1": "b2114387b1efdafda17b052d9ef83ff0e32292f3",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc5341909?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "b2114387b1efdafda17b052d9ef83ff0e32292f3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The integration of autophagy and cellular trafficking pathways via RAB GAPs
Macroautophagy is a conserved degradative pathway in which a double-membrane compartment sequesters cytoplasmic cargo and delivers the contents to lysosomes for degradation. Efficient formation and maturation of autophagic vesicles, so-called phagophores that are precursors to autophagosomes, and their subsequent trafficking to lysosomes relies on the activity of small RAB GTPases, which are essential factors of cellular vesicle transport systems. The activity of RAB GTPases is coordinated by upstream factors, which include guanine nucleotide exchange factors (RAB GEFs) and RAB GTPase activating proteins (RAB GAPs). A role in macroautophagy regulation for different TRE2-BUB2-CDC16 (TBC) domain-containing RAB GAPs has been established. Recently, however, a positive modulation of macroautophagy has also been demonstrated for the TBC domain-free RAB3GAP1/2, adding to the family of RAB GAPs that coordinate macroautophagy and additional cellular trafficking pathways.
Macroautophagy is a membrane mobilization and vesicle trafficking system
Macroautophagy is an evolutionarily conserved eukaryotic process in which cytoplasmic contents are sequestered by phagophores, which mature into autophagosomes and deliver their cargo to lysosomes for degradation. 1 The pathway is induced under conditions of nutrient deprivation or stress and is an important functional component of the cellular homeostasis network. Deterioration of macroautophagy is associated with several disorders, including neurodegenerative diseases and cancer. 2 One main characteristic of macroautophagy is the double-membrane autophagosomes, which are generated at distinct cellular locations, the phagophore assembly sites (PAS). Upon macroautophagy induction, the activated ULK1/2 complex (including ATG13 and RB1CC1/FIP200) and the phosphatidylinositol 3-kinase complex (including PIK3C3/Vps34, ATG14, and BECN1/Vps30/Atg6) are recruited to the PAS and initiate the formation of a phagophore by directing additional autophagic proteins to this site. These include WIPI1/Atg18, WIPI2/Atg18, ZFYVE1/DFCP1, ATG9, and the ATG12-ATG5-ATG16L1 complex. 3 The latter is part of a ubiquitin-like conjugation system and mediates the attachment of phosphatidylethanolamine to the C terminus of Atg8 family members. This protein family comprises the subfamilies of MAP1LC3 and GABARAP in mammals, and lipidation results in their binding to the growing phagophore membrane, which is essential for phagophore expansion and maturation. 4 Phagophore formation and autophagosome maturation are dependent on the adequate supply of membranes and appropriate cellular membrane dynamics. Recently, the plasma membrane, the Golgi, the ER, 5,6 and lipid droplets 7 have been recognized as lipid sources. In response to different regimens of macroautophagic activity they are considered to be selectively accessed to satisfy macroautophagic membrane requirements. 8 Interestingly, it is considered that the phagophore matures to an autophagosome by the addition of lipids via vesicular fusion rather than via lateral movement of membranes from existing cellular organelles. 5,9 Consequently, the resulting sophisticated and complex membrane acquisition system needs to be carefully coordinated, and proteins that control vesicle transport systems are important factors for macroautophagy.

Keywords: autophagosome formation, autophagy, RAB GAP, RAB GTPase, RAB3GAP, vesicle trafficking

Abbreviations: ATG, autophagy related; BECN1, Beclin 1, autophagy related; CALCOCO2, calcium binding and coiled-coil domain 2; ER, endoplasmic reticulum; GABARAP, GABA(A) receptor-associated protein; GDP, guanosine-5′-diphosphate; GTP, guanosine-5′-triphosphate; LRRK1, leucine-rich repeat kinase 1; MAP1LC3, microtubule-associated protein 1 light chain 3; NBR1, neighbor of BRCA1 gene 1; PAS, phagophore assembly site; PE, phosphatidylethanolamine; PIK3C3, phosphatidylinositol 3-kinase, catalytic subunit type 3; RAB GAP, RAB GTPase activating protein; RAB GEF, RAB GTPase guanine exchange factor; SQSTM1, sequestosome 1; TBC domain, TRE2-BUB2-CDC16 domain; TBCGAP, TBC domain-containing RAB GAP; ULK, unc-51 like autophagy activating kinase; WIPI, WD repeat domain, phosphoinositide interacting 1; ZFYVE1, zinc finger, FYVE domain containing 1
The protein family of small RAB GTPases is specialized in the control of vesicle transport routes and ensures trafficking of vesicles to their appropriate target compartments. 10 RAB GTPases interact with effector proteins such as cargo sorting complexes, motor proteins, and tethering factors, which results in vesicle budding, transport, and fusion. The interactions with these effectors are precisely controlled by GDP/GTP exchange and hydrolysis of GTP. Since GDP is principally tightly bound by RAB GTPases and their intrinsic GTP hydrolysis rates are low, this cycle is regulated by guanine nucleotide exchange factors (RAB GEFs) that catalyze the dissociation of GDP, and RAB GTPase activating proteins (RAB GAPs) that facilitate the hydrolysis of GTP. 11 Both regulators are required to coordinate the spatiotemporal activity of RAB GTPases. In recent years multiple RAB GTPases, RAB GEFs, and RAB GAPs have functionally been associated with macroautophagy. 12 This commentary will focus on RAB GAPs and briefly address their effects on this degradative pathway (schematically summarized in Fig. 1) and on vesicle trafficking systems.
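The switch behavior described above can be pictured as a two-state cycle whose steady-state occupancy is set by the competing GEF-driven nucleotide exchange and GAP-accelerated GTP hydrolysis. The toy model below illustrates this; all rate constants are arbitrary values chosen for illustration, not measured kinetics of any particular RAB.

```python
# Toy two-state model of a RAB GTPase switch: GDP-bound (inactive) <-> GTP-bound
# (active). GEFs raise k_exchange, GAPs raise k_hydrolysis; numbers are
# arbitrary illustrative values, not measured kinetic constants.
def active_fraction(k_exchange: float, k_hydrolysis: float) -> float:
    """Steady-state GTP-bound fraction for first-order interconversion:
    dA/dt = k_exchange*(1-A) - k_hydrolysis*A  =>  A = k_ex / (k_ex + k_hyd)."""
    return k_exchange / (k_exchange + k_hydrolysis)

basal = active_fraction(k_exchange=0.01, k_hydrolysis=0.01)    # slow intrinsic rates
with_gef = active_fraction(k_exchange=1.0, k_hydrolysis=0.01)  # GEF recruited
with_gap = active_fraction(k_exchange=1.0, k_hydrolysis=10.0)  # GAP recruited too
print(f"basal {basal:.2f}, +GEF {with_gef:.2f}, +GEF+GAP {with_gap:.2f}")
```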
TBCGAPs: TBC domain-containing RAB GAPs that function in macroautophagy
In approaches aiming to identify RAB GAPs that affect macroautophagy, several TBC domain-containing RAB GAPs have been characterized. [13][14][15] The TBC domain accelerates the hydrolysis of GTP by RAB GTPases and TBC domain-containing RAB GAPs (hereafter referred to as TBCGAPs) are linked to different trafficking routes, and are important factors that integrate diverse cellular pathways. 16 TBC1D25/OATL1 was identified in a study expressing 41 TBCGAPs in mouse embryonic fibroblasts and selecting proteins that colocalize with endogenous MAP1LC3. 13 TBC1D25/OATL1 targets the ATG16L1-interacting RAB GTPase RAB33B and is recruited to autophagosomes by direct binding to Atg8 family members. Increased levels of TBC1D25/ OATL1 inhibit the fusion of autophagosomes with lysosomes and prevent autophagosomal maturation.
In an approach overexpressing 38 TBCGAPs in HEK293 cells and analyzing their ability to inhibit autophagosome formation upon nutrient deprivation, 11 TBCGAPs were shown to negatively regulate macroautophagy. 14 The TBCGAP TBC1D14 was analyzed in detail and was shown to modify the trafficking of ULK1-containing recycling endosomes and to interfere with the activity of the RAB GTPase RAB11A/B. The function of RAB11 is required to transport recycling endosomes to the PAS and, thus, TBC1D14 and RAB11 regulate starvation-induced formation of autophagosomes.
In another study employing GST affinity isolation techniques, 14 TBCGAPs were identified to interact with Atg8 family members. 15 Subsequently, the colocalization of these TBCGAPs with MAP1LC3 and SQSTM1 was analyzed, resulting in 4 promising candidates. The TBCGAP TBC1D5 was further characterized and was shown to have 2 binding motifs for Atg8 family members. During basal macroautophagy conditions TBC1D5 binds to the retromer complex and influences retrograde transport routes. Upon macroautophagy induction, TBC1D5 dissociates from the retromer, associates with MAP1LC3, and directs ATG9 and active ULK1 from the retromer to the PAS. 17 This rerouting of ATG9 is additionally regulated by the clathrin adaptor complex (AP2) and requires functional clathrin-mediated endocytosis. Thus, the dynamic translocation of TBC1D5 to autophagosomes is central for the trafficking of ATG9 from the retromer complex to the site of autophagosome biogenesis.
The protein TBC1D2/Armus is an additional TBCGAP that interacts with MAP1LC3 and integrates trafficking pathways and macroautophagy. 18 Overexpression of TBC1D2 results in the accumulation of enlarged autophagosomes, and its deficiency delays macroautophagic flux. Upon macroautophagy induction, TBC1D2 is recruited to autophagosomes by binding to Atg8 family members and regulates the activity of the RAB GTPase RAB7, which is essential for the fusion of autophagosomes and lysosomes. 12 Interestingly, TBC1D2 is also an effector of the small GTPase RAC1, which is a negative regulator of macroautophagy. Nutrient deprivation inactivates RAC1, which allows the association of TBC1D2 with autophagosomes and results in regulation of RAB7. Thus, the interplay of TBC1D2, RAC1, and RAB7 underlines the coordinate character of macroautophagy and other cellular trafficking pathways mediated by RAB GTPases and RAB GAPs.
In these studies a multitude of TBCGAPs were linked to macroautophagy, which are summarized in Table 1 with respect to their substrate RAB GTPases and their nonautophagic functions, if characterized. Although the influence on macroautophagy of the majority of these RAB GAPs needs to be confirmed, the large number of potential candidates highlights the complexity of the coordination of membrane or vesicle trafficking and the macroautophagic pathway.
RAB3GAP1 and RAB3GAP2 as non-TBCGAPs and their function in macroautophagy and beyond
The introduced TBCGAPs function in macroautophagy and contribute to the reorganization of membrane trafficking routes according to the cellular requirements. This coordinate property has been well established for TBCGAPs that are ideally placed for such a role, as one TBCGAP can act as an effector of different RAB GTPases. Interestingly, according to sequence homology the human TBCGAP family includes 44 proteins and is complemented by the RAB3GAP complex, which is the only described RAB GAP without a TBC domain. 16 The heterodimeric complex consists of the catalytic subunit RAB3GAP1 and the noncatalytic subunit RAB3GAP2 19 and has been well established to regulate the name-giving RAB GTPases RAB3A-D and to modify neurotransmitter release at the neuronal synapse. In a RAB3GAP1 knockout mouse model, GTP-bound RAB3 accumulates in the brain and Ca2+-dependent glutamate release from cerebrocortical synaptosomes is inhibited. 20 Indeed, by regulating the activity of RAB3, the RAB3GAPs are essential for maintenance of synaptic homeostasis. 21 Recently, we showed that the TBC domain-free RAB3GAP1/2 also modulate macroautophagy and are essential factors of autophagosome formation. 22 Deficiency of both proteins in human primary fibroblasts deteriorates autophagosomal biogenesis and reduces macroautophagic activity under basal and induced macroautophagy conditions, whereas their overexpression enhances this process. The positive modulation of macroautophagy is dependent on the GAP activity of RAB3GAP1 but independent of RAB3, suggesting that RAB3GAP1/2 access an alternative RAB GTPase, which has not been identified yet. Interestingly, the RAB3GAP complex was recently shown to be a RAB GEF for the RAB GTPase RAB18 and provokes localization of RAB18 to the ER, which is necessary for maintenance of ER structure. 23 Excitingly, mutations in RAB3GAP1/2 and RAB18 cause the Warburg Micro syndrome, a devastating developmental disorder. 24 The molecular mechanisms of this disease are not clarified yet but a functional association of RAB3GAP1/2 and RAB18 might support the identification of responsible pathogenetic pathways. Next to RAB3 regulation and its involvement in macroautophagy, RAB3GAP1 interacts with LMAN1/ERGIC53 25 and mediates the exocytosis of CLDN1, 26 which highlights the coordinative character of this TBC domain-free RAB GAP in cellular trafficking systems. As indicated above, several macroautophagy-modifying TBCGAPs were identified by their interaction with Atg8 family members and this interaction is counteracted by other interacting proteins that compete for binding sites. The ability of Atg8 family members to direct RAB GAPs to phagophores indicates that they might act as scaffolding molecules and, thus, are central partners for the activity of RAB GAPs in macroautophagy. This mechanism is comparable to the interaction of Atg8 family members with cargo receptors involved in selective macroautophagy, such as SQSTM1, NBR1, or CALCOCO2. 27 MAP1LC3 serves as a binding partner and recruits cargo receptors to phagophores, which mediates substrate-specificity to macroautophagy.
Interestingly, an interaction with Atg8 family members has also been indicated for RAB3GAP1/2 based on a proteomic approach, 28 although a direct physical interaction awaits confirmation. 22

Relevance of RAB GAPs in macroautophagy and compensatory mechanisms for membrane mobilization

The formation and transport of autophagosomes is one of the major challenges for the entire macroautophagy process and needs to be carefully controlled to reduce interference with other cellular trafficking pathways. The activity of RAB GTPases, RAB GEFs, and RAB GAPs positions these proteins as central factors for this coordination and their relevance for macroautophagy has been shown in multiple studies. 12 However, the selection of macroautophagy-deficient yeast strains resulted in the characterization of at least 40 Atg proteins, most of which do not appear to be involved in membrane mobilization or vesicle transport. An exception (although not an "Atg" protein) is the ortholog of RAB1, Ypt1, 29 and its RAB GEF, the TRAPPIII complex, 9 which have been defined as important factors for autophagosome formation in yeast and play a similarly important role in macroautophagy in mammalian cell lines. 12 Interestingly, several RAB GAPs modulate macroautophagy particularly under induced conditions when macroautophagic membrane requirements are increased, which underlines the need for stringent control, and some RAB GAPs seem to function in overlapping pathways. For example, TBC1D14 and TBC1D5 appear to be important both for the coordination of endosomal trafficking and for autophagosome biogenesis. 14,15,17 Recently, TBC1D2, which acts on the RAB GTPase RAB7 and modulates autophagosome-lysosome fusion, was shown to be activated by LRRK1 upon macroautophagy induction. 30 Therefore, the characterization of upstream factors that modulate the activity of RAB GAPs and the identification of target RAB GTPases will help to dissect the precise pathways that are modulated by these proteins and allow the identification of possible compensatory mechanisms. This will increase our understanding of the reorganization and the condition-dependent plasticity of cellular trafficking systems that are necessary to keep macroautophagy going.
Disclosure of potential conflicts of interest
No potential conflicts of interest were disclosed.
"year": 2015,
"sha1": "5b1c397fe9d3429060e6b04043822f6672d973c2",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/15548627.2015.1110668?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "5b1c397fe9d3429060e6b04043822f6672d973c2",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
GABAA receptor function is enhanced by Interleukin-10 in human epileptogenic gangliogliomas and its effect is counteracted by Interleukin-1β
Gangliogliomas (GGs) are low-grade brain tumours that cause intractable focal epilepsy in children and adults. In GG, as in epileptogenic focal malformations (e.g., tuberous sclerosis complex, TSC), there is evidence of sustained neuroinflammation with involvement of the pro-inflammatory cytokine IL-1β. Anti-inflammatory mediators, on the other hand, are less studied but bear relevance for understanding seizure mechanisms. Therefore, we investigated the effect of the key anti-inflammatory cytokine IL-10 on GABAergic neurotransmission in GG. We assessed IL-10-dependent signaling by transcriptomic analysis and immunohistochemistry, and performed voltage-clamp recordings on Xenopus oocytes microtransplanted with cell membranes from brain specimens, to overcome the limited availability of acute GG slices. We report that IL-10-related mRNAs were up-regulated in GG and, to a lesser extent, in TSC. Moreover, we found that IL-10 receptors are expressed by neurons and astroglia. Furthermore, GABA currents were significantly potentiated by IL-10 in GG. This effect was time- and dose-dependent and was inhibited by blockade of IL-10 signaling. Notably, in the same tissue, IL-1β reduced GABA current amplitude and prevented the IL-10 effect. These results suggest that, in epileptogenic tissue, pro-inflammatory mechanisms of hyperexcitability prevail over key anti-inflammatory pathways enhancing GABAergic inhibition. Hence, boosting the effects of specific anti-inflammatory molecules could resolve inflammation and reduce intractable seizures.
Abbreviations: mTOR, mammalian target of rapamycin; NMDA, N-methyl-D-aspartate; AMPA, α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid; CX3CL1, chemokine (C-X3-C motif) ligand 1; ASMs, anti-seizure medications; IL-1Ra, interleukin-1 receptor antagonist; pS6, S6 ribosomal protein; TYK2, tyrosine kinase 2; PKA, protein kinase A; PKC, protein kinase C; PKG, protein kinase G; CNS, central nervous system; IGABA, GABA current; EGABA, GABA current reversal potential; I-V, current-voltage relationship; JAK1, Janus activated kinase 1; JAK2, Janus activated kinase 2; IL-10Rα, interleukin-10 receptor α; IL-10Rβ, interleukin-10 receptor β; NeuN, neuronal nuclei antigen; MAP2, microtubule-associated protein 2; GFAP, glial fibrillary acidic protein; STAT3, signal transducer and activator of transcription 3; mIPSCs, miniature inhibitory post-synaptic currents; DG, dentate gyrus neurons; EC50, half-maximal effective concentration; IFN-γ, interferon γ

Gangliogliomas (GGs) are the most frequent tumor type among developmental low-grade brain tumors, which are well-recognized causes of intractable focal epilepsy in children and young adults 1,2. Accordingly, epileptic seizures are reported in 80-100% of patients with GG compared to 30% in malignant gliomas 3. However, the pathophysiological mechanisms of GG epileptogenicity are still poorly understood 4,5. Due to their strong association with epileptic seizures, it was suggested that GGs are endowed with intrinsically altered synaptic functions. This clinical feature aligns with recent findings that the oncogenic BRAF somatic mutation in GG elicits hyperexcitability 6 that is mediated by RE1-silencing transcription factor, a master regulator of ion channels and neurotransmitter receptors in epilepsy 3, and by the activation of the epileptogenic Akt/mTOR signaling 7,8.
One factor likely contributing to GG epileptogenicity relates to neuroinflammation that is described in these lesions 4,9 , since this phenomenon is involved in both epileptogenesis and ictogenesis 10 .
Indeed, the expression and receptor signaling of various cytokines undergo changes in epileptic foci, and cytokine levels are often modified in serum and cerebrospinal fluid of patients with epilepsy 11. Some of these molecules play a role in seizure generation in animal models by modifying the activity of voltage-gated or receptor-coupled ion channels 12,13, and by inducing transcriptional changes of genes involved in synaptic transmission and epileptogenesis 10,13. In particular, the prototypical inflammatory cytokine interleukin-1β (IL-1β) plays a pivotal role in ictogenesis and epileptogenesis both in experimental models of epilepsy [14][15][16] and in patients.
Our investigation stemmed from the observation that the developmental brain tumours, such as GG and TSC cortical tubers represent common causes of drug-resistant focal epilepsy with early seizure onset 5 . In addition, recent advances highlight the involvement of different, but also converging, epileptogenic mechanisms including the activation of mTOR pathway as well as a sustained inflammatory response in both these lesions with the involvement of the pro-inflammatory cytokine IL-1β 17 .
The anti-inflammatory cytokine interleukin-10 (IL-10) 18,19 has attracted attention in epilepsy as a master regulator of glial cell inflammatory phenotypes 20. Moreover, IL-10 was shown to reduce IL-1β production and inflammasome activation in experimental epilepsy 21 and attenuated behavioral changes induced by chronic administration of IL-1β in rats 22. However, scarce information is available on the effects of IL-10 on synaptic transmission and on whether IL-10 modulates neuronal activity as reported for IL-1β. 23 Cytokines and chemokines may affect the Ca2+ permeability of NMDA and AMPA receptors 13 and regulate GABAA receptor (GABAAR) trafficking 24. Interestingly, while IL-1β decreased the amplitude of GABA-evoked currents 23, the chemokine fractalkine (CX3CL1) reduced GABA current desensitization in temporal lobe epilepsy (TLE), thus resulting in opposite functional effects on GABA neurotransmission 25. This evidence suggests that the net effect of neuroinflammation on neuronal network excitability likely depends on the balance between the action of individual cytokines/chemokines and how their effects are compensated for by anti-inflammatory mechanisms 13,26. Here, we studied the expression of IL-10- and IL-1β-related genes and proteins by transcriptomic analysis and immunohistochemistry in GG as compared with TSC cortical tubers, highly epileptogenic focal malformations. We performed electrophysiology experiments to study IL-10 and IL-1β effects on GABAergic neurotransmission in order to shed light on the effects of anti-inflammatory and pro-inflammatory stimuli on neurotransmission in epileptogenic lesions.
Results
Differential expression analysis of IL-1β and IL-10 pathway related genes. The IL-10R complex includes the IL-10 binding subunit IL-10Rα and the accessory subunit IL-10Rβ responsible for recruitment of downstream signaling proteins 27. IL-10 binding to its receptor leads to the activation of the proximal kinases JAK1-JAK2-TYK2 and subsequently of phosphokinases and the STAT3 system 18,27. Therefore, we first carried out a differential gene expression analysis of mRNAs encoding several proteins involved in the IL-10 downstream signaling pathway (Fig. 1) in GG and TSC patients who underwent surgery for drug-resistant epilepsy, compared with control cortex cases (Supplementary Information). We found that the IL-10 transcript was significantly upregulated only in GG patients (log2 fold-change, FC = 1.019) (Fig. 1), whilst its receptors (IL-10Rα and IL-10Rβ) and STAT3 were upregulated in both GG (IL-10Rα log2 FC = 1.775; IL-10Rβ log2 FC = 0.956; STAT3 log2 FC = 0.622) and TSC (IL-10Rα log2 FC = 1.212; IL-10Rβ log2 FC = 0.534; STAT3 log2 FC = 0.527) (Fig. 1). As for the IL-10 transcript, only JAK1 showed significant overexpression in GG (log2 FC = 0.303) (Fig. 1), whilst TYK2 and the phosphokinases PIK3CA, PIK3CB, PIK3CD were not differentially expressed in either TSC or GG (Fig. 1).

[Figure 1 legend: RNAseq data indicate significant up-regulation (adjusted p value = 0.05) of IL-10Rα, IL-10Rβ, IL-1Ra, IL-1β and STAT3 in both GG and TSC. In addition, there is a significant upregulation of IL-10 and JAK1 in GG. IL-10 downstream signaling proteins such as TYK2, phosphokinases (PIK3CA, PIK3CB, PIK3CD) and IL-1R1 did not show significant changes in either GG or TSC vs controls. * p < 0.05; ** p < 0.01; *** p < 0.001; **** p < 0.0001. A linear model was fit for each gene and a moderated t-statistic was calculated after applying empirical Bayes smoothing to the standard errors. Genes with a Benjamini-Hochberg adjusted p value < 0.05 were considered significant. Differential expression analysis compared 21]
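The legend describes a limma-style workflow: a per-gene linear model, empirical-Bayes moderated t-statistics, and Benjamini-Hochberg (BH) correction. The sketch below illustrates only the multiple-testing step in Python, with plain Welch t-tests standing in for the moderated statistics and simulated values in place of the study's RNAseq data.

```python
# Sketch of the significance-calling step from the Fig. 1 legend: per-gene
# tests followed by Benjamini-Hochberg FDR control. Data are simulated.
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
expr_gg = rng.normal(size=(500, 10))   # hypothetical: 500 genes x 10 GG samples (log2 scale)
expr_ctrl = rng.normal(size=(500, 8))  # hypothetical: 500 genes x 8 control samples

pvals = np.array([ttest_ind(g, c, equal_var=False).pvalue
                  for g, c in zip(expr_gg, expr_ctrl)])
log2_fc = expr_gg.mean(axis=1) - expr_ctrl.mean(axis=1)  # per-gene log2 fold-change

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{int(reject.sum())} genes significant at BH-adjusted p < 0.05")
```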
The IL-1β and IL-1Ra (IL-1 receptor antagonist) transcripts were significantly upregulated in both TSC (IL-1β log2 FC = 3.846; IL-1Ra log2 FC = 1.872) and GG (IL-1β log2 FC = 3.832; IL-1Ra log2 FC = 2.097) (Fig. 1). Notably, the ratio between IL-1β and IL-1Ra was shifted towards the pro-epileptogenic IL-1β (2.05- and 1.83-fold in TSC and GG, respectively), suggesting that IL-1β signaling was not efficiently controlled by the required ~100-fold excess of IL-1Ra 28. The IL-1R1 transcript showed no differential expression in epileptogenic lesions versus control tissue (Fig. 1).

Cellular expression of IL-10Rα in GG and TSC. In human control cortex, throughout all cortical layers and white matter, IL-10Rα immunoreactivity was not detectable in neurons or glial cells (Fig. 2A-C). In GG, IL-10Rα immunoreactivity was observed in dysplastic neurons and tumor astrocytes (Fig. 2D, E). Double-labelling showed IL-10Rα expression in neuronal cells (NeuN-positive) and in GFAP-positive astrocytes. In GG, IL-10Rα was also detected in pJAK-positive cells. In TSC, IL-10Rα immunoreactivity was observed in dysmorphic neurons as well as in astrocytes and in scattered giant cells (Fig. 2F-H). Double-labelling showed IL-10Rα expression in neuronal cells (NeuN- and MAP2-positive) as well as in GFAP-positive astrocytes and in dysmorphic neurons positive for pS6, a marker of mTOR activation. Semiquantitative analysis of IL-10Rα immunoreactivity is shown in Supplementary information.

IL-10 effect on GABAA-mediated currents. We determined whether the up-regulation of IL-10 and related signaling was associated with an effect of IL-10 on GABAergic transmission.
First, we used oocytes microinjected with human cDNAs encoding α1β2γ2 GABAARs (the most common receptor isoform in the CNS 29) or α4β2γ2 (containing α4, one of the most relevant subunits mediating tonic inhibition 29) to test whether IL-10 affects GABA currents (IGABA) through direct interaction with the GABAAR.
IGABA amplitude was stable in transplanted oocytes exposed only to incubation medium (see "Methods") for 3 h, showing a mean variation of −5.25% of the control amplitude (time zero, 24.8 ± 2.5 nA versus 3 h, 23.5 ± 2.4 nA; n = 49; # 8-12 in Table 1). Moreover, IL-10 (100 ng/ml) did not modify IGABA amplitude in oocytes transplanted with control tissues, producing a nonsignificant average current increase of +6.4% (IGABA = 28.0 ± 4.5 nA before IL-10 and 29.8 ± 4.1 nA after IL-10, n = 17; Fig. 5A, B).
Next, we used two drugs blocking the downstream signaling activated by IL-10 27. Both K252a, a broad-spectrum protein kinase inhibitor 30, and baricitinib, a selective JAK1 and JAK2 inhibitor 31, pre-incubated for 30 min and co-incubated for 3 h with IL-10, prevented the cytokine's effect on the IGABA current (Table 2 and Fig. 5C). This evidence supports that the increase of IGABA amplitude induced by IL-10 in GG is mediated by activation of the IL-10 signaling axis.
To investigate whether the increase of IGABA amplitude in GG was due to a change in GABA affinity, we carried out GABA dose-response experiments before and after 3 h of incubation with IL-10 (100 ng/ml). We found a significant leftward shift of the GABA dose-response curve after exposure to the cytokine (GABA EC50 = 106.0 ± 1.5 μM, nH = 1.5 ± 0.1 before IL-10 and 69.7 ± 5.0 μM, nH = 1.7 ± 0.18 after IL-10; # 8-10, Table 1; n = 16; p < 0.05; Fig. 5D), suggesting that IL-10 induces an increase of GABAAR affinity.
IL-1β prevented IL-10 enhancement of GABAA current. We determined the net effect on GABA-evoked currents when oocytes transplanted with GG were exposed to both IL-1β and IL-10, in order to mimic the neuroinflammatory milieu of GG, where both cytokines are induced with the fold-increase of IL-1β exceeding that of IL-10 (Fig. 1). We pre-incubated oocytes for 30 min with IL-1β (25 ng/ml) and subsequently with a combination of IL-1β (25 ng/ml) 23 and IL-10 (100 ng/ml) for a further 3 h. The IL-10 effect was suppressed by IL-1β at a concentration within the range measured in epilepsy brain tissue 32 (GABA 250 μM, 4 s applications; 37.8 ± 5.8 nA before IL-1β + IL-10 and 31.9 ± 4.3 nA after IL-1β + IL-10; n = 10; # 8-10 in Table 1). Notably, we obtained similar results when the pre-incubation was performed with IL-10 first, using the same protocol as above.

[Figure legend: IL-10 effect on GABA current amplitude in oocytes injected with human α1β2γ2 cDNA. The bar-graph represents the mean ± s.e.m. of the IGABA amplitudes evoked from oocytes intranuclearly injected with α1β2γ2 cDNAs before (black) and after (red) incubation with IL-10 (200 ng/mL, 3 h; n = 8; p > 0.05 by paired t-test). The IGABA amplitudes recorded after IL-10 incubation were normalized to the response obtained before exposure to IL-10 for each cell (range of current amplitudes: 247.5 to 1119.0 nA), then averaged and expressed as a percent variation. Traces depict representative currents measured after 4 s application of GABA (white bar, 50 μM) in oocytes injected with α1β2γ2 cDNAs before (black trace) and after (red trace) IL-10 incubation (for 3 h). Grey bar on the right trace represents the block by 100 μM bicuculline (representative of 4 experiments).]

[Figure legend fragment: … Table 1) represent the percentage increase of the peak amplitude induced by IL-10. Data were normalized to the mean current amplitude recorded at time zero (23.7 ± 8.6 nA, n = 12). Inset: traces depict representative GABA currents at the indicated times. * = p < 0.05 by Wilcoxon signed rank test; ** = p < 0.01 by paired t-test.]
Discussion
Our main objective was to study the role of the anti-inflammatory cytokine IL-10 in neurotransmission underlying GG epileptogenicity. First, we report the novel evidence that IL-10 and related signaling molecules are up-regulated in GG, and that IL-10Rα is induced in neurons and astrocytes. Similarly to GG, IL-10 receptors were induced in TSC and IL-10Rα was expressed by both dysmorphic neurons and astrocytes. However, IL-10 itself and the JAK1 downstream kinase were upregulated in GG but not in TSC, where only STAT3 was induced, suggesting that IL-10 signaling was activated to a lesser extent in TSC compared to GG. Patients with GG and TSC share common characteristics, such as a high incidence of early-onset drug-resistant epilepsy and a neuroinflammatory response that is one hallmark of the neuropathology 1,5,33-36. Notwithstanding these common features, IL-10 and related signaling have a significant impact on GABAAR-mediated currents in GG but not in TSC, as assessed in oocytes microinjected with membranes from epileptic patients. These data support that the up-regulation of IL-10-related signaling represents a homeostatic attempt to counteract hyperexcitability in epileptogenic lesions by enhancing GABA-mediated currents. However, this up-regulation was insufficient in TSC, supporting that the extent of the IL-10 increase and cognate signaling activation determines the functional consequences on neurotransmission in epileptiform lesions. In support, IL-10-mediated GABA current potentiation was absent in control tissue, where the cytokine and its receptor were undetectable. This evidence bears relevance for therapeutic interventions aimed at boosting IL-10R activation with stable IL-10 analogs or brain-penetrant mimetic drugs 37. A potential limitation of this study is the use of post-mortem brain tissue as control for surgically resected specimens from GG and TSC. However, comparison of the transcriptional profiles has been performed, demonstrating minimal variations between the two tissue types when high quality RNA was used as input 38. Moreover, we previously showed that surgical control tissue shows a pattern of immunoreactivity for inflammatory markers very similar to autoptic tissue, thus indicating antigen preservation in control autopsies 25,39. In accordance, post-mortem brain material is routinely used in transcriptome and immunohistochemical studies. We used the microtransplantation approach since it allows the measurement of GABA currents that are otherwise difficult to record using human brain slices, due to the rarity and tissue damage of surgical specimens. Indeed, to our knowledge there are no studies on ex-vivo brain slices in human GG, and this is not surprising considering that GG are rare primary brain tumours with a challenging diagnosis that requires integrated genotype-phenotype analysis, thus limiting the availability of representative tissue slices for electrophysiological recordings 1,40. In addition, there is only one animal model of GG carrying the BRAFV600E mutation in which electrophysiological studies on acute brain slices were performed, although modulation of neuronal activity by cytokines has not been evaluated yet 3,8.
One limitation of our approach is that we microtransplanted a mixture of glial and neuronal membranes 25; therefore, we cannot distinguish whether the IL-10 effect is mediated by glial or neuronal IL-10 receptors. However, the microtransplantation technique bypasses the biosynthetic machinery of the host cell, allowing the incorporation of native receptors and associated signaling that maintain their functional properties 25. Furthermore, this approach permits the use of minute amounts of control tissue from individuals without neurological diseases, which is highly relevant when studying neuroinflammatory mediators.
We found that IL-10 increases GABA current amplitude in GG by activating a receptor-related kinase cascade; this effect was dose-dependent and became significant after 3 h of incubation. The involvement of IL-10-related signaling is supported by (i) the blockade of the cytokine effect using drugs interfering with the downstream kinases, and (ii) the lack of IL-10 effect on GABA current in oocytes injected with exogenous cDNAs encoding human α1β2γ2 or α4β2γ2 GABAARs, thus excluding a direct interaction between IL-10 and GABAARs. In agreement, the recruitment of the same JAK/TYK signaling due to activation of the IL-10 receptor complex (IL-10Rα and IL-10Rβ) can trigger an anti-inflammatory axis that reduces neurodegenerative phenomena 18. Direct application of IL-10 on naïve rat hippocampal slices was reported to induce a decrease of peak amplitude and frequency of mIPSCs recorded from DG neurons 42. This evidence is only apparently at variance with our results, since we measured the IL-10 enhancing effect on GABA amplitude exclusively in pathological cortical tissues, but not in control tissues, suggesting that GABAA receptor subtypes are altered by the pathology, as previously shown in epilepsy patients and animal models 43,44.

[Figure 5 legend fragment: … , TSC (n = 24) and GG (n = 80) tissues. Data are expressed as mean ± s.e.m. Inset: IGABA amplitude is expressed as percent increase above baseline (before IL-10 incubation; ranges of current amplitudes: control tissue, 3.8 to 53.7 nA; TSC, 7.7 to 73.7 nA; GG, 5.5 to 84.0 nA). ** = p < 0.01 by paired t-test. (B) Representative superimposed current traces (GABA 250 μM, white bars) of control-, TSC- and GG-injected oocytes before (black trace) and after (red trace) incubation with IL-10 (100 ng/mL for 3 h). Grey bars represent the block by 100 μM bicuculline (representative of 3 experiments for each tissue). (C) Bar-graph shows the effect of incubation of K252a (2 μM, a broad-spectrum protein kinases inhibitor) or baricitinib (Bar 0.5 μM, a selective JAK1 and JAK2 inhibitor) with IL-10 (100 ng/ml). Black bar-graph represents the mean current value (nA) before incubation with IL-10 alone (red, n = 18) or in combination with the two blockers (blue, n = 8 for each blocker). ** = p < 0.01 by paired t-test. (D) Dose-response curves of GABA (1 μM-1 mM) before (black curve) and after (red curve) incubation with IL-10 (100 ng/ml for 3 h) in oocytes microinjected with GG tissues (Patients # 8-10, Table 1). Averaged EC50 were 107.0 ± 9.7 μM, nH = 1.4 ± 0.10 before IL-10 and 67.0 ± 3.79 μM, nH = 1.77 ± 0.16; n = 16; statistics for the dose-response experiments: p < 0.05 by paired t-test.]
Notably, the shift of GABA EC 50 induced by IL-10, with no changes in reversal potential or current decay, indicates that GABA current potentiation in GG is a consequence of increased receptor affinity for GABA.
We hypothesize that IL-10 could act on tonic GABAergic inhibition which is characterized by high affinity for GABA and the activation of specific GABA A R subunits 29,45 . Specifically, it is likely that IL-10 could modulate the function of α4 containing GABA A -Rs by acting on subunit phosphorylation or receptor trafficking mechanisms 46 . Notably, in line with this hypothesis, here we blocked the IL-10 effect by using a broad spectrum kinase inhibitor. A similar role of phosphorylation was previously described for the GABA current potentiation induced by BDNF or levetiracetam in TLE patients both in oocytes and human slices 44,47,48 . In addition, we described the block of IL-10 effect with a specific inhibitor of JAK1-2 that, together with the reported lack of effect on cDNAs injected oocytes, further supports that the IL-10 signaling machinery is transplanted in the host cells.
Although our approach 25 does not allow us to determine whether the IL-10 effect is mediated by neuronal or astrocytic receptors, this aspect needs to be elucidated, since enhancement of tonic extrasynaptic GABAAR currents may reduce seizure susceptibility 45,49.
Previous studies showed an altered chloride homeostasis in peritumoral tissue of low-grade gliomas resulting in depolarizing GABA actions which may contribute to hyperexcitability induced by tumors 50,51 . Since we did not find any chloride alteration in GG tissues, the enhanced GABA current induced by IL-10 is likely to result in anti-ictogenic effects.
In oocytes transplanted with human TLE membranes, IL-1β reduced GABA A R-mediated currents through activation of its signaling pathway 26 . Our data show that IL-1β has a similar effect in GG and TSC and this effect was blocked by the specific receptor antagonist IL-1Ra confirming that it was mediated by the activation of the IL-1β receptor and associated molecular cascade as previously reported 23 . IL-1β is a key component of the neuroinflammatory milieu in epileptogenic tissue 10 , and its ability to promote neuronal NMDAR-dependent Ca 2+ influx and decrease GABA current amplitude likely mediates its ictogenic properties. We provide novel evidence that IL-1β is induced in large excess compared to IL-1Ra in GG supporting the inefficient control of this proinflammatory signal and its pathological consequences 10 .
The contribution of cytokines to neuroinflammation in epilepsy is complex, since various pro- and anti-inflammatory molecules are secreted, and they are often endowed with opposite effects on synaptic transmission and neuronal excitability 13,52,53. In line with this scenario, our functional data show that IL-1β prevents the enhancing effect of IL-10 on GABAergic transmission while retaining its ability to decrease GABA currents, thus supporting the failure of anti-inflammatory cytokines to efficiently control neuroinflammation and the consequent hyperexcitability leading to seizures. This hypothesis is also supported by IL-10 serum levels being comparatively lower, and IFN-γ levels higher, in patients with drug-resistant versus drug-responsive epilepsy 10,21,54.
Our results reinforce the link between cytokine-mediated neuroinflammation and altered neurotransmission in drug-resistant human epilepsies. In particular, our data suggest that boosting key anti-inflammatory endogenous molecules may represent a novel therapeutic strategy for controlling drug-resistant seizures as also suggested for children with febrile seizures 55 . In support, recent evidence shows that also the administration of anakinra 56,57 , the human recombinant IL-1Ra, by increasing the level of endogenous IL-1Ra provides significant therapeutic benefits in drug-resistant patients affected with febrile infection-related epilepsy syndrome.
Conclusions
This study provides fresh evidence that the anti-inflammatory mediator IL-10 affects GABA currents in epileptogenic human tissue, thus bearing implications for novel strategies to increase inhibitory neurotransmission in drug-resistant epilepsy. Since IL-1β abolished the effect of IL-10 on GABA currents, this supports the view that the resolution mechanisms of the pathogenic neuroinflammatory response may fail in epilepsy, thus allowing the ictogenic effects of the concurrent inflammatory molecules to prevail.
Our data provide therapeutic insights for inhibiting hyperexcitability underlying seizures by boosting endogenous anti-inflammatory homeostatic mechanisms with drugs that mimic key anti-inflammatory molecules.
Methods
Patients. The cases included in this study were obtained from the archives of the Departments of Neuropathology of the Amsterdam UMC (Amsterdam, the Netherlands) and the University Medical Center Utrecht (UMCU, Utrecht, the Netherlands). Cortical brain samples were obtained from patients undergoing surgery for drug-resistant epilepsy and diagnosed with GG or TSC (cortical tubers). All cases were reviewed independently by two neuropathologists, and the diagnosis of GG was confirmed according to the revised WHO classification of tumors of the central nervous system 58. All patients with cortical tubers fulfilled the diagnostic criteria for TSC 59. The predominant seizure types observed were focal seizures with/without impaired awareness, and all patients were resistant to maximal doses of different anti-seizure medications (ASMs) (Table 1 and Supplementary Information). All the patients included in this study had a post-surgical outcome in Engel's class I or II. Epilepsy duration was calculated as the interval in years from the age at seizure onset to the age at tissue sampling. After resection, the tissue was immediately snap-frozen in liquid nitrogen, and part of the samples was used to perform the electrophysiology experiments. Control autopsy cases had no known history of epilepsy, a normal cortical structure for the corresponding age, and no significant brain pathology. All autopsies were performed within 16-48 h after death with the acquisition of appropriate written consent for brain autopsy and subsequent use for research purposes. As pathologies in young patients are investigated, surgically resected control tissue was not available due to technical and ethical issues. The transcriptional profiles of post-mortem and surgically resected tissues have previously been compared to take into account potential post-mortem effects on RNA expression, showing minimal differences if the tissue is of high quality (i.e., extracted, handled and stored as in our study) 38. Additional details can be found in Supplementary information. The brain specimens used for electrophysiological and immunohistochemical analyses are identified in the text by patient number ("#") (Table 1). Due to the limited tissue availability of these rare human specimens, we used the frozen samples to completion for both transcriptomic analysis and electrophysiology, thus preventing additional measurements (e.g., western blot) from being performed. For electrophysiological experiments, perituberal tissue was available from two TSC patients only, and the amount of tissue was insufficient for recording reliable GABA current amplitudes. Formalin-Fixed Paraffin-Embedded (FFPE) material was used for diagnostic pathology and immunohistochemistry. Control cortical tissue was obtained from two females (age 7 yrs, intestinal ischemia; 39 yrs, respiratory failure) and one male (age 31 yrs; respiratory failure). Patients and their controls used for electrophysiological and immunohistochemical analyses (Table 1)

Membrane preparation. Tissues were immediately processed upon receipt in the laboratory or stored at −80 °C until use. Human membrane preparation and injection into Xenopus laevis oocytes were carried out as previously described 60,61.

Injection and voltage-clamp recordings. Experiments with microtransplanted oocytes were carried out 24-48 h after cytoplasmic injection 60 (patients are reported in Table 1).
GABA-evoked currents were recorded with the two-electrode voltage-clamp technique as previously reported 60 , after the oocytes were placed in a recording chamber (0.1 ml volume) and continuously perfused with oocyte Ringer solution (OR: NaCl 82.5 mM; KCl 2.5 mM; CaCl 2 2.5 mM; MgCl 2 1 mM; HEPES 5 mM, adjusted to pH 7.4 with NaOH) at room temperature (20-22 °C). These GABA currents were blocked by bicuculline (100 μM) as previously reported 62 , indicating that we recorded genuine GABA A -evoked responses 23 .
[Figure legend retained from the source layout: patients are reported in Table 1. Data are expressed as a % variation of the mean current amplitude after incubation with each cytokine singly or in combination. Mean current variation was + 31.0 ± 2.6% after IL-10 incubation (100 ng/ml, n = 10), − 19.6 ± 3.15% after incubation with IL-1β (25 ng/ml, n = 10) and − 15.6 ± 3.5% after co-incubation with IL-10 + IL-1β (n = 10), as described in the text. ** p < 0.01 by paired t-test.]
In one set of experiments, we used oocytes expressing human α1β2γ2 or α4β2γ2 GABA A Rs after intranuclear injection of cDNAs encoding the human α1 or α4, β2 and γ2 GABA A R subunits 63 . cDNAs were kindly provided by Dr. K. Wafford and were used at a ratio of 1:1:1.
Unless otherwise specified, 50 μM GABA (plateau dose-response concentration) was used in the experiments with cDNA injected oocytes and 250 μM GABA (plateau dose-response concentration) with microtransplanted oocytes 63 . The stability of the evoked currents (I GABA ) was ascertained by performing two consecutive GABA applications, separated by a 4 min washout. Only the cells that had a < 5% variation of current amplitude were used to test the effect of IL-10 and IL-1β. In some experiments we applied bicuculline (100 μM, 30 s of incubation), a competitive antagonist of GABA A Rs, to confirm that we recorded genuine GABA-evoked responses as previously shown 23 .
When constructing dose-response relationships (before and after IL-10 incubation), we used GABA concentrations ranging from 1 μM to 1 mM, as previously reported 60 . GABA pulses were applied every 4 min to avoid receptor desensitization; to determine the half-maximal effective concentration (EC 50 ), data were fitted to the Hill equation using least-squares routines, as previously described 60 .
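For illustration, the sketch below shows how such a Hill fit could be performed; the concentrations, currents, and initial guesses are hypothetical placeholders, not data from this study.

```python
# A minimal sketch of the dose-response fitting described above, assuming
# GABA concentrations in uM and peak currents in nA (illustrative values).
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, i_max, ec50, n_h):
    """Hill equation: GABA current as a function of agonist concentration."""
    return i_max * conc**n_h / (ec50**n_h + conc**n_h)

conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)    # uM GABA
i_gaba = np.array([2, 9, 45, 160, 410, 560, 600], dtype=float)  # nA (illustrative)

# Least-squares fit; initial guesses: Imax ~ max current, EC50 ~ 100 uM, nH ~ 1.5
popt, _ = curve_fit(hill, conc, i_gaba, p0=[i_gaba.max(), 100.0, 1.5])
i_max, ec50, n_h = popt
print(f"Imax = {i_max:.1f} nA, EC50 = {ec50:.1f} uM, nH = {n_h:.2f}")
```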
GABA current reversal potential (E GABA ) was calculated by constructing current-voltage (I-V) relationships that were fitted by linear regression (SigmaPlot 12, Systat Software Inc.). GABA current decay time (T 0.5 ) was measured as the time taken for the current to decay from its peak to the half-peak value after applying 250 μM GABA for 60 s 64 .
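A minimal sketch of these two measurements, assuming hypothetical holding potentials, currents, and sampling times rather than recordings from this study:

```python
# E_GABA from a linear I-V fit, and T0.5 from a long GABA application.
import numpy as np
from scipy.stats import linregress

v_hold = np.array([-60.0, -40.0, -20.0, 0.0, 20.0])      # mV holding potentials
i_gaba = np.array([-310.0, -195.0, -90.0, 15.0, 120.0])  # nA peak GABA currents

fit = linregress(v_hold, i_gaba)
e_gaba = -fit.intercept / fit.slope                      # potential where I = 0
print(f"E_GABA = {e_gaba:.1f} mV (r^2 = {fit.rvalue**2:.3f})")

def decay_t05(t, i):
    """T0.5: time from the current peak to the half-peak value.
    Returns 0 if the current never decays below half-peak (sketch behavior)."""
    i = np.abs(np.asarray(i, dtype=float))
    k_peak = int(i.argmax())
    k_half = k_peak + int(np.argmax(i[k_peak:] <= i[k_peak] / 2))
    return t[k_half] - t[k_peak]
```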
Cytokines were diluted to the final concentration (specified for each experiment) in Barth's modified saline solution (88 mM NaCl; 1 mM KCl; 2.4 mM NaHCO 3 ; 10 mM HEPES; 0.82 mM MgSO 4 ; 0.33 mM Ca(NO 3 ) 2 ; 0.41 mM CaCl 2 ). IL-10 was purchased from Immunotools GmbH (Friesoythe, Germany), IL-1β was purchased from Peprotech (London, UK) and recombinant human IL-1Ra from Invitrogen (Waltham, MA, USA). Salts were purchased from Sigma-Aldrich (USA), while GABA and bicuculline methochloride were purchased from Tocris Bioscience (Bristol, UK) and dissolved in sterile water before dilution to the final concentration in OR.
In some experiments, we used K252a (Sigma; 2 μM), a potent non-specific inhibitor of protein kinases such as PKA, PKC, PKG and Trk receptors, and baricitinib (Selleckchem; 0.5 μM), a selective JAK1 and JAK2 inhibitor. Oocytes were incubated for 30 min with the inhibitor alone, followed by co-incubation with the cytokine for 3 h.
Immunohistochemistry. Immunohistochemistry (patients are reported in Table 1) was carried out as previously described 65 . The primary antibody against IL-10 receptor alpha (IL-10Rα, rabbit polyclonal, Genetex, Irvine, CA, USA, 1:150) was incubated at room temperature for 1 h for single labelling. Phosphorylated ribosomal protein S6 (pS6 Ser235/236, polyclonal rabbit, Cell Signaling; 1:200) was used as a marker of mTOR pathway activity for double labelling with IL-10Rα.
RNA-Seq library preparation, sequencing and bioinformatics analysis. All library preparation, sequencing and bioinformatic analyses, including differential expression analysis, were carried out as previously described 65 (patients are reported in the Supplementary Information). Differential expression analysis compared 21 TSC patients with 15 age-matched control cortices, and 37 GG patients with 15 age-matched control cortices. The relationship between the expression level of differentially expressed RNAs and subject's age was assessed using Spearman's rank correlation. A correlation coefficient > 0.7 or < − 0.7 (with adjusted p value < 0.05) was considered indicative of a meaningful relationship between the two variables. As no significant correlation was found between gene expression levels and subject's age, no correction for age was applied.
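A hedged sketch of this age-correlation screen is given below; the expression matrix, ages, and the FDR adjustment method are assumptions for illustration (the text does not specify how p values were adjusted).

```python
# Per-gene Spearman correlation between expression and age, with multiple-testing
# adjustment; flag |rho| > 0.7 and adjusted p < 0.05 as meaningful.
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
expr = rng.normal(size=(100, 36))      # 100 genes x 36 subjects (illustrative)
age = rng.uniform(1, 40, size=36)      # subject ages in years (illustrative)

rhos, pvals = zip(*(spearmanr(expr[g], age) for g in range(expr.shape[0])))
p_adj = multipletests(pvals, method="fdr_bh")[1]   # adjustment method assumed

flagged = [(g, r) for g, (r, p) in enumerate(zip(rhos, p_adj))
           if p < 0.05 and abs(r) > 0.7]
print(f"{len(flagged)} genes meet both criteria")
```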
Statistics.
Before data analysis, normality was assessed with the Shapiro-Wilk test to inform the choice between parametric (Student's t-test) and non-parametric (Wilcoxon signed-rank test, Mann-Whitney rank-sum test) tests. Statistical analysis of the data was performed with SigmaPlot 12 software, and differences between two data sets were considered significant when p < 0.05. The (n) indicates the number of oocytes used in each experiment.
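A minimal sketch of this test-selection logic for paired measurements follows; the data are illustrative placeholders, not recordings from this study.

```python
# Shapiro-Wilk normality check on paired differences, then a paired t-test
# (parametric) or Wilcoxon signed-rank test (non-parametric).
import numpy as np
from scipy.stats import shapiro, ttest_rel, wilcoxon

def compare_paired(before, after, alpha=0.05):
    diffs = np.asarray(after) - np.asarray(before)
    w, p_norm = shapiro(diffs)
    if p_norm > alpha:                     # differences look normal
        stat, p = ttest_rel(before, after)
        return "paired t-test", stat, p
    stat, p = wilcoxon(before, after)
    return "Wilcoxon signed-rank", stat, p

rng = np.random.default_rng(1)
ctrl = rng.normal(100, 10, size=10)        # e.g. baseline I_GABA (nA), illustrative
il10 = ctrl * 1.31 + rng.normal(0, 5, 10)  # ~+31% after IL-10 (illustrative)
print(compare_paired(ctrl, il10))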
Ethics approval. Human brain tissue was obtained and used in accordance with the Declaration of Helsinki and the Amsterdam UMC Research Code provided by the Medical Ethics Committee. All the samples were used upon acquisition of appropriate written consent for research purposes. The use of Xenopus laevis frogs and the surgical procedures for oocyte extraction and use conformed to the Italian Ministry of Health guidelines (authorization no 427/2020-PR), and were approved by the Local Committee for Animal Health (OPBA, Department of Physiology and Pharmacology, Sapienza University). All the animal procedures followed the recommendations of the ARRIVE guidelines.
Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. | 2022-10-26T14:15:08.981Z | 2022-10-26T00:00:00.000 | {
"year": 2022,
"sha1": "967e1df5b76ca40215a8771d6426eaa3435a0d30",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "967e1df5b76ca40215a8771d6426eaa3435a0d30",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236320937 | pes2o/s2orc | v3-fos-license | Comparison of Functional Connectivity in the Prefrontal Cortex during a Simple and an Emotional Go/No-Go Task in Female versus Male Groups: An fNIRS Study
Inhibitory control is a cognitive process to suppress prepotent behavioral responses to stimuli. This study aimed to investigate prefrontal functional connectivity during a behavioral inhibition task and its correlation with the subject’s performance. Additionally, we identified connections that are specific to the Go/No-Go task. The experiment was performed on 42 normal, healthy adults who underwent a vanilla baseline and a simple and an emotional Go/No-Go task. Cerebral hemodynamic responses were measured in the prefrontal cortex using a 16-channel near infrared spectroscopy (NIRS) device. Functional connectivity was calculated from NIRS signals and correlated to the Go/No-Go performance. Strong connectivity was found in both tasks in the right hemisphere, inter-hemispherically, and in the left medial prefrontal cortex. Better performance (fewer errors, faster response) is associated with stronger prefrontal connectivity during the simple Go/No-Go in both sexes and during the emotional Go/No-Go in males. However, females express lower emotional Go/No-Go connectivity while performing better on the task. This study reports a complete prefrontal network during a simple and an emotional Go/No-Go and its correlation with the subject’s performance in females and males. The results can be applied to examine behavioral inhibitory control deficits in populations with neurodevelopmental disorders.
Introduction
Inhibitory control deficits are seen across a variety of conditions, including neurodevelopmental disorders, such as attention deficit hyperactivity disorder (ADHD) [1], as well as in the aging process [2]. Due to its impact across developmental stages, understanding the neural mechanisms of behavioral inhibition in typically developing populations is a necessary step in developing assessments and interventions for this ability. Relatedly, this developmental range and the disorders affected by behavioral inhibition deficits make the use of an accessible, easy-to-tolerate (e.g., robust to movement, easy to transport) technology necessary to ensure it can be successfully utilized across these populations. Simple laboratory tasks of behavioral inhibition, such as the Go/No-Go (GNG) task [3], paired with accessible neuroimaging technologies could be immensely helpful in identifying a mechanism behind these behaviors that can be targeted therapeutically in populations with deficits in this area. However, steps must first be taken to verify this approach in populations with typical neurodevelopment to determine its utility.
The GNG task is designed to measure the motor inhibitory response [3] and has been widely used to assess inhibition in neuroimaging studies. During the task, a series of "Go" and "No-Go" stimuli are presented to a subject, who is required to respond to a "Go" stimulus, but not to a "No-Go" stimulus. Repeated presentations of the "Go" stimulus create a prepotent urge to respond during the trials, making inhibition of this prepotent response during "No-Go" stimuli challenging. Previous research using functional magnetic resonance imaging (fMRI) has shown that the GNG task evokes brain activation in the prefrontal cortex, which suggests the critical role of this brain region in controlling response inhibition [4]. However, the GNG task can take on a variety of forms, which use different stimuli in the same general paradigm to interrogate behavioral inhibition ability.
Reviews of fMRI studies investigating behavioral inhibition using the GNG task show that frontal areas such as the pre-supplementary motor area, insula, and medial prefrontal cortex are activated during these tasks [5,6]. However, it is evident that as task parameters change, the areas of activation also differ. Activation likelihood estimation (ALE) meta-analyses further investigating how the varied parameters of GNG tasks affect activation found that alterations in the task complexity or "Go" to "No-Go" trial ratio, the number of "No-Go" stimuli and working memory load (e.g., more complex stimuli, more than one "Go" or "No-Go" target) changed which areas were activated [6,7]. Specifically, the authors note that certain areas, such as the right dorsolateral prefrontal, inferior parietal circuits [6], and the pre-supplementary motor area [7], may not play a direct role in inhibition because activation in these areas appears to be attributed to increased working memory load [7]. Moreover, an ALE meta-analysis by Gavazzi et al. revealed different networks for different types of inhibitory phases, with the right inferior frontal gyrus associated with proactive inhibition and the right middle frontal gyrus corresponding to reactive inhibition [8]. Additional neuroimaging studies using the GNG in both its simple form and in more complex versions of the task are warranted to better evaluate the potential differences between such forms of the task.
One variation of the simple GNG task is the emotional GNG [9]. The emotional GNG uses faces with neutral or emotional expressions as the stimuli in a GNG framework, requiring input from both the medial prefrontal cortex associated with a simple GNG task [5,6] as well as the ventral prefrontal cortex involved in emotional processing [10]. The emotional GNG task is shown to be more challenging than a nonemotional GNG, evidenced by more errors and slower responses than the nonemotional version. However, scores on the emotional GNG are correlated with nonemotional GNG performance, indicating that it still appropriately measures behavior inhibition [11]. The emotional GNG was of particular interest due to the relationship between behavior inhibition and emotion regulation, shown by differences in behavior inhibition elicited by altering emotional context [11] as well as their shared neural architecture [12]. The interplay of these processes makes the emotional GNG particularly interesting, as the use of emotional stimuli may modulate behavioral inhibition ability while adding complexity to the simple GNG paradigm. This is particularly compelling when disorders associated with behavioral inhibition deficits are examined, such as ADHD, as they also show emotion regulation challenges [13,14]. In addition, there exists evidence suggesting that emotional GNG performance may relate to estrogen variation, specifically activation in the dorsolateral prefrontal cortex while inhibiting response to positive stimuli was positively correlated with luteal phase estradiol, and it was significantly increased during the luteal (high estrogen), compared to the follicular (low estrogen) phase [15]. Furthermore, females were reported to be better at emotion recognition tasks than males [16,17]. Based on these critiques and evidence, the present study used multiple versions of the GNG task to investigate differences in connectivity between a simple and complex task, and sex was included as a covariate in the analyses to control for its potential effect on emotional GNG performance.
Additionally, the majority of studies on the neural associations of behavioral inhibition have focused on measuring areas of activation, not network-level dynamics. Cerebral functional connectivity is a measure of the temporal correlation between two separate brain regions. When there exists a statistical dependence between time series of data recorded in two different regions, these regions are considered to have functional connectivity. Previous studies have largely employed simple GNG paradigms to investigate connectivity, with evidence that greater connectivity between prefrontal areas is associated with better GNG performance [18,19]. Further, studies using fMRI and diffusion tensor imaging have demonstrated structural and functional connectivity impairments in disorders characterized by inhibitory deficits such as ADHD [20], indicating that connectivity may play a critical role in inhibition. Further exploration of these network-level dynamics in the prefrontal areas is warranted to better understand the processes underlying behavioral response inhibition across contexts.
Functional near infrared spectroscopy (fNIRS) is an optical technique that indirectly monitors brain activity through cerebral hemodynamic changes. In addition to being low cost, invulnerable to motion artifact, and highly portable in comparison to fMRI, fNIRS has an important advantage of having higher temporal resolution, which is crucial to characterizing the shape and change in the hemodynamic responses. For this reason, when a large dataset is required to obtain reliable results relating to hemodynamic activation across brain regions in the computation of the functional connectivity, fNIRS is preferred over fMRI.
Although activation of the prefrontal cortex during a GNG task has been thoroughly investigated, not many studies have focused on cerebral functional connectivity during such tasks. In this research, the prefrontal cortex is selected as a targeted region because of its association with the GNG task [5,6] and its crucial role in emotion processing [21]. The present study used fNIRS to (1) examine prefrontal connectivity during a simple and an emotional GNG task, (2) analyze the sex-based correlation between connectivity and subject's performance, and (3) identify connections specific to a simple and an emotional GNG task in females and males. We hypothesized that both female and male groups would present a positive prefrontal connectivity-performance relation during a simple GNG task, but this positive correlation may vary depending on sex group during the emotional GNG task.
Participants and Experimental Protocol
The experimental protocol was approved by the National Institute of Child Health and Human Development's Institutional Review Board (10CH0198). Parts of the data from this protocol were previously published in a multimodal study examining prefrontal function in relation to measures of autonomic activity [22]. This study included 42 healthy subjects (20 males; age 37.2 (±14.7) years). Before the experiment, all subjects were required to complete a health history questionnaire and sign an informed consent letter. Subjects with a history of cardiovascular disease or skin disease were excluded. During the experiment, the participant was seated comfortably in a chair and was asked to follow the instructions on a monitor in front of them.
The experimental protocol consisted of three conditions: a "vanilla" baseline [23] (6.5 min), a simple GNG task (6.5 min, Figure 1a), and an emotional GNG task (6.5 min). During the baseline, participants watched a neutral video clip (Coral Sea Dreaming: Plankton Productions and MJL Network, 2014), which helped maintain minimal engagement. After the vanilla baseline, the simple and emotional GNG tasks were displayed in a random order across participants. GNG tasks consisted of 192 trials with 144 Go and 48 No-Go trials. Each trial was 500 ms long, followed by a 1500 ± 250 ms interstimulus interval. The subject was required to press a <SPACE> bar when seeing a Go stimulus and to not press any buttons when seeing a No-Go stimulus. Letters (Y: Go; X: No-Go) were presented during the simple GNG, and emotional faces (neutral: Go; happy: No-Go (24); angry: No-Go (24)) were presented during the emotional GNG task. Each subject practiced six trials before each task.
Figure 1. (a) Experimental paradigm during a simple GNG task; letter Y: Go stimulus, letter X: No-Go stimulus, + sign: inter-stimulus rest. Letter Y was replaced by a photo of a neutral face and letter X by a photo of a happy or angry face during the emotional GNG task; (b) location of the 16 fNIRS channels on a brain model.
Omission error, commission error, and response time were recorded and regarded as subject's performance. An omission error is an error committed by the participant when she/he did not press a <SPACE> bar in a Go trial. Commission errors are counted when the participant pressed the <SPACE> bar in a No-Go trial. Response time is the time interval from the letter/face that was displayed on the screen until the subject pressed the <SPACE> bar.
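A small sketch of how these performance metrics could be computed from a per-trial log is shown below; the log structure (trial type, whether <SPACE> was pressed, response time) is a hypothetical stand-in for the actual task software output.

```python
# Omission errors (no press on Go), commission errors (press on No-Go),
# and mean response time on Go trials.
def gng_performance(trials):
    go = [t for t in trials if t["type"] == "go"]
    nogo = [t for t in trials if t["type"] == "nogo"]
    omission = sum(not t["pressed"] for t in go)       # missed Go trials
    commission = sum(t["pressed"] for t in nogo)       # responses on No-Go trials
    rts = [t["rt_ms"] for t in go if t["pressed"]]
    mean_rt = sum(rts) / len(rts) if rts else None
    return {"omission": omission, "commission": commission, "mean_rt_ms": mean_rt}

trials = [{"type": "go", "pressed": True, "rt_ms": 412},
          {"type": "go", "pressed": False, "rt_ms": None},
          {"type": "nogo", "pressed": True, "rt_ms": 365}]
print(gng_performance(trials))  # {'omission': 1, 'commission': 1, 'mean_rt_ms': 412.0}
```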
Data Recording
Cerebral hemodynamic changes were measured in the subject's prefrontal cortex using an fNIRS device (fNIR Devices LLC, New Orleans, LA, USA). The device consists of 4 LEDs (light emitting diode) emitting near infrared light at 730 nm and 850 nm and 10 light detectors, which form 16 channels. The distance between a LED and a detector is 2.5 cm. All LEDs and detectors were embedded in a flexible head band. Before experiments, participants' head size was measured, the forehead was cleaned, an fNIRS probe was placed on the subject's forehead centered at Fpz, and the signal quality was examined. The projection of the 16 fNIRS channels on a brain model is shown in Figure 1b. The fNIRS signal was recorded at 2 Hz sampling rate through COBI Studio software (fNIR Devices LLC, Potomac, MD, USA).
Data Processing
The recorded optical intensity was converted into hemodynamic response changes, including oxy- (HbO) and deoxy- (HbR) hemoglobin, using the Beer-Lambert law with a differential pathlength factor assumed to be 6. Converted data were then band-pass filtered (0.01-0.5 Hz) and denoised. Principal component analysis (PCA) was applied to remove superficial and systemic physiological signals. Studies on resting-state functional connectivity have often band-pass-filtered NIRS signals in the range of 0.01-0.1 Hz (or 0.01-0.08 Hz) to acquire the cerebral spontaneous hemodynamic change [24]. However, due to the nature of the stimulation used in this study (~2 s each trial), the fNIRS signal was filtered in the range of 0.01-0.5 Hz to retain possible fast brain responses to the stimulus. Our previously published work has shown that systemic physiological signals in the range of 0.1-0.5 Hz, such as the Mayer wave and respiratory rhythm, were effectively removed by the application of PCA [25]. The preprocessed fNIRS signal was split into three datasets (baseline, simple, and emotional GNG). The Pearson correlation coefficient was then calculated from HbO data between every pair of fNIRS channels to generate a symmetric 16×16 correlation matrix per subject per condition. Finally, the correlation coefficient was converted to a z-value using the Fisher transformation [24,26] to be used as the functional connectivity value. A total of 120 connections were considered.
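A hedged sketch of the core of this pipeline follows; the Beer-Lambert conversion and PCA denoising are omitted, and the HbO array is a hypothetical placeholder.

```python
# Band-pass filter HbO time series (0.01-0.5 Hz at 2 Hz sampling), compute the
# 16x16 Pearson correlation matrix, and apply the Fisher z-transform.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 2.0                                     # Hz, fNIRS sampling rate
sos = butter(3, [0.01, 0.5], btype="bandpass", fs=FS, output="sos")

rng = np.random.default_rng(2)
hbo = rng.normal(size=(16, 780))             # 16 channels x 6.5 min at 2 Hz
hbo_f = sosfiltfilt(sos, hbo, axis=1)        # zero-phase band-pass filtering

r = np.corrcoef(hbo_f)                       # symmetric 16x16 Pearson matrix
np.fill_diagonal(r, 0.0)                     # ignore self-connections
z = np.arctanh(r)                            # Fisher transform -> connectivity

iu = np.triu_indices(16, k=1)
print(f"{z[iu].size} unique connections")    # 120, as stated in the text
```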
A traditional approach to identify connections that are specific to a task is to compare the task and the baseline connectivity using a statistical test (i.e., a t-test). A connection is selected when the statistical test results in a significant difference (e.g., the task connectivity is significantly greater than the baseline connectivity). However, this approach may lead to incorrect conclusions, especially in the prefrontal cortex, a part of the default mode network, which is active at rest [27]. Here, we suggest a new method to identify task-specific connections by correlating the baseline connectivity with the GNG task connectivity. A high, positive correlation coefficient indicates that a connection with high baseline connectivity also has high task connectivity and vice versa. In other words, that connection is activated/deactivated both at baseline and during the task (not a task-specific connection). On the other hand, a low correlation coefficient indicates a dissociation between the baseline and task connectivity. As a result, a connection with a low correlation coefficient may be a connection that is specific to the task.
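A minimal sketch of this task-specificity criterion follows; the subjects × connections arrays are hypothetical inputs standing in for the Fisher z values computed above.

```python
# For each connection, correlate baseline and task connectivity across subjects;
# connections whose correlation is NOT significant (p > 0.05) are treated as
# candidate task-specific connections.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
base_z = rng.normal(0.4, 0.2, size=(42, 120))               # baseline connectivity
task_z = 0.8 * base_z + rng.normal(0, 0.1, size=(42, 120))  # task connectivity

task_specific = []
for c in range(120):
    r, p = pearsonr(base_z[:, c], task_z[:, c])
    if p > 0.05:                          # dissociated from baseline
        task_specific.append(c)
print(f"{len(task_specific)} connections dissociate from baseline")
```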
Statistical Test
To assess differences in GNG performance, a series of two-tailed, paired-samples t-tests was conducted comparing the simple GNG to the emotional GNG within each sex group, and two-tailed, independent-samples t-tests were conducted between sex groups, using omission errors, commission errors, and reaction time as the dependent variables. A statistical test was considered significant when the p-value was less than or equal to 0.05.
A 3-way repeated measures analysis of variance (ANOVA) and a series of post-hoc Bonferroni tests were performed to compare the connectivity of each connection across subjects between the baseline and the tasks for each sex group. The within-subject factors in the ANOVA were the functional connectivity values during the baseline, the simple GNG task, and the emotional GNG task. Bonferroni correction was applied to the ANOVA and post-hoc analyses to control for type I error. In addition, the correlation coefficient (r) between the functional connectivity of each connection (the Fisher z value described above) and the subjects' performance (omission errors, commission errors, and reaction time) was calculated across subjects. As with the statistical tests, a correlation was considered significant when the p-value was less than or equal to 0.05.
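The sketch below illustrates the multiple-testing arithmetic and the per-connection correlation analysis; the connectivity and error arrays are hypothetical placeholders.

```python
# Bonferroni-corrected threshold for 120 connection-wise comparisons, and
# per-connection connectivity-performance correlations at uncorrected p <= 0.05.
import numpy as np
from scipy.stats import pearsonr

bonf_120 = 0.05 / 120     # ~0.0004, the corrected critical p-value reported below

rng = np.random.default_rng(4)
conn = rng.normal(size=(42, 120))                 # subjects x connections (Fisher z)
errors = rng.integers(0, 20, size=42).astype(float)  # e.g. commission errors

sig = []
for c in range(conn.shape[1]):
    r, p = pearsonr(conn[:, c], errors)
    if p <= 0.05:                                 # criterion used for the r maps
        sig.append((c, round(r, 2)))
print(f"{len(sig)} connections correlate with performance at p <= 0.05")
```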
Performance
The subjects' performance, including omission errors, commission errors, and response time during the simple and emotional GNG tasks, was compared between sexes (Table 1). No statistically significant difference in performance was observed between sexes across any of these metrics (Table 1). Within-sex comparisons showed that females made significantly more commission errors than omission errors in the simple GNG task (t = 4.2, p-value = 0.00013), whereas in the emotional GNG task their omission errors were significantly greater than their commission errors (p-value = 0.04). Both sexes performed significantly better in the simple GNG task than in the emotional GNG task (fewer errors and faster response times). Figure 2 shows the prefrontal connectivity during the vanilla baseline, the simple GNG task, and the emotional GNG task. Strong connectivity (z-value > 0.5, red edges in Figure 2) was observed in 13 right prefrontal connections, one inter-hemispheric connection, and four left medial prefrontal connections in the baseline; 10 right prefrontal connections, one inter-hemispheric connection, and two left medial prefrontal connections in the simple GNG; and 13 right prefrontal connections, two inter-hemispheric connections, and two left medial prefrontal connections in the emotional GNG. All connections with strong connectivity in the simple GNG overlapped with those in the baseline. Similarly, 16 out of 17 strong connections during the emotional GNG overlapped with those seen during the baseline.
The repeated measures ANOVA (120 tests for 120 connections, Bonferroni-corrected critical p-value = 0.0004) comparing connectivity strength in all connections revealed no statistical difference between the three conditions for the whole group, males, or females (all p-values > 0.0004). In addition, within-condition Student's t-tests (360 tests for 120 connections and three conditions, Bonferroni-corrected critical p-value = 0.0001) showed no significant difference in connectivity strength between sexes in any of the three conditions (all p-values > 0.0001).
Correlation between Functional Connectivity and Performance
Correlations between the omission errors, commission errors, response time, and functional connectivity during the tasks were calculated to examine the relationship between subject's performance and brain connectivity. Figures 3 and 4 display brain maps of the correlations between connectivity and subject's performance during the tasks. In general, a negative correlation coefficient implies that fewer errors/shorter response time corresponds to higher connectivity (better performance → greater connectivity), while a positive correlation indicates the opposite (better performance → smaller connectivity).
During the simple GNG task, all connections with significant correlation (thick edges, Figure 3a,b,d,e) show a negative relationship between connectivity-omission error and connectivity-commission error in both sexes. The connectivity-response time correlation in the female group is negative in all except one significantly correlated connection (thick edges, Figure 3c). This means that greater simple GNG connectivity is associated with better task performance across both sexes.
The negative relationships in connectivity-omission error and connectivity-commission error are maintained in the male group during the emotional GNG (Figure 4d,e). A negative connectivity-omission error correlation is observed in all except one connection, and a negative connectivity-commission error correlation is observed in all connections with significant correlation. In contrast, a positive relationship in connectivity-omission error and connectivity-commission error appears in the female group. A positive connectivity-omission error correlation is presented in all significantly correlated connections in the female group (Figure 4a). In general, when considering the omission and commission errors, greater emotional GNG connectivity corresponds to a better task performance in the male group but a poorer task performance in the female group.
Correlation of the Baseline Connectivity with the Simple and Emotional GNG Connectivity
All connections in the male group and all except two connections in the female group have a positive correlation coefficient between the baseline connectivity and the task connectivity (data not shown). Figure 5 displays the connections that have low correlation coefficients (p-value > 0.05) between the baseline and the task connectivity, which are considered as task-specific connections. The female group recruited 11 connections during the simple GNG and nine connections during the emotional GNG, among which seven connections were common in both tasks (Figure 5a). The male group required four connections to perform the simple GNG and eight connections to perform the emotional GNG, among which one connection was common in both tasks (Figure 5b). All task-specific connections are either right hemisphere or inter-hemispheric connections.
Discussion
The high connectivity found during both GNG tasks in connections within the right hemisphere, between hemispheres, and in the left medial prefrontal cortex is in agreement with previous research [28], which emphasized the critical role of the prefrontal cortex in motor response inhibition. In addition, in line with the studies of Duann et al. [18] and Davidow et al. [19], we found an association between greater connectivity in the prefrontal cortices and better simple GNG performance. The finding of a connectivity-performance relation switch from the simple GNG to the emotional GNG in the female group but not in the male group suggests that the prefrontal functional connectivity of the male and female groups may have responded differently to a combination of emotional and inhibitory control demands. In general, since the strength of cerebral functional connectivity depends on both a subject's performance and sex, it is critical to consider these factors when comparing connectivity in different groups.
Based on the commonly used method, we found no connections that are specific to a GNG task, in that there was no significant difference in any connection between the baseline and task connectivity (ANOVA tests, Section 3.2). As aforementioned, the traditional method is not appropriate for comparing resting and task connectivity in the prefrontal cortex, since this brain region expresses a high connectivity level in both conditions (Section 3.2, Figure 2). This study suggested a new method to explore GNG task-specific connections. With this approach, we identified 13 connections in the female group and 11 connections in the male group that are specific to a GNG task. Interestingly, the female group's prefrontal network recruited more connections (11 connections) during the simple GNG task than the male group (four connections), which implies that females may require more brain resources to perform a simple GNG than males. This finding is in line with the result of Melynyte et al.'s study, where they reported that females required more neural resources for Go execution [29].
Most fMRI studies evaluating the brain circuits and areas involved in the response inhibition process have revealed multiple right prefrontal areas that are associated with this process including the right inferior frontal gyrus and right middle frontal gyrus [6][7][8]. Similar to these findings, most connections (nine connections in the female group and eight connections in the male group), which are found to be specific to the GNG task in the current study, are in the right hemisphere. The difference between the results from this study and previous fMRI studies is the involvement of the inter-hemispheric connections in the GNG task. We found four inter-hemispheric connections in the female group and three inter-hemispheric connections in the male group specific for the task. Our findings suggest that the left prefrontal cortex may be indirectly associated with the GNG task through its interaction with the right prefrontal cortex.
A limitation of this study lies in the use of the NIRS probe, which only covers the prefrontal cortex region. Currently, we can only investigate prefrontal functional connectivity, not inter-regional connectivity (e.g., frontal-motor or frontal-sensory connectivity), during the GNG task. As the behavioral inhibition task may involve the pre-supplementary motor area [7], and the emotional task may activate additional neural regions, future studies should cover other brain regions to examine the interaction between brain regions during a behavioral inhibition and emotion regulation task.
Conclusions
This study investigated sex-based functional connectivity in the prefrontal cortex during a simple and an emotional Go/No-Go task, which was then correlated with the subjects' performance. We found strong connectivity in the right hemisphere, between hemispheres, and in the left medial prefrontal cortex in all conditions. No differences in Go/No-Go performance or prefrontal connectivity were found between the male and female groups. Both sex groups had a positive correlation between prefrontal connectivity and simple GNG performance. However, although the male group had a positive correlation, the female group expressed a negative correlation between prefrontal connectivity and emotional GNG performance. Additionally, this study found that females recruited a greater number of brain connections to perform a behavioral inhibition task than males.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restriction. | 2021-07-26T05:23:17.979Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "0db30290acd243108a499476f90c8df5b04e6235",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3425/11/7/909/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0db30290acd243108a499476f90c8df5b04e6235",
"s2fieldsofstudy": [
"Psychology",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
270973466 | pes2o/s2orc | v3-fos-license | Predicting progression from subjective cognitive decline to mild cognitive impairment or dementia based on brain atrophy patterns
Background Alzheimer’s disease (AD) is a progressive neurodegenerative disorder where pathophysiological changes begin decades before the onset of clinical symptoms. Analysis of brain atrophy patterns using structural MRI and multivariate data analysis is an effective tool in identifying patients with subjective cognitive decline (SCD) at higher risk of progression to AD dementia. Atrophy patterns obtained from models trained to classify advanced AD versus normal subjects may not be optimal for subjects at an early stage, like SCD. In this study, we compared the accuracy of the SCD progression prediction using the ‘severity index’ generated using a standard classification model trained on patients with AD dementia versus a new model trained on β-amyloid (Aβ) positive patients with amnestic mild cognitive impairment (aMCI). Methods We used structural MRI data of 504 patients from the Swedish BioFINDER-1 study cohort (cognitively normal (CN), Aβ-negative = 220; SCD, Aβ positive and negative = 139; aMCI, Aβ-positive = 106; AD dementia = 39). We applied multivariate data analysis to create two predictive models trained to discriminate CN individuals from either individuals with Aβ positive aMCI or AD dementia. Models were applied to individuals with SCD to classify their atrophy patterns as either high-risk “disease-like” or low-risk “CN-like”. Clinical trajectory and model accuracy were evaluated using 8 years of longitudinal data. Results In predicting progression from SCD to MCI or dementia, the standard, dementia-based model reached 100% specificity but only 10.6% sensitivity, while the new, aMCI-based model reached 72.3% sensitivity and 60.9% specificity. The aMCI-based model was superior in predicting progression from SCD to MCI or dementia, reaching a higher receiver operating characteristic area under curve (AUC = 0.72; P = 0.037) in comparison with the dementia-based model (AUC = 0.57). Conclusion When predicting conversion from SCD to MCI or dementia using structural MRI data, prediction models based on individuals with milder levels of atrophy (i.e. aMCI) may offer superior clinical value compared to standard dementia-based models.
Keywords: Structural MRI, Subjective cognitive decline, Alzheimer's disease, Atrophy patterns, Multivariate analysis
Background
Alzheimer's disease (AD) is a progressive neurodegenerative disease and the most common cause of dementia, with an increasing prevalence worldwide [1]. The pathophysiological changes in AD begin years or even decades before the onset of clinical symptoms [2,3]. The failure of many recent drug trials suggests that future effective therapeutic strategies may require timely intervention at a preclinical stage [4][5][6]. To help with the identification of individuals with increased risk of AD, the concept of subjective cognitive decline (SCD) has been proposed [7]. Subjective complaints of cognitive decline are a standalone risk factor for the development of mild cognitive impairment (MCI) and dementia, with up to a twofold risk increase when compared to healthy individuals without complaints [8,9]. Identification of individuals suffering from SCD due to ongoing neurodegenerative processes such as AD, as opposed to SCD due to other etiology, is a task of substantial clinical importance, because individuals before the onset of clinical symptoms are the most likely to benefit from treatment when available [4][5][6].
Although the current clinical diagnostic algorithm does not recommend routine evaluation of pathophysiological biomarkers in cognitively unimpaired individuals [10], for research purposes a framework separately evaluating individual biomarkers regardless of clinical syndrome, the "ATN framework", has been established. In this framework, the "A" stands for a β-amyloid biomarker (e.g. cerebrospinal fluid [CSF] β-amyloid [Aβ] 42 peptide levels, Aβ42/40 ratio, amyloid positron emission tomography [PET]), "T" for a tau biomarker (e.g. CSF P-tau levels, tau PET), and "N" for a neurodegeneration biomarker (e.g. structural MRI, 18 F-fluorodeoxyglucose PET) [11]. In clinical practice, full evaluation of individuals with SCD may prove challenging due to the limited availability of biomarkers and ethical and economic considerations. Structural MRI, however, is a widely available, non-invasive, and safe method to assess neuronal damage.
Early stages of AD are typically characterized by a pattern of atrophy with predominant involvement of the medial temporal lobe [12]. A similar atrophy pattern has been observed in SCD individuals [13][14][15][16][17][18]. Analyzing a specific pattern of atrophy rather than individual structures has been shown to yield high predictive value [12]. We have previously used Orthogonal Projection to Latent Structures (OPLS) [19], a multivariate data analysis method, to discriminate both MCI and patients with AD from controls [12]. We used OPLS to create a "disease severity index", using multiple structural MRI measures as input, allowing us to predict progression from MCI to dementia [20,21] and from SCD to MCI or dementia [22].
When the task is to predict progression from MCI to dementia, the majority of published studies utilize models based on sets of healthy individuals and patients with AD dementia [23]. However, this approach may have limitations in predicting progression from SCD to MCI. Although some SCD individuals show modest brain atrophy [24], they are much closer to healthy individuals than to patients with AD dementia. Such models are therefore more likely to treat SCD individuals with very mild levels of atrophy incorrectly as healthy. To the best of our knowledge, the accuracy of prediction using datasets trained on individuals at different stages of the disease (e.g., MCI, AD dementia) has never been compared. We hypothesized that it may be possible to further improve the prediction accuracy of SCD models by training the models on individuals with the same pattern but milder levels of atrophy, such as MCI due to AD [25], as opposed to patients with AD dementia. Hence, (1) we used multivariate data analysis and structural MRI data to examine atrophy patterns of β-amyloid positive amnestic MCI patients or patients with AD dementia and β-amyloid negative cognitively normal (CN) individuals, and applied the resulting models to SCD individuals to classify them as CN-like or disease-like; (2) we used the resulting classification as a basis for prediction of progression from SCD to MCI using longitudinal clinical data; and (3) we compared the accuracy of prediction of the "MCI-based" models with prediction based on equally constructed models trained on AD patients with dementia.
The group of CN participants consisted of 220 β-amyloid negative elderly individuals from the BioFINDER study, who were initially recruited from the population-based Malmö Diet Cancer Study [28]. The inclusion criteria for the CN group were as follows: (1) Age ≥ 60 years; (2) Mini Mental State Examination (MMSE) score in the range of 28-30 points [29]; (3) No cognitive symptoms as assessed by a physician with expertise in cognitive disorders; (4) Participant did not fulfill the criteria for either MCI [30] or dementia [31]; (5) Was able to speak and understand Swedish at a sufficient level not to require an interpreter during the examination; and (6) Had normal CSF levels of Aβ42 (> 530 pg/ml) [32] at baseline. Exclusion criteria were: (1) Relevant unstable systemic illness or organ failure making it difficult to participate in the study (i.e. terminal cancer, etc.); (2) Relevant neurological or psychiatric illness (major depressive disorder, Parkinson's disease, stroke, etc.); (3) Current significant alcohol or substance abuse; and (4) Refusal to undergo either MRI or lumbar puncture procedures. Collection of the data took place between 2010 and 2014. In further assessments, we used subgroups of β-amyloid negative CN individuals who were one-to-one age- and sex-matched to the diagnostic group analyzed (i.e. MCI or AD dementia). We used exact matching for sex and loose matching for age, with minimal age difference as a selection criterion.
The group of β-amyloid positive amnestic mild cognitive impairment (aMCI) patients was recruited from the cohort with mild cognitive symptoms of the BioFINDER study and consisted of 106 individuals included between 2010 and 2015 from the memory clinics at Skåne University Hospital and Ängelholm's Hospital in Sweden. All patients had been referred to the memory clinics due to cognitive symptoms experienced by the patient or an informant, as a part of routine clinical practice. All patients fulfilled the criteria of amnestic MCI: their normative z-score for the episodic memory domain in the neuropsychological assessment (see next section) was ≤ −1.5. Additional inclusion criteria for the aMCI group were defined as follows: (1) Referral to the memory clinic due to cognitive symptoms (including non-memory complaints); (2) Age between 60 and 80 years; (3) MMSE score of 24-30 points at baseline; (4) Participant did not fulfill the criteria for dementia [31]; (5) Ability to speak and understand Swedish at a sufficient level not to require an interpreter during the examination; and (6) Abnormal CSF levels of Aβ42 (≤ 530 pg/ml) [32] at baseline. MCI patients were classified as amnestic single- or multiple-domain, based on the results of the neuropsychological assessment (see next section) at baseline. Exclusion criteria for MCI patients were: (1) Relevant unstable systemic illness or organ failure making it difficult to participate; (2) Current significant alcohol or substance abuse; (3) Refusal to undergo either lumbar puncture or neuropsychological assessment; and (4) Cognitive symptoms at baseline explainable by another condition (normal pressure hydrocephalus, brain tumor, major stroke, epilepsy, schizophrenia, past significant alcohol abuse and ongoing medication such as benzodiazepines).
The group of patients with SCD was recruited from the cohort with mild cognitive symptoms of the BioFINDER study and consisted of 139 individuals included between 2010 and 2015 from the memory clinics at Skåne University Hospital and Ängelholm's Hospital in Sweden. As in the MCI group, all patients had been referred to the memory clinics due to cognitive symptoms experienced by the patient or an informant, as a part of routine clinical practice. No further specific questionnaires to ascertain SCD were administered. Inclusion criteria were similar to MCI group criteria 1-5. However, SCD individuals showed no objective impairment in neuropsychological testing based on established normative data. Exclusion criteria were equal to those of the MCI group.
The group of patients with dementia was recruited from the dementia cohort of the BioFINDER study and consisted of 39 individuals included between 2010 and 2015. Patients were diagnosed with dementia after a thorough clinical investigation at the memory clinic of Skåne University Hospital. All patients fulfilled the criteria of probable dementia due to AD [33], fulfilling at minimum the core clinical criteria. Most AD patients, though not all (n = 32; 82.05%), underwent lumbar puncture and had CSF evidence of abnormal levels of Aβ42 (≤ 530 pg/ml). The exclusion criteria were defined as (1) significant unstable systemic illness or organ failure, such as terminal cancer, making it difficult to participate in the study; or (2) current significant alcohol or substance misuse.
Neuropsychological assessment
All participants underwent a neuropsychological evaluation, which consisted of tests assessing verbal, visuospatial and construction skills, episodic memory, and executive functions. Individual test batteries varied between groups. Tests administered to all groups included measures of global cognition: the MMSE and the AD Assessment Scale-Cognitive subscale (ADAS-cog) [34]. The Global Deterioration Scale [35] was used as an outcome measure in further analyses. For further details, please see http://biofinder.se/data-biomarkers/clinical-evaluation/.
CSF sampling
The CSF analysis was performed in all participants in accordance with the Alzheimer's Association Flow Chart for CSF biomarkers [36]. The samples were collected at baseline and stored in 1 mL polypropylene tubes at −80 °C. The CSF levels of Aβ42 were analyzed simultaneously in a single laboratory with the INNOTEST ELISA kit (Fujirebio Europe, Ghent, Belgium) [37].
MRI analysis
The acquired T1 images were analyzed using the FreeSurfer 6.0 imaging suite (https://surfer.nmr.mgh.harvard.edu/) with the in-house database system TheHiveDB [38]. For each individual, the thickness of 34 cortical regions [39] and the volumes of 23 subcortical structures [40] were obtained from FreeSurfer. All segmentations were visually checked prior to further processing, and only the subjects that passed the visual inspection were included in subsequent analyses. The summary measures of CSF, white and grey matter volumes were not included in the model to avoid redundancy, nor were the volumes of the brainstem and cerebellum, as these regions undergo minimal levels of atrophy in the early stages of the disease [41]. Left- and right-sided measures were averaged prior to analysis. We performed principal component analysis on these 34 + 17 measures within each study group (CN, aMCI, AD dementia) to detect possible outliers. We found no individuals with scores larger than 4 SD in the first or second component within their respective group, indicating that this dataset did not have any outliers.
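A hedged sketch of this PCA-based outlier screen follows; the data matrix is a hypothetical stand-in for the FreeSurfer regional measures.

```python
# Z-score the 51 regional measures within a group, project onto the first two
# principal components, and flag subjects whose component scores exceed 4 SD.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
X = rng.normal(size=(220, 51))                # subjects x regional measures

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
sd = scores.std(axis=0)
outliers = np.where((np.abs(scores) > 4 * sd).any(axis=1))[0]
print(f"{outliers.size} outlier(s) flagged")  # expected: none, as in the text
```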
Statistical methods
Participants
We used the R software (R Foundation for Statistical Computing, Vienna, Austria; www.r-project.org) to perform the statistical analyses. We used analysis of variance (ANOVA) to assess group differences in age, and analysis of covariance (ANCOVA) with age and sex as covariates to assess differences in education, neuropsychological test results, and MRI and CSF measurements. The Kruskal-Wallis test was used to assess the differences in sex and APOE ε4 distributions. For group characterization, to reduce the number of reported volumetric measurements, we reported the volumes or thickness of selected regions known to be affected in the earliest stages of AD (i.e. hippocampus, entorhinal cortex) according to Braak and Braak [42]. We performed two separate ANOVA and ANCOVA analyses: first, for the groups associated with AD dementia (SCD, AD dementia, matched β-amyloid negative CN); second, for the groups associated with aMCI (SCD, β-amyloid positive aMCI, matched β-amyloid negative CN).
Training of the OPLS model
To calculate the "severity index" [22] that assesses the pattern of atrophy characteristic of patients with AD dementia (or aMCI) versus controls, we employed the OPLS [19] algorithm using the "ropls" package implemented within the R programming environment (https://bioconductor.org/packages/release/bioc/html/ropls.html). The implementation used the original non-linear iterative partial least squares (NIPALS) [43] algorithms [19,44]. The OPLS has previously been used extensively for CN vs. AD classification and for the prediction of SCD to MCI progression [20,22,[45][46][47][48][49][50], and its performance has been shown to be similar to that of other commonly used multivariate analysis algorithms [50]. The procedure for the actual index has been described in detail previously [20,45]. In brief, the data are preprocessed using standard steps, applying unit variance scaling and mean centering. The OPLS algorithm then splits the systematic variation into two parts: predictive and orthogonal. The first, predictive component contains information relevant for the classification between the CN and aMCI/dementia groups. The second, orthogonal component contains information that is not related to the classification problem. The predictive ability and the reliability of the model are evaluated through the 'goodness of fit' or explained variance (R²) and the 'goodness of prediction' or predicted variance (Q²) parameters. Q² represents the performance of the model outside of the training dataset and is therefore regarded as the more relevant metric. A value of Q² > 0.05 is regarded as significant, and a value > 0.5 represents a good model [51]. We used 10-fold cross-validation [52] for training of the model.
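A minimal sketch of this training step with the ropls package might look as follows; the matrix, the 0/1 coding, and the choice of a single orthogonal component are illustrative assumptions rather than the study's actual code:

    # 'X' stands in for the n x 51 matrix of preprocessed MRI measures and
    # 'y' for the group coding (CN = 0, aMCI or AD dementia = 1).
    library(ropls)
    set.seed(1)
    X <- matrix(rnorm(130 * 51), nrow = 130)  # stand-in data
    y <- rep(c(0, 1), times = c(65, 65))
    fit <- opls(X, y,
                predI = 1, orthoI = 1,    # one predictive + one orthogonal component
                scaleC = "standard",      # unit variance scaling and mean centering
                crossvalI = 10)           # 10-fold cross-validation
    fit                                   # printing reports cumulative R2 and Q2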
We used a total of 51 variables from the baseline MRI FreeSurfer assessment as the input data, including the 34 cortical and 17 subcortical regions explained above (Fig. 1A, B). Prior to the analysis, all subcortical volumes were adjusted for differences in head size by regressing out the estimated total intracranial volume (eTIV) [53,54]. In addition, we applied a linear detrending algorithm to the data based on age-related changes in the β-amyloid negative CN group, assuming that thickness/volumetric changes in the CN group are mostly associated with aging, while changes in the aMCI and AD dementia groups may also be influenced by disease-related factors. This approach has been shown to have a positive effect on the classification performance of OPLS models [49]. For training, participants from the CN group were assigned a value of 0, while aMCI and AD dementia individuals were assigned a value of 1 in their respective models.
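The two adjustments can be sketched in R as below; the helper names and the exact regression form are our assumptions about a standard implementation, not the study's code:

    # Regress out head size (eTIV) from a subcortical volume:
    adjust_etiv <- function(vol, etiv) residuals(lm(vol ~ etiv))

    # Linear age detrending, with the slope estimated in the beta-amyloid
    # negative CN group only and then applied to all subjects:
    detrend_age <- function(x, age, is_cn) {
      slope <- coef(lm(x[is_cn] ~ age[is_cn]))[2]
      x - slope * (age - mean(age[is_cn]))
    }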
In all MRI-based models, the prediction accuracy of the model is limited by the heterogeneity of the underlying pathology. In AD, several different pathology phenotypes have been described [55], with correspondingly different atrophy patterns [56][57][58], including the minimal atrophy phenotype [58]. To minimize the impact of heterogeneity on the prediction accuracy of our model, we removed aMCI and demented individuals with the minimal atrophy phenotype [56,58,59] from their respective training datasets. Patients with this phenotype are known to have no or low levels of brain atrophy, which may introduce noise into our OPLS classification models. Since the OPLS approach is based on analyzing atrophy patterns, we hypothesized that removal of these individuals from the training dataset would further improve the accuracy of the resulting model. To identify individuals with a minimal atrophy phenotype, we projected all patients from the aMCI and AD dementia groups onto their respective models (CN vs. aMCI, and CN vs. AD dementia, respectively), assigning them the predicted value of the "severity index" and classifying them as either CN-like or disease-like. For this classification we used the cutoff value obtained by identifying the point of maximum separation between the smoothed cumulative distribution functions of the two groups (i.e., CN and aMCI or CN and AD dementia) [60]. This way we identified 15 individuals from the aMCI group, classified as CN-like, showing minimal atrophy. These individuals were removed from the training dataset. We found no individuals with minimal atrophy in the AD dementia group. Hence, we then repeated the previously described procedures only for the aMCI group, and the model was retrained using the updated training set. The updated set for the aMCI-based model without minimal atrophy patients included 91 aMCI patients. The dementia-based model remained unchanged, including 39 AD dementia patients. For each model, we selected a subgroup of age- and sex-matched β-amyloid negative CN individuals. We used exact matching for sex and loose matching for age, with minimal age difference as the selection criterion. We used the cross-validated model to estimate Q² and R² and report sensitivity and specificity values. For more details on how the removal of the minimal atrophy group affected model performance, see the Results.
In total, we built two models: (1) a "dementia-based" model, trained using β-amyloid negative CN and AD dementia individuals; and (2) an "aMCI-based" model, trained using β-amyloid negative CN and β-amyloid positive aMCI individuals, excluding those with the minimal atrophy phenotype. These two models did not differ in any other parameter.
Classification
We projected all participants from the SCD group (n = 139), regardless of their Aβ status, onto models (1) and (2), and their values of Y, or "severity index", for each model were estimated. The cutoff value for predicting observations as either CN-like or disease-like was obtained by identifying the point of maximum separation between the smoothed cumulative distribution functions of the two groups (i.e., CN and aMCI or CN and AD dementia), as described above. The final cutoff values used were 0.413 for the dementia-based model and 0.384 for the aMCI-based model.
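This cutoff search can be illustrated with a few lines of R; we use the empirical CDF where the study used a smoothed one, and the severity index vectors are simulated for illustration:

    # 'idx_cn' and 'idx_pat' stand in for the severity indices of the CN
    # and patient (aMCI or AD dementia) training groups, respectively.
    set.seed(1)
    idx_cn  <- rnorm(100, 0.2, 0.15)
    idx_pat <- rnorm(100, 0.8, 0.20)
    grid <- seq(min(c(idx_cn, idx_pat)), max(c(idx_cn, idx_pat)),
                length.out = 1000)
    sep <- ecdf(idx_cn)(grid) - ecdf(idx_pat)(grid)  # CN CDF leads the patient CDF
    cutoff <- grid[which.max(sep)]                   # point of maximum separation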
Longitudinal analysis
Next, we assessed the longitudinal clinical data of the SCD individuals over an 8-year follow-up period with regard to their clinical trajectory. We defined clinical trajectory as the progression from SCD to MCI or dementia using the Global Deterioration Scale. Participants who scored ≥ 3 during the yearly evaluation were treated as progressors. SCD participants were followed up until progression to MCI or dementia, or were censored on the last date observed. We did not have mortality data available. Longitudinal data were then used to assess the sensitivity and specificity of the OPLS models in predicting progression. We also used the calculated "severity index" value to compute receiver operating characteristic (ROC) curves and areas under the curve (AUC). Further, we evaluated the clinical trajectory of the CN-like and disease-like SCD groups by performing survival analysis using the Kaplan-Meier estimate and log-rank test, and estimated the risk of progression to MCI or dementia by fitting Cox models. Then, we compared the ROC curves of models (1) and (2) using the implementation of the DeLong algorithm [61] within the pROC package [62].
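In R, this battery of longitudinal analyses reduces to a handful of standard calls from the survival and pROC packages; the data frame, column names and simulated values below are illustrative assumptions, not the study's code:

    # 'scd' stands in for time to progression/censoring, progression status,
    # the model label and the two severity indices per SCD participant.
    library(survival)
    library(pROC)
    set.seed(1)
    scd <- data.frame(time = rexp(139, 0.1), event = rbinom(139, 1, 0.34),
                      label = factor(sample(c("CN-like", "disease-like"),
                                            139, replace = TRUE)),
                      index_dem = runif(139), index_amci = runif(139))
    survfit(Surv(time, event) ~ label, data = scd)   # Kaplan-Meier curves
    survdiff(Surv(time, event) ~ label, data = scd)  # log-rank test
    coxph(Surv(time, event) ~ label, data = scd)     # hazard ratio of progression
    r1 <- roc(scd$event, scd$index_dem)              # ROC per severity index
    r2 <- roc(scd$event, scd$index_amci)
    roc.test(r1, r2, method = "delong")              # DeLong comparison of AUCs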
Finally, we compared models (1) "dementia-based" and (2) "aMCI-based" regarding their sensitivity, specificity, and ROC AUC, as well as in terms of the characteristics of the SCD groups identified as "disease-like" by each model. A simplified overview of the data processing steps is available in Fig. 2.
Results
The participants' main demographic and clinical characteristics are summarized in Table 1. The AD dementia-associated groups (SCD, AD dementia, matched β-amyloid negative CN) differed in cognitive performance, APOE ε4 allele frequency, volumetric measures, and CSF biomarkers. The aMCI-associated groups (SCD, β-amyloid positive aMCI, matched β-amyloid negative CN) differed in age, cognitive performance, APOE ε4 allele frequency, volumetric measures, and CSF biomarkers.
Classification using Alzheimer's disease dementia patients (standard approach)
The cross-validated "AD-dementia-based" model reached a cumulative R² of 0.842 and a cumulative Q² of 0.807. The model reached 100% sensitivity and 100% specificity in discriminating patients with AD dementia from CN individuals. Detailed model characteristics are summarized in Figs. 1A and 3A. Removal of patients with minimal atrophy did not affect this model, since no patients were removed. When applied to the SCD data, the model labelled 96.4% of the SCD individuals as CN-like (n = 134; 31.3% β-amyloid positive) and 3.6% of the SCD individuals as AD dementia-like (n = 5; 40.0% β-amyloid positive). The AD dementia-like SCD group was older and had lower hippocampal volume than the CN-like SCD group after correcting for age and sex (P < 0.05). It did not differ from the CN-like SCD group in other characteristics (Table 2).
Classification using aMCI patients (new approach)
The cross-validated "aMCI-based" model reached a cumulative R² of 0.582 and a cumulative Q² of 0.536. The model reached 96.7% sensitivity and 80.2% specificity in discriminating patients with aMCI from CN individuals. More detailed model information is summarized in Figs. 1B and 3B. The initial model, without removal of patients with the minimal atrophy phenotype, reached lower cross-validated sensitivity (87.74%) while having only marginally higher specificity (82.08%). This model also showed worse performance when applied to external data during cross-validation (Q² = 0.425) and was therefore considered less robust.
Further, to evaluate the effect of training set size on model performance (the two models were trained using different numbers of patients, 39 vs. 91), we retrained the aMCI-based model using a subset of 39 randomly selected individuals from the aMCI dataset, keeping all other parameters identical. The resulting aMCI model was significant (Q² = 0.563), showing lower sensitivity (59.57% vs. 72.34%) but higher specificity (73.91% vs. 60.87%) and a similar ROC AUC (0.719 vs. 0.72) when predicting progression from SCD to MCI and dementia (see next section), compared to the model trained on the full number of participants. Comparing the ROC curves, it did not perform differently from the full model (p = 0.998). In further analyses, we only evaluated the model trained on the full number of participants, excluding patients with the minimal atrophy phenotype.
Applying the model to the SCD data, 49.6% of individuals (n = 69; 26.1% β-amyloid positive) were labelled as CN-like and 50.4% (n = 70; 37.1% β-amyloid positive) as aMCI-like. The aMCI-like SCD group had lower hippocampal volume and thinner entorhinal cortex than the CN-like SCD group after correcting for sex and age (P < 0.05). The aMCI-like SCD group did not differ from the CN-like SCD group in other characteristics (Table 2).
Longitudinal analysis
Next, we analyzed the longitudinal data of the 139 SCD participants collected within the 8-year period. Within this period, 47 patients (33.81%) progressed to MCI or dementia, while 92 (66.19%) remained in the SCD group. Most participants progressed within the first 1-2 years after baseline (n = 35, 74.4%), and no SCD individual progressed later than the 6th year. SCD progressors were older and had a higher percentage of APOE ε4 carriers and a higher percentage of Aβ42-positive individuals (P < 0.01). After correcting for sex and age, they scored higher on the severity index, performed worse on ADAS 10-word delayed recall but not the MMSE at baseline, and had lower baseline hippocampal volume (P < 0.05). SCD progressors also had lower CSF tau (P = 0.026) but not Aβ42 or P-tau levels at baseline (Table 3).
Longitudinal analysis using the Alzheimer's disease dementia model
All of the SCD participants (n = 5) labelled as AD dementia-like using the "dementia-based" model progressed to MCI or dementia. This represented 10.6% of all progressors, since 42 SCD participants classified as CN-like (31%) also progressed to MCI or dementia. Therefore, the dementia-based model reached 100% specificity but only 10.6% sensitivity in predicting progression from SCD to MCI or dementia in our dataset, resulting in an AUC of 0.57 (Fig. 4). In the survival analysis using the Kaplan-Meier estimator and log-rank test, we found that AD dementia-like SCD participants were more likely to progress to MCI or dementia (P < 0.001) than CN-like SCD participants (Fig. 5A). Fitting the data to the Cox model, we found that AD dementia-like SCD participants were 10.8 times more likely to progress to MCI or dementia than CN-like SCD participants (confidence interval [CI]: 4.0-28.9; P < 0.001). β-amyloid positivity increased the risk of clinical progression to MCI or dementia 4.3 times (CI: 2.4-7.9; P < 0.001), while sex did not affect the risk of progression (P = 0.679).
Longitudinal analysis using the aMCI model
Out of the 70 SCD patients labelled as aMCI-like using the "aMCI-based" model, 48.6% (n = 34) progressed to MCI or dementia. The model thus correctly identified 72.3% of all SCD progressors. Out of the CN-like group, only 18.8% (n = 13) progressed to MCI. Therefore, the aMCI model reached 72.3% sensitivity and 60.9% specificity in predicting progression from SCD to MCI and dementia. The AUC reached a value of 0.72 (Fig. 4). Performing the survival analysis using the Kaplan-Meier estimator and log-rank test, we found that aMCI-like SCD participants were more likely to progress to MCI or dementia (P < 0.001) than CN-like SCD participants (Fig. 5B). Fitting the data to the Cox model, we found that aMCI-like SCD participants were 2.9 times more likely to progress to MCI or dementia than CN-like SCD participants (CI: 1.5-5.6; P = 0.001). β-amyloid positivity increased the risk of progression to MCI or dementia 3.4 times (CI: 1.8-6.4; P < 0.001). Sex did not affect the risk of progression (P = 0.406).
ROC comparison
Comparing the ROC curves, we found that the two models performed significantly differently (P = 0.037) (Fig. 4).
The AD dementia-based model identified a small number of individuals (n = 5) at high risk of progression, most of whom progressed by the first follow-up visit, and all of whom progressed within the first four years. The aMCI-based model identified a larger group of individuals (n = 70) with a moderate risk of progression, progressing up to 6 years after the initial scan.
Discussion
In this study, we used multivariate data analysis and structural MRI to compare classification and prediction models for SCD. We assessed the frequency of disease-like SCD individuals and their characteristics in comparison with CN-like SCD individuals, and evaluated the accuracy of prediction of progression from SCD to MCI or dementia, using equally constructed models based on either β-amyloid positive aMCI or AD dementia patient data.
Comparing the dementia-based and the aMCI-based models, the dementia-based model achieved higher values of explained variance (R²) and goodness of prediction (Q²) as well as better overall cross-validated sensitivity and specificity (100% and 100%, respectively) than the aMCI-based model (96.7% and 80.2%, respectively). This was expected, since overall levels of atrophy in AD dementia are higher than in aMCI [63], presumably making the classification of aMCI vs. CN individuals based on atrophy patterns more difficult than the classification of AD dementia vs. CN. This corresponds to our previous results on an external cohort [12], where the dementia-based model also reached higher cross-validated sensitivity and specificity values than the MCI-based model (81% vs. 66% and 82% vs. 73%, respectively). Other previous works using the OPLS [20][21][22] based their models on AD dementia patients only, reaching cross-validated sensitivity between 84 and 87% and specificity between 90 and 100%. Both our models therefore reached higher sensitivity and specificity values than similarly built models in the previous studies [12,[20][21][22]. Part of this improvement may be explained by factors such as the smaller size of the AD dementia training dataset (n = 39) or the overall homogeneity of our dataset (all participants come from a single center, and MRI scans were performed using the same scanner), leading to slight overfitting. However, we believe other factors to be of more importance. Unlike the previous studies, we used training datasets based on biomarker-defined individuals: β-amyloid positive aMCI and AD dementia patients with age- and sex-matched β-amyloid negative CN individuals. We also introduced several methodological improvements into the model creation, most importantly the removal of individuals with the minimal atrophy phenotype from the training dataset, which led to a notable improvement of the aMCI-based model. Further methodological improvements included the identification of an optimal cutoff value and age detrending. This contributes to the novelty of the current study, and also produced high sensitivity and specificity values for the MCI vs. CN classification (96.7% and 80.2%), which are usually around 75-85% in the literature [12,[64][65][66], though some authors report both sensitivity and specificity as high as 100% using a combination of multiple MRI-based features [67].
Looking at the individual variable loadings, among the most important variables contributing to the dementia-based model were the thickness of the inferior and middle temporal gyri and the volumes of the hippocampus, pallidum, corpus callosum and inferior lateral ventricle (Fig. 1A). In the aMCI-based model, some of the most important variables were the volumes of the hippocampus, amygdala and inferior lateral ventricle, and the thickness of the entorhinal cortex, inferior temporal gyrus and fusiform gyrus (Fig. 1B). The atrophy patterns in the two groups were similar, but not identical, sharing 3 out of the 6 variables with the highest loadings. Comparing our variable loadings with the previous study [12], which combined over 1000 individuals from two multicentric studies, AddNeuroMed [68] and the Alzheimer's Disease Neuroimaging Initiative (ADNI; adni.loni.usc.edu), we found the variable loadings in all utilized datasets (AddNeuroMed, ADNI, combined) to be similar to our current models, particularly to the aMCI-based model, which shared 5 out of the 6 variables with the highest loadings. The most important variables in the combined dataset were the volumes of the hippocampus, amygdala, and inferior lateral ventricle, and the thickness of the entorhinal cortex and the inferior and middle temporal gyri. Although our current dataset comes from a single center in Sweden and is based on a comparatively smaller number of participants (total 399 vs. 1074), the similarity of the observed patterns of atrophy suggests they are stable across multiple populations in Europe and North America. Our models may therefore be well applicable to data based on other populations.
We found further differences between the models when we applied them to predict progression from SCD to MCI or dementia. The dementia-based model achieved 100% specificity, but its sensitivity was extremely low (10.6%). This makes the model less useful for clinical application unless the aim is to identify SCD patients with an extremely high risk of progression to MCI. In contrast, the aMCI-based model reached 72.3% sensitivity and a moderate specificity of 60.9% in predicting progression from SCD to MCI. These findings suggest that the more advanced atrophy pattern of the patients used in training the dementia-based model identifies a small number of individuals at very high risk of clinical progression, while the milder yet developed atrophy pattern of aMCI patients results in superior sensitivity at the cost of specificity. This suggests that the models could be employed for different purposes. The AD dementia-based model could, for example, be utilized to identify high-risk individuals for drug trials, while the aMCI model would be better used as a non-invasive population screening tool. However, comparing the ROC AUC directly between the models, the aMCI-based model was clearly superior to the AD dementia-based model, reaching an AUC of 0.72 vs. 0.56 (P = 0.037).
Though there are multiple studies using supervised learning and multivariate analysis to predict progression from MCI to dementia using structural MRI data [12,20,21,[69][70][71][72], there is only a limited number of studies attempting to predict progression from SCD to MCI [22,73,74].
Previously [22], we used OPLS to predict progression from SCD to MCI using a model trained on healthy controls and patients with probable AD dementia from the Australian Imaging Biomarkers and Lifestyle flagship study of ageing (AIBL). In line with our expectations, our aMCI-based model achieved lower specificity (60.9% vs. 95.4%) but a superior sensitivity (72.3% vs. 38.1%) to the previous model. The ROC AUCs could not be directly compared, as the AUC was not reported in the previous study. Our dementia-based model, on the other hand, was more accurate in predicting clinical progression (100% vs. 95.4% specificity). It was, however, less sensitive than the previous model (10.6% vs. 38.1%). This was despite the similar overall cognitive performance (mean MMSE 20.2 vs. 20.4) and APOE status (71.8% vs. 75.0% ε4 carriers) of the AD dementia participants in both studies. Yet, the different results could partially be explained by the larger percentage of APOE ε4 carriers in the CN group of the AIBL cohort (46.0% vs. 15.4%).

Another recent study used support vector machines and multimodal data, including structural MRI data from FreeSurfer, to predict progression from SCD to MCI over a 7-year period [73]. In comparison, the MRI-based model in that study reached lower sensitivity (41.8%) and higher specificity (73.1%) than our aMCI-based model, while our dementia-based model was less sensitive and more specific. That study used a different approach, training the algorithm using longitudinal data of the evaluated SCD individuals.
Another study from the same group [74] used machine learning to create a regression framework combining sparse coding and random forest to assess and predict cognitive performance in SCD and MCI individuals, predicting changes in global cognition test scores (i.e., MMSE and Montreal Cognitive Assessment) from structural MRI. Predicted values correlated with real scores with Pearson's coefficients up to 0.35. These results are not directly comparable to our current results: global cognition scores are only roughly transferable to a clinical syndrome and do not consider some important factors such as the age and education of the patient.
Predicting progression from SCD to MCI or dementia is a task of high clinical significance. With the upcoming availability of new treatment options [75], predicting progression from SCD to MCI or dementia will be crucial to effectively screen individuals in the earliest stages of the disease so that treatment can commence as soon as possible to achieve maximum effect [4][5][6]. While there are currently a number of highly specific diagnostic methods available (e.g., CSF sampling and PET imaging), these are largely unsuitable for screening purposes due to their cost and invasiveness. Emerging blood-based biomarkers [1,76] are yet to be integrated into routine clinical practice. Structural MRI in conjunction with atrophy pattern analysis could therefore be employed in the selection of patients at high risk of clinical progression for further diagnostic workup. We argue that for this purpose, the utilized model should be optimized to be highly sensitive while maintaining moderate specificity. Based on our results, we argue that models based on aMCI patients would be better suited for this task than the current models based on AD dementia patients. Using a training dataset based on biomarker-defined aMCI and CN individuals and optimizing the model creation, we can train our model to detect patterns of 'early AD-related atrophy' rather than 'developed AD-related atrophy'.
One of the principal strengths of this study was our dataset. For the model training we included β-amyloid positive aMCI, AD dementia and β-amyloid negative CN participants. The longitudinal data then consisted of SCD individuals with over 8 years of monitoring. Further, we used a well-established method of multivariate data analysis, an OPLS-generated "disease severity index", which has repeatedly proven to be an effective tool in predicting progression from SCD to MCI and from MCI to dementia [12,[20][21][22]. The processing of structural MRI data was performed using a widely available automated software package (FreeSurfer 6.0), facilitating the application of our model to external datasets and minimizing the risk of bias or human error in data processing.
Limitations
This study also has limitations. Prediction models based on structural MRI, though achieving high specificity in predicting the development of MCI and dementia, do not reflect the underlying pathology and therefore need to be used in combination with other methods that allow the assessment of amyloid or tau pathologies. While achieving moderate sensitivity and specificity, the current model still fails to identify a significant portion of future progressors (~28%). Arguably, the model could be further improved by including the segmentation of structures affected early in the course of AD, such as hippocampal subfields, the transentorhinal and perirhinal cortex, the anterolateral and posteromedial entorhinal cortex and the basal forebrain nuclei [77]; automated methods for segmenting some [78,79], but not all, of these structures are publicly available. However, the addition of further MRI processing steps would take away one of the major advantages of our current approach, namely the relative simplicity and reproducibility of the MRI processing involved. Further, the performance of our dementia-based model could be negatively affected by the fact that CSF biomarkers were not available for part of the AD dementia group (n = 7; 17.95%). Another concern might be the reproducibility of our results. Since our data come from a homogeneous population from a single center in Sweden, we cannot rule out the possibility that we are detecting a population-specific pattern that would not apply to other datasets. However, as discussed above, the atrophy patterns we observed are very similar to the atrophy patterns observed in previous large multicenter studies assessing individuals across multiple populations in Europe and North America [12]. Therefore, we believe that the observed patterns are not specific to our current population.
Conclusions
In this study, we found that prediction models based on the brain atrophy patterns of individuals with milder levels of atrophy (i.e., aMCI) offer higher sensitivity and moderate specificity compared to standard dementia-based models for the prediction of clinical progression from SCD to MCI or dementia using structural MRI data. Thus, these models may offer superior clinical value and should be further refined and explored.
Fig. 1 Variable loadings. p1 = contribution of individual variables to the predictive component in the model (A) trained on the Alzheimer's disease dementia patients and (B) trained on the aMCI patients
Fig. 2 Simplified overview of data-processing steps. Processing preceding computation of the "disease severity index" and prediction of progression; aMCI = β-amyloid positive amnestic mild cognitive impairment; CN = β-amyloid negative cognitively normal participants; DEM = dementia due to Alzheimer's disease; OPLS = Orthogonal Projection to Latent Structures; SCD = subjective cognitive decline. Individual steps are described in detail in the manuscript
Fig. 3 Characteristics of the model (A) trained on the Alzheimer's disease dementia patients and (B) trained on the aMCI patients; R² = explained variance; Q² = predicted variance; (1) Permutation plot: comparison of the R² and Q² values of the model with other models, where random permutations of Y (diagnostic information) have been performed while the X-data (input data) stayed intact; (2) Q² and R² values of individual components: p1 = predictive component; o1 = first orthogonal component; (3) Score plot: individual scores of participants used in training; t1 = predictive component score; to1 = first orthogonal component score; (4) Loading plot: loadings of individual variables; p1 = predictive component; o1 = first orthogonal component
Fig. 4 Receiver operating characteristic curves. Curves of the 'disease severity index' generated using the aMCI-based (green) and dementia-based (blue) models; AUC = area under curve
Fig. 5 Longitudinal progression of SCD groups (A) using the model based on Alzheimer's disease dementia patients and (B) using the model based on aMCI patients. The survival event was defined as progression to either MCI or dementia at the time of annual follow-up. The log-rank test was used to test the difference between the curves
Table 1 Participant characteristics. Columns: SCD; CN(-)aMCI; aMCI(+); CN(-)DEM; DEM; Total; P aMCI; P DEM
CN(-)aMCI = β-amyloid negative cognitively normal participants age- and sex-matched to the aMCI group; DEM = dementia due to Alzheimer's disease; CN(-)DEM = β-amyloid negative cognitively normal participants age- and sex-matched to the AD dementia group; P aMCI = p-value of the analysis performed on the groups associated with amnestic mild cognitive impairment (SCD, CN(-)aMCI, aMCI(+)); P DEM = p-value of the analysis performed on the groups associated with dementia due to Alzheimer's disease (SCD, CN(-)DEM, DEM); MMSE = Mini-Mental State Examination; ADAS = Alzheimer's Disease Assessment Scale; Aβ42 positivity: percentage of individuals with a CSF level of β-amyloid 42 peptide lower than 530 pg/ml; Aβ42 level: CSF levels of β-amyloid 42 peptide in pg/ml; Tau level: CSF levels of tau protein in pg/ml; P-tau level: CSF levels of phosphorylated tau protein in pg/ml
Table 3
SCD progressors versus SCD non-progressors within the 8-year follow-up period. Values are expressed as mean (standard deviation) unless indicated otherwise; *: P < 0.05; **: P < 0.01; ***: P < 0.001; + selected volumetric measures based on their early involvement during Alzheimer's disease onset according to Braak & Braak, 1991; a Kruskal-Wallis test; b ANOVA (analysis of variance); c ANCOVA (analysis of covariance; covariates: sex, age); MMSE = Mini-Mental State Examination; ADAS = Alzheimer's Disease Assessment Scale; Aβ42 positivity: percentage of individuals with a CSF level of β-amyloid 42 peptide lower than 530 pg/ml; Aβ42 level: CSF levels of β-amyloid 42 peptide in pg/ml; Tau level: CSF levels of tau protein in pg/ml; P-tau level: CSF levels of phosphorylated tau protein in pg/ml
Development of Acridone Derivatives: Targeting c-MYC Transcription in Triple-Negative Breast Cancer with Inhibitory Potential
Breast cancer, especially the aggressive triple-negative subtype, poses a serious health threat to women. Unfortunately, effective targets are lacking, leading to a grim prognosis. Research highlights the crucial role of c-MYC overexpression in this form of cancer. Current inhibitors targeting c-MYC focus on stabilizing the G-quadruplex (G4) structure in its promoter region. They can inhibit the expression of c-MYC, which is highly expressed in triple-negative breast cancer (TNBC), and thereby trigger intracellular ROS-induced apoptosis of breast cancer cells. However, the clinical prospects for such inhibitors are not yet promising. In this research, we designed and synthesized 29 acridone derivatives. These compounds were assessed for their impact on intracellular ROS levels and cell viability, followed by comprehensive QSAR analysis and molecular docking. Compound N8 stood out, significantly increasing ROS levels and demonstrating potent anti-tumor activity in a TNBC cell line, with excellent selectivity shown in the docking results. This study suggests that acridone derivatives can stabilize the c-MYC G4 structure. Among these compounds, the small molecule N8 shows promising effects and deserves further investigation.
Introduction

1. Epidemiology and Characteristics of Triple-Negative Breast Cancer
According to a report from the World Cancer Research Fund International, in 2020 breast cancer surpassed lung cancer as the most common cancer worldwide. Approximately 2.3 million new cases of breast cancer are diagnosed annually, accounting for about 11.7% of all new cancer cases. Breast cancer was associated with approximately 6.9% of cancer-related deaths [1]. In 2013, the International St. Gallen Breast Cancer Conference introduced an important molecular classification system for breast cancer based on immunohistochemistry and molecular biology characteristics. This system categorizes breast cancer into several main molecular subtypes: Luminal A, Luminal B (HER2 positive), Luminal B (HER2 negative), HER2 positive, and triple negative [2]. Triple-negative breast cancer (TNBC) is a subtype that does not express ER, PR, or HER2. Compared to other breast cancer subtypes, TNBC tends to be more aggressive, characterized by a high degree of invasiveness and a propensity to spread to surrounding tissues and lymph nodes. It also carries a higher risk of recurrence [3]. Given the absence of hormone receptor and HER2 expression, TNBC typically does not benefit from hormone therapy or targeted therapeutics such as Herceptin [4]. Therefore, standard treatment approaches often involve chemotherapy and radiation therapy, which can impose a significant physical burden [5].
TNBC is currently a focal point of breast cancer research, with scientists actively seeking more effective treatment strategies, including novel therapies targeting specific molecular markers, to improve patient outcomes.
Role of c-MYC in Biology and G-Quadruplex Structure of c-MYC
The c-MYC oncoprotein, recognized for its role as the primary orchestrator of gene expression [6], stands as a versatile transcription factor with intricate involvement in the control of numerous physiological processes. The c-MYC gene exhibits overexpression in a staggering 70% of human cancers, including TNBC [7]. Consequently, the downregulation of c-MYC has emerged as an enticing strategy for cancer treatment.
Within the genetic architecture of the c-MYC gene lies a crucial element known as the nuclease hypersensitivity element III1 (NHE III1), which is located upstream of the P1 promoter and is responsible for approximately 90% of the gene's transcriptional activation. Notably, the purine-rich strand of DNA in this region can assume a distinctive secondary structure known as the G-quadruplex (G4) (Figure 1). G4 structures are formed from single-stranded guanine (G)-rich sequences and can be considered four-stranded DNA secondary structures. In the presence of monovalent cations, particularly potassium ions (K+) and sodium ions (Na+), four guanines in the same strand can form a G-tetrad structure via Hoogsteen hydrogen bonds. Three G-tetrads further form the G4 structure of c-MYC [8]. The G4 structure represents a transient structural entity. A stabilized G4 structure holds the potential to impede the binding of RNA polymerase to NHE III1, consequently curtailing the expression of the c-MYC gene [9]. Moreover, the 3′ and 5′ flanking regions of the G4 in c-MYC contribute to a capping structure that envelops the corresponding terminal tetrads, creating an attractive binding pocket for small molecule ligands [10]. Consequently, the pursuit of small molecule compounds capable of stabilizing the G4 structure of c-MYC emerges as a promising avenue for significantly reducing c-MYC protein expression, thereby exerting inhibitory effects on the growth and proliferation of TNBC cells.
Challenges Encountered in the Development of c-MYC G4 Stabilizers
In recent years, hundreds of small molecules stabilizing the c-MYC G4 structure have been reported [11][12][13][14][15]. However, to the best of our knowledge, none have gained FDA approval. This is attributed in part to drug activity and, on the other hand, to the selectivity of the drugs [6,12]. G4 structures are widely present throughout the human genome, with approximately 700,000 sites having the potential to form G4 structures [16]. This presents a significant challenge in the development of c-MYC G4 stabilizers. Fortunately, experimental evidence has confirmed that the binding affinity of the same compound to the G4 structures of different genes varies [11,13,17]. Therefore, the development of a drug with high selectivity for the c-MYC G4 structure and robust anti-tumor activity holds promise as a reliable choice for TNBC therapy, bearing crucial significance in improving the prognosis of TNBC patients [16].
Characteristics of Acridone Derivatives and Their Potential as G4 Stabilizers
Acridone alkaloids such as acronycine [18] are derived from Rutaceae and possess three planar rings, enabling their intercalation between the base pairs of double-stranded DNA, a crucial feature in the development of effective anticancer chemotherapy [19][20][21][22]. Given the structural similarity between the G4 of c-MYC and the DNA double helix, both consisting of stacked structures formed by base pairs, recent studies have reported a stabilizing effect of acridone derivatives on the G4 structure of c-MYC [23,24]; this further underscores the potential of acridone derivatives as a core scaffold for c-MYC G4 stabilizers. Building upon this foundation, the present study employs fragment-based growth techniques to design and synthesize novel acridone derivatives, thereby enhancing the structural diversity of molecules acting as G4 stabilizers. Subsequently, these compounds underwent further screening to identify those capable of inducing ROS production and promoting apoptosis in tumor cells.
Fragment-Based Drug Design
We employed the fragment module of the MOE software (MOE 2019.0102) for drug design. Initially, the parent nucleus was subjected to constrained docking to identify conformations capable of forming a π-π stacking interaction with the guanine. Subsequently, this conformation was used for fragment growing: the N atom of the acridone was designated as the growth point, and unoccupied cavities within the active site were marked as the target for fragment growth. Pharmacophore features were constructed based on the hydrophobic/hydrophilic properties of the atoms comprising these cavities. Finally, hit compounds were obtained by screening the linker database in MOE.
General Procedures
¹H-NMR and ¹³C-NMR spectra were recorded on a Varian NMR spectrometer operating at 600 MHz for ¹H and 151 MHz for ¹³C. All chemical shifts were measured in DMSO-d6 as the solvent. All chemicals were purchased from Sinoreagent Chemical Reagent (Beijing, China) and were used as received unless stated otherwise. Analytical TLC was performed on Haiyang (Qingdao Haiyang Chemical Co., Ltd., Qingdao, China) silica gel 60 F254 plates and visualized by UV and potassium permanganate staining. Flash column chromatography was performed on Haiyang (Qingdao Haiyang Chemical Co., Ltd.) silica gel 60 (40-63 µm).
Synthesis
Ethyl azidoacetate (3) was prepared from ethyl acetate and sodium azide according to the reported method [25]. The aryl azides 4a-4k were prepared via diazotization according to the reported method [26]. To a solution of ethyl 2-azidoacetate (3, 1 mmol) or an aryl azide derivative (4a-4k, 1 mmol) and 10-(prop-2-yn-1-yl)acridin-9(10H)-one (2) (1 mmol) in EtOH/H2O (3:1, 10 mL) at room temperature, copper(II) sulfate pentahydrate (0.1 mmol) and sodium ascorbate (20 mg) were added. The reaction mixture was stirred at room temperature for 2 h until the starting material disappeared, as indicated by TLC. The mixture was then diluted with water (10 mL) and filtered to give a crude product, which was purified by column chromatography to afford the corresponding L series compounds (70-80%). The NMR data of these compounds and the additional synthesis steps required for L6-L9 are included in the Supplementary Materials (Scheme S1, Figures S1-S58).
The benzoyl amide derivative (1 mmol) was dissolved in toluene (15 mL), and 1,3-dichloropropanone was added, followed by reflux for 4 h. After removing a small amount of solvent by vacuum evaporation, the mixture was allowed to stand for precipitation. Upon filtration, the intermediates 6a-6h were obtained. The acridone (7) was dissolved in DMF, and NaH (1.2 mmol) was added slowly until no more bubbles were produced. Then, 6a-6h were added, and the mixture was stirred at 60 °C for 1 h. The reaction mixture was poured into water, and filtration yielded the off-white solids N1-N8 (30-50%).
ROS Detection Assay
The detection of intracellular ROS levels was performed with a 2′,7′-dichlorofluorescin diacetate (DCFH-DA) kit (S0033S; Beyotime, Nanjing, China). Cells were plated in 100 µL of culture medium at a density of 5000 cells per well in 96-well plates with transparent bottoms and black walls (FCP965; Beyotime, Nanjing, China), with 3 replicate wells in each experimental group. The plates were then incubated for 24 h. Subsequently, the compounds were individually dissolved in PBS to 10 mM and then introduced into each well at concentrations of 0.1 µM, 1 µM, 10 µM, 100 µM, and 1000 µM. The plates were further incubated at 37 °C for 1 h. Next, cells were washed with PBS twice and then incubated with DCFH-DA at a concentration of 10 µM. Following a 30 min incubation in darkness, fluorescence measurements were obtained using a multi-function microplate reader (INFINITE E PLEX, Tecan, Männedorf, Switzerland). The EC50 values were calculated using GraphPad Prism 8. ROS levels were further assessed using inverted fluorescence microscopy (DMi8; Leica, Wetzlar, Germany).
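The EC50 fits performed in GraphPad correspond to a four-parameter logistic dose-response model; a rough equivalent in R is sketched below, with all object names, data values and starting estimates being our own illustrative assumptions:

    # 'ros_df' stands in for dose (uM) and normalized DCFH-DA fluorescence.
    ros_df <- data.frame(dose = c(0.1, 1, 10, 100, 1000),
                         ros  = c(1.05, 1.35, 3.0, 4.6, 4.95))
    fit <- nls(ros ~ bottom + (top - bottom) /
                       (1 + 10^((log_ec50 - log10(dose)) * hill)),
               data = ros_df,
               start = list(bottom = 1, top = 5, log_ec50 = 1, hill = 1))
    10^coef(fit)[["log_ec50"]]   # EC50 on the micromolar scale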
Cell Viability Assay
We evaluated cell viability using the MTT assay (ST316; Beyotime, Nanjing, China). MDA-MB-231 cells were evenly distributed at a density of 5000 cells per well in triplicate within 96-well plates. Following a 24 h incubation period, the cells were subjected to various concentrations of the compounds (0.1, 1, 10, 100, 1000 µM), which were initially dissolved in PBS to 10 mM. The positive control group received Quarfloxin (CX-3543) (A12380; Adooq Bioscience, Nanjing, China). The negative control group received an equivalent volume of DMEM. After an additional 48 h incubation, 5% MTT solution was added to each well (20 µL of MTT and 100 µL of medium) and incubated at 37 °C for 4 h. The optical density (OD) was measured using a multi-function microplate reader (INFINITE E PLEX, Tecan, Männedorf, Switzerland) at an absorbance wavelength of 490 nm. Cell viabilities were calculated using GraphPad Prism 8 software as follows: (OD experiment − OD background)/(OD negative − OD background) × 100%.
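The viability formula translates directly into a small helper function; this is an illustrative sketch, not the study's code, and IC50 values then follow from a dose-response fit like the EC50 sketch above:

    # Percent viability from raw OD490 readings (names are illustrative).
    viability <- function(od_treated, od_negative, od_background) {
      100 * (od_treated - od_background) / (od_negative - od_background)
    }
    viability(0.62, 1.10, 0.08)   # example: ~52.9% viability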
Quantitative Real-Time PCR
MDA-MB-231 cells were seeded at a density of 4 × 10^5 cells per well in a 60 mm dish and allowed to adhere for 24 h. Subsequently, the cells were treated with varying concentrations of N8 (ranging from 0 to 10 µM) or CX-3543 at a fixed concentration of 10 µM for 24 h. Following the treatments, total RNA extraction was carried out using TRIzol reagent (R0016; Beyotime, Nanjing, China), and complementary DNA (cDNA) was synthesized using the PrimeScript II 1st strand cDNA synthesis kit (6210A; TAKARA Bio, Beijing, China). Quantitative real-time polymerase chain reaction (qRT-PCR) analysis was conducted with the QuantiTect SYBR Green PCR kit (RR820A; TAKARA Bio, Beijing, China) on a Roche LightCycler 480 II sequence detection system (Roche, Basel, Switzerland). The investigation focused on determining the expression levels of c-MYC in TNBC cells. Data analyses were performed employing the cycle threshold (Ct) method and the 2^−ΔΔCt formula. Primer sequences were synthesized by Synbio Tech (Suzhou, China).
The qPCR thermal cycling conditions were as follows: initial denaturation at 95 °C for 30 s, followed by 40 amplification cycles consisting of denaturation at 95 °C for 15 s, annealing at 62 °C for 1 min, and extension at 40 °C for 30 s.
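For reference, the 2^−ΔΔCt quantification referred to above reduces to a few lines; the Ct values and variable names below are illustrative assumptions, not data from the study:

    # Mean Ct values for the target (c-MYC) and a reference gene in
    # treated and control samples (illustrative numbers).
    ct_myc_treated <- 24.1; ct_ref_treated <- 17.8
    ct_myc_control <- 22.3; ct_ref_control <- 17.9
    ddct <- (ct_myc_treated - ct_ref_treated) -
            (ct_myc_control - ct_ref_control)
    2^(-ddct)   # fold change vs. control; < 1 indicates downregulation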
Western Blotting Assay
MDA-MB-231 cells were cultured in 6-well plates (1 × 10^6 cells per well) and incubated with different concentrations of N8 (0, 2.5, 5, and 10 µM) or 10 µM CX-3543 for 24 h. Protein extraction was performed using a protein extraction kit (78835; Thermo Fisher Scientific, Inc., Waltham, MA, USA), and the protein concentrations were determined using the BCA protein assay kit (23225; Thermo Fisher Scientific, Inc.). Subsequently, cytoplasmic protein extracts were reconstituted in loading buffer and boiled for 5 min. Proteins (20-50 µg per sample) were separated by electrophoresis on 8-12% SDS-PAGE and then transferred onto polyvinylidene difluoride membranes. The membranes were incubated in a 5% nonfat milk solution at room temperature for 1 h, followed by an overnight incubation at 4 °C on a shaker with specific primary antibodies. Subsequently, the membranes were incubated with secondary antibodies for 2 h at room temperature. Immunoblots were developed using chemiluminescence and detected using the ImageQuant Analyzer (ImageQuant LAS 4000, GE Healthcare, Phoenix, AZ, USA). The Western blot assay employed primary antibodies targeting β-actin (AF0003; Beyotime, Nanjing, China), c-MYC (ab32072; Abcam, Cambridge, UK), and SOD2 (ab68155; Abcam, Cambridge, UK). Secondary antibodies utilized in the assay were HRP-conjugated anti-mouse (A0216; Beyotime, Nanjing, China) and anti-rabbit (A0216; Beyotime, Nanjing, China).
Molecular Docking Study
The Genetic Optimization of Ligand Docking (GOLD) module in MOE (version 2019.0102) software was used to perform the molecular docking study and analyze the interaction between ligand and receptor. The DNA G4 structures were corrected, protonated, and minimized in stages using the QuickPrep function in the MOE panel: Protonate3D was set to on, and water molecules farther than 4.5 Å from the ligand or receptor were deleted. The binding pocket was defined by the Site Finder module. The selectivity score was calculated according to the following formula:

Selective score(i) = 10^|DockingScore_i − DockingScore_c-MYC|

In this formula, i is a target with a G4 structure other than c-MYC, and the formula is based on the docking scoring method of the MOE software (version 2019.0102, Chemical Computing Group ULC, Montreal, QC, Canada). The heat map was drawn using the online tool http://www.heatmapper.ca (accessed on 16 October 2023) [27].
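Read this way, the score grows exponentially with the gap between a compound's docking score against c-MYC and against an off-target G4; a sketch of the computation in R, with a hypothetical named vector of docking scores, could be:

    # 'scores' stands in for MOE docking scores, one per G4 target,
    # including "cMYC"; the targets and values are purely illustrative.
    scores <- c(cMYC = -9.8, KRAS = -7.6, BCL2 = -8.1, VEGF = -7.2)
    selectivity <- 10^abs(scores - scores[["cMYC"]])
    selectivity[names(selectivity) != "cMYC"]   # score vs. each off-target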
Statistical Analysis
All data were expressed as the mean ± standard deviation (SD) based on a minimum of three independent experiments. Statistical analysis was conducted using ANOVA followed by Dunnett's test to assess the significance of differences. A p-value less than 0.05 was regarded as statistically significant. Statistical computations were carried out using GraphPad Prism 8 software.
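In R, one common equivalent of this ANOVA-plus-Dunnett workflow uses the multcomp package; the data frame, factor levels and simulated values below are illustrative assumptions, not the study's code:

    # 'd' stands in for a response column and a treatment factor whose
    # first level is the untreated control.
    library(multcomp)
    set.seed(1)
    d <- data.frame(value = c(rnorm(3, 1.0, 0.1), rnorm(3, 0.7, 0.1),
                              rnorm(3, 0.4, 0.1)),
                    treatment = factor(rep(c("control", "low", "high"), each = 3),
                                       levels = c("control", "low", "high")))
    m <- aov(value ~ treatment, data = d)
    summary(glht(m, linfct = mcp(treatment = "Dunnett")))  # each dose vs. control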
Fragment-Based Drug Design
To design selective c-MYC stabilizers, the conformation of the MYC G-quadruplex bound to two small molecules was selected for further investigation (PDB ID: 5W77 [28]). The small molecules lie flat on a platform formed by four guanine bases (DG7, DG11, DG16, and DG20). Due to the presence of the guanines, the center of this platform constitutes a hydrophilic region, while the periphery forms a hydrophobic region. The trichloromethylbenzene moiety of the small molecule extends perfectly into this region. However, the benzofuran of the small molecule fails to establish reliable hydrogen bonds with this platform despite its structural resemblance to guanine. Inspired by the binding properties of small molecules to DNA, the insertion of a small molecule between the bases, forming π-π stacking, enhances binding stability. Compared to benzofuran, a larger and more rigid skeleton appears to be more suitable, provided it spans adjacent bases and engages in π-π stacking interactions with them.
In recent years, there have been a series of reports indicating that acridone derivatives exhibit a stabilizing effect on the G-quadruplex (G4) structure of c-MYC. Notably, acridone derivatives precisely meet the requirements outlined in the structural analysis of the skeleton [23,24]. We initiated this study by docking acridone as a ligand into the G4 structure of c-MYC. The results aligned perfectly with our expectations, with acridone forming two pivotal π-π interactions with DG7 and DG11. Subsequently, we identified pharmacophores in the hydrophobic region of the receptor platform as targets for fragment growth (Figure 2). It is worth noting that we observed successful molecular generation exclusively in the region formed by DG6, while the other regions were sterically hindered.
Chemical Synthesis
The synthetic route was divided into two branches due to the structural characteristics of the target compounds (Scheme 1). For compounds in the L series, alkynylation at the 10-N atom of acridone was achieved using propargyl bromide, yielding N-propargyl acridone. A subsequent copper-catalyzed click chemistry reaction with the azide derivatives gave the corresponding target compounds; the azide derivatives themselves were prepared via alkylation and diazotization reactions. For compounds in the N series, derivatives of benzoyl amide were subjected to cyclization with 1,3-dichloropropanone, producing the intermediates 6a-6h, which were then used to alkylate acridone to obtain the respective N1-N8 compounds.
Acridone Derivatives Induce ROS Production in the MDA-MB-231 Cell Line
To investigate whether oxidative stress plays a pivotal role in the activity of the acridone derivatives in the MDA-MB-231 cell line, we performed the DCFH-DA assay. As shown in Figure 3a-e, compounds L1, L2, L3, L11, and N8 increased ROS production in a dose-dependent manner, with EC50 values of 43.0 µM, 161.1 µM, 209.4 µM, 26.05 µM, and 4.026 µM, respectively (Table 1). Among these, compounds L11 and N8 exhibited the highest level of activity. Subsequently, we employed fluorescence microscopy to further scrutinize the disparities in DCFH-DA fluorescence between the control and experimental groups (Figure 3f). It became apparent that post-administration, both sets of cells exhibited a marked increase in fluorescence intensity compared to the control group. This observation suggests that under the influence of these compounds, there is a significant elevation in ROS levels within MDA-MB-231 cells. This finding highlights the potential of compounds L11 and N8 as effective agents in modulating cellular oxidative stress. The detailed ROS data are included in the Supplementary Materials (Table S1).
Inhibition of the Growth of the MDA-MB-231 Cell Line by Acridone Derivatives
To investigate the cytotoxic effects of all the acridone derivatives we synthesized, we treated the MDA-MB-231 cell line with each of the 29 compounds at different concentrations (0, 0.1, 1, 10, 100, 1000 µM) and analyzed the results using the MTT assay. The IC50 values of all compounds measured by the MTT assay are listed in Table 1. We set 100 µM as the cutoff value; measurements larger than 100 µM were regarded as indicating low antitumor activity. The analysis shows that the IC50 of 9 of the 21 L series compounds and of all 8 N series compounds is less than 100 µM, and that the IC50 of 2 L series compounds and 6 of the 8 N series compounds is less than 10 µM. Altogether, both series of compounds can significantly inhibit the proliferation of MDA-MB-231 cells, with the N series performing even better. The detailed MTT assay data are included in the Supplementary Materials (Table S2).
N8 Increases ROS Levels by Downregulating the Expression of c-MYC/SOD2
To further clarify the mechanism by which these compounds regulate intracellular ROS through c-MYC, we used qRT-PCR and Western blot methods to detect changes in the levels of the related mRNA and proteins. N8 was selected to explore this mechanism as it is the most active compound. As shown in Figure 3h, the qRT-PCR results indicate that N8 can downregulate c-MYC mRNA in MDA-MB-231 cells. The Western blot results showed that the c-MYC protein content in cells also decreased after N8 treatment (Figures 3i,j and S59). Comparing these results with those for the positive drug CX-3543, we found that the inhibitory effect of N8 on c-MYC was similar to that of CX-3543, and even better at low concentrations. SOD2 is a crucial gene regulated by c-MYC and encodes superoxide dismutase, a key enzyme involved in clearing harmful oxidative species within cells; reduced SOD2 expression leads to an increase in intracellular ROS levels, inducing apoptosis [29,30]. We assessed the impact of CX-3543 and different concentrations of N8 on SOD2 expression using Western blotting and found that decreases in c-MYC expression correlated with decreases in SOD2 expression (Figures 3k,l and S60). The experimental results suggest that compound N8 stimulates ROS generation by regulating the expression of the c-MYC/SOD2 pathway.
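For readers unfamiliar with relative quantification, the snippet below sketches the 2^-ΔΔCt calculation commonly used to express qRT-PCR results such as the c-MYC measurements above. The paper does not state its exact quantification method, and all Ct values here are hypothetical.

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ΔΔCt method (target vs. reference gene)."""
    ddct = (ct_target_treated - ct_ref_treated) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Hypothetical Ct values: c-MYC vs. a housekeeping gene, N8-treated vs. control.
print(fold_change(26.0, 18.0, 24.5, 18.0))  # ≈ 0.35 -> c-MYC downregulated
```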
Discussion
The transcription factor c-MYC plays a pivotal role in regulating fundamental cellular mechanisms, including but not limited to controlling the cell cycle, modulating apoptosis, promoting protein synthesis, managing cell adhesion, and overseeing various other critical biological processes [31][32][33]. An elevated expression of c-MYC is observed in various tumors, marking it as a key oncogene. Theoretically, designing drugs targeting the c-MYC protein holds significant promise. However, akin to other transcription factors, c-MYC lacks a distinct binding site for potential regulators, compounded by its remarkably short half-life of only 20-30 min [34]. These characteristics pose a formidable challenge in devising compounds capable of directly interacting with the c-MYC protein. Consequently, exploiting the G4 structure of c-MYC as a novel target for developing anticancer therapeutics holds substantial research value in oncology, with current drug development efforts predominantly concentrated in this arena.
Despite the significance of c-MYC G4 as a therapeutic target, it is crucial to note that G4 structures are not exclusive to c-MYC. Medications designed to influence c-MYC gene transcription may inadvertently impact the expression of other genes, giving rise to a cascade of adverse reactions. This inherent challenge significantly complicates the development of c-MYC G4 stabilizers, contributing to the current state where drugs in this category remain confined to clinical trial phases. The dual challenge of achieving specificity while avoiding unintended consequences underscores the intricate nature of advancing c-MYC-targeted therapeutic interventions. As research continues, overcoming these hurdles will pave the way for innovative and precise anticancer treatments.
In our investigation, the cell activity results indicate that compounds from the N series generally exhibit superior activity compared to those from the L series, suggesting that oxazole is favorable for activity as a linker between the core pyridinone and the aromatic group. A comparison between compound N1 and compound L4 supports this conclusion, with strong evidence also provided by N2 and L2. Among compounds with triazoles, L11 stands out as the most active, featuring a carboxylic ester adjacent to a methyl-substituted aromatic group. The length of the ester bond significantly impacts molecular activity, with shorter ester bonds resulting in higher activity. However, when the ester bond is removed to create derivatives with shorter carboxylic groups, such as L12, activity is noticeably reduced, to as low as one-tenth of that of L11. Further support comes from L19 and L20. Moreover, compounds with meta-methyl substitutions generally exhibit higher activity compared to those without methyl groups, as seen with L11 having ten times the activity of L20, and L12 exhibiting higher activity than L19, suggesting a contribution of meta-methyl substitutions to activity. Nevertheless, for compounds with carboxylic esters bearing aromatic groups in ortho positions, the relationship between ester length and activity appears irregular.
Compounds without aromatic groups, such as L5-L9, are derivatives of ethyl esters linked with triazoles. Among these compounds, L9 exhibits the highest activity, with only a positional difference in the carbonyl group compared to L5 resulting in a 20-fold difference in activity. The molecular docking results show that the pyridinone core of L9 and its triazole interact with DG7, DG11, and DA6, while the carbonyl of L5 can form hydrogen bonds with the DA6 base. However, this hydrogen bond seems to induce a change in the position of the entire molecule, preventing interactions between the pyridinone core and DG7 or DG11, possibly explaining the disparity in their activities.
In the N series of compounds, where thiazole substitutes for triazole, the most active compounds are N8, N4, and N7, with substituents on the phenyl ring affecting activity. Compounds with ortho substitutions exhibit higher activity than those with para substitutions, and this activity difference appears unrelated to the electronegativity of the substituent, as observed with N6 showing higher activity than N2 and N8 exhibiting higher activity than N4. This trend aligns with the L series of compounds, suggesting a common mode of interaction with the receptor.
To further investigate the structure-activity relationship, we computed molecular descriptors for each compound and established a QSAR regression model relating activity to these descriptors. In this model, SlogP represents the compound's partition coefficient between lipids and water, which is positively correlated with the IC50 value for small molecules; this suggests that the lipophilicity of compounds is favorable for activity, explaining why carboxylic acid derivatives are less active than their corresponding carboxyl esters. The electrostatic interaction energy of a molecule refers to the energy generated by the mutual attraction or repulsion of charges between atoms within the molecule; if the charge distribution within the molecule is uneven, electrostatic interactions become more complicated. Polar molecules (those with positively and negatively charged regions) generally have stronger electrostatic interaction energy because of their uneven charge distribution. This indicator is negatively correlated with the IC50 value, indicating that fewer heteroatoms or simpler substituents are advantageous for activity. Compounds in the N series, in both the linker and aromatic segments, contain fewer heteroatoms than L-series compounds, explaining their generally higher activity.
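As an illustration of the descriptor-based regression described above, the sketch below computes a SlogP-type lipophilicity descriptor with RDKit and fits an ordinary least-squares model with scikit-learn. The SMILES strings, the second descriptor (TPSA, standing in as a crude polarity proxy for the electrostatic term), and the activity values are all hypothetical; the authors' actual descriptor set and fitted coefficients are not reproduced here.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors
from sklearn.linear_model import LinearRegression

# Hypothetical N-substituted acridone SMILES and hypothetical pIC50 activities.
smiles = [
    "O=C1c2ccccc2N(C)c2ccccc21",          # N-methyl acridone
    "O=C1c2ccccc2N(CC#C)c2ccccc21",       # N-propargyl acridone
    "O=C1c2ccccc2N(CC)c2ccccc21",         # N-ethyl acridone
    "O=C1c2ccccc2N(Cc3ccccc3)c2ccccc21",  # N-benzyl acridone
]
pic50 = np.array([4.2, 4.8, 4.0, 5.1])

rows = []
for smi in smiles:
    mol = Chem.MolFromSmiles(smi)
    rows.append([Crippen.MolLogP(mol),    # SlogP-type lipophilicity descriptor
                 Descriptors.TPSA(mol)])  # polarity proxy (assumption, not the paper's term)
X = np.array(rows)

model = LinearRegression().fit(X, pic50)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
```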
In addition to MYC, there are other genes in the genome capable of forming G-quadruplex (G4) structures; the selectivity of MYC-G4 stabilizers is therefore a crucial metric for assessing their performance. To further explore the selectivity patterns of the small molecules we synthesized, we conducted an initial selectivity assessment using molecular docking. We compiled a list of ten genes, apart from MYC, known to form G4 structures, and used them as receptors for docking with the small molecules synthesized in this study (Figure 4; Table S3). The results revealed that, from the receptor perspective, derivatives of pyridinone exhibited significant selectivity towards RET (PDB ID: 7YS7), c-kit (PDB ID: 2O3M), VEGF (PDB ID: 2M27), k-ras (PDB ID: 7X8N), and Bcl-2 (PDB ID: 6ZX7); in the case of the RET gene especially, all molecules demonstrated selectivity exceeding a tenfold difference. Conversely, selectivity was notably lower with PDB ID: 7NWD and PDB ID: 6V01. From the ligand perspective, this may partly explain why certain compounds exhibited strong activity in cellular experiments but lacked activity in the ROS experiments: compounds L20, N1, N3, N6, and N7, for instance, may be targeting alternative pathways beyond MYC in inhibiting tumor cell proliferation.
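The heat map in Figure 4 rests on Z-score normalisation of docking scores. A minimal sketch of that normalisation, with hypothetical scores, is shown below: each compound's score against a receptor is expressed relative to that compound's mean and standard deviation across all receptors, so strongly negative values flag preferential binding.

```python
import pandas as pd

# Hypothetical docking scores in kcal/mol (more negative = stronger binding).
scores = pd.DataFrame(
    {"MYC": [-9.8, -10.4], "RET": [-7.1, -7.5], "c-kit": [-8.0, -8.3]},
    index=["L11", "N8"],
)

# Row-wise standardisation: Z-score per compound across receptors.
z = scores.sub(scores.mean(axis=1), axis=0).div(scores.std(axis=1), axis=0)
print(z.round(2))
```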
In the experiments detailed in this study, by assessing the levels of the transcription factor c-MYC and its target gene product SOD2, we observed a significant decrease in the content of the c-MYC transcription factor and a corresponding downregulation of its target gene SOD2 following treatment with N8 in breast cancer cells. Although breast cancer cells adapt to ROS levels beyond normal, the further accumulation of ROS can still induce apoptosis in tumor cells if intracellular ROS are not promptly cleared [35,36]. We have confirmed the effectiveness of certain compounds in inhibiting the c-MYC protein within MDA-MB-231 cells, resulting in suppressed proliferation, elevated intracellular ROS levels, and the induction of apoptosis. Our findings indicate that compounds based on the acridone scaffold have immense potential as stabilizers of c-MYC's G4 structure. Notably, compound N8 exhibited the most significant activity and demonstrated a high degree of selectivity for the G4 structure of the c-MYC gene, presenting itself as a promising candidate for a novel therapy for triple-negative breast cancer. It is worth mentioning that, in the Western blot experiments, N8 exhibited higher activity at low concentrations; we speculate that this may be attributed to its poor solubility, with the actual concentration in the high-dose group not being as high as indicated.
In other words, the activity of N8 may actually be superior to CX-3543, providing a crucial direction for our upcoming structural optimization studies.
Moreover, leveraging the acridone scaffold opens avenues for designing more compounds targeting c-MYC's G4 structure, thereby providing additional potential options for treating triple-negative breast cancer. In the context of our research design, the strength of this study lies in targeting triple-negative breast cancer, which is commonly associated with poor treatment outcomes and a relative lack of therapeutic targets. By focusing on the pivotal gene c-MYC, we have designed and synthesized a novel series of compounds demonstrating high anti-tumor activity and selectivity in preliminary experiments. However, the solubility of these small molecules remains poor, as mentioned in the discussion above, and this is a key factor limiting their further study, such as in vivo experiments and ADMET profiling. Addressing these gaps constitutes a crucial aspect of our next steps.
Conclusions
In pursuit of functional molecules with therapeutic potential against TNBC, a series of acridone N-substituted derivatives were synthesized to modulate intracellular ROS levels. The variations in cellular ROS content and their inhibitory effects on TNBC were evaluated. Notably, compounds L11, N7, and N8 exhibited significant potentiation of TNBC suppression. In light of the regulatory relationship between MYC gene expression and intracellular ROS levels, promising candidates were identified. By detecting the expression of related genes and proteins, the mechanism by which the most promising candidate increases intracellular ROS and induces apoptosis of breast cancer cells through the c-MYC/SOD2 pathway was clarified. Utilizing QSAR modeling, a structure-activity relationship study was conducted, revealing that lipophilic groups with fewer heteroatoms constitute advantageous pharmacophores. Computational docking studies provided insights into the selectivity of MYC-G4 stabilizers. In conclusion, our endeavors highlight compounds L11 and N8 as potential small molecules for promoting TNBC apoptosis through ROS modulation, offering a promising avenue for the treatment of TNBC.
Supplementary Materials:
The following supporting information can be downloaded at: www.mdpi.com/xxx/s1. Scheme S1: the NMR data of our compounds and the additional synthesis steps required for L6-L9; Figure S1: 13C NMR of compound L1; Figure S2: 1H NMR of compound L1; Figure S3: 13C NMR of compound L2; Figure S4: 1H NMR of compound L2; Figure S5: 13C NMR of compound L3; Figure S6: 1H NMR of compound L3; Figure S7: 13C NMR of compound L4; Figure S8: 1H NMR of compound L4; Figure S9: 13C NMR of compound L5; Figure S10: 1H NMR of compound L5; Figure S11: 13C NMR of compound L6; Figure S12: 1H NMR of compound L6.
Figure 1. G-quadruplex stabilizer mediates transcription of the c-MYC gene (CNBP: cellular nucleic acid binding protein; hnRNPK: heterogeneous nuclear ribonucleoprotein K; TBP: TATA-binding protein; RNA Pol II: RNA polymerase II). The black box displays the conformation of the interaction between CX-3543 (green molecule marked as *1) and the G4 structure of c-MYC (gray-white ribbon composed of white molecules).
Figure 2. (a) Fragment growing-based virtual screening; (b) the hit compound docked with the G4 structure (the dotted line represents the pocket of the G4 structure); (c) the hydrophobicity of the platform formed by four guanine bases (gray-white molecules and ribbons: G4 structure of c-MYC; cyan molecule: the ligand of 5W77); (d) horizontal view of the platform.
Figure 4. Heat map of selectivity results of the small molecules towards other DNA G4 structures: (a) receptor-based clustering; (b) small-molecule-based clustering (the Z-score is used to evaluate the distance between sample points and the population mean), as well as docking results (green molecule: compound N8; gray-white molecule and ribbon: the G4 structures of (c) RET, (d) PARP1, and (e) k-ras, respectively).
"year": 2023,
"sha1": "1711c9e9293a6391292f1b64787adc3c6bfb9b03",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "9b629a2188a8adbfa368f6b6b8b1785e7fbac74b",
"s2fieldsofstudy": [
"Medicine",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Attitude of Students and the Mediating Effect of Acceptance, Interactivity and LMS on the Integration of Technology
Although technology is gradually being integrated into tertiary education in Ghana, students' readiness to adopt and adapt to learning technologies, for the smooth integration of technology into academic programmes in public universities, remains an issue of concern. Working within the constructivist and positivist paradigms, this study adopted a quantitative approach and used purposive and quota sampling techniques to solicit data from 1704 level 400 students in six (6) accredited public universities. Ten hypotheses were tested through regression analyses, and the results were analyzed with PLS-SEM. The study found that the attitudes of students significantly impact the integration of technology. Indirectly, students' Acceptance and Adjustment (AA) to the use of technology and Learning Management System (LMS) usage significantly mediate the relationship between the Attitude of Students (AS) and the Integration of Technology (IG). Furthermore, students' acceptance and adjustment to technology and the use of the LMS are key predictors of the integration of technology, whereas Interactivity is a weak predictor of the integration of technology into academic programmes in the topmost public universities in Ghana.
INTRODUCTION
Integration of technology into the curriculum is a huge issue in educational technology and requires urgent attention to ensure a smooth infusion of learning technologies into academic programmes. Integration of technology is an ontological phenomenon that must be carefully studied and understood within the context of higher education. Undoubtedly, developing countries have come a long way in embracing digital platforms for education, especially with the widespread use of e-learning for knowledge transfer. 1 According to the Association of African Universities (AAU), the development and application of ICT in African higher education institutions are essential for the region to flourish and remain relevant on a global scale. New flagship research by the World Bank and the African Development Bank, named e-Transform Africa, has been made public with help from the African Union, and details the best methods for utilizing ICTs in important African economic sectors. 2 This indicates that technology is gradually being integrated into all sectors of the educational ecosystem.

Within the higher education ecosystem, students are expected to possess some ICT skills before they are admitted or before they complete tertiary education; therefore, technology must be integrated into higher education. The key stakeholders, including students, professors, decision-makers, and IT staff, are mandated to ensure that technology is seamlessly incorporated into the curriculum to facilitate, enhance, and transform the teaching and learning process. Technology is used in the classroom to support, expand, and enhance student learning through computer integration. Incorporating ICT into education entails more than instructing students on computer usage: technology serves as a tool to enhance education, not as an end in itself. 3 In the process of integrating technology to enhance teaching and learning, students encounter numerous issues, beginning with their own beliefs, attitudes, and degree of acceptance of technological innovations. 4 A considerable number of these innovations are Learning Management Systems (LMS), which bundle numerous tools for teaching and learning; these tools are also designed, and often customized, to facilitate interactivity among students and lecturers. 5

The purpose of this research is to examine the attitudes of students and their impact on the integration of technology, as well as to examine the mediating effects of Acceptance and Adjustment, Interactivity and LMS usage on the integration of technology into academic programmes. The overarching research objective is primarily to investigate whether students' attitudes impact the integration of technology into academic programmes. In pursuing this objective, the following research questions were posed and addressed: first, to what extent do the attitudes of students impact the integration of technology into academic programmes? Secondly, to what extent do the mediating variables impact the relationship between the attitudes of students and the integration of technology? Previous studies in Ghana rarely touched on the impact of students' attitudes on the integration of technology into academic programmes in Ghanaian public universities. Addressing this gap is necessary for public universities to understand the effect of students' attitudes on the integration of technology. The problem is that, in most Ghanaian universities, technology is being used in a variety of ways.
Public university faculty members and students are gradually adjusting to the incorporation of technology through online learning. Previous studies by Tagoe, Afari-Kumah and Tanye concerning the integration of technology in Higher Educational Institutions (HEIs) in Ghana have shown that the attitudes of students are a potential threat to the success of the integration of technology. 6 In similar studies beyond Ghana, Law et al. observed both negative attitudes, such as requests for face-to-face discussion to encourage more interaction in an online learning setting, and positive attitudes, such as the assertion that 'asynchronous discussion was considered the most effective feature to engage students in online interaction.' 7 Even though using technology in the classroom has incalculable advantages, perceived misuse and non-use by students create a crisis for stakeholders. 8 These statements suggest the need for further examination of students' attitudes to understand the issues, to suggest improvements to the negative attitudes, and to build on the positive attitudes when it comes to integrating technology into academic programmes. This is the gap that this study addresses.
LITERATURE REVIEW
Understanding students' attitudes toward e-learning and investigating important aspects that influence students' behaviours toward technology integration may aid instructional designers in creating more successful online courses. Alharthi examined university students' attitudes toward technologies used in online courses and how employing these technologies benefits the learning environment, and found that students were not satisfied with the technology used in the distance learning courses because many of these technologies did not consider the learning preferences of learners and were neither easy to navigate nor flexible in their implementation. 9 According to Smith, Caputi, and Rawstorne, 'computer attitude' refers to "a person's general evaluation or feeling of favourability or unfavourability toward computer technologies (i.e., attitude towards objects) and specific computer-related activities (i.e., attitude towards behaviours)." 10 Smith et al. also mention that there is frequently a link between students' attitudes and their computer usage experiences, and that two aspects of the computer experience have a direct impact on students' attitudes: (i) subjective experience, which relates to the student's feelings and thoughts about their computer usage, and (ii) objective experience, which relates to individual computer interaction. 11

Notwithstanding, the attitudes of students, whether grounded in subjective or objective experience, can be categorized as positive or negative, as discovered by other scholars. A few of the positive attitudes are reviewed here, beginning with the study of Orgaz, Moral and Domínguez, who found that students' attitude toward technology influences their perception of technology and that students' attitude toward social networks has a positive influence on the use of technology. 12 This indicates that there are instances where students' attitudes have positively stimulated their perceptions and, through social networks, their use of technology. Romero et al., in examining preconceived notions of attitudes toward technology affecting the teaching-learning process and the academic-professional performance of students, found that the parameters relating self-perceived digital competence to attitude were not significant, but the parameter relating frequency of use to attitude was significant, with high use of technologies producing more positive and developed attitudes toward ICT. 13 The parameter relating frequency of use to attitude is also confirmed in the work of Mundir and Umiarso, whose research indicated that students' attitudes toward the implementation of LMS were positive because students reported accessing learning material and other sources easily and more frequently. 14 In addition, Mundir found that attitudes toward LMS were influenced by three factors: individual (e-learning self-efficacy), social (subjective norm), and organizational (system accessibility) factors. 15 More intuitively, Mahyiddin and Amin showed in their study that students have a positive attitude toward technology integration into education, as they were actively involved during discussions and exhibited positive attitudes and responses towards online learning. 16
Positive Attitudes and Integration of Technology
Positive attitudes of students towards technology integration have also been assessed in terms of gender. Balta and Duran, in an earlier study, found interactive whiteboards to be highly rated by both teachers and students, with male students having more positive attitudes toward the interactive whiteboards than female students. 17 However, Balta and Duran also showed that as students get older their positive attitudes decrease. 18 Similarly, Al-Emran and Salloum found significant differences in students' attitudes toward the use of mobile technologies for e-evaluation in terms of gender, 19 although there was no significant difference in terms of age, degree, or department. In Ghana, Nukpetsi found significant differences among colleges of education when these colleges' attitudes towards ICT education were examined, 20 and further observed a significant difference between male and female attitudes toward the use of ICT in colleges in Ghana. 21 In a college of education, Alabdullaziz et al. investigated instructors' and learners' attitudes toward e-learning 22 and revealed the most favourable sentiments regarding e-learning as a setting for multimedia training; the students gave almost identical ratings to animations, movies, and photographs. Performance in an e-learning environment is significantly impacted by learners' attitudes toward self-regulated learning, 23 even though a strong association was also observed between e-learning performance and learners' technology experience. Additionally, Jan Ardies et al. offered cautions regarding students' attitudes and achievement. 26 Vasbieva and Saienko discovered that 85% of students have a positive attitude toward a technologically enhanced language learning environment, 27 illustrating that there is no connection between students' chosen learning styles and their attitudes toward the usage of technology.
Negative Attitudes and Integration of Technology
Some negative attitudes of students have also been demonstrated in studies: Johansson observed that young people's enthusiasm for modern things is appreciable, yet they hold unfavourable views on technological education. 28 Further negative attitudes were enumerated by Asunka, who found that students do not respond favourably to online constructivist teaching approaches such as asynchronous discussions and ill-structured project-based learning activities, and perceived collaborative online learning within their context as a complex, more demanding and time-consuming experience. 29 Similarly, Zhwan et al. investigated students' attitudes toward information technology and its link to academic accomplishment and found that students held favourable views toward IT, which facilitated their academic achievement. 30 When the underlying dimensions of attitude toward IT were measured via the PCA method, three dimensions emerged: affection, behaviour, and cognition. The behaviour component recorded the highest score compared with the affection and belief components, while the affection component was the lowest of all the attitude components. Hence the need to look at students' attitudes toward information technology and whether they affect the integration of technology into academic programmes.
Theoretical Framework
Theories underpinning this study are discussed at this stage. According to the Theory of Planned Behaviour, the best method to forecast someone's behaviour is to inquire about their intentions. 31 Here, it is important to highlight that an intention will not manifest itself in behaviour if it is physically impossible to carry out or if unanticipated obstacles get in the way. If intention can explain behaviour, how can intention itself be explained? Ajzen claims that three factors help to explain behavioural intention: (1) attitude (one's perceptions of the behaviour); (2) the subjective norm (others' perceptions of the behaviour); and (3) perceived behavioural control (self-efficacy regarding the behaviour). The model predicts that attitudes, subjective norms, and perceived behavioural control all influence intention, which in turn influences behaviour. The TPB is relevant to this study because it frames attitude as the exogenous variable while investigating its role and relationship with the perceptions, intentions and subsequent observable behaviour of tertiary students. Considering all these issues associated with attitude, this study examined the extent to which the attitude of students impacts the integration of technology into academic programmes in HEIs in Ghana.
Next, the Technology Integration Matrix (TIM) was introduced by the Florida Center for Instructional Technology (FCIT) at the University of South Florida as a guide for teachers and administrators in the practice of integrating technology. The TIM is based on the theory of social constructivism, in which new learning occurs when students interact with each other to build new knowledge or gain new understanding. 32 This Matrix is relevant to this study in two major ways: first, it defines and authenticates technology integration as the key dependent variable under study at the selected public universities; and second, it defines the processes involved in integration and the extent to which students' attitudes affect the adoption and adaptation of the five levels of integration (entry, adoption, adaptation, infusion, transformation). It must be reiterated that an earlier qualitative study conducted by Gyau and Gyan revealed that public universities in Ghana are at the Adaptation level of deploying the TIM as the main method for the integration of technology into academic programmes. 33

The theory of online learning introduced by Anderson offers a paradigm of e-learning. 34 He claims three types of online learning should be considered: Collaborative, Community-of-Inquiry, and Community-of-Learning models. Additionally, the model identifies the two main human actors, learners and teachers, as well as how they interact with one another and with the content. Interactivity is a major construct and striking characteristic of a web-based learning environment. 35 In the instructional context, interactivity refers to sustained, two-way communication between students and an instructor; the objective of interaction may be completing a learning task or creating social relationships. 36 A technology-based interactive learning environment incorporates four types of interaction: learner-content, learner-instructor, learner-learner, and learner-interface. 37 This interaction can take place within a community of inquiry, using a variety of net-based synchronous and asynchronous (video, audio, computer conferencing, chat, or virtual world) interactions within an interface known as the Learning Management System (LMS). This theory is relevant to this study because it underpins Interactivity (INT) and Learning Management Systems (LMS) usage, which form two of the three mediating variables under study. A few studies have shown connections between the user acceptability of a system and other significant criteria, notwithstanding the paucity of information on LMS usage and its levels of acceptance in developing countries. For instance, Claar et al. investigated how various demographic factors, including age, race, gender, and educational attainment, affect students' acceptance of new learning management systems (LMS), and discovered that the higher the educational attainment, the more likely it is that a new LMS will be accepted. 38 Dias and Diniz (2014) also observed that an effective LMS has three characteristics: (1) it allows for a dynamic ecosystem that can integrate a variety of interactive learning activities; (2) it makes it easier for teachers to become familiar with ICT to boost their intrinsic motivation; and (3) it provides training strategies for students to improve their learning performance and level of satisfaction.
To what extent, therefore, do Interactivity and LMS usage impact the integration of technology? Previous studies in Ghana hardly ever discussed the impact of attitude on the incorporation of technology into academic programmes in Ghanaian public universities. This research sought to fill this knowledge gap and to ascertain the impact of attitude, through acceptance, interactivity and LMS usage, on technology integration. Against this knowledge gap, the authors formulated and tested the following hypotheses, based on the empirical review of related works and the research questions, to serve as the main constructs for developing a conceptual framework.
Attitude of Students (AS)
Although attitude is a multi-dimensional term, this study examined the attitude of students in the context of how they react and respond to learning technologies that have been introduced by policymakers to facilitate the integration of technology into academic programmes in the HEIs under study. 39 The attitude of students is therefore formulated as the exogenous variable based on the works of some scholars. 40 As such, this study sought to provide additional insight into the influence of Acceptance and Adjustment, Interactivity and LMS usage within the context of the Attitude of Students (AS) and the Integration of Technology (IG), necessitating hypotheses 1 to 4: H1: The Attitude of Students (AS) has a significant positive impact on the Integration of Technology (IG). H2: The Attitude of Students (AS) has a significant positive effect on Acceptance and Adjustment (AA) to use technology. H3: The Attitude of Students (AS) has a significant positive effect on Interactivity (INT). H4: The Attitude of Students (AS) has a significant positive effect on Learning Management Systems (LMS) usage.
Learning Management Systems (LMS) Usage
Learning Management System usage in this study is defined as the use of an interactive learning environment embedded with learning technologies that facilitate inter/intra-action, cooperation, training, communication, and the exchange of information among students, together with the effect of this usage on the integration of technology. 41 LMS usage is formulated as a construct based on the works of some scholars. 42 As such, this study sought to provide additional insight into the influence of LMS usage within the context of AS and IG, necessitating the question 'To what extent does LMS usage mediate the linkage between AS and IG?', as posed in hypotheses 7 and 8: H7: Learning Management Systems (LMS) usage has a significant positive impact on the Integration of Technology (IG). H8: Learning Management Systems (LMS) usage significantly mediates the relationship between the Attitude of Students (AS) and the Integration of Technology (IG).
Interactivity (INT)
Interactivity in this study refers to sustained, two-way communication between students and an instructor. A technology-based interactive learning environment incorporates four types of interaction: learner-content, learner-instructor, learner-learner, and learner-interface. 43 Interactivity (INT) is formulated as a hypothetical construct based on the works of Anderson, Liaw and Huang, and Wang. 44 Although studies of interactivity have examined two-way communication between students and an instructor, there is a need to provide additional insight into its mediating effect, hence the question: To what extent does Interactivity mediate the linkage between AS and IG? This is followed by the hypotheses: H6: Interactivity (INT) has a significant positive effect on the Integration of Technology (IG). H9: Interactivity (INT) significantly mediates the relationship between the Attitude of Students (AS) and the Integration of Technology (IG).
Acceptance and Adjustment to Technology (AA)
Students' acceptance of, and adjustment to, newly introduced learning technologies and their upgraded versions depends on whether they perceive the technology to be useful and easy to use. Acceptance and Adjustment to the use of technology is formulated as a hypothetical construct based on the works of some scholars. 45 This study builds on and contributes to works on acceptance by formulating and examining the mediator-oriented hypotheses: H5: Acceptance and Adjustment (AA) to use technology has a significant positive impact on the Integration of Technology (IG). H10: Acceptance and Adjustment (AA) to use technology significantly mediates the relationship between the Attitude of Students (AS) and the Integration of Technology (IG).
Integration of Technology (IG) refers to the use of digital tools and technology-based procedures for routine duties, employment, and educational administration. After making technology accessible and available, the next step is to integrate it. It is a goal-in-process, not an end state (Schmitt, NCES, 2002).

CONCEPTUAL FRAMEWORK

The conceptual model of this study is shown in Figure 1. The Technology Integration Matrix (TIM) presents five learning domains and corresponding levels of integration that determine the depth of technology integration in HEIs. In an earlier study conducted by Gyau and Gyan, it was discovered that the method of technology integration predominantly used by public universities is the TIM, and that the level of integration is currently at the 'Adaptation level' of the TIM. 46 It is against this background that the TIM became the base model and the most suitable operational definition for the Integration of Technology (IG), which is positioned as the dependent variable for the study. The Attitude of Students (AS) is the exogenous variable being investigated, together with its impact on the integration of technology in the universities under study. The authors' proposed conceptual framework therefore attempts to investigate the relationship between the two key variables, Integration of Technology (IG) and Attitude of Students (AS), as mediated by students' Acceptance and Adjustment (AA) to use technology, Interactivity (INT), and Learning Management Systems (LMS) usage.
RESEARCH METHODOLOGY
The quantitative approach was used to enable an objective measurement of the variables in this study and to examine the relationships between them numerically and statistically. Primary data were collected through questionnaires from students across six public universities in the country. The research approach was deductive reasoning supported by rigorous statistical tests.
Sampling techniques
Purposive, quota and convenience sampling techniques were used for data collection, based on knowledge of the subjects under study. According to the Ghana Tertiary Education Commission (GTEC), there are 16 public universities in Ghana; the population for this study therefore comprised these sixteen (16) public universities (GTEC, 2021). A purposive, convenient sample of six (6) public universities was then selected from the Ashanti, Greater Accra and Northern regions. They were selected purposively based on their status and rank in the adoption and integration of technology into mainstream university education. The overall intent was to identify HEIs which had attained a considerable or reasonable amount of penetration in their integration process, especially after the impact of the COVID-19 pandemic, a situation that forced all HEIs to integrate technology or improve their level of integration. The study therefore focused on public institutions known to be conventional universities that had to adopt the dual-mode or blended mode of teaching and learning to ensure some level of integration of technology.
Quota sampling was used to select students based on particular attributes so that the sample would not differ from the population. A quota of 300 students was allocated to each of the selected universities, irrespective of their sizes: UG, UCC, UEW, UDS, UPSA and KNUST. 47 Additionally, level 400 students were purposively selected based on their status as final-year students and their extensive experience of the technology integration process, having adopted and engaged various learning technologies for academic activities over their 4-year tenure across various programmes. Next, based on the proportions of the subgroups (level 400) necessary for the final sample, the researchers allocated 300 questionnaires per university and conducted the survey. Quota sampling was the best method for this study since it allowed the researchers to select students proportionally from all the universities; a specific number of questionnaires was distributed proportionally with the help of the faculties, which encouraged participation and made a representative sample of the participating universities possible. Students were therefore selected using quota sampling, with convenience and snowball methods adopted for data collection.
Data Collection Methods
The sample size was informed by the criteria of Gay, Mills and Airasian, who recommend that for a population of 5000 or more, a sample size of 400 is adequate. 48 In practice, this sample size was not practicable for the researchers, because it did not conform to their resources and posed a huge financial burden. By quota sampling, the researchers targeted 300 students from each of the 6 universities, a total of 1800 students, who were reached and invited to participate in the survey. After a thorough screening of the completed questionnaires, a total of 100 were rejected due to inadequacies and incomplete answers. Eventually, 1704 questionnaires were suitable for data analysis, representing 94.7% of the 1800 students reached. Where students declined to fill out the questionnaire, the college/faculty was asked to recommend other consenting level 400 students within the same group to fill it, thereby adopting the snowball method as an additional method. 49
The Instrument
By best practice, quantitative studies of this nature are best conducted as surveys, deploying questionnaires as the ideal instrument. 50 A structured questionnaire with specific scales of measurement, drawn from the validated instruments of Mundir and Umiarso and of Mahyiddin and Amin for attitudes, was modified for this research. 51 Validity and reliability of the instrument were pursued by pre-testing and piloting it under the scrutiny of six experts in the educational and instructional technology field; subjecting the structured questionnaire to this intense screening led to the validation of some questions and the removal of others. Pre-testing was further conducted with a cross-section of students (N = 20) from a sister university to test the validity and reliability of the questions and to refine some of them, so as to avoid respondent and researcher biases. The scales of measurement in the questionnaire consisted of close-ended statements rated on a 5-point Likert scale, where 1 = Strongly Disagree and 5 = Strongly Agree.
Data Analyses
The data collected through questionnaires were first screened, and a total of 100 questionnaires were dropped due to incomplete entries and inadequacies in the information provided by some respondents. Following the requirements of the PLS-SEM software used for the analyses, each item in the scales of measurement representing the various constructs was first coded in Microsoft Excel and then imported into the PLS-SEM software for statistical analyses. This study adopted linear regression analyses, and the Partial Least Squares-Structural Equation Modelling (PLS-SEM) statistical tool was ideal because SEM performs more robust and reliable statistical analyses for multiple latent constructs. The study also required that the structural models be tested via the hypotheses and crystallized into a possible conceptual framework. Considering the proposed conceptual framework and hypotheses of this study, a structural model was therefore formulated to guide the various tests relevant to the study.
RESULTS AND ANALYSES
The main objective of this study was to examine the impact of the attitudes of students on the integration of technology. To address this objective, the students were asked to respond to pertinent questions, and the data gathered from the respondents were tested against the hypotheses in the regression analyses. Findings indicate that a 51.3% variation in students' Acceptance and Adjustment to technology can be attributed to the Attitude of Students. As indicated in Table 3 by the regression analyses, a 28.4% variation in the Integration of Technology (IG) can be attributed to the Attitude of Students, and a 23.3% variation in Interactivity (INT) can also be attributed to the Attitude of Students. Finally, a 22.6% variation in the use of Learning Management Systems (LMS) can also be attributed to the Attitude of Students in the public universities in Ghana. Although the R-squared values for IG, INT and LMS are quite low, they are still above the threshold, and by the regression analyses there is some impact among the key variables under study. This section presents the categorised results emerging from the analysis of the data.
Demographic Analyses of Quantitative Data: the demographic profile of students is detailed in Table 1, and the Attitude of Students items are presented in Table 2 (Source: Field data, 2022). According to the standards set forth by Fornell and Larcker, and the Heterotrait-Monotrait Ratio (HTMT) recommended by Teo et al., exploratory analyses such as scale reliability, convergent and discriminant validity, and other factors must be evaluated when measuring the data. 52 To identify multi-collinearity among the variables, the study first used a preliminary test for common method bias, and the Variance Inflation Factors (VIF) varied from 1.421 to 2.489, which is less than the 3.3 recommended by Kock. 53 Secondly, the study examined the convergent validity, discriminant validity and reliability of the structural model by adopting the Hair et al. criterion. 54 Convergent validity is the degree to which multiple attempts to measure the same concept are in agreement, and it is established when the AVE value is greater than or equal to 0.50. 55 Convergent validity for this study was achieved for all the variables tested. Discriminant validity is established when the square root of the AVE for a construct is greater than its correlation with all other constructs; this study used the Heterotrait-Monotrait Ratio in the PLS-SEM statistical instrument to attain acceptable AVE values. 56 Reliability is the extent to which a measuring instrument is stable and consistent; a threshold of 0.70 or above is recommended by Hair et al. 57 The reliability of the data was tested using the PLS-SEM Cronbach's alpha statistic to determine the reliability coefficient of the data collected and analysed. According to Ghazali and Nordin, the threshold for factor loadings should be 0.6 or higher; 58 on that basis, reliability for this study was achieved for all the variables tested. Table 3 indicates the confirmatory factor analysis results, the outer/inner VIF values, composite reliability, Cronbach's alpha, AVE, R-squared and F-squared values attained.
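A minimal sketch, with made-up data, of two of the measurement-model statistics reported above: Cronbach's alpha computed from raw item scores (against the 0.70 threshold) and the average variance extracted (AVE) computed from standardised outer loadings (against the 0.50 threshold).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
likert = rng.integers(1, 6, size=(1704, 4)).astype(float)  # hypothetical 5-point items

loadings = np.array([0.72, 0.81, 0.77, 0.69])  # hypothetical standardised outer loadings
ave = np.mean(loadings ** 2)                   # AVE = mean squared loading

print(f"alpha = {cronbach_alpha(likert):.3f}, AVE = {ave:.3f}")
```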
Measurement Model
The impact of a predictor variable at the structural level is judged by the F-squared effect size (f² >= 0.02 is small, >= 0.15 is medium, and >= 0.35 is large; Cohen, 1988). The results are reported in Table 3, and the R² values are illustrated in the structural model below.
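For reference, Cohen's f² for a predictor is computed from the model's R² with and without that predictor; the sketch below uses hypothetical R² values, not the study's.

```python
def f_squared(r2_included: float, r2_excluded: float) -> float:
    """Cohen's (1988) effect size for a single predictor in a regression."""
    return (r2_included - r2_excluded) / (1 - r2_included)

# Hypothetical illustration: dropping a predictor lowers R² from 0.30 to 0.25.
print(round(f_squared(0.30, 0.25), 3))  # 0.071 -> between small (0.02) and medium (0.15)
```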
Mediation Analyses: Learning Management Systems (LMS), Interactivity (INT), and Acceptance and Adjustment (AA)
Mediation analyses were performed to assess the mediating role of Learning Management Systems (LMS) usage in the linkage between the Attitude of Students (AS) and the Integration of Technology (IG). The results indicate that the indirect effect of AS on IG through LMS usage is significant (H8: β = 0.109, t = 3.565, p = 0.000); hypothesis H8 was therefore supported, as shown in Table 6. Meanwhile, the results in Table 5 revealed that the direct effect of LMS usage on IG is also significant (H7: β = 0.476, t = 3.768, p = 0.000); hypothesis H7 was therefore supported. This shows that the relationship between AS and IG is significantly mediated by LMS usage; since the direct effect of AS on IG also remains significant (H1), the mediation is partial rather than full.
Pursuant to these results, mediation analyses were also performed to assess the mediating role of Interactivity (INT) in the linkage between AS and IG, and the indirect effect of AS on IG through INT was found to be not significant (H9: β = -0.042, t = 1.288, p = 0.198); hypothesis H9 was therefore not supported, as indicated in Table 6. This indicates that Interactivity does not have any significant impact on the relationship between AS and IG. Meanwhile, the results in Table 5 revealed that the direct effect of INT on IG is also not significant (H6: β = -0.087, t = 1.386, p = 0.172); hypothesis H6 was therefore not supported.
Finally, mediation analyses were performed to assess the mediating role of Acceptance and Adjustment (AA) to technology in the relationship between AS and IG. The indirect effect of AS on IG through AA was found to be significant (H10: β = 0.179, t = 2.374, p = 0.010); hypothesis H10 was therefore supported, as indicated in Table 6. Meanwhile, students' Acceptance and Adjustment to technology had a direct significant impact on the Integration of Technology (H5: β = 0.229, t = 2.552, p = 0.018); hypothesis H5 was therefore supported, as indicated in Table 5. This shows that the Attitude of Students directly impacts the Integration of Technology and that the linkage is also significantly mediated by students' Acceptance and Adjustment to use technology. Acceptance and Adjustment to the use of technology by students is therefore a strong predictor of the Integration of Technology.
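A minimal sketch, on simulated data, of the logic behind these bootstrapped mediation tests: the indirect effect is the product of the AS-to-mediator path (a) and the mediator-to-IG path (b), estimated here with ordinary least squares and judged by a percentile bootstrap confidence interval. The path values and data below are hypothetical, not the study's.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1704
AS = rng.normal(size=n)
M = 0.5 * AS + rng.normal(size=n)          # hypothetical mediator, e.g. LMS usage
IG = 0.4 * M + 0.3 * AS + rng.normal(size=n)

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                       # path a: slope of M on AS
    X = np.column_stack([m, x, np.ones_like(x)])
    b = np.linalg.lstsq(X, y, rcond=None)[0][0]      # path b: IG on M, controlling for AS
    return a * b

idx = rng.integers(0, n, size=(2000, n))             # bootstrap resamples
boot = np.array([indirect(AS[i], M[i], IG[i]) for i in idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # CI excluding 0 -> significant
```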
As regards the impact of the Attitude of Students on the Integration of Technology and the mediating variables, it is evidenced in Table 5 that the Attitude of Students has a direct positive impact on the Integration of Technology (H1: β = 0.716, t = 22.240, p = 0.000). Hypothesis H1 is therefore supported. This reveals that the Attitude of Students in public universities in Ghana is a strong predictor of the Integration of Technology into academic programmes.
Likewise, the Attitude of Students also has a direct positive effect on Students' Acceptance and Adjustment to the use of technology as indicated in Table 5. (H2: β = 0.716, t = 20.702, p = 0.000). Hypothesis H2 is therefore supported. This reveals that the Attitude of students in the public universities in Ghana is a strong predictor of Students' Acceptance and Adjustment to the use of technology in their academic pursuits.
Similarly, the Attitude of Students also has a direct positive effect on Interactivity (INT) in the integration process, as indicated in Table 5 (H3: β = 0.199, t = 10.027, p = 0.000). Hypothesis H3 is therefore supported. This shows that the attitude of students in the public universities in Ghana is a strong predictor of students' Interactivity in the entire technology integration process. Finally, the Attitude of Students also had a direct positive effect on the use of Learning Management Systems (LMS) in the integration process, as indicated in Table 5 (H4: β = 0.483, t = 10.560, p = 0.000). Hypothesis H4 is therefore supported. The Attitude of Students, therefore, is a strong predictor of students' patronage of the LMS in the entire technology integration process in the public universities in Ghana.
Proposed Conceptual Framework Based on the Hypotheses and the Technology Integration Matrix
The authors of this article, in a quest to determine the variables necessary for developing a new conceptual framework, took into account and tested the relationship between the independent variable (AS) and the dependent variable (IG), vis-a-vis the mediating variables (AA, INT, LMS). Based on the research conducted and the hypotheses tested through regression analyses in the PLS-SEM software, the authors propose a conceptual framework underpinned by variables of the Technology Integration Matrix (TIM) and the ten hypotheses, which indicate the path analyses. Each path is represented by the hypotheses tested and labelled (H1-H10). The darker lines are paths that indicate direct effects, while the dotted lines indicate indirect effects.
FINDINGS AND DISCUSSIONS
This section discusses the findings and implications of the study, which have been categorized into theoretical, practical and policy implications, and summarizes the outcomes of the hypotheses. First, the main objective of this study was to examine the impact of the attitude of students on the integration of technology into academic programmes in the topmost public universities in Ghana. Primarily, the attitude of students has a direct effect on the integration of technology. Again, acceptance and adjustment to the use of technology and the usage of Learning Management Systems are strong predictors of the integration of technology. The study concludes that the attitude of students is positively related to the integration of technology. Interestingly, Interactivity does not significantly mediate the linkage between the attitude of students and the integration of technology, nor does it directly impact integration; the study therefore posits that Interactivity has no mediating effect on the integration of technology into academic programmes. Meanwhile, it is imperative to note that the attitude of students has a direct positive and significant impact on all three mediating variables (AA, LMS usage and INT). The study thus affirms that the attitude of students positively affects students' acceptance and adjustment to the use of technology, the use of Learning Management Systems, and the interactivity that transpires between students and their instructors.
Following that is the predictive power of students' Acceptance and Adjustment to the use of technology, which has a direct positive and significant effect on the integration of technology. The study affirms that students' Acceptance and Adjustment (AA) to use technology has a direct effect on the Integration of Technology into academic programmes. Likewise, the use of Learning Management Systems was found to directly impact the integration of technology into academic programmes; LMS usage is therefore a positive predictor of the Integration of Technology in the HEIs under study. The study discusses the implications of these findings in the following categories.
Theoretical Implications
Theoretical implications can be explained by discussing the findings in tandem with the theories underpinning the study. Ajzen's assumption that attitudes, subjective norms, and perceived behavioural control all influence intention, which in turn influences behaviour, is confirmed in this study: students have demonstrated their intentions to accept and use learning technologies through their attitudes about the integration of technology while being exposed to the stimuli of new technologies and interacting with them, as juxtaposed with their subjective norms and behaviour in the integration process. 60 Among the attitudes this study found regarding the integration of technology is that students' attitude towards technology integration is positive, as are their responses to it. This study, therefore, confirms the assumptions of the Theory of Planned Behaviour.
Anderson's theory of online learning emphasizes interactivity among all the key stakeholders in the integration process, 61 alluding to the need for interactivity to be sustained as two-way communication between students, instructors, content, policymakers and school administrators. Testing the effect of attitudes on integration, through interactivity as a conduit, has shown that, though vital in the integration mix, interactivity is a weak predictor of the integration of technology because it has no direct or indirect effect on the integration of technology into academic programmes. Collaborative, Community-of-Inquiry and Community-of-Learning models are the three types of online learning enshrined in the theory of online learning, and interactivity is the default construct among these online learning methods. Within the Community-of-Inquiry, a variety of net-based tools are integrated into an interface known as the Learning Management System (LMS) to ensure seamless interactivity. That is why this study sought to examine the role of interactivity in the integration mix. However, it has turned out that interactivity has a significant relationship with the attitude of students but does not impact integration in any way. This is a major contribution to the theory of online learning: it confirms the theory's position on interactivity but contradicts its efficacy in advancing technology integration.
This study has proven that the TAM, as a theory, is consistent in the Ghanaian context, as some of its assumptions have been demonstrated in the Ghanaian learning environment, especially after the effects of the COVID-19 pandemic. This is because the introduction of the mediating variable Acceptance and Adjustment (AA) presupposes that, by the assumptions of the TAM, students have to some extent accepted and adjusted to the Perceived Usefulness (PU) and Perceived Ease of Use (PEU) of new technologies in the public universities, especially during the COVID-19 era. This study has discovered that students' acceptance to use technology has a three-fold effect on the integration of technology. First, there is a significant positive relationship between the attitude of students and the acceptance of and adjustment to the use of new technologies, still based on what they perceive technology to be and how easy it is to use. Secondly, acceptance also has a direct positive impact on integration even without the input of perception. Thirdly, AA as a mediator has been proven to have a direct positive impact on the integration of technology. These outcomes confirm the assumptions of the TAM in the public universities in Ghana and confirm the outcomes of previous studies. 62 The study concludes that students' acceptance of and adjustment to technology use is critical to the advancement of technology integration in HEIs in Ghana.
Practical Implications
The practical implications of the outcome of this study cannot be overlooked. Just as AA to technology impacts integration, LMS usage has also been proven to be a strong predictor of integrating technology, as indicated by Hypotheses H7 and H8, because it has both direct and indirect significant positive effects on integration in the public universities under study. Mundir and Umiarso's study indicated that students' attitudes toward the implementation of the LMS were positive because they reported accessing learning material and other sources more easily and more frequently, and that students' attitudes toward the LMS were influenced by e-learning self-efficacy, subjective norms and system accessibility. 63 In their study, Mahyiddin and Amin demonstrated that students had a good attitude toward the integration of technology into education, since they actively participated in the discussions and showed positive attitudes and reactions to online learning. 64 Over time, LMS usage in colleges and universities has increased significantly to support educational activities. 65 Due to their substantial contribution to the delivery of instruction, students have with time become familiar with the use of the LMS, especially in the COVID era, and this may have accounted for the high positive impact of LMS use as a strong influence on the integration of technology. Consistent with the study of Zhwan et al. (2015), which linked students' attitudes toward information technology to academic accomplishment, students with favourable views of ICT found that it facilitated their academic achievement. 66 Moreover, Fathema et al. and Walker et al. assert that LMSs provide a variety of tools such as BigBlueButton, chatbots, discussion threads, video conferencing, lecture materials, learning modules, grading, and course assessments, all of which can be tailored to meet educational objectives. 67 Policymakers and management teams in the universities must therefore begin to consider LMS usage as a strong predictor and critical mediator between the attitude of students and the integration of technology into academic programmes.
Policy Implications
Concerning policy implications, the attitude of students has been found to have a direct effect on technology integration. Policymakers cannot overlook this, because numerous studies have demonstrated the direct correlation between students' attitudes and technology integration. 68 However, attitude can further impact integration when significant consideration is given to acceptance and LMS usage, which is consistent with the study outcome of Claar et al. 69 Policymakers must take a cue from the outcomes of these researchers, whose works have been confirmed in this current study. As Chen found, learners' attitudes toward self-regulated learning have a major influence on performance in an e-learning environment, even though certain e-learning factors and students' technological experience were strongly associated. 70 Additionally, if teachers wish to support their students' attitudes toward technology, Jan Ardies et al. advised them to better grasp the elements impacting those attitudes. 71 Policymakers must also consider this assertion and empower teachers to practically support students' attitudes toward technology integration. Policymakers and school administrators must, therefore, take pragmatic steps to retune or refine the attitude of students and encourage them to accept, adjust to and patronize integration through the LMS. This is because the ideological and cultural idiosyncrasies of students and prospective students can have a significant effect on the integration process if they find the LMSs to be laborious and stale, consistent with the recommendations of Asunka and Ansong et al. 72 More training sessions for students can neutralise these cultural idiosyncrasies. Policymakers and the management teams of the universities under study must begin to understand the effect of students' attitudes on the learning technologies that are installed to stimulate and ensure the integration of technology to improve academic performance.
Major Contribution
One major contribution of this study to the field of educational technology is the introduction of a proposed conceptual framework, which consists of ten hypothesized paths that have been tested with the intent to provide an additional conceptual guide to the integration of technology into HEIs, at least within the context of the attitude of students and its effect on the integration of technology. According to Rallis and Rossman, a conceptual framework is a summary of various research findings from the literature sources that have been evaluated regarding the study, outlining the study's research agenda for better comprehension of its objectives. 73 This study adopted a schematic presentation of the framework because it used inferential statistics to establish cause and effect between variables based on theories and the research questions of this study. 74 It also considered the four characteristics of a good conceptual framework: comparability, verifiability, timeliness, and understandability. Figure 4 presents the conceptual framework depicting attitude as the independent variable, integration of technology (incorporating the domains of the TIM) as the dependent variable, and acceptance, interactivity and LMS usage as the mediating variables. The validity, reliability and efficacy of their relationships and effects on each other have been duly tested through the confirmed hypotheses as indicated in Tables 3, 5, 6, 7 and 8. The authors of this article, therefore, propose this conceptual framework to stakeholders in the educational technology field of study.
LIMITATIONS AND FUTURE DIRECTIONS
Other researchers should conduct further studies using a mixed-methods approach. Such studies should take into account the cultural idiosyncrasies of prospective and current students, as well as the influence of students' attitudes and of obstacles on the incorporation of technology in HEIs.
RECOMMENDATIONS
The authors recommend that students' acceptance and adjustment, LMS usage and interactivity be given critical attention, as they can enable attitude to impact the integration of technology. The attitude of students has significant ramifications for students' acceptance of and adjustment to the use of technology, for the usage of the LMS for academic work, and for their interactivity with peers, instructors and content, which in turn impact integration. This implies that when students' perceptions of using technology are positive or negative, they are more likely to incline towards accepting or rejecting the use of that technology. Again, if students' attitude toward the use of the LMS is positive or negative, it will directly determine the usage or rejection of the LMS. Although attitude directly impacts integration, these three variables (AA, LMS, INT) have the propensity to cause attitude to further impact technology integration.
The article also recommends that, even though policymakers and university management have provided the impetus to boost interactivity between students and the learning technologies, they should not expect the attitude of students to drive interactivity as a stimulus for integrating technology into academic programmes. Rather, they should regard interactivity as one of the numerous digital activities that enable students to share ideas, content and information to facilitate learning and improve academic work, and not necessarily as a driver of the process of integrating technology tools and applications into the Learning Management Systems. That notwithstanding, policymakers should continue to boost interactivity by conscientiously directing students to increase their use of social media (Facebook, YouTube, Instagram, LinkedIn, Twitter and search engines), among other tools, to improve their search for information, their interactivity and their research, thus helping to close the digital divide.
To a very large extent, public universities in Ghana have invested in and installed various forms of LMSs with integrated tools to improve the integration of technology and better engage students in the teaching and learning process, especially during the pandemic. The Kwame Nkrumah University of Science and Technology alone has invested some GH₵20 million in technology integration from 2020 to date. The mandatory use of the LMS and other learning platforms during the COVID-19 era may have accounted for the highly significant positive impact of LMS usage on the integration of technology in the top public universities in Ghana. The authors recommend that policymakers and management teams of public universities invest more in the LMS and other learning platforms to improve technology integration than in brick and mortar.
CONCLUSION
The article has discussed the attitude of students and the mediating effect of acceptance, interactivity and LMS usage on the integration of technology. The authors of this article wish to reiterate that the perception of students is a weak predictor of technology integration and cannot directly affect it. Policymakers and school administrators in the HEIs must, therefore, give full consideration to the management of the mediating variables (AA, LMS, INT) by charging the IT directorates to provide more training sessions and help desks to stimulate students' acceptance, and to provide regular notifications and updates about new applications, as well as instructional videos that enable students to learn quickly about the new learning technologies, upgrades and software applications they need for online learning and to demonstrate self-efficacy in independent learning.
"year": 2023,
"sha1": "9cb00e2e5eb13d77e0cb9e5d6f5e5491ab61041c",
"oa_license": "CCBY",
"oa_url": "https://noyam.org/?download_id=9384&sdm_process_download=1",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4de6f0b99dc56170de8a968360fd7ef852962a29",
"s2fieldsofstudy": [
"Education",
"Computer Science"
],
"extfieldsofstudy": []
} |
The Impacts of Policy on Energy Justice in Developing Countries
Access to modern energy is vital to societal wellbeing and to economic development. Still, the majority of rural households in developing countries do not have access to improved energy systems for basic household energy services. Many energy policies have been devised and several energy projects have been implemented to improve access. However, many of these policies and projects were unsuccessful because of the socioeconomic, cultural, resource and technical conditions present in particular contexts; major barriers were attributable to a weak understanding of local contexts and societal needs. Nevertheless, some projects that considered local social needs through innovative approaches were successful. Hence, improving access to improved energy technology requires understanding local contexts, linking energy to income-generating activities and poverty alleviation, and including women so that they benefit from the system. A bottom-up approach is sustainable for increasing energy access while contributing to poverty alleviation and livelihood improvement.
The continued growth in the number of people without access to clean cooking technology will be due to the increasing population in developing countries. Approximately 45% of the people deprived of access to a clean cooking facility live in Sub-Saharan Africa. Moreover, by the year 2030 about 70% and 50% of the people in this region will remain dependent on traditional biomass energy and without an electricity supply, respectively. About 50% of the people without access to a clean cooking facility and electricity in Sub-Saharan Africa live in Nigeria, Ethiopia, the Democratic Republic of Congo, Tanzania and Kenya. In Ethiopia, more than 95% of households depend on biomass energy for cooking and over 70% do not have access to reliable electrical energy, at least for basic purposes (lighting and appliances). Overall, this represents an enormous electrification challenge, one that carries real energy policy and energy justice implications. This chapter explores the level of accessibility of improved energy in developing countries and discusses the associated socioeconomic and health problems. It also identifies the availability of renewable energy sources which could be used to solve the energy problems. The missed opportunities and continuing challenges resulting from low local community involvement and top-down government and donor policies are given due emphasis. Finally, alternative policy options are discussed and recommended, through which local interests are addressed and access to improved energy technologies is increased.
Variation in Energy Demand
Typically, household energy demand originates from cooking, lighting, heating and appliances. The variation between households' energy demands lies in the intensity of the energy used and in preferences for the technologies providing the services. The variation in the amount of energy for lighting reflects the number of lighting facilities, their efficiency and the length of time of use, for example. Hence, households using similar light arrays for the same period consume equal amounts of energy irrespective of their location.
Generally speaking, the energy demand for electronic appliances in rural areas of developing countries is very small due to the lack of high-energy-demanding appliances. Nevertheless, high-income urban households tend to use proportionally more energy for appliances. The energy need for heating depends on geographic conditions; however, most households without access to improved energy services live in tropical climates where energy for heating is not a big issue. Thus, important demand diversity exists between western and developing countries, as well as between urban and rural households, in terms of the energy used for cooking.
In western and temperate climates, the main share of household energy demand is for heating and appliances (IEA 2008). Energy demand for cooking constitutes the smallest share, presumably due to the consumption of processed food. Moreover, the energy system is well organized, with efficient stoves using high-quality energy from electricity and gas. In contrast, the demand variation between rural and urban areas in developing countries is complex. In rural areas, the energy demand for cooking is relatively high, as people generally consume unprocessed food that requires long cooking hours. Cooking unprocessed food also requires suitable cooking stoves matching local cooking habits. In addition, people typically have no facility to store cooked food, so they cook more frequently. Long cooking hours together with frequent cooking mean that cooking can account for up to 90% of a household's energy demand (Tucho and Nonhebel 2017).
In contrast, urban households' cooking energy demand depends on their socioeconomic condition. Urban households are generally connected to the grid, which enables them to use electricity for cooking (though charcoal still remains one of the main energy sources for regular cooking in certain areas). Poor urban households unable to afford electricity charges mostly rely on biomass energy for all cooking practices. On the positive side, it is also reported in the literature that urban households are more likely to shift to a modern energy supply as their income increases (Heltberg 2003). Urban households also have better access to semi-processed food than rural households, who rely on their own food produce. This variation in food type allows urban households to cook with modern stoves.
Various studies illustrate the effect of local energy use, cooking behaviour and customs on the adoption and sustained use of improved energy technologies (Wüstenhagen et al. 2007; Kowsari and Zerriffi 2011). Energy technologies failing to fit local cooking habits and foods are frequently not accepted by users. Thus, satisfying rural developing countries' cooking energy demand with western technologies that do not match local cooking habits and conditions may not be possible. Hence, energy technologies fitting local foods and cooking habits need to be identified and provided, taking into account their modification with the available local materials; for instance, biogas cooking stoves can be adapted with local materials to the prevailing cooking contexts.
Impacts of Poor Accessibility to Improved Energy Supply
A lack of access to improved energy technologies carries many socioeconomic and environmental impacts. In developing settings, the most important environmental impacts are the decline of common forests, exposure to indoor air pollution and the increase in greenhouse gas emissions (Ruiz-Mercado et al. 2011; Kaygusuz 2012). The socioeconomic impact of the lack of access to improved energy is especially high on women and young girls, who are traditionally in charge of household activities.
Women spend most of their productive time on the extraction of firewood rather than on income-generating activities, going to school and other social activities (Wodon and Blackden 2006).
Most households in developing countries frequently cook food with wood obtained from common forests. Common forests are public resources with unrestricted access, where everybody has the right to use them without limitation. Common forests also serve as an additional source of income for poor urban and rural households selling firewood and charcoal. For instance, a large proportion of people in urban areas and those in small businesses, such as local coffee sellers, use charcoal despite having access to grid connections. This heavy reliance of both rural and urban populations on biomass for energy and income aggravates the intensity of firewood scarcity and deforestation (Allen and Barnes 1985; Arnold et al. 2006). In particular, the rate of deforestation is high in countries where large groups in the population depend on biomass energy for cooking. Ethiopia, Nigeria and Uganda are among the countries of the world with the highest wood-fuel biomass pressure and the highest rates of deforestation (Putti et al. 2015). Nevertheless, with good policy and integrated management, these public resources can be one of the sustainable alternative energy sources.
The vast majority of households in rural areas of developing countries continue to depend on biomass energy used in open-fire stoves for cooking. Open-fire stoves convert only about 10% of the energy content of the biomass (Bhattacharya and Abdul Salam 2002; MacCarty et al. 2010). This means that 90% of the energy content of the biomass dissipates into the open air without providing any gain. Satisfying the cooking energy demand of people using open-fire stoves requires a large quantity of wood from public forests. Continuous abstraction of large quantities of forest wood has a significant impact on the availability of the wood supply. When firewood is critically scarce, households are often forced to shift to crop residues and dung to meet their fuel demand. For instance, in highland areas of Ethiopia where firewood is scarce, dung substitutes for about 30% of the firewood demand (Bewket 2005; Duguma et al. 2014). In most cases, crop residues are burned on agricultural land to supply nutrients to the soil for the next farming season. Removing crop residues and using them directly for energy purposes affects the availability of nutrients to the soil (Lal 2009; Duguma et al. 2014). Thus, heavy dependence on both common forests and agricultural bio-wastes for use in traditional stoves significantly affects the local environment.
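To make the efficiency claim concrete, the following back-of-the-envelope sketch compares annual firewood demand under an open fire and an improved stove. The daily useful cooking energy, wood heating value and improved-stove efficiency used below are illustrative assumptions, not figures from this chapter; only the ~10% open-fire efficiency comes from the text above.

```python
# Back-of-the-envelope firewood demand under different stove efficiencies.
USEFUL_ENERGY_MJ_PER_DAY = 15.0  # assumed useful cooking energy per household
WOOD_LHV_MJ_PER_KG = 16.0        # assumed heating value of air-dry wood

def firewood_kg_per_year(efficiency: float) -> float:
    """Annual firewood needed to deliver the assumed useful cooking energy."""
    return USEFUL_ENERGY_MJ_PER_DAY / (efficiency * WOOD_LHV_MJ_PER_KG) * 365

for name, eff in [("open fire (~10%)", 0.10), ("improved stove (~25%)", 0.25)]:
    print(f"{name}: {firewood_kg_per_year(eff):,.0f} kg/year")
# With these assumptions, ~3,400 kg/year falls to ~1,400 kg/year:
# raising efficiency from 10% to 25% cuts wood demand by a factor of 2.5.
```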
A lack of available biomass and its use in inefficient stoves may also carry a range of socioeconomic and health consequences. When fuel wood is scarce, people are forced to walk long distances to where sufficient firewood is available. This imposes a huge burden on women and young girls, who are traditionally in charge of household chores and firewood collection in addition to other activities. Due to this additional burden, women spend much more time daily on domestic activities than their male counterparts. Women may spend about 2-4 h per day collecting firewood, heightening the risk of being deprived of education and other more productive activities (Blackden and Wodon 2006). A comparative overview of the daily working hours of men and women in Sub-Saharan Africa is shown in Table 7.1. As shown in the table and related literature, women in the region are forced to work more than 12 h per day due to the increasing time needed for firewood collection. As a result, more than 50% of them are time-strapped for other activities (Wodon and Blackden 2006). Spending such an amount of time on household chores has huge impacts on the productive time these people have to contribute to the economy of the family. As a consequence, they are deprived of the time needed for education, to take care of children, to generate income, to farm and to interact socially. In particular, a lack of sufficient time for children will have significant effects on the children's development and health.
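A simple calculation conveys the scale of this time cost. The sketch below converts the 2-4 hours per day of firewood collection cited above into hours per year and into the share of the reported 12-hour working day; the arithmetic is illustrative and uses only the chapter's own figures.

```python
# Annualizing the reported daily firewood-collection time.
HOURS_PER_DAY_RANGE = (2.0, 4.0)  # reported daily collection time (h)
WORKING_DAY_HOURS = 12.0          # long working day reported for women

for h in HOURS_PER_DAY_RANGE:
    print(f"{h:.0f} h/day -> {h * 365:,.0f} h/year "
          f"({h / WORKING_DAY_HOURS:.0%} of a 12-hour working day)")
# -> 730-1,460 hours per year, i.e. 17-33% of a 12-hour working day.
```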
It has been shown that one-third of the world's population burns wood, dung or charcoal for cooking, heating and lighting (Birol 2011). The use of biomass in inefficient stoves produces incomplete combustion by-products (ICB), which are hazardous to human health. Burning biomass produces smoke containing a large number of pollutants with known health hazards, including particulate matter, carbon monoxide, nitrogen dioxide, formaldehyde and polycyclic organic matter, including carcinogenic agents. Exposure to indoor air pollution from the use of solid biomass fuels has been reported as a causal agent of several diseases in developing countries. Exposure to these indoor air pollutants is associated with an increased incidence of respiratory infections, including pneumonia, tuberculosis and chronic obstructive pulmonary disease, as well as low birthweight, cataracts, cardiovascular events and all-cause mortality in both adults and children (Ezzati and Kammen 2002; Fullerton et al. 2008). A recent report by the World Health Organization (WHO) shows that more than four million people die prematurely every year from illnesses attributed to the indoor air pollution caused by the inefficient use of solid fuels for cooking (WHO 2016). In addition, the lack of access to improved energy services also extends to energy for lighting. Most rural households use kerosene wick lamps, which produce several hazardous by-products such as black carbon, with serious implications for human health (Lam et al. 2012). Kerosene is also very expensive, meaning it carries detrimental economic, environmental and health effects (Pokhrel et al. 2010; Lam et al. 2012).
The vast majority of Africans lack access to modern energy services, which constitutes a major obstacle to achieving wellbeing and economic development. Improved access to energy for the poor and marginalized communities would make a significant difference in the fight against poverty. More than in any other region of the world, access to affordable and suitable energy services in Sub-Saharan Africa needs to grow in order to improve the standard of living of the region's growing population. Although the foregoing discussion has focused largely on the domestic sector, access to modern energy is not limited to household services for cooking, lighting and powering small appliances; it also extends to energy use for agriculture. The paradox is that the agricultural sector, the major sector in the region, accounts for very little modern energy use, despite employing the largest share of the working population and contributing the major share of gross domestic product (GDP) in most African countries. Like cooking methods and fuels, agricultural production remains largely traditional, carried out by human and animal power. Modern, reliable and clean energy would enable living conditions to be transformed and, in turn, would increase industrial, agricultural, urban and rural development. 1 However, in many countries electrical energy losses surpass 30% and supply is unreliable. The unreliable and costly supplies of electricity and modern fuels hamper production, growth and development. Hence, the increasingly high oil import bills and financial losses experienced by many Sub-Saharan African countries constitute a huge drag on economic growth in the region.
Evolving Energy Policies in Developing Countries
For much of the last 200 years, the steady growth in modern energy consumption has been closely linked to rising levels of prosperity and economic opportunities across the globe (Sokona et al. 2012). However, high inequalities persist in the worldwide distribution of access to modern energy services. In particular, people in Sub-Saharan Africa experience the lowest per capita access to modern energy compared to others in the developing world. In this context, the most immediate energy priority is to expand access to meet the population's social and economic development agendas. Yet despite recognition of the need to expand energy access, the definition of energy access is ambiguous and not universal.
Access to energy is often defined narrowly: access to electricity as the proportion of households supplied by the electricity system out of the total number of households, or access to clean cooking facilities. Yet such definitions neglect energy for productive services. In principle, access to energy for the poor needs to include the energy required to drive economic growth and to generate income. Access to modern energy also needs to consider the provision of, and the ability to afford and use, modern and clean fuels for basic human needs, productive uses and modern societal needs like entertainment. Thus, a more comprehensive definition addresses the need for modern energy services to improve the livelihoods of the poor while, at the same time, using modern energy to drive local economic development on a sustainable basis.
Many energy policies and programs have attempted to provide improved energy facilities to households living in rural areas of developing countries. Major progress began during the 1970s and 80s, when the relationship between wood-based fuel and deforestation became known (Allen and Barnes 1985; Arnold et al. 2006). The mitigation efforts made since then, and still ongoing, vary across regions and countries due to differing priorities and capacities; yet in principle, it is a well-acknowledged fact that providing improved energy facilities requires substantial investment (Brew-Hammond 2010; IEA 2011). The principal energy sector transition of the 1990s, made through the privatization and reform of energy supply utilities, helped the utilities improve their accessibility and guarantee a provision of electricity to those able to pay, but largely did not include the poor (Sokona et al. 2012). This experience implies the need for further policy, social and institutional support in order to harness the great potential natural energy resources of the region and promote sustainable development. Particular emphasis needs to go towards the evaluation of existing practices and processes, national development policies and institutional set-ups, and to the problem context and conditions across the region. Further, a new approach is required to achieve regulatory reforms targeting poverty reduction agendas and the needs of local populations within microeconomic contexts. In this way, the energy access agenda needs to solve these fundamental problems and be broadened to an inclusive vision of the economy rather than narrowly focusing on the household energy sector.
Broadly speaking, most improved energy implementation programs were organized centrally in a top-down approach. Such programs usually neglect local practices and user interests, and failing to take public interests into account strongly affects the implementation and sustainability of the system (Ni and Nyns 1996). Recent literature on global improved biomass cook stove programs shows a lack of success in most top-down programs due to their failure to address local interests and culture, for instance (Urmee and Gyamfi 2014). Similar constraining factors are observed in biogas implementation programs (Mwirigi et al. 2014; Getachew et al. 2016). India and China are the leading countries in developing and disseminating cost-effective biogas digesters (Bond and Templeton 2011). Many Sub-Saharan African and other Asian countries that followed in their footsteps and developed their own biogas programs now remain at a standstill, with only a few success stories to be found (Parawira 2009; Mengistu et al. 2015). For instance, Ethiopia, Uganda, Tanzania and Rwanda are among the Sub-Saharan African countries that adopted biogas technology and implemented it through their national biogas programs (NBP). The problem is worse in rural Sub-Saharan Africa, where about 95% of the population still relies on the traditional use of biomass energy. Failure to articulate local interests and conditions affects the progress of modern energy technology adoption, installation and use (Ni and Nyns 1996; Urmee and Gyamfi 2014). Hence, these factors should be understood at the grassroots level before starting any improved rural energy program.
The practice of providing access to modern energy services to the poor in Africa is complex, mainly due to the dual nature of the energy system across Sub-Saharan Africa, where in some instances traditional and modern energy systems and practices co-exist. As noted above, rural household energy is often dominated by traditional modes of production and use. At the same time, modern and traditional energy use overlap in urban areas where access to both electrical energy and traditional technologies exists. In many urban areas across Africa, the simultaneous use of biomass fuels, kerosene and electricity is common, including among economically better-off households. Thus, when other socio-cultural dimensions are taken into account, the distribution of modern energy access across Sub-Saharan Africa and increasing access for a growing population become even more challenging. Provision of modern energy requires the availability of sufficient renewable energy resources that can achieve sustainable service provision.
Renewable Energy Resources
The provision of improved energy services for developing countries requires the availability of a sufficient amount of energy resources, which can be obtained from a variety of sources. In western and oil-rich countries, the demand for household energy is still largely met by fossil fuel sources. Yet due to growing environmental concerns and rising oil prices, fossil fuel sources are not a realistic long-term option. Instead, and positively, most of the countries deprived of access to improved energy services have tropical climates, presumably with abundant renewable energy resources.
Of course, renewable energy resources are not evenly distributed across the world, across regions and countries, or within countries themselves, but solar, wind and hydro are suitable for providing electrical energy at both large and small scales, where the small-scale application of hydro and wind depends on their local availability. 2 Solar energy is less affected by such local conditions because solar radiation arrives relatively uniformly at a given location. Nevertheless, the local potential must still be sufficient for the demand.
Until the sixteenth century, humanity depended on biomass energy for all household services. The first transition started in England with the introduction of chimneys and suitable grates, when consumers in urban areas started switching from wood fuels to coal (Fouquet 2010). Since then, mankind has gone through several energy transitions over the past five centuries, although the most significant progress was made around the late 1950s (Fouquet and Pearson 2012).
Given the known negatives of biomass energies, initiatives to formulate alternative biomass energy policies based on recognition, formalization and modernization of the sector are not appreciated by decision-makers in government, whose vision of economic growth and poverty reduction is usually based on fossil fuels and electricity (Owen et al. 2013). The same applies to the use of charcoal: the production, use and trade of charcoal for domestic cooking and heating is also characterized by contradictions, stereotyping and misconceptions about its negative consequences (Mwampamba et al. 2013). Nevertheless, biomass can be a better alternative for meeting the demand if income-generating activities are integrated and promoted through an enabling framework involving sustainable biomass supplies and a value chain built on better technologies.
Biomass can be obtained from different plant sources in different forms and can be divided into "common" and "produced" resources. Common resources are those owned collectively with free access; these are public forest resources open for everybody to use without limitation. Produced biomass, on the other hand, comes from private land resources, and its availability relies on the presence (ownership) of land resources and their yield (Berndes et al. 2003). The availability and productivity of land varies from place to place and from scale to scale, usually being unevenly distributed. Thus, the availability of biomass at the household level depends to some extent on a specific household's resource ownership. Households holding a large area of land can produce an excess of biomass, while those holding a small parcel of land can hardly produce enough. In addition, households can freely decide whether to use their bio-wastes for energy supply, for soil mulching, or for animal fodder and food. The potential at the individual household level is vital for the implementation of biomass energy technology that could deliver continuous functionality and sustainability. Hence, the potential at household scales needs to be understood, taking into account household ownership and competing purposes. It is also important to note the land-use system of households to identify free and non-agricultural lands. Households may have degraded, marginal or extra lands for tree planting and use for energy, as well as for income generation. Planting trees for both energy and income helps households to have a sustainable energy supply while providing economic incentives (Gebreegziabher and
Drivers of Energy Transition in Developing Countries
As indicated, mankind has gone through several energy transitions in the past five centuries, with major progress in the twentieth century (Fouquet and Pearson 2012). The twentieth-century transitions were driven by the invention and development of more fuel types, technological changes and service restructurings. These historical transitions are relevant for understanding the current energy transition in developing countries, which may follow similar trends. Energy transition is a slow process, taking over 100 years (Fouquet 2010), so a complete transition should not be expected within a short period of time. However, good progress can be achieved owing to the increasing penetration of electronic appliances operated on electricity.
Currently, people have different choices of energy sources and technologies. In particular, today's advanced communication technologies are vital for making people aware of the advantages of modern energy services. The increasing demand for micro-electronic appliances is a good example. A recent estimate shows that more than one in three Africans had at least one mobile subscription and about 76% of the population had Global System for Mobile communication (GSM) coverage, although the electrification rate in their countries was still about 30% (Smertnik et al. 2014; IEA 2015). According to this estimate, more than 358 million people in Sub-Saharan Africa are covered by mobile networks despite not having access to an electricity grid. All these technologies require electrical energy for their operation, which may foster the expansion of electricity and access to it. Mobile phones provide many economic and social benefits to rural households. This has attracted innovative mobile phone charging businesses in some Sub-Saharan African countries; the mobile phone charging micro-businesses in Tanzania and Uganda are good examples (Collings 2011). Experience from Uganda and Kenya shows an expansion of solar technologies to rural areas for phone charging, which has also contributed significantly to a reduction in kerosene use (Stojanovski et al. 2017).
In addition, improvement in education can be a stimulus to aim for a better life and improved energy access. In many developing countries, education is considered a basic human right, and every child should go to school at least for basic education. It has been shown that adoption of improved energy technology increases with the level of education and technology penetration (Lewis and Pattanayak 2012). To date, it is not uncommon to find a television set in remote rural areas among households able to afford diesel generators or solar PV. Thus, increasing demand for micro-electronic appliances and rising awareness can be big drivers of improved energy technology adoption and use. What is more, these conditions can shorten the duration of the transition to low-carbon energy and efficient technologies. These technologies require electrical energy, which can be supplied by stand-alone solar energy technologies.
Energy Transition and Donor Policy in Developing Countries
Innumerable energy transition policies have been proposed at the global scale to transform the traditional energy system towards more efficient energy technologies. For a long time, the transition of the household energy system was explained with the prominent energy ladder model (Fig. 7.1), which considers the household socioeconomic situation as the driver of the transition (Leach 1992). This model has been criticized for its linear transition mode, since an increase in household income does not necessarily achieve a complete transition from a traditional to a modern energy system (Masera et al. 2000; van der Kroon et al. 2013). It is obvious that increasing income helps households to afford the costs of the technology; however, the decision to adopt and use the technology depends on local conditions. A survey from different villages in Mexico affirms that stove types, cooking practices, fuel economy, accessibility conditions and cultural preferences were the main household decision factors, for instance (Masera et al. 2000). This decision also varies between urban and rural households: urban households with better incomes are more likely to adopt and use improved energy technology than rural households (Heltberg 2003). It is evident that socio-cultural factors related to the demand and the availability of local resources can be more important than increasing income for achieving the transition in rural areas (Kowsari and Zerriffi 2011). This indicates the complex behaviour of rural energy transition, which money alone cannot solve. The development and implementation of clean cooking energy technology for households in developing countries is relevant to at least five of the Sustainable Development Goals (SDGs), including Goal 3: Good health and well-being; Goal 5: Gender equality; Goal 7: Affordable and clean energy; Goal 13: Climate action; and Goal 15: Life on land (Rosenthal et al. 2018). Yet addressing the SDGs related to energy and achieving the low-carbon energy transition requires an understanding of the trade-offs and synergies between the opportunities they present. In particular, increasing access to renewable energy consumption has a positive impact on economic development and poverty reduction. Nevertheless, most energy policies in developing countries focus directly on large-scale grid electrification, and the majority of big energy projects are often linked to foreign donations. Getting access to these foreign donations involves much bilateral negotiation, and the negotiation process is mostly led by donors' interests and objectives. In the process, recipient countries may compromise their citizens' interests to comply with donor interests. In addition, some researchers argue that the transition possibilities in countries with low access to modern energy are shaped by post-colonial legacies and political agendas, where non-western traditions of thought are overlooked (Broto et al. 2018). Thus, the effectiveness of aid can be further affected when political ideology differs between the donor and the recipient (Dreher et al. 2015). In this regard, addressing poverty, climate change and energy security requires awareness of the association between energy systems and social justice, typified by the situation in which all individuals have safe, affordable and sustainable energy access.
A sustainable energy transition in a developing-country context requires an integrated policy approach involving local resource availability and viable technological options that match local demands and contribute to livelihood improvements. To achieve this, the following basic questions need to be taken into account. Are there sufficient renewable energy resources available for the demand? Are there efficient technologies available to convert renewable resources into a suitable type of energy to meet the demand? Are the technologies affordable, and do they match local socio-cultural conditions? Are they applicable with low labour requirements? Do the technologies and projects contribute to economic development and poverty alleviation?
The energy transition process is further geared towards questions of ethics and justice, which include the notion of a fair distribution of energy infrastructure, allowing equal and equitable access to decision-making and services, and the participation of marginalized groups. Failure to adequately engage with questions of justice in the energy transition process may aggravate poverty, entrench gender bias and exclude local people (Jenkins et al. 2018). As a particular strategy, then, energy justice focuses on evaluating and identifying the affected parties and the existing processes, in order to provide solutions and reduce injustice (Jenkins et al. 2018). It does so by focusing on distributional, recognition and procedural justice issues: considering the equitable distribution of benefits and costs, and stressing the need for inclusion and equal participation in decisions through recognition of the diversity of needs, values and interests of local people (Williams and Doyon 2019).
Sustainable Policy Alternatives
The provision of an affordable and sustainable energy supply is one of the key options for improving the livelihoods of millions of poor people in Africa. Small-scale biogas technology has huge potential to satisfy domestic energy needs and provide numerous economic and environmental benefits. Most Sub-Saharan African countries have adapted the Chinese and Indian biogas technologies and tried to disseminate them through their national biogas programs, with the financial support of funding agencies. However, widespread adoption of the technology and its continuous functionality were not achieved due to various socioeconomic, cultural, technical and attitudinal factors (Mwirigi et al. 2014; Getachew et al. 2016). This underscores the importance of local conditions and the opportunities missed when they are ignored or given little consideration.
A sustainable supply should give greater emphasis to productive uses of energy and energy for income generation. This contributes to the generation of higher incomes through the mobilization of local resources, technologies and financial resources (Brew-Hammond 2010; Rupf et al. 2015). There are some successful energy projects with innovative approaches that integrate income generation based on local needs. One of these is the Solar Sister project in Uganda, Tanzania and Nigeria, implemented through innovative women-to-women entrepreneurial networks providing a wide range of high-quality clean energy products. This project provides access to clean energy, with value in terms of a long life-cycle and the creation of new value chains through micro-entrepreneurship and the networking of multi-stakeholder partnerships (Heuër 2017). Solar Sister always consults with the community leader first and then seeks to include households in its initiatives. The Solar Sister field agents are local women recruited, trained and mentored by Solar Sister to set up their own independent clean energy micro-enterprises. This focus on woman-to-woman sales is an innovative way of introducing new technology into rural households, where women are the primary users and managers of household energy. It has been recommended that biogas technology be promoted by empowering women and female-headed households through access to credit and income, beyond merely promoting the adoption of the technology (Getachew et al. 2016).
The production of biogas can be resource-efficient and viable for cooking when arranged at a village scale in co-digestion mode. Arranging co-digestion at the village scale enables the use of any available organic wastes. This approach helps to avoid inter-household variation and resource scarcity owing to the sharing of resources, and improves the performance of the digester. Co-digesting different waste streams also increases the performance efficiency of the digester and its biogas yield (Giuliano et al. 2013), thus improving the productivity of the system and reducing the amount of feedstock needed to meet the demand. Furthermore, applying co-digestion at a village scale provides the possibility of using any organic wastes, such as human excreta and crop residues. A community biogas system established on shared household resources may involve several challenges, since households have vested interests in their resources. Households living in rural areas tend to have strong family cohesion and a culture of social cooperation and interdependence. Socio-cultural bonds are powerful in influencing individuals' living conditions and in solving societal problems. Households generally closely follow the rules of socio-cultural obligation or else are considered deviant. This dynamic can be harnessed in the development of a community energy system, to embrace households and influence them to follow the rules of the system. A community biogas system in rural India serves as a good example, in which households within the village shared their resources for the common benefit (Reddy 2004). In this project, households contributed their cow dung to a communal biogas digester installed to provide lighting energy for the village. Households can find solutions to their problems if they are allowed to participate in decision processes. For instance, inequality in resource sharing can easily be avoided through the exchange of labour: households with small amounts of feedstock can contribute labour for the collection of feedstocks and the feeding of the digester. This approach is essential to reduce the costs of distribution and the labour needed for feedstock collection. Accordingly, households can hand over their bio-wastes and collect biogas in return. Hence, households do not necessarily need to settle densely in a village to qualify for a pipeline distribution system; households living nearby can cooperate and install the technology to obtain the energy and slurry benefits. A communal energy system may thus not be affected by inequality among households and their living conditions.
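A rough sizing sketch can illustrate the village-scale pooling idea. The per-cow dung output, biogas yield and per-household cooking demand below are assumed round numbers for illustration only; actual values vary widely with feedstock mix, digester design and climate, and a co-digestion system would in practice draw on more waste streams than dung alone.

```python
# Rough sizing of a village-scale digester fed by pooled cattle dung.
DUNG_KG_PER_COW_DAY = 10.0           # assumed fresh dung collected per cow
BIOGAS_M3_PER_KG_DUNG = 0.04         # assumed yield from fresh cattle dung
COOKING_DEMAND_M3_PER_HH_DAY = 2.0   # assumed biogas needed per household

def households_served(n_cows: int) -> float:
    """Households whose cooking demand the pooled gas output could cover."""
    gas_m3_per_day = n_cows * DUNG_KG_PER_COW_DAY * BIOGAS_M3_PER_KG_DUNG
    return gas_m3_per_day / COOKING_DEMAND_M3_PER_HH_DAY

# e.g. a village pooling dung from 150 cows:
print(f"{households_served(150):.0f} households served")  # -> 30
```

Under these assumptions, a village pooling 150 cows could cook for about 30 households, which makes concrete why pooling matters: few individual households own enough animals to run a digester alone.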
The sustainability of biogas production depends on the labour spent on the daily collection of feedstock and water and on the removal of slurry (Tucho et al. 2016). Yet labour is not a significant concern for a community biogas system, given that the workload is well distributed. What is more, a larger reduction in resources and labour can be achieved through biogas system integration. The integration of latrines and livestock farming with a biogas digester, for example, can reduce the demand for additional feedstock and water. In this way, biogas production becomes part of livestock farming, where the water used for cleaning can be applied directly to the digester. This mechanism also reduces the operational costs that would be incurred if the system were run independently. Biogas system integration thus helps to overcome possible limitations related to resources and the operation of the system (Chen et al. 2010). Moreover, the provision of technical and financial support is easier to arrange at the community level than at the individual household level.
The integration of energy systems into income-generating activities can also be a good approach to enhancing households' financial capacity to afford the related costs (Brew-Hammond 2010). This approach can be vital for improving the economic capacity of households, and especially that of women, through the provision of targeted financial support to activities contributing to both income and energy. Smallholder livestock businesses can be a good option to provide sufficient dung and urine at nearby locations in addition to income. These businesses are best known for being pro-poor, less capital-intensive, quick in economic return and sustainable, in addition to their benefit for energy provision and use (Wambugu et al. 2011). Transforming feeding from field grazing to stall-fed conditions would substantially improve the quality of dung and provide easy access to the livestock's urine. Income-generating activities can be applied at a household scale but are easier to apply at a community scale in many respects (labour, technical and financial support). The application of a community energy system may not be straightforward, but it is critical. In general, an enabling policy with a better understanding of local resources, demands, businesses, social relations and customs is needed.
Conclusion
The provision of modern energy access to people in developing countries requires a better understanding of local socioeconomic and cultural conditions, the availability of resources, and the capacity and needs of households to adopt the technology. Many past policies tried to solve the energy problems through a top-down project implementation approach. This approach neglects the needs of the people and ignores their participation in planning and decision-making. As a result, many of these projects were not successful in their goal of providing modern energy access to the poor. It is apparent that providing modern energy access to the poor requires an understanding of their needs and thorough integration into income-generating activities. Integration of the energy supply with income generation can be achieved by involving households (and particularly women) in the process of planning, local resource mobilization, decision-making and implementation. As a result, the integration of the energy supply with income generation streams will contribute to poverty alleviation and improve the economic capacity of households for better technology adoption and the realization of the energy transition: key developments towards the attainment of the Sustainable Development Goals.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
"year": 2019,
"sha1": "defc1a865f6200de79d546803e1e2a89efbb773b",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-030-24021-9_7.pdf",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "8f48b9faeca9cce97df6a9456a028750aeae2450",
"s2fieldsofstudy": [
"Environmental Science",
"Political Science",
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
HIV-1 infection among women in Israel, 2010–2018
Introduction: Although women comprise 33% of HIV-1 carriers in Israel, they have not previously been considered a risk group requiring special attention. Immigration waves from countries in Africa and Eastern Europe may have changed the local landscape of women diagnosed with HIV-1. Here, we aimed to assess the viral and demographic characteristics of HIV-1-positive women identified in Israel between 2010 and 2018. Methods: All HIV-1-infected women older than 16 years diagnosed in Israel in 2010–2018 (n = 763) and registered in the National HIV reference laboratory were included in this cross-sectional study. Demographic and clinical characteristics were extracted from the database. Viral subtypes and transmitted drug resistance mutations (TDRM) were determined in 337 (44.2%) randomly selected samples collected from treatment-naive women. Results: The median age at diagnosis was 38 years. Most (73.3%) women were immigrants from the former Soviet Union (FSU) (41.2%, 314) or sub-Saharan Africa (SSA) (32.2%, 246) and carried subtype A (79.7%) or C (90.3%), respectively. Only 11.4% (87) were Israeli-born. Over the years, the prevalence of women from SSA decreased while that of women from FSU increased significantly (p < 0.001). The median CD4+ cell count was 263 cells/mm3, and was higher (391 cells/mm3) in Israeli-born women. TDRM were identified in 10.4% of the tested samples; 1.8%, 3% and 7.1% had protease inhibitor (PI), nucleoside reverse transcriptase inhibitor (NRTI) and non-nucleoside reverse transcriptase inhibitor (NNRTI) TDRM, respectively. The prevalence of women with NNRTI TDRM increased significantly, from 4.9% in 2010–2012 to 13.3% in 2016–2018. Israeli-born women had the highest prevalence (16.3%) of NNRTI TDRM (p = 0.014). NRTI A62 (5.6%) and NNRTI E138 and K103 (5.6% and 4.2%, respectively) were the most frequently mutated sites. Conclusions: Most HIV-1-positive women diagnosed in Israel in 2010–2018 were immigrants, with the relative proportion of FSU immigrants increasing in recent years. The high proportion of women diagnosed with resistance mutations, and particularly the yearly increase in the frequency of NNRTI mutations, support the national policy of resistance testing at baseline.
Introduction
Women comprise more than half (51.2%) of the 36.7 million people worldwide carrying HIV-1 [1]. However, the proportion of newly infected women varies around the world [2], with the majority (56%) living in Sub-Saharan Africa (SSA) [2], a region suffering from a generalized HIV-1 epidemic (> 1% HIV-1 prevalence) [3]. The second major region with a high proportion of HIV-1-positive women (42%) is Eastern Europe, particularly countries in the Former Soviet Union (FSU), which experienced the fastest growing HIV-1 epidemic in the world [4] between 2003 and 2009, and is currently regarded as a region of concentrated HIV-1 infection [3].
Interventions aiming to reduce the global spread of HIV-1 require understanding modes of HIV-1 transmission, viral subtype distribution and circulation of drug-resistant viruses. Viruses harboring drug resistance mutations are a major obstacle to successful HIV treatment, even in the current era of HIV treatment simplification and the shift to dual therapy regimens [5,6]. Immigrants from countries with high rates of HIV-1 infection and of viruses with resistance mutations may be infected and continuously transmit drug-resistant viruses after immigration [7].
Israel is a multicultural country with a continuous influx of immigrants from across the globe. Until 2010, as a result of massive immigration waves, 41.3% of all HIV cases were immigrants from SSA [8]. Between 2010 and 2018, 174,934 people immigrated to Israel, more than 50% of whom were women [9,10]. During this period, most immigrants (59.5%, 104,086) [9,10] were from the FSU. In comparison, immigrants from SSA [9] constituted only 6.7% of the total number of immigrants in 2010-2017 (9829/146,835), with a decline from 1918 immigrants in 2010 to 318 in 2017.
Gender is known to be a factor that significantly impacts migration experiences [11]. As a result of economic insecurity, limited education, and linguistic and cultural barriers, migrants most often present late to care. These factors may also place migrants, especially women, at risk for acquiring HIV-1 infection [12]. Women immigrating from countries with high rates of HIV-1 infection, unaware of their HIV status, are also at higher risk for delivering infants with perinatally acquired HIV-1 [13], especially in Israel, where universal HIV-1 prenatal screening is not mandatory [14].
According to the Israeli Ministry of Health, women comprise 33% of the reported HIV-1-positive individuals [15]. In a report that summarized HIV-1 diagnoses in Israel between 1981 and 2010, most HIV-1-positive women were from countries in Africa, mainly from Ethiopia. Those infected by injecting drugs or through heterosexual transmission comprised only a small minority of the reported cohort [8]. The characteristics of the HIV-1-positive women population and the rate of transmitted drug resistance mutations (TDRM) in women diagnosed in more recent years have not been evaluated. The goal of this study was to profile the demographic and viral characteristics of HIV-1-positive women diagnosed between 2010 and 2018, and to estimate the proportion of women carrying HIV-1 TDRM in Israel.
Methods
In this cross-sectional study, the database of the National HIV Reference Center, which has demographic and clinical documentation on all newly diagnosed HIV-1 patients in Israel, was screened for women diagnosed between January 2010 and December 2018. Men, transpeople, women below the age of 16 years and women diagnosed in years other than 2010-2018 were excluded. Demographic (age, birth place and route of HIV-1 transmission) and clinical (year of HIV diagnosis, HIV-1 viral load, CD4 + cell counts, HIV-1 subtype and TDRM) characteristics were collected.
The final cohort included 763 women. As not all treatment-naïve, HIV-1-positive women are routinely tested for resistance, the first available sample collected < 6 months after initial HIV-1 diagnosis of 337 women (44.2%), selected each year by a stratified random selection design, was analyzed by sequencing of HIV-1 protease (PR, codons 4-99) and reverse transcriptase (RT, codons 38-247). PR and RT TDRM were determined using the World Health Organization (WHO) consensus list of drug resistance mutations updated in 2009 [16] in the HIVdb Program v.8.8 [17]. The polymorphic RT-E138 and accessory mutation A62 sites were also assessed. Subtypes were defined by the REGA HIV-1 subtyping tool version 3.0 and the Stanford University HIV Drug Resistance Database [17].
Descriptive statistics were used to assess the study cohort. Variables with a non-Gaussian distribution (assessed by the Kolmogorov-Smirnov test) were expressed as median and interquartile range, and the Kruskal-Wallis test was performed to test the equality of means of several distributions. Categorical variables were expressed as frequencies and compared using chi-squared or Fisher's exact tests. The Bonferroni method was applied to check whether multiple testing could lead to the risk of type 1 errors. Logistic regression was used to test factors associated with TDRM rates. Multivariable analysis included factors found to be related (p < 0.01) to the dependent variable with forward covariate selection and was based on unstandardized effect-size statistics. Potential interactions were controlled by stratification on effect-measure modifiers to assess heterogeneity of a measure across the levels of another factor. Variables with missing values (e.g., missing CD4 results) were ignored. Poisson segmented regression (which typically aggregates individual-level data by time points and estimates dynamic changes over time, while adjusting for secular changes [18]) was performed to examine the change in the frequency of HIV TDRM over the study years. Statistical analysis was performed using IBM SPSS Statistics version 20.
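As a hedged illustration of the regression step (only the logistic models; the segmented Poisson step is analogous), the sketch below uses Python's statsmodels in place of SPSS. The DataFrame and all column names (tdrm_nnrti as a 0/1 outcome, year_dx, birthplace, subtype, log_vl, age_dx) are hypothetical stand-ins for the study database, not the actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("women_tdrm.csv")  # placeholder file name

# Univariate model: year of diagnosis vs. carriage of an NNRTI TDRM
uni = smf.logit("tdrm_nnrti ~ year_dx", data=df).fit()
or_year = np.exp(uni.params["year_dx"])          # odds ratio per calendar year
lo, hi = np.exp(uni.conf_int().loc["year_dx"])   # 95% confidence interval
print(f"OR per year: {or_year:.2f} (95% CI {lo:.2f}-{hi:.2f})")

# Multivariable model adding the other candidate factors
multi = smf.logit(
    "tdrm_nnrti ~ year_dx + C(birthplace) + C(subtype) + log_vl + age_dx",
    data=df,
).fit()
print(multi.summary())
```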
Results
While the total number of women identified remained stable over the study period (Table 1), a significant yearly decline in the proportion of SSA immigrants versus a constant increase in women originating from the FSU was observed (p < 0.001). Similarly, while the overall prevalences of subtype C (41.8%, 141/337) and subtype A (38.6%, 130/337) diagnoses were similar, the later years of the study were associated with a decline in the number of subtype C carriers and an increase in the number of subtype A carriers.

A comparison of the characteristics of women born in SSA, FSU, Israel or elsewhere (Table 2) showed that most women from the FSU (79.7%) were carriers of subtype A, while 90.3% of those from SSA carried subtype C (p < 0.001). Women were diagnosed with low (< 350 cells/mm3) CD4+ cell counts (Table 1), with lower median counts among women immigrating from SSA and the FSU (246 cells/mm3 and 262 cells/mm3, respectively) as compared to Israeli-born women (391 cells/mm3, p = 0.042, Table 2).
Resistance analysis revealed that 10.4% (35/337) of women carried viruses with resistance mutations, with 7.1, 3, and 1.8% of women carrying NNRTI, NRTI and PI TDRM, respectively. While the proportion of women with NNRTI TDRM increased significantly (p = 0.017) between 2010-2012 and 2016-2018, paralleling a non-statistically significant increase in the overall prevalence of women with any HIV TDRM diagnosed in these years, the rates of women with NRTI and PI TDRM remained stable. Moreover, in 2016-2018, no women with PI TDRM were identified (Fig. 1). All these results were further corroborated by Poisson segmented regression. No significant difference was observed in the prevalence of women with any TDRM between the different birthplaces (p = 0.170). Interestingly, the proportion of Israeli-born women carrying a virus with an NNRTI TDRM (16.3%, p = 0.014) was significantly higher compared with its prevalence among women born in other countries (Table 2). Logistic regression was used to assess factors associated with TDRM carriage and carriage of specific TDRMs by drug class. Factors included in this analysis were birthplace (FSU, SSA, Israel or other), HIV-1 subtype (A, C or non-A/C), viral load, age at diagnosis and year of diagnosis (supplemental Table S1). A significant association between recent diagnosis and NNRTI TDRM was found by both univariate (OR: 1.23, 95% CI 1.05-1.45, p = 0.01) and multivariate analysis (OR: 1.23, 95% CI 1.03-1.43, p = 0.020). No other associations were found. Table 3 lists the types of TDRMs identified in the study cohort according to drug class. The polymorphic RT-E138 and the accessory mutation A62 sites were also included due to their clinical relevance and high prevalence. A62V, which was the most prominent NRTI mutation (5.6%, 19/337), was significantly more common in HIV-1 subtype A-infected as compared to HIV-1 subtype C-infected women (13%, 17/130 versus 1.4%, 2/141, p < 0.001). E138 was the most frequently identified mutated NNRTI position (5.6%, 19/337), detected in 8.5% (n = 11), 4.3% (n = 6), 3% (n = 1) and 4.3% (n = 1) of subtype A, C, B and G/AG carriers, respectively. The NNRTI K103N/S mutation was identified in 4.2% (14/337) of women, and was significantly more prominent in those carrying HIV-1 subtype B compared to those carrying subtype C (11.8%, 4/34 versus 2.1%, 3/141, p = 0.010). The most prominent PI mutation was M46I, identified in 1.5% (5/337) of patients.
Discussion
Analysis of the demographic profiles of women diagnosed with HIV-1 in Israel between the years 2010 and 2018 revealed that most were not born in Israel. In 2010-2012, 44.7% were immigrants from SSA and 31.3% were from the FSU. In more recent years (2016-2018), 50% were from the FSU, while only 27.7% originated from SSA (p < 0.001). The most prevalent viral subtype changed accordingly, from subtype C, characteristic of HIV-1 in SSA, in 2010-2012, to subtype A, characteristic of the FSU, in 2016-2018 (p < 0.014). These results are in concordance with the waves of immigration from SSA and Eastern Europe to Israel in 2010-2018. A similar increase in the prevalence of subtype A carriers was recently reported in Germany and in other west-European countries, due to an increased flow of refugees, mainly from the FSU, into Europe, and especially into Germany [19]. The low CD4 counts noted in this cohort of HIV-positive women suggest late diagnosis. Moreover, women from SSA, as well as those who immigrated from the FSU, had significantly lower CD4 counts at diagnosis compared to Israeli-born women. Missed opportunities for early diagnosis have already been reported for at least 33% of the Israeli HIV population [20]. Late diagnosis was also recently reported to characterize over half of the women diagnosed in Europe in 2018 [21]. Our data corroborate these results and highlight the need for improved HIV diagnosis policies targeting new female immigrants. These can include offering HIV testing soon after arrival to all women immigrating from regions of concentrated and generalized HIV epidemics, such as the FSU and SSA, respectively. Also, as most of the women are diagnosed at reproductive age (median age at diagnosis was 38 years), universal testing for HIV-1 infection during pregnancy should be employed, without limiting it to a selected group, e.g., immigrants from SSA, as is currently performed [22]. It has already been demonstrated that a universal approach to perinatal HIV testing achieves the best health outcomes and is cost-effective across a range of HIV-1 prevalence settings [23].
TDRMs were identified in 10.4% of women diagnosed in the years 2010-2018. The prevalence of women with NNRTI, NRTI and PI TDRMs was 7.1, 3 and 1.8%, respectively. The proportion of women diagnosed with any TDRM, and especially with NNRTI TDRMs, increased significantly in more recent years, reaching 14.4 and 13.3%, respectively, among women diagnosed in 2016-2018. In a recent analysis of HIV diagnoses in 2017 in 9 European countries, the overall prevalence of resistance mutations in treatment-naïve patients was 13.5% and that of NNRTI was 7.7% [24]. Although these results are similar to our findings in women, they are likely an overestimation of the actual TDRM rate in Europe, as all resistance mutations included in the Stanford HIVdb were considered [16,17]. In general, changes in prescribing practices over the study period, the high genetic barrier of PIs and the lower genetic barrier of NNRTIs most likely explain the changing rates of drug class-related TDRMs [25]. However, the overall high rate of resistance mutations, the ongoing increase in transmission of resistant viruses, especially in more recent years, and the high rate of individuals on antiviral therapy worldwide mandate continuous monitoring of pretreatment resistance mutations in Israel and around the world.
NNRTIs are not considered preferred first-line therapies, but are still included in at least some regimens [5,25]. In the current study, K103N/S, which confers high-level or intermediate cross-resistance to the NNRTIs efavirenz, nevirapine and delavirdine, was the most prominent NNRTI TDRM (4.2%) and more prevalent in HIV-1 subtype B carriers, as previously reported [26]. As current guidelines permit the use of efavirenz among women of childbearing potential, this rather frequent TDRM should not be disregarded. The polymorphic E138 was the most frequently mutated NNRTI site. This naturally occurring polymorphism, which blocks the NNRTI-binding pocket, is known to affect rilpivirine binding and may cause lower susceptibility to this drug [27]. A systematic review that assessed the prevalence of rilpivirine-related TDRMs in 65 countries already reported an association between E138 mutations and HIV-1 subtypes C (6.1%) and A (3.3%) [28]. In the current study, it was identified in 5.6% of all women, irrespective of the viral subtype. As rilpivirine-based dual therapy is still considered a legitimate treatment option, resistance testing should be performed in all patients prior to rilpivirine therapy. The most prominent NRTI accessory non-polymorphic mutated site was A62V (5.6% prevalence), which influences replication fidelity and viral fitness in the context of multi-drug resistance mutations [17]. A62V, which was reported to be widespread in subtype A viruses in the FSU [17], was also significantly more prominent in HIV-1 subtype A in the present analysis. However, according to current guidelines, A62V does not interfere with therapy. While there was no significant difference between overall TDRM rates in women originating from different countries, significantly higher NNRTI TDRM rates (16.3%) were identified in women born in Israel compared to those born in SSA (8.9%) or the FSU (4.9%, p = 0.014). In an earlier study that assessed HIV-positive patients diagnosed between 1999 and 2003 in Israel, resistance mutations were reported in 14.8% of newly diagnosed, treatment-naïve patients, 28.6% of whom were known to have been infected in Israel [7]. Together, these results suggest continuous ongoing local circulation of drug-resistant viruses. An in-depth characterization of all HIV-1 patients identified in 2010-2018 is ongoing.
Our study has several limitations. The main inherent limitation was the overall small number of women positive for HIV diagnosed in Israel. Also, resistance analysis was not performed for all women; however, a stratified selection design was used to select samples from each year for sequencing and TDRM analysis. Nevertheless, this study was the first to focus on women diagnosed with HIV in Israel. Women are a subgroup of patients not previously considered a risk group, despite reports on biological sex being an important determinant of the risk of HIV infection and of subsequent viral pathogenesis, as well as of treatment responses [29].
Conclusions
The epidemiology of HIV-1-infected women in Israel is changing, showing a shift toward higher prevalence of women from FSU with subtype A HIV-1, infected through heterosexual contact. The proportion of women with any TDRM exceeded 10%, a level which, according to WHO, requires resistance testing, especially as the increase in NNRTI rates (13.3% in 2016-2018) seems to be ongoing. Moreover, when also considering the RT A62 and E138 polymorphic resistance-related sites, as suggested elsewhere [7,30], the overall prevalence of women with drug-resistance mutations increased to 18.4%, an alarming rate of resistance mutations. These results support the national policy of universal resistance testing soon after diagnosis and call for implementation of appropriate measures, including testing all at-risk pregnant women for HIV-1. | 2020-05-21T09:19:25.585Z | 2020-05-15T00:00:00.000 | {
"year": 2020,
"sha1": "8b95327ed2fa50d2197558e53f06c4783bd7e796",
"oa_license": "CCBY",
"oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/s12879-020-05389-6",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e8be7fe3f45e8f7033ce4e6401b7ee64044f54a5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
195798908 | pes2o/s2orc | v3-fos-license | Spin-orbit torque induced electrical switching of antiferromagnetic MnN
Electrical switching and readout of antiferromagnets makes it possible to exploit the unique properties of antiferromagnetic materials in nanoscopic electronic devices. Here we report experiments on the spin-orbit torque induced electrical switching of a polycrystalline, metallic antiferromagnet with low anisotropy and high Néel temperature. We demonstrate the switching in a Ta / MnN / Pt trilayer system, deposited by (reactive) magnetron sputtering. The dependence of switching amplitude, efficiency, and relaxation is studied with respect to the MnN film thickness, sample temperature, and current density. Our findings are consistent with a thermal activation model and resemble to a large extent previous measurements on CuMnAs and Mn2Au, which exhibit similar switching characteristics due to an intrinsic spin-orbit torque.
INTRODUCTION
The discovery of the electrical switching of antiferromagnetic CuMnAs via an intrinsic spin-orbit torque has triggered immense interest among researchers working in the field [1,2]. Experiments verified the proposed switching mechanism via direct imaging and showed that the remarkable properties of antiferromagnets, such as insensitivity to external magnetic fields and terahertz dynamics, can be exploited in devices [3][4][5]. The so-called Néel-order spin-orbit torque (NSOT) had initially been predicted [6] for another material, Mn2Au, which is an antiferromagnet with a very high Néel temperature [7]. Several works verified that the NSOT is also present in this material [8][9][10]. Recent studies by some of the authors of this article have demonstrated that thermal activation and thermal assistance via Joule heating are key features to the understanding and realization of stable multi-level devices made of Mn2Au or CuMnAs [9,11].
Only few metallic materials with suitable magnetic and crystallographic symmetry for the NSOT are known [12], which poses a significant challenge for the development and integration of devices based on these materials. Very recent work demonstrated that spin-orbit torque induced switching of insulating epitaxial NiO layers via the spin Hall effect (SHE) of an adjacent Pt layer results in very similar switching characteristics [13][14][15][16]. While the details of the underlying mechanism are under debate, it nevertheless opens a new route in antiferromagnetic spintronics. Similarly, it was shown that α-Fe2O3 can be switched and that Mn2Au can be manipulated via the SHE in a way distinct from the intrinsic NSOT [17][18][19].
In the present article, we demonstrate that electrical switching is possible with polycrystalline, metallic antiferromagnets and an adjacent Pt layer. Thereby, we show that a much larger class of antiferromagnetic thin films can be manipulated via the SHE, including metallic and polycrystalline materials; the read-out is possible via either the planar Hall effect (PHE) [1,9] or the spin Hall magnetoresistance (SMR) [13,14]. In our experiment, we focus on a low-anisotropy antiferromagnet with high Néel temperature: MnN. It has a tetragonally distorted NaCl structure and a Néel temperature of 650 K [20,21]. Its magnetic structure is of the AF-I type with the magnetic moments aligned antiparallel along the (001) direction, see Fig. 1 a). However, the spin orientation is controversial and might depend critically on the lattice constants. In previous studies, some of the authors of this article have shown its utility for exchange bias applications with large exchange bias fields at room temperature [22][23][24][25][26][27]. However, the critical thickness for the onset of exchange bias was observed to be around 10 nm at room temperature, leading to the conclusion that MnN has a small magnetocrystalline anisotropy energy density [22]. Since the available torque from the SHE is not large, we decided to choose this low-anisotropy material, because it seems to be an ideal candidate for an electrical switching experiment.

EXPERIMENT

We prepared Ta (6 nm) / MnN (t_MnN) / Pt (4 nm) samples on thermally oxidized Si substrates via dc magnetron sputter deposition at room temperature. The MnN layer was reactively sputtered from an elemental Mn target in a sputtering gas ratio of 50% Ar to 50% N2, following the same procedure as reported in Ref. 22. Its (001) fiber-textured growth in the as-deposited state, as described in detail in Ref. 22, has been confirmed by x-ray diffraction. Magnetic and grain size characterization of similar films was performed previously using the so-called "York protocol" of exchange bias measurements and transmission electron microscopy [26,28]. In this study, the median lateral grain size of the MnN was found to be 4.8 nm and the anisotropy constant at room temperature was estimated as K_AF ≈ 6 × 10^5 J/m^3. In polarized neutron reflectometry measurements, the films were found to be slightly rich in nitrogen and no magnetic scattering from the MnN films could be detected [27]. This excludes the possibility that electrical switching of ferrimagnetic Mn4N precipitates contributes to the signals we investigate in the present study. For the electrical switching experiments, the samples were patterned into star-shaped [1] structures (cf. Fig. 1 b)) using electron beam lithography and Ar ion milling. The devices are connected to the measurement setup via Ta/Au contact pads and Au wire bonds. Our measurement system is identical to the one we used previously for the study of Mn2Au and CuMnAs. It is described in detail in Ref. 11. In all experiments presented here, we used a current pulse width of ∆t = 4 µs. Pulses were grouped into bursts with a constant charge per burst of Q = 1.68 × 10^-4 C and a duty cycle of 0.002. After every pulse, a delay of 2 s was applied before taking the transverse resistance R⊥ = U⊥/I_probe reading with a lock-in amplifier, cf. Fig. 1 b).
To ensure a constant nominal current density j0 = I0/(wd) = U0/(R·w·d), the pulse line resistances R were measured before every switching cycle consisting of six repeats of 200 bursts per current direction and relaxation phases of 600 s. Here, j0 refers to a nominal current density with the total metallic film thickness d and the current-line width w = 4 µm. To determine the current densities in the individual layers, we used a parallel conductor model to determine the Pt layer resistivity, using the MnN and Ta resistivities of 180 µΩcm that were determined by four-point measurements on suitable reference samples. Additionally, the current density is corrected for the inhomogeneous current flow in the center-region of the star-structure by a factor of 0.6. For more details, we refer the reader to the Appendix of Ref. 11.
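A minimal sketch of this parallel-conductor bookkeeping is given below. Only the quantities named in the text (layer thicknesses, the 180 µΩcm MnN/Ta resistivities, w = 4 µm, and the 0.6 geometry factor) are taken from it; the Pt resistivity and pulse current are illustrative assumptions.

```python
def pt_current_density(I0, rho_pt, t_ta=6e-9, t_mnn=6e-9, t_pt=4e-9,
                       rho_other=180e-8, w=4e-6, geom=0.6):
    """Return the current density (A/m^2) carried by the Pt layer.

    The layers conduct in parallel: each contributes a sheet conductance
    G_i = t_i / rho_i, so the Pt layer carries the fraction G_Pt / sum(G_i)
    of the total current I0 through its own cross-section w * t_pt.
    """
    g_ta = t_ta / rho_other
    g_mnn = t_mnn / rho_other
    g_pt = t_pt / rho_pt
    frac_pt = g_pt / (g_ta + g_mnn + g_pt)
    return geom * frac_pt * I0 / (w * t_pt)

# Example with an assumed Pt resistivity of 30 uOhm*cm and a 40 mA pulse:
j_pt = pt_current_density(I0=40e-3, rho_pt=30e-8)
print(f"j_Pt ~ {j_pt:.2e} A/m^2")  # ~1e12 A/m^2 for these inputs
```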
RESULTS
In Fig. 1 c) we show typical raw data of switching with both polarities. To analyze a possible influence of the pulsing polarity, we calculate the differences and averages of the two polarities, see Fig. 1 d) and e), respectively. While the first two repeats show a clear dependence on the polarity, further cycles show only negligible influence of the polarity. Due to the expected symmetry of the Néel order switching, we focus on the reproducible, polarity-independent component of the measurement, i.e., the average over the two polarities after a training phase of three repeats. In the following, all switching traces refer to polarity-averaged switching traces after three repeats of training.
In Fig. 2, we show polarity-averaged switching traces for the temperature dependence and current density dependence of the 6 nm MnN sample. The temperature dependence shows clearly that higher temperature assists the switching process, with increasing steepness and amplitude. It is also clearly seen that with higher temperature, the relaxation becomes much faster and complete relaxation to the initial state is seen after 600 s at 260 K. The switching is also quite sensitive to the current density and large changes of the amplitude are seen within a fairly small interval of current densities. Recent work on the switching of insulating antiferromagnets with the SHE of Pt suggests that the typical "saw-tooth" shape of the transverse voltage traces is related to a degradation effect in the Pt film [17,18]. To ensure that the electrical response in our experiment originates from the switching of the Néel order, we performed reproducibility tests after cycling of the temperature. The degradation of the Pt layer should result in signals that are not reproducible after temperature- or current-cycling. Our results are, however, reproducible in the same device, which points to a magnetic origin.
To facilitate a quantitative analysis of the switching traces, we adopt the method from Ref. 11. First, to remove the polarity-dependent component, we use the polarity-averaged datasets. Then we separate the switching traces into two regimes, namely pulsing along either the red or blue lines in Fig. 1 a) and relaxation. For the pulsing regime, we found a simple fit function consisting of a constant, an exponential function and a line appropriate. In this case, the variable is the burst count b:

R⊥(b) = c0 + c1 [1 − exp(−µb)] + c2·b,   (1)

where c0,1,2 and µ are fitting parameters. Equation (1) is only a phenomenological fit function that helps to obtain the switching efficiency of the first burst Re accurately by taking the derivative of Eq. (1) at b = 0:

Re = dR⊥/db |_(b=0) = c1·µ + c2.   (2)

For the relaxation regime, we use a simple exponential decay as a function of measurement time t and add an offset:

R⊥(t) = d0 + d1·exp(−t/τ_eff),   (3)

where d0,1 and τ_eff are fitting parameters. The decay is characterized by the effective relaxation time constant τ_eff for a given set of parameters. As we show in Ref. 11, the exponential decay has a strict physical meaning. All antiferromagnetic grains of a polycrystalline film have volumes V_g which typically follow a lognormal distribution. These grain volumes are related to the anisotropy energy barriers via

E_B = K_AF·V_g.   (4)

The relaxation time for the orientation of the Néel vector of a grain is given by the Néel-Arrhenius equation

τ = f0^(−1)·exp(E_B/(k_B·T)).   (5)

Here f0 ≈ 10^12 s^−1 is the antiferromagnetic resonance frequency, k_B is the Boltzmann constant, and T is the absolute temperature. During the pulsing, we excite grains with various energy barriers at the same time, where a smaller E_B means that the Néel vector is easier to switch but will also relax faster. Therefore, one may expect to see a sum of multiple exponential decays during the relaxation phase. To simplify the analysis of the relaxation, we merge the ensemble of many different relaxation times into a single effective time constant τ_eff. The constant offset d0 in the relaxation fits suggests that the switching has a long-term stable contribution. However, because of the time window of 600 s for the observation of the relaxation, we can only tell that there is a component in the signal that is stable for times substantially longer than this window. It is clear that d0 is a function of the time window, so we refrain from a detailed analysis of this parameter. In addition, we define the difference of R⊥ before and after applying the bursts along one current-line as the absolute switching amplitude |∆R_a|. In Fig. 3, we summarize the results of this analysis for the temperature and current density dependencies. The absolute switching amplitude shows clear maxima for both film thicknesses as a function of the temperature (Fig. 3 a)). Remarkably, the thinner film shows a larger amplitude and the maximum is found at lower temperature. Simultaneously, the switching efficiency (Fig. 3 b)) increases with increasing temperature, but also shows an indication of peaking at a slightly higher temperature as compared to the amplitude. τ_eff shows a very strong temperature dependence in both samples and is smaller for the 6 nm MnN film thickness, see Fig. 3 c). This result is fully compatible with our thermal-activation model developed earlier for the switching in Mn2Au and CuMnAs: the peak of the switching amplitude is loosely related to the maximum of the lognormal grain size distribution. The larger film thickness leads to a larger grain volume and thereby shifts the amplitude maximum to higher temperature.
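As a numerical illustration of Eqs. (4) and (5), the following sketch evaluates the barrier and relaxation time of a single cylindrical grain. K_AF and f0 are taken from the text (a smaller value of 4 × 10^5 J/m^3 is quoted later from the grain-size analysis); the grain diameter and film thickness are example inputs, and the output is only indicative.

```python
import math

K_AF = 6e5         # anisotropy energy density, J/m^3 (room-temperature estimate)
F0 = 1e12          # attempt frequency f0, 1/s
KB = 1.380649e-23  # Boltzmann constant, J/K
EV = 1.602176634e-19  # J per eV

def grain_relaxation(diameter_m, thickness_m, T):
    """Return (E_B in eV, tau in s) for a cylindrical grain at temperature T."""
    volume = math.pi * (diameter_m / 2.0) ** 2 * thickness_m  # V_g
    e_b = K_AF * volume                                       # Eq. (4)
    tau = (1.0 / F0) * math.exp(e_b / (KB * T))               # Eq. (5)
    return e_b / EV, tau

# Median-volume grain (D ~ 5.4 nm) in the 9 nm film at room temperature:
e_b, tau = grain_relaxation(5.4e-9, 9e-9, 300.0)
print(f"E_B ~ {e_b:.2f} eV, tau ~ {tau:.1f} s")  # ~0.77 eV, ~9 s
```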
Simultaneously, higher temperatures lead to faster relaxation, as given by the Néel-Arrhenius equation. The energy barriers obtained from the relaxations are of the order E_B = 0.5…0.9 eV, see Fig. 3 d). In contrast to the naive expectation of scaling with film thickness, we find that the energy barriers are very similar in both films; we interpret this result later. As a function of current density, we find that both the switching amplitude (Fig. 3 e)) and the switching efficiency (Fig. 3 f)) are greatly increased with increasing current density. We find that τ_eff also depends on the current density (Fig. 3 g)), which is due to our simplification of taking a single effective relaxation time instead of observing the weights associated with many different relaxation time constants of the ensemble. At higher current density, the film temperature is substantially higher, which increases the proportion of larger grains with larger energy barriers that participate in the switching (Fig. 3 h)). The observed τ_eff increases as these grains contribute to a slower relaxation at the measurement temperature. Notably, both films have effective relaxation time constants of less than 100 s at room temperature; this is perfectly in line with the observation that exchange bias is observed with MnN only for larger film thicknesses of approximately 10 nm [22] at room temperature.
In Fig. 4 a), we reproduce the result of the particle diameter analysis from Ref. 26. According to this analysis, the anisotropy energy density is K_AF ≈ 4 × 10^5 J/m^3 for thin MnN films. For lognormal-distributed grain diameters, the grain areas and grain volumes of cylindrical grains are also lognormal distributed. The diameter of grains which correspond to the median volume is D_mv ≈ 5.4 nm, see Fig. 4 b). This corresponds to E_B = 0.5 eV in the 9 nm MnN film, which is clearly of the correct order of magnitude. However, this result indicates that the majority of the grains which contribute to the switching in our experiments are larger than the median of the distribution. The saturation of E_B as a function of temperature in Fig. 3 d) can thus be understood as a lack of grains with diameters larger than 7.5 nm (E_B ≈ 1 eV for the 9 nm film). Indeed, according to the particle size analysis, less than 10% of the grains have larger diameters. Therefore, their contribution to the electrical signal will be rather small. These results allow us to identify three classes of grains, which we call unblocked, switchable, and blocked, see Fig. 4 c). The unblocked grains relax very quickly for all temperatures at which we performed measurements. The switchable grains correspond to the observed energy barriers of 0.5…0.9 eV. Finally, the blocked grains remain blocked and are not switched with the spin-orbit torque. We note that only a narrow part of the switchable ensemble will contribute to the actual switching and relaxation at any given temperature. Correspondingly, one cannot directly relate the position of the switching amplitude maximum to the maximum of the grain size distribution, due to the complexity of the switching and relaxation dynamics: due to the Joule heating, the switching occurs at an elevated temperature, whereas the relaxation happens at the set measurement temperature.
To shed further light on the Joule heating and the effect of the conducting multilayer system and associated shunting, we study the stack with the parallel resistor model and calculate the spin current density and Joule heating in the center-region as a function of the measurement temperature, see Fig. 5 (note that all Pt current densities are given for the center region of the star-structure). The model gives very similar Pt resistivities for the two samples (Fig. 5 a)); slight deviations probably arise from the neglect of the weak temperature dependence of the MnN and Ta resistivities. Because of the identical nominal current densities j0, the sample with 9 nm MnN has a larger center-region current density in the Pt layer, Fig. 5 b). On the basis of the resistivities and the center-region current densities, we calculate the spin Hall angles θ_SH = σ_SH·ρ_Pt (Fig. 5 c)) with σ_SH = 4 × 10^5 (Ωm)^−1 [29] and the spin current densities j_s = θ_SH·j_Pt (Fig. 5 d)). Here, the spin current density from the Ta film is neglected, because the current density flowing in the Pt layer is approximately eight times larger. Because of the larger film thickness, the Joule heating power is larger in the 9 nm MnN sample (Fig. 5 e)). Using the heating powers of the center-region of the star structures, we calculate the peak temperatures of the film using an analytical formula
[11,30]. A correction factor of 1.48 determined by a stationary finite-element simulation was applied to take the 50 nm SiO2 layer into account. Unsurprisingly, at identical measurement temperature, the film reaches a higher temperature due to the Joule heating with larger film thickness. Coming back to the similarity of E_B for different film thicknesses (Fig. 3 d)), we note that both thermal activation and the spin current density are larger in the thicker film. Both aspects lead to a more efficient switching in the thicker film, bringing the two thicknesses closer together in terms of efficiency. However, E_B is evaluated from the relaxation, which depends only weakly on how the state has been set, cf. Fig. 3 h). Both temperature dependencies of E_B in Fig. 3 d) can be fitted with identical line fits E_B = ∆_T·k_B·T with ∆_T = 37.1 in the range up to 240 K, while saturation is seen at higher temperature. We interpret this as a grain-selection process by the available torque. Only grains with ∆_T ≈ 27…44 can be switched and be observed to relax [11]. Since the available torque is similar for all film thicknesses, in the thicker film grains with smaller diameter contribute to the switching at a given temperature as compared to a thinner film. Eventually, the energy barrier that is overcome is the same in the different films. This means that electrical switching may be observable in many antiferromagnets just below or at the onset of exchange bias, which can be taken as a simple measure for the thermal stability and the associated switching energy barrier. Finally, we come back to the read-out mechanism, which we propose to be due to either SMR or PHE, or both. While the PHE would originate in the MnN layer, the SMR would originate in the Pt layer. We calculate the relative transverse resistivity ρ⊥/ρ for both cases, where we just use the maximum switching amplitudes |∆R_a|. In the PHE case, the transverse voltage can be written as U⊥ = ρ⊥·I_MnN/d_MnN. Thus, (ρ⊥/ρ_MnN)_PHE ≈ 2 × 10^−4 for 9 nm MnN thickness. Accordingly, in the SMR case we have U⊥ = ρ⊥·I_Pt/d_Pt and (ρ⊥/ρ_Pt)_SMR ≈ 0.9 × 10^−4. The current branching ratio is approximately I_Pt/I_MnN ≈ 7.3. Both numbers are fairly small compared to our previous experiments on Mn2Au (maximum ρ⊥/ρ ≈ 70 × 10^−4) and CuMnAs (maximum ρ⊥/ρ ≈ 14 × 10^−4) [9,11], where the PHE is the only possible read-out mechanism. However, they are similar to the SMR amplitude in Pt / NiO upon rotation in a strong magnetic field (maximum ρ⊥/ρ ≈ 2 × 10^−4 at room temperature) [14,31]. On the other hand, the SMR can be much larger (maximum ρ⊥/ρ ≈ 16 × 10^−4 at room temperature) in YIG / Pt films [32]. Additionally, we performed density functional theory calculations of antiferromagnetic MnN with the fully relativistic multiple-scattering Green function framework as implemented in the SPR-KKR program [33,34]. We calculated the resistivity tensor via the Kubo-Bastin formalism at a finite temperature of 300 K and determined the PHE amplitude. Lattice vibrations were treated in the alloy analogy model using the coherent potential approximation [35,36]. The mean resistivity was found to be Tr(ρ) ≈ 57.7 µΩcm, which is much smaller than the observed value, but still rather high for a metal. The larger resistivity of the thin films arises from the small grain diameter and additional scattering in the grain boundaries. The PHE amplitude is ρ⊥/ρ ≈ 5.4 × 10^−4 for L ∥ [100] of the face-centered tetragonal unit cell depicted in Fig. 1 a).
In contrast, with L ∥ [110] we obtain ρ⊥/ρ ≈ −1.7 × 10^−4. This result indicates that the PHE and SMR are of similar magnitude in our system and both may contribute to the signal. However, the theoretical PHE amplitude appears somewhat too small, given that only a small fraction of the film contributes to the observed signal: the observed amplitude is only a factor of 2.5 smaller than the theoretical result, which should be observed when the Néel states of all grains are aligned along [100]. This leaves room for speculation whether the anomalous Hall effect due to slightly noncollinear order might contribute to the electrical read-out. In this case, the small magnetic moment would have to be switched together with the Néel order. On the other hand, thermomagnetic effects such as the spin Seebeck effect should not contribute to the signal. This is because we use a lock-in technique and measure the first harmonic signal, whereas thermomagnetic effects would be seen in the dc and second harmonic components. Furthermore, the calculation might underestimate the PHE, because we do not explicitly model chemical disorder nor grain boundary and surface effects. To gain further insight into this open question, a detailed study of the magnetoresistance in strong magnetic fields is necessary and will be performed in the future.
SUMMARY
In conclusion, we observe spin-Hall driven electrical switching of the Néel order in a metallic, polycrystalline antiferromagnetic layer. The characteristics are fully compatible with a thermal activation model. Our work demonstrates that the characteristic switching properties observed in epitaxial Mn 2 Au, CuMnAs, or NiO / Pt films can also be obtained in much simpler, polycrystalline antiferromagnetic films. | 2019-07-04T13:07:43.000Z | 2019-07-04T00:00:00.000 | {
"year": 2019,
"sha1": "a2aff4b3c9c705b9711b599e3f5a662ffdb69e09",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevResearch.2.013347",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "a081f17b485e4e647c5128e7f9649d6d52e17eab",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
6392009 | pes2o/s2orc | v3-fos-license | Association of circulating angiotensin converting enzyme activity with respiratory muscle function in infants
Background The angiotensin converting enzyme (ACE) gene contains a polymorphism, consisting of either the presence (I) or absence (D) of a 287 base pair fragment. Deletion (D) is associated with increased circulating ACE (cACE) activity. It has been suggested that the D-allele of the ACE genotype is associated with power-oriented performance and that cACE activity is correlated with muscle strength. Respiratory muscle function may be similarly influenced. Respiratory muscle strength in infants can be assessed specifically by measurement of the maximum inspiratory pressure during crying (Pimax). The pressure-time index of the respiratory muscles (PTImus) is a non-invasive method, which assesses the load to capacity ratio of the respiratory muscles. The objective of this study was to determine whether increased cACE activity in infants could be related to greater respiratory muscle strength and to investigate the potential association of cACE with PTImus measurements as well as the association of ACE genotypes with cACE activity and respiratory muscle strength in this population. Methods Serum ACE activity was assayed by using a UV-kinetic method. ACE genotyping was performed by polymerase chain reaction amplification, using DNA from peripheral blood. PTImus was calculated as (Pimean/Pimax) × (Ti/Ttot), where Pimean was the mean inspiratory pressure estimated from airway pressure generated 100 milliseconds after an occlusion (P0.1), Pimax was the maximum inspiratory pressure and Ti/Ttot was the ratio of the inspiratory time to the total respiratory cycle time. Pimax was the largest pressure generated during brief airway occlusions performed at the end of a spontaneous crying effort. Results A hundred and ten infants were studied. Infants with D/D genotype had significantly higher serum ACE activity than infants with I/I or I/D genotypes. cACE activity was significantly related to Pimax and inversely related to PTImus. No association between ACE genotypes and Pimax measurements was found. Conclusions These results suggest that a relation between cACE activity and respiratory muscle function may exist in infants. In addition, an association between ACE genotypes and cACE activity, but not respiratory muscle strength, was demonstrated.
Background
Angiotensin I-converting enzyme (ACE) is a zinc metallopeptidase whose main functions are to convert angiotensin I into the vasoactive and aldosterone-stimulating peptide angiotensin II and to degrade vasodilator kinins.
Circulating ACE (cACE) is found in biological fluids and originates from endothelial cells. ACE is also an important component of the local renin-angiotensin systems (RASs), which have been identified in diverse tissues, including lung and skeletal muscles [1,2]. A polymorphism of the human ACE gene has been identified, consisting of either the presence (insertion, I) or absence (deletion, D) of a 287 base pair (bp) fragment [3]. The deletion is associated with increased ACE activity in both tissue [4] and circulation [5]. Circulating ACE activity was stable when serially measured in the same individuals, while large differences among subjects were observed [6]. The I/D polymorphism accounts for approximately half of the observed variance in ACE levels [5]. However, the presence of quantitative trait loci controlling ACE levels has been suggested [7]. The D-allele of the ACE genotype has been associated with power-oriented performance, being found in excess in short-distance swimmers [8], and with greater strength gains in the quadriceps muscle [9]. Furthermore, it has been suggested that cACE activity is directly associated with muscle strength in healthy Caucasians naïve to strength training [10]. Thus, respiratory muscle function, and specifically respiratory muscle strength, may be similarly influenced.
Respiratory muscle strength in infants can be assessed specifically by measurement of the maximum inspiratory pressure during crying (Pi max ) [11,12]. The pressure-time index of the respiratory muscles (PTImus) is a non-invasive method, which assesses the load to capacity ratio of the respiratory muscles [13]. PTImus has been validated in both adults [14] and infants [15].
The aim of this study was to test the hypothesis that increased cACE activity in infants could be related to greater respiratory muscle strength assessed by measurement of Pi max . We further investigated the potential association of cACE with PTImus measurements, as well as the association of ACE genotypes with cACE activity and respiratory muscle strength in this population.
Patients
Infants cared for at the Neonatal Intensive Care Unit-Pediatric Department of the University General Hospital of Patras, Greece, were eligible for the study. Infants were entered into the study if parents gave informed written consent. The study was approved by the local Research Ethics Committee. The studied population was recruited from a study examining the association of ACE genotypes with respiratory muscle function in infants. All infants were studied before discharge, in the supine position, at least one hour after a feed. Infants had no respiratory symptoms for at least 3 days before measurement. Furthermore, infants were on full oral feeds, had serum electrolytes, calcium, magnesium and phosphates within the normal range and did not receive any methylxanthines. Blood sampling for circulating ACE activity determination was performed the previous or the same day as the measurements.
ACE genotype determination
ACE genotyping was performed on DNA extracted from 0.5 ml of whole blood, collected from an indwelling catheter or via peripheral venipuncture during routine blood sampling. The blood samples were stored at -80°C in EDTA vacutainer tubes. The method has been previously described [16]. Briefly, DNA was extracted by using Qiamp spin columns (Blood mini kit-Qiagen, QIAGEN Inc., Germantown, U.S.A). DNA was analyzed by electrophoresis on an agarose gel. DNA amplification of the 16th ACE intron was performed using two sets of primers flanking the polymorphic site (outer and inner primers), as mistyping of the D/D genotype has been reported to occur using conventional amplification with insertion/deletion (I/D) flanking primers [17].
Plasma ACE activity determination
An additional 1 ml of whole blood was collected during routine blood sampling. Serum was separated immediately from the whole blood by centrifugation at 1500 g for 10 min. The samples were stored at -20°C in vacutainer tubes until analysis. Serum ACE activity was assayed by using a UV-kinetic method (Medicon SA) and an AU480 Clinical Chemistry System (Beckman Coulter, Inc, High Wycombe, UK). The determination of ACE was based on the calculation of the rate of absorbance change at 340 nm during the hydrolysis of the substrate N-(3-(2-(furyl)acryloyl)-L-phenylalanylglycylglycine (FAPGG) to N-(3-(2-(furyl)acryloyl)-L-phenylalanine (FAP) and glycylglycine. The method has a detection limit of 7 U/L, linearity between 7-140 U/L of ACE and an intra-assay coefficient of variation between 2.26% and 4.86%.
Measurement of respiratory muscle function
Airway flow was measured using a pneumotachograph (Mercury F10L, GM Instruments, Kilwinning, Scotland) connected to a differential pressure transducer (DP45, range ± 2 cm H 2 O, Validyne Corp, Northridge, CA, USA). Airway pressure (Paw) was measured from a side port on the pneumotachograph, using a differential pressure transducer (DP45, range ± 100 cm H 2 O, Validyne Corp, Northridge, CA, USA). The signals from the differential pressure transducers were amplified, using a carrier amplifier (Validyne CD 280, Validyne Corp, Northridge, CA, USA) and they were recorded and displayed in real time on a computer (Dell Optiplex GX620, Dell Inc., Texas, U.S.A) running Labview™ software (National Instruments, Austin, Texas, U.S.A) with analog-to-digital sampling at 100 Hz (16-bit NI PCI-6036E, National Instruments, Austin, Texas, U.S.A).
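Since the inspiratory time Ti and total cycle time Ttot are later derived from the digitized airway flow signal (see the PTImus section below), a minimal sketch of that step is given here. The sign convention (inspiration as positive flow) and the synthetic trace are assumptions for illustration, not details from the paper.

```python
import numpy as np

FS = 100.0  # analog-to-digital sampling rate, Hz (as in the setup above)

def breath_timing(flow):
    """Return (Ti, Ttot) in seconds for the first complete breath.

    A breath is taken to start at a negative-to-positive zero crossing of
    the flow signal; inspiration is the positive-flow portion of the cycle.
    """
    insp = flow > 0
    starts = np.flatnonzero(np.diff(insp.astype(int)) == 1) + 1
    if len(starts) < 2:
        raise ValueError("need at least one complete breath in the trace")
    b0, b1 = starts[0], starts[1]            # one full respiratory cycle
    ttot = (b1 - b0) / FS
    ti = np.count_nonzero(insp[b0:b1]) / FS  # inspiratory part of the cycle
    return ti, ttot

# Synthetic sinusoidal "breath" at 0.8 Hz for demonstration:
t = np.arange(0, 5, 1 / FS)
flow = np.sin(2 * np.pi * 0.8 * t)
print(breath_timing(flow))  # approximately (0.62, 1.25)
```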
Measurement of Pi max
To measure Pi max , a facemask (total deadspace, 4.5 mL) was held firmly over the infant's nose and mouth. A small needle leak in the mask was used in order to prevent glottic closure and artificially high Pi max [13]. The airway was occluded at the end of a spontaneous crying effort using a unidirectional valve attached to the pneumotachograph, which allowed expiration but not inspiration. The occlusion was maintained for at least four inspiratory efforts. At least three sets of airway occlusions were performed and the maximum Pi max achieved by each individual was recorded.
Measurement of PTImus
P 0.1 was calculated as the airway pressure generated 100 milliseconds after an occlusion, while the infant was quietly breathing. At least four airway occlusions were performed and the average P 0.1 was calculated. The pressure-time index of the inspiratory muscles (PTImus) was calculated as: PTImus = (Pi mean /Pi max ) × (Ti/Ttot), where Pi mean was the average airway pressure during inspiration, obtained from the formula Pi mean = 5 × P 0.1 × Ti [18]. Pi max was the maximum inspiratory airway pressure, Ti was the inspiration time and Ttot was the total time for each breath, calculated from the airway flow signal.
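The computation reduces to a few lines; the sketch below simply encodes the two formulas above, with illustrative input values rather than patient data.

```python
def ptimus(p01_cmH2O, pimax_cmH2O, ti_s, ttot_s):
    """Pressure-time index of the respiratory muscles.

    Pi_mean is estimated from the occlusion pressure as 5 * P0.1 * Ti,
    and PTImus = (Pi_mean / Pi_max) * (Ti / Ttot).
    """
    pi_mean = 5.0 * p01_cmH2O * ti_s
    return (pi_mean / pimax_cmH2O) * (ti_s / ttot_s)

# Example: P0.1 = 3 cmH2O, Pimax = 80 cmH2O, Ti = 0.5 s, Ttot = 1.2 s
print(f"PTImus = {ptimus(3.0, 80.0, 0.5, 1.2):.3f}")  # -> 0.039
```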
Muscle mass increases with maturity and body growth [19] and Pi max continues to increase outside the neonatal period [11]. Therefore, in order to examine the association of ACE genotype with respiratory muscle strength, Pi max was also related to body weight at the time of measurement.
Statistical analysis
Data were tested for normality using the Shapiro-Wilk and D'Agostino skewness tests. Differences between ACE genotype groups were assessed for statistical significance using the Kruskal-Wallis and Dunn's post-hoc nonparametric tests and Cramer's V test, as appropriate. Simple regression analysis was performed to determine whether cACE is related to Pi max and PTImus measurements.
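For concreteness, the genotype-group comparison can be sketched as below; the serum ACE values are synthetic examples, and scipy's kruskal stands in for the statistical package actually used (a Dunn post-hoc test would follow, as in the Results).

```python
from scipy.stats import kruskal

# Synthetic serum ACE activities (U/L) for the three genotype groups:
ace_ii = [28, 31, 25, 30, 27]
ace_id = [29, 33, 27, 32, 30]
ace_dd = [41, 38, 45, 39, 43]

h_stat, p_value = kruskal(ace_ii, ace_id, ace_dd)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")
```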
Stepwise multiple regression analysis was performed to determine whether cACE activity is related to respiratory muscle strength, assessed by Pi max and PTImus measurements, independently of weight at measurement, ACE genotyping, postmenstrual age (PMA), gender and support from mechanical ventilation.
Sample size
Interim analysis of the data of 50 infants demonstrated a correlation of magnitude r = 0.23 between Pi max and cACE activity approaching statistical significance. Recruitment of 106 subjects would allow us to detect a correlation of magnitude r = 0.24 between Pi max and cACE levels with 80% power at 5% significance level ("Alpha", the probability of rejecting a true null hypothesis).
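A minimal sketch of this sample-size arithmetic, assuming the standard Fisher z-transformation approximation for correlations, is shown below; with a one-sided alpha it reproduces the quoted figure of 106 subjects to within rounding.

```python
import math
from scipy.stats import norm

def n_for_correlation(r, alpha=0.05, power=0.80, two_sided=False):
    """Approximate sample size to detect a correlation r via Fisher's z."""
    z_r = 0.5 * math.log((1 + r) / (1 - r))   # Fisher z of the target r
    a = alpha / 2 if two_sided else alpha
    z_a = norm.ppf(1 - a)                     # critical value for alpha
    z_b = norm.ppf(power)                     # ~0.84 for 80% power
    return math.ceil(((z_a + z_b) / z_r) ** 2 + 3)

print(n_for_correlation(0.24))                   # one-sided: ~107
print(n_for_correlation(0.24, two_sided=True))   # two-sided: ~134
```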
Whole study population
A hundred and ten infants were studied (table 2). Neither Pi max measurements, nor Pi max adjusted for weight at measurement, were statistically different between the three genotype groups (table 2). Infants with D/D genotype had higher serum ACE activity than infants with I/I or I/D genotypes (Kruskal-Wallis, p = 0.028; Dunn's test, z-value = 2.37, p < 0.05 and z-value = 2.12, p < 0.05, respectively) (table 2, figure 1). No difference in regards to serum ACE activity was found between infants with I/I and I/D genotypes (Dunn's test, z-value = 0.83, n.s.). Linear regression analysis demonstrated that cACE activity was significantly related to Pi max after logarithmic transformation (r = 0.253, t-value = 2.72, p = 0.0075) and inversely related to PTImus (r = -0.238, t-value = -2.55, p = 0.012). Furthermore, stepwise regression analysis revealed that Pi max after logarithmic transformation was significantly related to cACE activity (p = 0.0045) and weight at measurement (p = 0.0081), independent of ACE genotyping, PMA, gender and support from mechanical ventilation (table 3). In addition, PTImus was related (inversely) to cACE activity (p = 0.00037) and to ACE genotypes (p = 0.00163), independent of weight at measurement, PMA, gender and support from mechanical ventilation (table 4).
Infants that never required ventilatory support
The characteristics of the infants that never required any form of ventilatory support (n = 60) are presented in table 5. Infants with I/I, D/D and I/D ACE genotypes did not differ in regards to their characteristics, or in regards to either Pi max measurements or Pi max adjusted for weight at measurement (table 6). Linear regression analysis demonstrated that cACE activity was significantly related to Pi max after logarithmic transformation (r = 0.421, t-value = 3.532, p = 0.0008) and inversely related to PTImus (r = -0.289, t-value = -2.29, p = 0.025).
Discussion
In this study a positive correlation between serum ACE activity and respiratory muscle strength, assessed by Pi max measurement, and a negative correlation between serum ACE activity and PTImus in infants, was demonstrated. Infants homozygous for the D-allele had higher cACE activity than infants homozygous for the I-allele and heterozygous I/D. Furthermore, cACE activity was related to Pi max and PTImus independent of other factors which could affect respiratory muscle function. The correlation between cACE activity and either Pi max or PTImus was replicated on a subpopulation of the main group, consisting of infants that never required any form of respiratory support.
Mouth pressures generated during crying efforts could provide an index of respiratory muscle strength in awake infants [13]. The test has been previously validated in infants [11,12]. Pi max measurement is a volitional test; however, the pressures generated during crying are considered to be maximal [19].
Fatigue of the respiratory muscles may result in an inability to maintain adequate alveolar ventilation and in respiratory failure. The diaphragmatic pressure-time index (PTIdi) is a measure of the load-capacity ratio of the diaphragm. It describes the pressure-generating capacity of the diaphragm, independent of respiratory frequency or the type of load imposed on the respiratory system [13], and it is closely related to the endurance time, referred to as the point where the inspiratory muscles fail to maintain a task despite maximal effort [20]. The determination of PTIdi, however, is rather invasive, since it requires the placement of an esophageal catheter. Assessment of inspiratory muscle function by measurement of a noninvasive pressure-time index of the respiratory muscles (PTImus) was first described by Gaultier et al. [18]. In spontaneously breathing infants, agreement between PTImus and PTIdi measurements was found using Bland and Altman analysis [15].
It has been suggested, based primarily on observational evidence, that the D-allele of the ACE polymorphism is associated with greater training-related strength gain and power-oriented performance [21]. An excess of the ACE D-allele has been found among elite sprint runners [22] and swimmers [8]. In addition, ACE genotypes in adults were associated with the strength response to muscle training, and D-allele carriers experience greater strength increases than II homozygotes [9,23]. ACE genotype, however, is not associated with baseline muscle strength and size [24]. Furthermore, several studies have suggested that the I-allele is associated with superior exercise endurance, being found with increased frequency in elite distance runners [22], rowers [25], triathletes [26] and mountaineers [27]. A study in healthy Caucasians naïve to strength training suggested that cACE activity was significantly associated with baseline muscle strength [10].
In the current study, ACE genotype was associated with cACE activity, which is in accordance with the present literature [4,5]. Infants with the D/D ACE genotype had increased cACE activity compared to infants either homozygous for the I-allele or heterozygous I/D. Although serum ACE activity was associated with increased respiratory muscle strength, such an association was not demonstrated in regards to ACE genotypes. One explanation is that the deletion accounts for approximately 47% of the interindividual variation in plasma ACE activity in Caucasians [5]. Furthermore, cACE activity is a continuous variable and would provide greater statistical power than a categorical variable such as ACE genotype. Similar results, however, have been demonstrated by others, where an association of ACE genotype with pre-training muscle strength was not found [9,28]. Nevertheless, the correlation between Pi max and cACE activity is rather weak, as only approximately 7% of the variation in Pi max can be accounted for by the variation in cACE activity.
A maturational effect on Pi max has been previously demonstrated [12]. Several factors could affect Pi max , such as gestational age, PMA, birthweight and weight at measurement [12]. Muscle mass increases with maturity and body growth [19] and Pi max continues to increase outside the neonatal period [11]. Furthermore, ACE levels in infants have been reported to be higher than in adults [29]; other studies, however, did not show any significant correlation of cACE activity with age [30]. In this study, respiratory muscle assessment was performed at the time of the blood collection; therefore, any maturational effect on Pi max and variation in cACE activity was avoided. However, to examine the association of ACE genotype with respiratory muscle strength, Pi max was also related to body weight at the time of measurement.
The primary aim of this study was to examine the association of cACE activity with respiratory muscle strength in infants. Secondary aims were to investigate the potential association of cACE with PTImus measurements, and of ACE genotypes with cACE activity and respiratory muscle strength, in this population. An association between ACE genotypes and PTImus in infants has been previously shown [31], so this issue was not examined in this study. ACE genotype was, however, included in the stepwise regression analysis, as it is now known to be strongly correlated with PTImus.
The association of cACE activity and respiratory muscle strength may be mediated through the synthesis of angiotensin II (Ang II). Ang II could possibly act as a growth factor in cardiac muscle [32], and its effect may be mediated through the Ang II type 1 (AT1) receptor [32]. Furthermore, Ang II may be necessary for optimal overload-induced skeletal muscle hypertrophy, acting at least in part via an AT1 receptor-dependent pathway [33]. The physiological properties of a motor unit correlate with the histochemical properties of the constituent muscle fibres [34]. The ACE D-allele, compared to the I-allele, is associated with an increased percentage of fast-twitch type IIb skeletal muscle fibres [35], which produce greater force per unit of cross-sectional area [36]. Ang II may also be important in the redirection of blood flow from type I, fatigue-resistant, to type II, fast-twitch, muscle fibres [37]. Furthermore, in animal studies, Ang II infused into rat hindlimbs increased the tension during tetanic stimulation [37]. Other actions of Ang II that might explain the association between cACE activity and respiratory muscle strength include increased noradrenaline release from peripheral sympathetic nerve terminals and the CNS, facilitating sympathetic transmission [38,39]. Circulating ACE may also influence diaphragmatic muscle strength through the degradation of kinins. In animal studies, bradykinin reduces the phenylephrine-induced hypertrophy of cardiomyocytes [40]. Thus, elevated cACE may influence muscle strength via this pathway.

Several factors may affect respiratory muscle function, such as nutrition [41], prolonged ventilatory support [42], drugs [43][44][45], as well as phosphate [46], calcium [47] and magnesium [48] blood levels. In addition, hypoxia [49] and hypercapnia [50] reduce diaphragmatic contractility in young piglets. All infants were measured prior to discharge; they were free of any respiratory symptoms and on full enteral feeds, did not receive any medication, and their blood biochemistry was within the normal range at the time of measurement.
This study has potentially important implications, given the availability of ACE inhibitors. A recent study demonstrated that in patients with chronic heart failure, long-term therapy with ACE inhibitors improved respiratory muscle strength [51]. However, it was an uncontrolled observational study with a very small sample size. Maximum inspiratory pressure measurement is a volitional test; therefore, in order to assess respiratory muscle strength in subjects with chronic heart failure under therapy, other factors that could interfere with respiratory muscle function should be taken into account. Some studies have demonstrated that ACE inhibitor treatment improves exercise capacity [52,53] and decreases the long-term decline in physical function in elderly adults [54]. However, all of these studies were observational and referred either to disabled, hypertensive subjects or to adults with congestive heart failure.
Conclusions
These results suggest that a relation between cACE activity and respiratory muscle function, as assessed by measurement of PImax and PTImus, may exist in infants. No association between ACE genotypes and PImax measurements was found. In addition, an association of the D-allele of the ACE genotype with increased cACE activity in infants was demonstrated. Circulating ACE accounts for only a small proportion of the total body RAS; therefore, ACE activity in muscle may be a more important factor with regard to respiratory muscle properties. Further work is required to clarify the effect of ACE inhibitor treatment on respiratory muscle function. | 2017-06-19T17:35:58.206Z | 2010-05-12T00:00:00.000 | {
"year": 2010,
"sha1": "915dbc9fde82ac4e1f8f2e52dc7268a650f04da2",
"oa_license": "CCBY",
"oa_url": "https://respiratory-research.biomedcentral.com/track/pdf/10.1186/1465-9921-11-57",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "344660761f5f251977b831935da87e1e6a335a55",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
242039458 | pes2o/s2orc | v3-fos-license | Use of the e-Delphi Method to Validate the Corporate Reputation Management Maturity Model (CR3M)
Information and communication technologies (ICT) have allowed the modification of many research methods based on communication. One of them is the Delphi method, especially e-Delphi. The two main areas of application of the Delphi method are traditional forecasting and, more recently, the development of the conceptual framework of a model or theory, especially in complex domains and those where empirical research is lacking. The aim of the study was to assess the substantive accuracy (validation) of the corporate reputation management maturity model (CR3M) using a modified e-Delphi procedure. The final model will be tested in selected companies in the next stage of the research. Maturity models have long been regarded as tools for assessing skills in a given field. The CR3M model originally included 96 practices (in four areas: communication management, corporate social responsibility, reputational risk and quality), the implementation of which was considered an indicator of maturity. The basis for the development of the model was the literature on the subject and existing partial maturity models. Ten experts (five theorists and five practitioners) participated in the study and assessed the appropriateness of including the practices in the model and their validity, using a 5-point Likert scale. The questionnaire consisted of three questions. In two rounds of the Delphi study, an expert consensus was reached (in accordance with a priori established indicators) regarding the retention of 70 of the 96 practices originally included in the model (26 were deleted). The model retained 18 practices in the area of communication management, 14 practices in the area of corporate social responsibility, 18 practices in the field of reputation risk management and 20 practices in the field of quality management. Modifying the maturity model by reducing the number of practices included in it increased its applicability. At the same time, 89% of experts found the presented maturity model a useful tool for self-assessment and improvement of reputation management. The modified e-Delphi procedure can be considered an effective methodology for validating complex conceptual models.
Introduction
The subject of the article is related to the application of information and communication technologies (ICT) for the sustainable development of the information society (SIS), although it does so indirectly. ICT adoption is considered here not in terms of its application in the enterprise through appropriate expenditures, the development of an information culture, or the improvement of ICT management and quality [1], but as a means of validating the theoretical model of corporate reputation management maturity (CR3M), which should ultimately contribute to the development of SIS.
Sustainable development is understood, according to the Brundtland definition, as development that meets the needs of the present without compromising the ability of future generations to meet their own needs [2]. The sustainable information society (SIS) is the next stage in the development of the information society, in which information and communication technologies (ICT) become the key enablers of sustainable development [3][4][5].
A sustainable information society is a society that effectively uses knowledge and ICT, constantly learns and improves competences, adjusts positively to emerging trends, and builds the prosperity of present and future generations, while balancing the interests of various stakeholders (citizens, enterprises, NGOs, government agencies, and the natural environment) [1].
It is believed that ICT can contribute to sustainable development in all its dimensions, i.e., ecological, socio-cultural, economic and political [1,2]. Information systems can facilitate sustainable development by creating the kind of economic activity that in the long term harmonizes the natural environment with the well-being of society [3]. In this approach, the use of ICT by enterprises (but also households or public administration) becomes one of the most important tools for building sustainable business practices [2], serving both the improvement of economic results and the wider social interest (e.g., environmental protection) [3].
In this article, ICT is not a direct tool for shaping sustainable business practices, but a tool for validating a model (the e-Delphi method) that identifies such practices. The CR3M model can contribute to sustainable development by drawing managers' attention to those aspects of economic activity that favor ecological, economic and socio-cultural sustainability. It includes practices in the field of communication with stakeholders, corporate social responsibility and quality management.
Improving management based on the model will be conducive to ecological sustainability, as it indicates the basic practices concerning, inter alia, environmental protection, the environmental friendliness of the technologies used, the design of ecological products and the avoidance of wasted resources, without which it is impossible to enjoy a good reputation. The use of the model will also contribute to economic sustainability, as the model indicates specific practices for building a good reputation, thanks to which the company can achieve tangible and intangible benefits that affect economic results (the consequences of a bolder pricing policy, easier access to distribution channels, greater customer loyalty, etc.). The use of the model will also be conducive to socio-cultural sustainability, because a positive reputation is based on practices of building good relations with stakeholders (fair treatment of employees, providing full information to clients, engaging in local social initiatives, etc.).
The model is therefore intended to improve the management of the various aspects of the company's operations that influence the building of a positive opinion in the environment, with many of these aspects relating directly to sustainable development. It can be assumed that the targeted practices of building a good reputation indicated in the CR3M maturity model will ultimately significantly support the company's transition towards sustainable development.
The article presents the course of a research procedure using ICT in the Delphi method, aimed at validating the corporate reputation management maturity model (CR3M). The Delphi method has gained in popularity over the last two decades as a methodology for addressing research questions. A number of theorists [6][7][8][9][10][11] have made significant contributions to defining rules that increase the rigor of this type of research and to describing the stages of the procedure. This has given researchers confidence that their results can be used in subsequent research, and managers confidence that the information obtained in this way is reliable.
The Delphi method, developed in the 1950s by the RAND Corporation, is generally used to gain expert consensus on a specific topic. It is based on the assumption that group judgments are more credible than individual judgments and can be applied in a wide variety of sectors, such as public health, society, transport, education, etc. [12]. The Delphi method was originally used to predict various events and was designed as "a method used to obtain the most reliable consensus of opinion of a group of experts by a series of intensive questionnaires interspersed with controlled feedback" [13] (p. 458). As its application widened to problem solving in general and the classical approach evolved, slightly broader definitions were proposed. For example, Linstone and Turoff regard Delphi as "a method for structuring a group communication process so that the process is effective in allowing a group of individuals, as a whole, to deal with a complex problem" [7] (p. 3). In turn, Reid considers Delphi a method of systematic gathering and aggregation of informed judgments from a group of experts on specific questions and problems [14,15].
Because this method puts considerable emphasis on communication (e.g., the definitions of Linstone and Turoff or of Reid), it is sometimes wrongly reduced to a mere form of data collection [16], when in fact it is much more. First, as a method of iterative feedback, it develops insight, and thus a deeper understanding of the problem. Second, its essential value is to facilitate consensus, especially in the case of divergent opinions and incomplete knowledge. This ability to develop agreement is based primarily on anonymity, which gives participants the opportunity to express their opinions freely and eliminates possible personal conflicts [7,17,18]. The three characteristics of the Delphi method are (i) iteration, which allows participants to rethink and refine their views, (ii) controlled feedback, which provides them with information about the group's views so they can clarify or change their position, and (iii) statistical analysis, which enables the quantitative presentation of the views of the group [14,[18][19][20].
It is believed that the Delphi method is useful especially when dealing with complex problems [21], when empirical evidence is lacking [22], or when knowledge of the problem or phenomenon is incomplete, and it is generally used to obtain the most reliable opinion of a group [7,16]. Thus, in addition to traditional forecasting, another important area of use has emerged over time, namely the development of concepts (theoretical frameworks), in which research usually includes the identification and development of a set of concepts and the development of a classification or taxonomy [8]. Maturity models are one type of such theoretical framework.
Maturity models are a kind of compendium of knowledge in a given field and a guide for managers, translating knowledge into specific activities aimed at changing and improving the organization. They are based on the assumption that at various stages of development, enterprises undertake various types of activity, which are related to the level of their skills in a given field. Such models show a certain desirable or logical development path from the initial state to full maturity [23]. The first maturity models appeared at the turn of the 1970s and 1980s, and their simple and practical logic was very quickly appreciated by managers. This initiated the rapid and ongoing development of maturity models in many different areas (by far, most of them were created in the area of process management and project management).
In a maturity model, successive levels describe degrees of organizational skill, most often from complete immaturity, characterized by ad hoc actions, lack of organization and chaos (level 1), through repeatability and standardization (level 2), organization and monitoring (level 3), and conscious management (level 4), to continuous improvement as an expression of the highest maturity (level 5) [23]. Each of the levels of maturity has its own characteristics, and there must be a logical connection between the successive steps. This path is reflected in a hierarchical structure in which each level of maturity is precisely described through the characteristics of the solutions undertaken in terms of strategies, structures, systems and processes, as well as the methods and tools used. Each level follows logically from the previous one, being its development and increasingly complex continuation. A maturity model does not assume the need to achieve the highest level of maturity in a given field, as there is usually no universal pattern of practice in a given field (in this case, reputation management) that fits all organizations. On the other hand, maturity models can play the role of a "road map" that allows managers to diagnose which skills the company currently has and which it lacks and needs to build in order to make progress in a specific area.
The article presents the course of the research procedure using the Delphi method to validate the corporate reputation management maturity model (CR3M). The use of ICT allowed an efficient and convenient form of administration for the participants, as e-mail and an online website dedicated to the study (a so-called landing page) were used to communicate with the panelists, containing the most important information for the participants (e.g., research description, instructions for experts, model description, etc.). The Delphi method made it possible, in two rounds, to reach a consensus on whether individual corporate reputation management practices should be retained in or rejected from the model, and to assess the model's usefulness. The aim of the article is to show the possibility of using the ICT-based Delphi method to verify complex theoretical concepts such as maturity models. The research methods used include analysis of the literature on the subject, the modified e-Delphi technique and evaluation of the results of the research procedure.
The structure of the article is as follows. The next section covers the general theoretical foundations of the Delphi method. The following sections, based on a literature review, discuss in more detail two issues fundamental to a Delphi study: the requirements for recruiting a panel of experts and possible consensus criteria. A subsequent section presents the main assumptions of the corporate reputation management maturity model (CR3M) to be validated and the idea of maturity models, and then describes the course of the Delphi study: its time frame, the characteristics of the panel of experts together with the recruitment criteria, and the content of the research questionnaire. The penultimate section presents the results of the study: the number of practices on which an expert consensus was reached and which should remain in the model, as well as the number of practices rejected. The final section contains the conclusions of the study and a summary of its limitations and practical implications.
Literature Review
Many different forms of Delphi research are currently in use, having emerged with the increasing use and modification of the approach. These include, for example, the "modified Delphi", "e-Delphi", "policy Delphi" and "real-time Delphi" [24]. It is worth emphasizing that not all Delphi techniques strive to achieve consensus; for example, the policy Delphi aims to support decision-making by sorting out and discussing different views on the "preferred future" [24]. This has been reflected in much broader definitions and different interpretations of the method.
Hasson and Keeney point out, however, that this variety of approaches and the lack of a precise definition of the method is a problem from the point of view of methodological rigor [24]. As argued by Rowe and Frewer, the more precise the definitions, the better the research can be conducted (in terms of greater reliability and validity), the easier it is to interpret the results, and the greater the confidence in the conclusions drawn [24,25]. In addition to the varied definitions of the approach, the Delphi method is burdened with other uncertainties, such as the meaning of consensus, the criteria for defining experts, and the sheer multitude of Delphi varieties and types in use. The latter issue is the second major problem of Delphi research, which makes it difficult to establish appropriate methodological rigor, all the more so as the procedures within each Delphi type may also differ, for example in the number of rounds, the level of anonymity and feedback provided, the inclusion criteria, the sampling approach or the method of analysis. It is no wonder that the various adaptations of Delphi have led to considerable criticism of the method itself, and some argue that this even threatens the ability to determine the reliability and validity of the technique [24]. On the other hand, this flexibility of the method is seen by many as a key benefit of its application. Reviewing the critical remarks of various authors regarding the Delphi method, Hasson and Keeney conclude that despite justified doubts and methodological flaws, it is difficult to formulate a definitive, unequivocal statement as to its reliability and validity: while some argue that this method is not reliable, others say that it is both accurate and reliable [24].
In conclusion, the authors state that one should strive to achieve scientific rigor in this method, bearing in mind two issues [24]. First, it should be recognized that replicating a Delphi at different time-scales misses the goal of most such research, which is to explore ideas or to improve decision making by consensus. Additionally, because opinions do not exist in a static vacuum, various confounding variables (such as situational or expert-specific factors) are intrinsically linked to any individual Delphi study. Such factors are rarely controllable and therefore limit the measurement of methodological rigor. Future empirical research should therefore consider the use of parallel measures, as well as setting the rigor of each individual Delphi and including both qualitative and quantitative measures, as recommended by Day and Bobeva [9] or as included in the mixed methods appraisal tool (MMAT) [10].
Secondly, it should be accepted that Delphi results do not offer indisputable facts, but rather constitute a kind of snapshot of the opinions of a specific group of experts at a given time, especially since the size of the panel does not ensure that the results are representative. This collective opinion of experts can be used to build or verify a concept, practice or theory. Therefore, Delphi findings should be compared with other relevant evidence and verified with further research to increase confidence [25]. In order to maximize the quality of the results and address the problems related to methodological discipline, some researchers propose that the Delphi study be triangulated with other methods used in parallel (e.g., questionnaire surveys and assessment interviews) [9].
The model of the research procedure in Delphi conventionally includes three stages: exploration, distillation and utilization [7]. The first stage, i.e., search or exploration, is a free and unstructured study of the problems, limitations and challenges of the studied domain, sometimes in the form of brainstorming. This phase includes, inter alia, activities such as setting criteria for selecting participants and setting up a panel of experts, designing data collection and analysis tools, etc. Currently, however, there is a so-called modified version of Delphi that proposes a major overhaul of this first stage: replacing the initial, free gathering of opinions with a synthesis of key issues identified in the literature or through preliminary interviews with selected field experts, and thus using a structured questionnaire already in the first round [9,11]. The second stage, the so-called distillation, includes (repeated) consultation and subsequent analysis to see if the Delphi has reached the critical point for completion of the study. The final stage, utilization, involves the development and dissemination of the final study report.
The two most fundamental issues in the application of the Delphi method are related to the selection of the expert panel and the design of the questionnaire. The first relates to the size of the panel, its main features and the response rate, while the second relates to the selection of the Likert scale and the number of rounds [12]. Particularly important when constructing a panel of experts is the awareness that the experts' experience (practice) and knowledge directly determine the reliability and validity of the results [13,23,26]. In practice, the chosen panel selection strategy will probably depend on the nature of the research problem: the narrower the scope and specificity of the field, the greater the depth and specificity of the required specialist knowledge, and therefore the more likely it is that a targeted approach will be appropriate [9].
For this reason, when selecting experts, a purposive (targeted) sample is usually used, which allows the researcher to consciously select experts who have knowledge and experience adequate to the specific issue or problem under study. This approach is also particularly useful in cases where there are only a limited number of experts in a given field [27]. Self-assessment scales can be used to assess the experts' expertise, e.g., a five-point scale in which low numbers indicate a low level of knowledge and high numbers a high level of knowledge [7]. Experts are usually identified by reviewing the literature, through recommendations of institutions, or through recommendations of other experts.
Regarding the size of the expert panel, there are no strict rules in Delphi, but the size of the group is strongly related to the purpose of the study [28], and it is obvious that group error decreases and the quality of decisions is strengthened as the sample grows [19]. In most Delphi studies, the sample size is tailored to the specific study; it is believed that if, for example, experts who have similar general knowledge of the problem under investigation are selected, a relatively small sample can be used [29]. Most studies use panels of 15 to 35 people [30], but useful results can also be obtained from small, homogeneous groups of 10-15 experts [31]. The minimum size of the panel is considered to be seven experts [7]. The admissibility of a relatively small panel is very important, not least because of the often high dropout rate of experts in consecutive Delphi rounds (with the dropout rate being higher in larger groups of more than 20 members) [16]. The next step in the general Delphi procedure is the so-called distillation, involving repeated gathering of expert opinions and then analyzing whether the Delphi has reached the point at which the study can be concluded. A characteristic feature of this method is its multiple rounds (iterations). The number of rounds depends on the speed at which consensus is reached and, to some extent, on the size of the panel. Theoretically, the Delphi process may be repeated many times until consensus is reached; however, many researchers believe that three iterations are often sufficient to collect the necessary information and reach consensus in most cases [32,33].
The size of the expert panel also matters here. If the group is small, it may turn out that even one round is sufficient [28]. On the other hand, to enable feedback and verification of answers, which is one of the key distinguishing features of the method, at least two rounds are required [18,28,34]. For large samples of more than 30 experts, three rounds are usually recommended [7,18,35]. An argument in favor of limiting the number of rounds in the Delphi method is the fact that the participation of panelists in subsequent iterations often drops quickly; there are cases in which the percentage of responses decreases by as much as 40% after each round [9]. In such situations, in order to prevent the number of participants from dropping below a critical level, it is better to forgo pursuing consensus at all costs and finish the study, e.g., after the second round. In the modified version of the Delphi method, when the experts in the first round receive a list of pre-selected items (e.g., from synthesized literature reviews), the number of rounds may be successfully reduced to just two [27].
Many authors emphasize that the number of rounds in the Delphi method depends primarily on reaching a consensus, because it is in fact reaching consensus that is the basis for completing the Delphi rounds, not the other way around. This means that, theoretically, the process should continue until the a priori criteria necessary to obtain a consensus are met. On the other hand, if the Delphi method is used to develop a concept (theoretical framework) or model, i.e., to verify or validate certain factors, indicators or criteria, this consensus will concern, for example, individual elements of the model. In this sense, consensus will not be a criterion for concluding the study rounds, but will merely serve to reject or retain elements of the model or concept. Establishing the number of study rounds a priori also makes sense due to high dropout rates and to prevent fatigue in cases where the questionnaires contain a large number of items (some Delphi studies may contain several dozen or more items to consider).
A Likert scale is usually used to gather expert opinion in qualitative research that aims to determine the importance of certain items or factors or to screen them out. The most common scales are 5-point or 10-point [12]. On the 5-point Likert scale, two expressions are usually used at the ends of the spectrum: "strongly agree" and "strongly disagree". The most important criterion for selecting the Likert scale, however, is the purpose of the study. Review studies by Giannarou and Zervas show that 10-point Likert scales are used when testing the level of importance (of indicators, factors, etc.), while the 5-point scale is most common when the level of agreement between experts is tested [12].
The Delphi technique is an iterative process with repeated rounds of data collection until consensus is reached. Consensus refers to the extent to which each respondent agrees with an idea, element or concept, assessed on a numerical or categorical scale [28]. Although the main goal of Delphi research is to obtain the most reliable consensus of the opinion of a group of experts, the greatest weakness of this method is still the lack of a commonly accepted scientific method for determining the level of consensus [7,12,20,36]. There is no basic statistical theory that determines an appropriate stopping point in the Delphi process. There will always be some amount of oscillation and shifting in group views, but because respondents are sensitive to feedback from the rest of the group, they tend to converge toward agreement [27]. In the literature on the subject, there are many types of criteria used to define and establish consensus in Delphi research, and they are subject to different interpretations by researchers.
These criteria most often measure the consensus of opinion by the frequency distribution, standard deviation, interquartile range, coefficient of variation, or other indicators such as Kendall's W. Most analyses also include the calculation of the mean and median, as these describe the middle and most common response, representing the central tendency [13]. When the frequency distribution is used, consensus can generally be considered reached if a certain percentage of the votes falls within a certain range [36]. According to McKenna, the percentage of responses in a given category should be at least 51%; in other words, only categories selected (indicated) by over 50% of experts count (e.g., at least 51% of votes for ratings of 4 or 5 on a 5-point Likert scale) [37]. In some cases, a certain distance from the mean is also taken into account; e.g., Christie and Barela propose that at least 75% of the participants' responses "should fall between two points above and below the mean on a 10-point scale" [17].
When it comes to studies using standard deviation to assess the level of consensus, the proposition of Christie and Barela [17] that it should be less than 1.5 is usually accepted. Additionally, a common measure of consensus is the interquartile range, most often used with standard deviation or median [12]. The interquartile range for the 10-point Likert scale should be less than 2.5 [13], and for the 5-point scale it should not exceed 1 [38][39][40].
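The dispersion-based checks above are straightforward to compute. The following minimal sketch (toy data and illustrative names, not taken from any study cited here) derives the share of top-two ratings, the sample standard deviation and the interquartile range for each item rated on a 5-point scale:

```python
# Toy experts-by-items matrix of 5-point ratings; values are illustrative.
import numpy as np

ratings = np.array([[5, 4, 2],
                    [4, 4, 3],
                    [5, 3, 2],
                    [4, 5, 1]])               # 4 experts, 3 items

for item in range(ratings.shape[1]):
    col = ratings[:, item]
    pct_top2 = np.mean(col >= 4) * 100        # share of ratings 4 or 5
    sd = col.std(ddof=1)                      # sample standard deviation (< 1.5?)
    q1, q3 = np.percentile(col, [25, 75])
    iqr = q3 - q1                             # IQR (not exceeding 1 on a 5-point scale?)
    print(f"item {item}: top-two {pct_top2:.0f}%, SD {sd:.2f}, IQR {iqr:.2f}")
```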
In quantitative analysis, Kendall's coefficient of concordance (W) is also used to assess the level of expert agreement. The value of W ranges from 0 to 1, with 0 indicating no consensus and 1 indicating perfect consensus between the ranked lists. In interpreting various values of W, researchers usually follow the proposal of Schmidt [6], who assumed that values above 0.7 indicate strong agreement (strong consensus for W > 0.7; moderate consensus for W = 0.5; and weak consensus for W < 0.3). The value of Kendall's W therefore determines the next steps: a W of 0.7 or more indicates satisfactory agreement and means that the distillation phase can be completed; however, if W is less than 0.7, the questionnaire must be sent again to the panel members in the next round [8]. Some authors also use the coefficient of variation (i.e., the quotient of the standard deviation and the mean), which likewise reflects the homogeneity of the observations [41,42]. It is usually interpreted such that values below 25% mean low variability, values between 25% and 45% average variability, values between 45% and 100% high variability, and values above 100% very strong variability.
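For completeness, a minimal sketch of Kendall's W under the same toy assumptions might look as follows (the tie correction is omitted for brevity, so W is slightly underestimated when an expert gives several items the same rating):

```python
# Kendall's coefficient of concordance W for an experts-by-items matrix:
# each expert's ratings are converted to ranks, and W measures how far
# the per-item rank sums spread out relative to the maximum possible.
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings: np.ndarray) -> float:
    m, n = ratings.shape                                   # m experts, n items
    ranks = np.vstack([rankdata(row) for row in ratings])  # rank items per expert
    rank_sums = ranks.sum(axis=0)                          # R_i for each item
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()        # spread of rank sums
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

toy = np.array([[5, 4, 2, 3],
                [4, 5, 1, 2],
                [5, 4, 2, 3]])                             # 3 experts, 4 items
print(f"W = {kendalls_w(toy):.2f}")                        # > 0.7: strong consensus [6]
```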
In order to present information on the collective judgments of respondents, Delphi studies also use measures of central tendency, i.e., the mean, median and mode [43]. Generally, the median and mode are used, and in some cases the mean is also acceptable. However, the use of the median based on the Likert scale is definitely preferred in the literature on the subject, as it seems by nature to be the best fit for reflecting the convergence of opinions and for reporting results in the Delphi process [11].
Some authors believe that the use of percentage measures is insufficient and suggest that a more reliable alternative is to measure the stability of experts' responses in subsequent iterations [11]. Measuring the stability of the expert opinion distribution curve in consecutive rounds has an advantage over methods that measure the amount of change in each person's opinion between rounds (degree of convergence) in that it takes into account deviations from the norm [28]. Thus, the use of measures of stability helps to mitigate the effects of extreme or conflicting positions. Proponents of this approach believe that stability of opinion reflects consensus [43]. This means the need to monitor the continuity of the distribution of respondents' votes in subsequent rounds. According to Linstone [43], the stability threshold is set by changes of less than 15% between rounds. Such a level means that the responses are essentially unchanged, which is a signal of reaching a consensus and at the same time is a criterion for ending the Delphi rounds [9].
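How exactly the 15% threshold is operationalized varies between studies; one plausible reading (an assumption for illustration, not Linstone's exact formula) is the fraction of an item's response distribution that shifts between two rounds, as in this sketch:

```python
# Stability check between two Delphi rounds: half the total absolute
# change in category frequencies equals the fraction of votes that moved.
# Toy ratings; a shift below 0.15 is read as "stable" per the 15% threshold.
import numpy as np

def distribution_shift(round1: np.ndarray, round2: np.ndarray, scale: int = 5) -> float:
    bins = np.arange(1, scale + 2)                 # category edges 1..scale
    f1, _ = np.histogram(round1, bins=bins)
    f2, _ = np.histogram(round2, bins=bins)
    return 0.5 * np.abs(f1 / len(round1) - f2 / len(round2)).sum()

r1 = np.array([4, 4, 3, 5, 2, 4, 3, 4, 5, 4])      # toy round-1 ratings of one item
r2 = np.array([4, 4, 4, 5, 3, 4, 3, 4, 5, 4])      # toy round-2 ratings
print(distribution_shift(r1, r2) < 0.15)           # True: essentially unchanged
```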
A good practice in assessing the level of consensus is the combined use of several measures, as in the proposal of [12], which recommends the simultaneous use of three measures, because none of them individually can be considered a good indicator. These measures are: (i) the interquartile range, (ii) the standard deviation, and (iii) at least 51% of respondents in the "very important" or "strongly agree" categories. The rationale behind this approach is that there are cases where the interquartile range and/or standard deviation may fall within the adopted limit but there is insufficient expert consensus on the significance of the factor (the requirement of at least 51% of opinions for values of 8-10 on a 10-point scale, or 4-5 on a 5-point scale), or vice versa: in the first Delphi round, opinions may fall within a standard deviation below 1.5 and/or 51% of experts may respond in the "agree" and "strongly agree" categories (i.e., 4 or 5 on a 5-point scale), while their interquartile range is above 1. For this reason, to be sure of a consensus, these three measures should be considered simultaneously [11].
The most important assumption of the Delphi method concerns developing a consensus, i.e., experts changing their opinions in subsequent iterations as a result of receiving information about the opinions of other members of the panel. For this reason, each subsequent Delphi round should be designed as a controlled feedback survey from the group perspective, so that respondents can explain or change their views [12]. Feedback to experts typically includes: (1) statistical summaries that provide measures of central tendency and dispersion, such as the variance, mean and median; (2) comments from individual experts; (3) rankings, percentages and interquartile ranges; and (4) subjective messages as anonymous feedback [28]. On this basis, experts are again asked to revise their assessments of each item and to explain their views. As proposed by Giannarou and Zervas, the interquartile range and standard deviation of each variable should be determined, and respondents should change or justify their answer when it falls outside this range [12]. The process of running successive rounds should continue until the criteria established a priori for consensus (or stability) are met.
Materials and Methods
A firm's reputation can be defined as a collective representation of a firm's past actions and results that describes its ability to deliver valuable results to key stakeholders [44,45]. Reputation is therefore a subjective, collective assessment of a company's credibility and responsibility. Undoubtedly, the company's reputation is difficult to manage, as it is based on the perception of the environment: the feelings, beliefs and experiences of stakeholders. It should be treated holistically as an integral part of business management. Reputation management is the responsibility of top management (ultimately the CEO), but mainly in terms of the institutionalization of specific values, structures, methods and procedures, while in fact all employees are responsible for the company's reputation.
Reputation management combines elements of several management areas, such as stakeholder management, communication, corporate social responsibility, quality or risk, so it requires gathering information from various functional areas, but emphasizes those processes, practices and methods that are specific to gaining and maintaining a good reputation in the environment.
Reputation management, i.e., building a good reputation and then maintaining, protecting, changing or recovering it, is such a complex issue that companies often struggle with it (the actions taken are often intuitive, ad hoc and chaotic). In this area, there is a lack both of strong theoretical foundations that would allow the integration of existing research, and of disseminated lessons learned and good practices that would facilitate management for managers. The conceptual chaos surrounding the concepts of reputation, image and identity, which is a problem for theoreticians conducting research, also means that the knowledge available in the literature on the subject has little value for managers, who consider it too vague and therefore of little use in management practice.
Additionally, in recent decades there has been a significant change in external conditions, which means that reputation is now perceived as the riskiest area influencing the implementation of a company's strategy. These changes include the emergence of a new, powerful stakeholder, namely online communities, which operate according to a completely different logic and rules (hyperarchic structures); the increasing globalization of markets and supply chains, which exposes companies to legal and ethical abuses caused by suppliers and business partners; constantly growing social expectations towards business (expressed by the ideas of corporate social responsibility and sustainable development); the strengthening power of customers and a decrease in their loyalty; and the loss of trust in business and social skepticism growing from year to year [46].
In this context, it seems reasonable to develop a tool in the form of a maturity model, which gives an opportunity to improve the effectiveness of reputation management. Such a model provides a certain framework that enables the integration of knowledge about the best practices of reputation management, and also constitutes a kind of roadmap that enables the analysis and assessment of the current situation of the company in this regard. Therefore, it can be a tool that helps to guide the implementation and improvement of the entire system of activities allowing for reputation management.
The reputation management maturity model (CR3M) will allow three goals to be achieved: first, to help the company's management better understand the complexity and multidimensionality of reputation; second, to enable them to self-assess the degree of reputation management maturity in the company, i.e., the current state of advancement of practices in this field; and third, ultimately, to define recommendations for actions aimed at improving skills in this area (i.e., maintaining a good reputation more effectively and avoiding a bad one). It can therefore be assumed that showing the various factors that determine the effective management of the company's reputation and carrying out a self-assessment allows the management to determine the level of advancement of the reputation management system and to make a more conscious choice of specific practices, setting a development path that takes into account local conditions and the specificity of the company's activities.
Maturity model development can be described as qualitative research as it is subjective, holistic, interpretive and inductive. The CR3M model uses a stage-gate approach that enables the delivery of more differentiated maturity assessments across complex domains. It involves the application of additional levels of detail that, apart from the overall assessment for the entire organization, allow for separate maturity assessments for a number of separate areas. These additional levels are represented by the so-called Key Maturity Areas (KMA), i.e., certain areas of knowledge, and Capability Areas (CA), i.e., components or domain dimensions that can be called skill areas. Such granularity (detailing) of the model enables the organization to better understand its strengths and weaknesses in a given field, and then to plan specific strategies to improve and better allocate resources.
KMAs can be treated as the key success factors of a given field, the management of which has a significant impact on the organization; they are thus the main elements in building the capacity to manage a given area. In the CR3M model, the key maturity areas identified on the basis of the analysis of the literature on the subject are: identity management, communication management, stakeholder relationship management, social responsibility management, issues management, crisis management and quality management. Ultimately, these areas were merged into four: communication, social responsibility (CSR), reputational risk and quality (Table 1). The capability areas, in turn, comprise six elements: leadership, values, competences, structures and systems, methods and tools, and policies and procedures.
In each area of the model (KMA), twenty-four different practices were proposed on the basis of a literature review: four in each CA, i.e., broken down into leadership, values, competences, structures and systems, methods and tools, and policies and procedures (this distinction was not visible to the panelists). These practices are de facto desirable determinants of the successful management of a company's reputation. The structure of the model is shown in Figure 1. In total, the model contains 96 practices, which are a structured synthesis of key issues in the field of corporate reputation management to be verified during the Delphi study. The legitimacy of including these practices, i.e., the decision whether to leave, remove or modify them (and how), was the subject of the first question of the Delphi study. Each of the practices is described for five levels of maturity, but the experts participating in the study, for greater clarity, received a simplified version of the model with no maturity levels shown. Validation of the model, i.e., determining its final content (the practices that should be included in each KMA area), is based on expert judgment. The model, verified as a result of the Delphi study, will then be tested in several companies. Ultimately, it is intended to enable a company's management to self-assess the company's reputation management practices and to determine its level of maturity in this field, i.e., the degree of advancement of these practices (on a scale from 1 to 5). The degree of maturity for a given KMA is usually adopted at the level of the lowest-rated practice, so it is determined by the weakest link in a given area. The result of such an assessment is a company's reputation management maturity profile, which graphically illustrates its degree of development in this field.
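To make the structure and the weakest-link rule concrete, the sketch below uses illustrative names and scores only (a handful of hypothetical practice scores per KMA, not the study's data):

```python
# A toy representation of the CR3M structure and the weakest-link rule
# used to derive a KMA maturity level from practice scores.
KMAS = ["Communication", "CSR", "Reputational risk", "Quality"]
CAS = ["leadership", "values", "competences",
       "structures and systems", "methods and tools",
       "policies and procedures"]              # 6 CAs x 4 practices = 24 per KMA

# hypothetical self-assessment scores (1-5) for a few practices per KMA
scores = {
    "Communication":     [4, 3, 5, 4, 2, 4],
    "CSR":               [3, 3, 4, 3, 3, 4],
    "Reputational risk": [2, 3, 3, 4, 3, 3],
    "Quality":           [5, 4, 4, 4, 5, 4],
}

def kma_maturity(practice_scores):
    """The KMA level equals the lowest-rated practice (weakest link)."""
    return min(practice_scores)

profile = {kma: kma_maturity(scores[kma]) for kma in KMAS}
print(profile)  # {'Communication': 2, 'CSR': 3, 'Reputational risk': 2, 'Quality': 4}
```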
Conduct of the Survey
The Delphi study described here focused on developing a group consensus on the components of the corporate reputation management maturity model. It consisted of two rounds over a nine-month period. The Delphi study aimed to validate the model, whereby validation should be understood as examining the suitability and accuracy of the model and obtaining confirmation that it is fit for its intended use. The proposed CR3M conceptual model was presented to a panel of experts through a modified Delphi technique. The first part of the article emphasized that the Delphi method is particularly beneficial when dealing with complex problems [22] and when there is no empirical evidence [16], and that an important area of its application is the development of concepts, models and theoretical frameworks [8]. The presented Delphi study meets these criteria because: first, the aim of the research is to develop a conceptual model of corporate reputation management maturity; second, corporate reputation management is considered a complex domain; and third, there is little empirical evidence about management practices in this area.
In particular, the purpose of the Delphi study was to (1) verify the list of 96 reputation management practices as components of the maturity model, (2) assess the importance of the individual practices included in the model, and (3) assess the suitability of the model as a self-assessment tool. The description of the model posted on the research website (landing page) contained all the information needed to understand the idea of the presented concept, but without going into details about the individual levels of granulation (without showing that the practices belong to particular CAs, i.e., leadership, values, etc., and without showing maturity levels). This resulted directly from the stated purpose of the study and was intended to prevent unnecessary complication of the concept to be assessed. General information about the Delphi study is presented in Table 2.
Table 2. General information about the Delphi study.
Kind of the Delphi method: modified e-Delphi.
Term of the Delphi study: August 2020-May 2021.
Scope of the study: (1) verification of the list of 96 reputation management practices as components of the maturity model; (2) assessment of the validity of the individual practices included in the model; (3) assessment of the suitability of the model as a self-assessment tool for the company's current reputation management maturity level.

The research procedure consisted of four stages: (1) study planning, (2) expert panel selection, (3) data collection, and (4) data analysis and compilation. The first step was to plan the study, which included, inter alia, designing the research tool, i.e., transforming the maturity model into a questionnaire. The design of a data collection tool is critical in both the exploration and distillation stages, but with regard to Delphi research there are no clear-cut rules for its design or for the number of issues that can be raised in it (this can range from a few to several dozen). In the case of this study, the length of the questionnaire reflected the complexity of the problem and the type of data collected. The questionnaire was developed in an Excel file and contained two questions in separate tabs, with an additional third question in the second round. Each practice was briefly defined in the answer sheet and assigned to a specific area (KMA); the values of the 5-point Likert scale were also described, and space was provided on the answer sheet to enter a rating and an optional comment.
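As an illustration of how such Excel answer sheets might be consolidated for analysis, the following sketch uses pandas; all file, sheet and column names are hypothetical, since the paper does not specify the workbook layout:

```python
# Assemble the panelists' Excel answer sheets into one ratings table.
import pandas as pd

expert_files = [f"expert_{i:02d}.xlsx" for i in range(1, 11)]  # 10 panelists

frames = []
for path in expert_files:
    # question 1 ("inclusion in the model") is assumed to sit in its own tab
    sheet = pd.read_excel(path, sheet_name="Question 1")
    sheet["expert"] = path
    frames.append(sheet[["expert", "practice_id", "rating", "comment"]])

responses = pd.concat(frames, ignore_index=True)
# experts-by-practices matrix of 5-point ratings, ready for consensus checks
ratings = responses.pivot(index="expert", columns="practice_id", values="rating")
```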
In line with the adopted aim of the study, the survey asked the experts to verify the current list of 96 reputation management practices as components of the maturity model, i.e., to determine the final architecture of the model by recommending that a given practice be retained, rejected or changed (question 1); to assess the importance of the individual practices included in the model (question 2); and to assess the model's suitability for self-assessment of the current degree of maturity of a company's reputation management (question 3). A 5-point Likert scale was used to assess these issues in all three cases.
The first question of the survey concerned the assessment of the legitimacy of the presence of individual corporate reputation management practices in the presented model. In this question, a 5-point Likert scale was adopted with the following meaning of the individual ratings: (1) I strongly disagree with including the practice in the model (recommendation: DELETE); (2) I rather disagree with the presence of the practice in the model (recommendation: DELETE); (3) I partially agree with including the practice in the model (recommendation: MODIFY; in this case there was an additional request for clarification and a new definition of the practice); (4) I rather support including the practice in the model (recommendation: LEAVE, no change); (5) I strongly support including the practice in the model (recommendation: LEAVE, no change). In the second round, the panelists were asked to take into account the responses of the other experts and to consider changing their previous assessment if it differed significantly from the assessments of the others. Experts also had the opportunity to write a comment or remark on a given practice. Some panelists took advantage of this opportunity, as two-thirds of the practices received comments (60 out of 96). All comments were shared (visible) anonymously in the second-round questionnaire.
In the second question, the panelists were asked to assess the importance of the above-mentioned practices for good (effective) management of a company's reputation. Again, the 5-point Likert scale was used, where a rating of 1 meant very little importance of the practice and 5 very high importance. In the second round, as in the first question, the panelists were asked to take into account the answers of the other experts and to consider changing their previous assessment if it differed significantly from the assessments of the others.
In the third, additional question (included only in the questionnaire for the second round), the experts were asked to assess to what extent the presented maturity model is a tool that allows self-assessment of the current state of reputation management practices in a company and the drawing of conclusions about the directions of necessary changes. In this case, a rating of 1 (again on a 5-point Likert scale) meant that the model was a completely useless tool for self-assessment and improvement, while a rating of 5 meant that it was definitely useful as a self-assessment and improvement tool.
For all three questions, a priori criteria were established as necessary to obtain a consensus on the assessment of individual practices; following the recommendation of Giannarou and Zervas [12], these included three measures: (1) at least 51% of answers in categories 4 or 5; (2) an interquartile range not exceeding 1; and (3) a standard deviation below 1.5.
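Applied per practice, this a priori rule can be expressed as in the sketch below (toy ratings, not the study's responses; the IQR threshold of 1 follows the 5-point-scale guidance cited earlier):

```python
# Decide per practice whether the a priori consensus criteria are met.
import numpy as np

def consensus(col: np.ndarray) -> bool:
    pct_top2 = np.mean(col >= 4)                    # share of 4s and 5s
    q1, q3 = np.percentile(col, [25, 75])
    sd = col.std(ddof=1)
    # criteria: >= 51% in categories 4-5, IQR not exceeding 1, SD < 1.5
    return pct_top2 >= 0.51 and (q3 - q1) <= 1 and sd < 1.5

ratings = np.array([
    [5, 2], [4, 3], [5, 2], [4, 4], [4, 1],
    [5, 3], [4, 2], [3, 5], [4, 2], [5, 3],
])  # toy data: 10 experts x 2 practices

keep = [consensus(ratings[:, j]) for j in range(ratings.shape[1])]
print(keep)  # [True, False]: a practice failing the rule is a deletion candidate
```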
To facilitate communication and maximize time savings, the survey was carried out using an online website (a so-called landing page) and e-mail. At the planning stage, information about the project was prepared and posted on its website. The landing page was specially designed for this study and provided all panelists with online access to the following information: project description, expert panel, expert guide, model presentation and glossary. The planning stage of the study was completed by the preparation (editing) of invitations for experts, which were then sent by e-mail.
The second stage consisted of activities related to the organization of the panel of experts. The selection of a panel of experts is a very important aspect of the Delphi method, in fact decisive for the success of the study and the quality of its results. The narrow scope of the field covered by the model (a limited number of expert theorists) and the depth and specificity of the required specialist knowledge meant that a deliberate (purposive) approach was used to select the panel of experts. In fact, there is no set of universal guidelines for qualifying experts for a Delphi panel [47]. In this study, the panel of experts was selected on the basis of their knowledge or professional experience in corporate reputation management.
When determining the list of potential panelists, care was taken to ensure a balance of views from theoretical and practical perspectives; therefore, the experts were recruited from both academic and industry environments. In the case of theoreticians, the inclusion criteria for the panel were as follows: (1) a researcher with at least a doctoral degree; (2) at least 5 scientific publications on corporate reputation or image management; (3) self-assessment of knowledge and experience in the field of reputation management at a level of at least 3 on a 5-point Likert scale; and (4) consent to participate in the study. Expert theoreticians were sought on the basis of scientific publications in the field of managing a company's reputation (or image). This group included people from all the largest Polish academic centers, incl. SGH, UJ, UG, PŚ, and the Universities of Economics in Katowice and Wrocław.
In the case of practitioners, the criteria for inclusion in the panel were: (1) a recommendation or referral from academics; (2) professional experience of at least 5 years; (3) work in communication, PR or marketing departments or at a higher management level (board members); and (4) self-assessment of knowledge and experience in the field of reputation management at a level of at least 3 on a 5-point Likert scale. In the process of recruiting practitioners, a recommendation and referral procedure was applied, using the personal contacts of the theoreticians who had agreed to participate in the study. They were asked to indicate people who could become experts owing to their position and professional experience.
Invitations to participate in the study were sent to a total of 36 experts, contacted by e-mail (August-September 2020). In the invitation, potential experts were informed about the purpose and assumptions of the study, and were provided with a link to the online landing page, where in the Expert Panel tab, a short metric with a consent form for the use of their personal data for the purposes of administering the study was posted. Nineteen people responded positively to the invitation, agreeing to participate in the study and filling in the form. Ultimately, however, 15 experts took part in the first round of the study (8 people reported in September and October, and after the recruitment of another 7 people in December 2020), and 10 experts in the second round, returning completed questionnaires. The response rate was 79% for the first round and 66% for the second round, therefore it was quite low and resulted in a significant sample reduction for later statistical analysis. The expert panel size of only 10 in the second round is small, but due to the acceptability of such a small sample size for experts who have similar knowledge of the problem at hand, it was considered sufficient [29]. There are known studies where even a 10-person panel of experts [19] is able to provide strong findings. It is generally accepted that the balance or representation of multiple viewpoints and expertise is more important than the size of the panel [48]. The characteristics of the expert panel are shown in Table 3. Table 3. Characteristics of the expert panel.
Initial list of experts (invitations sent): 36
Consent to participate in the study: 19
Number of panelists in the first round: 15
Number of panelists in the second round: 10

After completing the form, all panel members received a questionnaire in an Excel spreadsheet via e-mail. The entire study using the Delphi method consisted of two rounds, and therefore required the assessment procedure to be repeated twice. In the first round, the questionnaire was standard and identical for all experts, while in the second round it was individualized, as it contained each expert's assessments of practices from the previous round and statistical data on the evaluations of the other members. The first questionnaire (in the first round) contained two questions, while in the second round there was an additional, third question about the usefulness of the entire model.
The experts had 2 weeks to review the content of the worksheet and respond. If no reply was received in that time, a reminder e-mail was sent. In practice, response times grew significantly longer, despite repeated reminders and requests. The low response rate in the first round, amounting to only 42% two months after the start of the survey, necessitated additional recruitment of experts, mainly practitioners. This extended the duration of the first round by another three months (until March 2021), and of the entire study until May 2021. According to the author, the main reason for the delay, as signaled by the panelists, was the high degree of difficulty of the survey and its length (the need to evaluate 96 practices twice, in the first and second questions of the survey), and therefore the long time needed to complete it.
After the end of the first round, the respondents' answers were analyzed and recalculated for each expert. The results, including means, medians, dispersion, interquartile ranges, and anonymized comment summaries, were included in individualized sheets prepared for the second round. The information provided to the panelists in these individualized sheets included: (1) a note that the second round concerned the evaluation of the same practices using the same questions and scales as in the first round; (2) the expert's previous assessments of individual practices; (3) information on whether the given expert's assessment differed from those of the other experts (based on the mean of the experts' assessments +/− one standard deviation), along with the values of central-tendency and cohesion measures for the group of experts; (4) any expert comments on individual practices included in the model; (5) a request to consider revising an assessment if the expert's previous assessment differed significantly from those of the other experts; and (6) an additional question about the usefulness of the model.
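A minimal sketch of how this individualized feedback could be assembled, assuming a simple in-memory layout of the round-1 ratings; the mean +/− one standard deviation flagging rule follows the description above, while all names and data structures are hypothetical:

```python
import statistics

def round2_feedback(ratings: dict[str, list[int]],
                    own_scores: dict[str, int]) -> dict[str, dict]:
    """Build one expert's individualized round-2 sheet: group statistics
    per practice, plus a flag when the expert's own round-1 score fell
    outside the group mean +/- one standard deviation."""
    sheet = {}
    for practice, scores in ratings.items():
        mean = statistics.mean(scores)
        sd = statistics.stdev(scores)
        sheet[practice] = {
            "group_mean": round(mean, 2),
            "group_median": statistics.median(scores),
            "group_sd": round(sd, 2),
            "own_score": own_scores[practice],
            "reconsider": abs(own_scores[practice] - mean) > sd,
        }
    return sheet
```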
This feedback from the previous questionnaire, provided to the panelists, formed the basis for the next round of the survey (which lasted from March to May 2021). The answer sheets were sent to the 15 experts who had participated in the first round. In the second round, the response rate was lower than in the first, as only 10 experts sent the questionnaires back. After the end of the second round, the statistics were recalculated.
Administrative services related to the Delphi study, as well as data collection and the calculation of statistical measures, were carried out by IMAS International Ltd. One of the important features of Delphi research is the principle that the assessments and opinions of experts are completely anonymous, both when filling in the questionnaires and in the subsequent dissemination of the results. To ensure the anonymity of experts, the questionnaires sent to them were appropriately coded, and the author of the study did not have access to them during the study.
Study Results-Model Modification
The result of the study is a maturity model on whose content an expert consensus has been reached, and which can be tested in practice to assess its usefulness as a self-assessment and improvement tool in the area of reputation management. The a priori agreed consensus criteria for individual practices allowed the maturity model to be modified. The first question of the survey referred directly to the legitimacy of the presence of individual practices in the maturity model (more precisely, whether a given practice should be left, changed or removed). The main goal of data analysis in the second round was to establish the degree of consensus among respondents regarding the appropriateness of leaving/rejecting/changing each of the 96 corporate reputation management practices in four categories.
The experts' answers to the first question of the questionnaire allowed us to reduce the number of practices in the CR3M model from 96 to 70. All practices that did not meet the aforementioned consensus criteria, i.e., at least 51% of votes for categories 4 and 5, an interquartile range <1 and a standard deviation <1.5, were rejected from the model. As a result, a total of 26 practices that failed these three criteria in the second round were removed: 6 practices in the area of Communication Management (CM), 10 in Corporate Social Responsibility Management (SM), 6 in Reputation Risk Management (RM) and 4 in Quality Management (QM). The number of practices in the CR3M maturity model therefore decreased from 96 to 70, which significantly increased the applicability of the model and was also one of the objectives of the study. The list of consensus indicators for question 1 is presented in Appendix A (Table A1). The target CR3M model would thus consist of 70 practices: 18 CM, 14 SM, 18 RM and 20 QM practices. These are the practices that experts recommended keeping in the model (at least 51% of scores of 4 or 5) and on which they fully agreed (interquartile range <1 and standard deviation <1.5). In conclusion, the panelists in the second round estimated that 73% (70 out of 96) of all original practices should remain unchanged in the model. Table A2 in Appendix B shows the 70 practices left in the model based on expert consensus (question 1).
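The keep/reject decision reduces to a simple rule. A minimal sketch, assuming ratings are collected as plain lists of 1-5 Likert scores; numpy percentiles are used for the interquartile range, since the exact IQR convention used in the study is not stated:

```python
import numpy as np

def keep_practice(scores: list[int]) -> bool:
    """Apply the a priori consensus rule: >=51% of ratings are 4 or 5,
    interquartile range < 1, and standard deviation < 1.5."""
    s = np.asarray(scores)
    share_top2 = np.mean(s >= 4)                       # fraction of 4s and 5s
    iqr = np.percentile(s, 75) - np.percentile(s, 25)  # interquartile range
    sd = s.std(ddof=1)                                 # sample standard deviation
    return share_top2 >= 0.51 and iqr < 1 and sd < 1.5

# Example with a hypothetical 10-expert round-2 panel:
print(keep_practice([5, 4, 4, 5, 4, 4, 5, 4, 4, 3]))  # True
```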
The questionnaire also included a second question about the significance (validity) of particular practices included in the original maturity model of corporate reputation management. The main purpose of data analysis was to establish the degree of consensus among respondents with regard to the importance of each of the 96 corporate reputation management practices.
In this case, the panelists' assessments can be treated as a guideline for a possible further reduction of the number of practices in the target model, using the same principle as before. If all practices that did not meet the aforementioned consensus criteria in the second round (i.e., at least 51% of votes for the important and very important categories, an interquartile range <1 and a standard deviation <1.5) were removed, a total of 37 practices would be eliminated from the model: 9 practices in the area of Communication Management (CM); 11 in Corporate Social Responsibility Management (SM); 11 in Reputation Risk Management (RM) and 6 in Quality Management (QM). Thus, 59 practices would remain in the CR3M maturity model: only those considered by experts to be important or very important for effective corporate reputation management (scores of 4 or 5) and on whose importance there is full agreement among panelists (interquartile range <1 and standard deviation <1.5). The list of statistical measures for the second question in the questionnaire is included in Appendix C (Table A3). The target model would consist of 59 practices: 15 CM, 13 SM, 13 RM and 18 QM practices.
The third, additional question, included only in the questionnaire for the second round, concerned the assessment of the presented maturity model as a tool enabling self-assessment of the current state of reputation management practices in an enterprise and the formulation of conclusions about the directions of necessary changes. The rating was again expressed on a 5-point Likert scale, where a score of 1 denoted a completely useless tool for self-assessment and improvement, and a score of 5 a definitely useful one (Table A4 in Appendix D presents all three questions contained in the survey questionnaire). In this case, 88.9% of experts found the maturity model a useful or definitely useful tool for self-assessment and improvement (scores of 4 or 5, median 4), with full agreement, as evidenced by consensus indicators of interquartile range = 1 and standard deviation = 0.9. The list of statistical measures is presented in Table 4. Almost 90% positive assessments of the presented model and an average score of 4.1 (on a 5-point Likert scale) confirm the value of its development and dissemination among people responsible for managing a company's reputation or image. It can be said that the panel of experts accepted the conceptual model, which provides the basis for a better understanding and application of corporate reputation management practices.
Discussion
The reliability and validity of the research were ensured in line with their understanding in qualitative research. As some researchers suggest, with regard to qualitative methods one should speak of credibility, confirmability, dependability, consistency and transferability to other contexts rather than of reliability and validity [49]. The equivalent of validity is credibility, i.e., the degree to which the research results can be regarded as faithfully reflecting some aspect of reality, not distorted by incorrectly collected data or by selective, biased observation, and correctly interpreted [50,51]. The credibility of the results was achieved through triangulation of data sources, i.e., the participation of many different experts in fields directly related to corporate reputation management (such as PR, marketing, communication, etc.), representing both theoretical knowledge (academics publishing in the area of reputation or image management) and practical knowledge (managers experienced in reputation management). The credibility of the results was further increased by data triangulation, i.e., supplementing quantitative data (expert assessments) with qualitative data (comments on individual practices), and by ensuring an appropriate study context through preparing and providing panelists with a description of the reputation management maturity model (CR3M).
In turn, the equivalent of reliability in qualitative research is dependability, ensured by the quality of the research itself. It is a procedure for controlling the research process, applying both to its entire course and to its results [49]. Dependability was increased by teamwork, i.e., the involvement of several researchers in the Delphi study (the author, the person supervising the study on the IMAS side and the person conducting the statistical analysis of the results), and by quality control of the study, consisting of a detailed description of the assumptions and supervision of the entire study.
Two rounds proved sufficient to reach consensus on the vast majority of the practices included in the original model. The decision to end the Delphi study after two rounds was based on comparing the consensus criteria and the stability of the panelists' responses (coefficient of variation) across the rounds. First, a comparison of the a priori consensus criteria between the two rounds shows that the number of practices qualifying to remain in the model improved in the second round, although this is more evident for the second question. For the first question, the number of practices for which there was consensus was practically the same in both rounds (70 practices), although the specific practices changed somewhat. For the second question, by contrast, the number of practices for which there was consensus was clearly greater in the second round (48 in the first round versus 59 in the second).
Secondly, expert agreement is also indicated by the low value of the coefficient of variation (CV): in the first round, in more than half of the cases (55% of assessments) this index was at a low level, indicating low variability (<25%), and in the remaining cases at an average level, indicating moderate variability (25-45%). In the second round, for nearly three-quarters of all practices (exactly 72% of assessments) the CV was low, and in the remaining cases at the level of average variability. In neither round was there any practice for which the CV indicated high variability in ratings.
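A minimal sketch of the coefficient-of-variation banding described above; the band boundaries follow the text, while the data layout is hypothetical:

```python
import numpy as np

def cv_band(scores: list[int]) -> str:
    """Coefficient of variation (%) banded as in the text:
    <25% low, 25-45% average, >45% high variability."""
    s = np.asarray(scores, dtype=float)
    cv = 100 * s.std(ddof=1) / s.mean()
    if cv < 25:
        return "low variability"
    return "average variability" if cv <= 45 else "high variability"

print(cv_band([5, 4, 4, 5, 4, 4, 5, 4, 4, 3]))  # low variability
```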
Conclusions
The Corporate Reputation Management Maturity Model (CR3M) submitted to the panel of experts for evaluation originally included 96 practices, defined on the basis of an extensive literature review. These practices were grouped into four areas important for reputation management, the so-called key maturity areas (24 practices each; see Table A2 in Appendix B). The a priori accepted consensus criteria for recommending that a given practice remain in the model were: at least 51% of scores of 4 or 5, an interquartile range <1 and a standard deviation <1.5. For 26 practices, no expert consensus was obtained, and they were rejected from the model. The highest number of rejected practices belonged to the CSR area (10), while the lowest number (4) came from the field of quality management.
It is worth noting that of the three consensus criteria, the standard deviation criterion (<1.5) was met in all 96 cases; the reason for removal from the model was failure to meet one of the other two criteria, i.e., too few experts (<51%) agreeing to leave the practice in the model (scores of 4 or 5 on the Likert scale) or too large an interquartile range (>1). Experts were also asked in the survey to list other possible practices in the field of corporate reputation management, but none of the panelists proposed supplementing the model with any new practices. At the same time, 89% of experts considered the theoretical maturity model a useful or definitely useful tool for self-assessment and improvement. It should be noted that this model will be tested in several enterprises in the future to fully assess its usefulness.
It is worth noting that the validated model, shown to panelists on the survey website, was a simplified version: it did not reveal the granularity of practices across the so-called capability areas (leadership, values, competences, structures, systems, policies), nor did it present the five levels of maturity described for each practice in the model. It was concluded that overcomplicating the structure of the model presented to the panelists would not improve understanding, but would only make it more difficult for experts to recommend individual practices and assess their validity. Achieving expert consensus on the content of the CR3M model will allow the next stage of research to begin in the future, namely testing the model (in full, i.e., showing the descriptions of the individual levels of practice maturity) in selected enterprises.
This will consist of indicating, for each practice, the maturity level (from 1 to 5) that corresponds to the actual state of that practice in the surveyed company. In this way, it will be possible to obtain an overall picture of the degree of development of reputation management. Such self-assessment will allow managers to identify possible gaps and plan improvement actions in all areas affecting the company's reputation. Finally, after the testing phase, further corrections will be made on the basis of the feedback obtained, and the final assessment of the model's usability will be verified. This is a precondition for the last phase of developing the maturity model, namely its dissemination [24].
The CR3M model modified as a result of consensus is intended to: (1) help the company's management to better understand the complexity and multidimensionality of reputation, (2) enable a self-assessment of the degree of maturity at which reputation management is located in the company (the current state of practice in this field), and (3) define recommendations for actions aimed at improving skills in this area, enabling a gradual increase in its maturity (maintaining a good reputation more effectively and avoiding a bad reputation).
Conclusions from the use of the e-Delphi method to validate the theoretical model can be divided into two areas. The first is related to the procedure that uses ICT in the research, while the second concerns the advantages of the Delphi method as a comprehensive research tool.
The advantages of an ICT-based study are widely known: they include shortening the time between consecutive rounds and thus accelerating the procedure [7], easier administration of the study, ensuring a good study context in the form of access to information about its course and subject, the possibility of recruiting experts from various, even geographically distant, centers and enterprises, and ease of ensuring the experts' anonymity and convenience.
A study conducted 20 years ago by Linstone and Turoff showed that the use of computer technology can shorten the process between rounds in the Delphi procedure [7]. In the Delphi survey described above, however, this was not the case, mainly because of the high degree of difficulty of the survey (the complexity of the model: 96 practices to be evaluated) and, consequently, the high drop-out rate among experts. It can therefore be concluded that when using a Delphi questionnaire as a research tool, its complexity and comprehensiveness should be limited. Although the Delphi method has been used many times to evaluate a large number of conceptual elements (often 50 or more items), the complexity of the questionnaire clearly decreases willingness to participate, contributes to panelist fatigue and reduces the response rate in subsequent rounds.
However, the use of ICT confirmed other benefits of this variant of the Delphi method. Providing experts with access to a specially designed website (landing page) supplied the right context for the study: it allowed everyone to become acquainted with the concept of the proposed maturity model and detailed instructions describing the course of the study, ensured a common understanding of terms thanks to the glossary, and enabled quick collection of personal data with confirmation of consent to participate. Communicating with the panelists via e-mail allowed experts from various universities across the country to be included, confirming that it is an effective way of collecting opinions from geographically dispersed experts [52]. The use of the Internet and e-mail also made it easier to maintain the anonymity of the respondents, through coding of the questionnaires and the intermediation of a research company. Ensuring the anonymity of statements (comments) and assessments is an important attribute of Delphi research, helping to avoid the influence of group dynamics, strong personalities or group conformism that may appear during personal interactions between participants [52]. E-mail also turned out to be the most convenient form of contact for busy experts. In summary, ICT is an invaluable aid in conducting Delphi research. Problems related to the application of this method, which were an obstacle two or three decades ago, can now be successfully solved using ICT, which provides a friendlier environment, easier administration and faster results. Online research methods such as e-Delphi have become extremely popular, as they offer convenience for participants, time and cost savings, and many data management options [53].
The second group of conclusions concerns the advantages of the Delphi method as a comprehensive research tool. It should be emphasized that the Delphi technique is a qualitative tool used to obtain expert opinion, especially when knowledge about a problem is limited. First of all, the Delphi method provides a flexible tool for collecting and analyzing data, and for discovering and filling gaps in certain areas of knowledge, thanks to the involvement of people well-versed in a given topic. In the case described here, the Delphi method made it possible to collect opinions from experts in the field to which the maturity model relates, i.e., corporate reputation management. The Corporate Reputation Management Maturity Model (CR3M) initially included 96 practices whose implementation was considered (based on an analysis of the literature) an indicator of the development of this area. As a result of two rounds of the Delphi study, the experts agreed to keep 70 practices in the model and found the model a useful tool for self-assessment and improvement. The experts' assessments therefore helped to select the most important practices and determined the final architecture of the maturity model. The use of the e-Delphi method allowed us to combine the opinions of experts in order to achieve an informed group consensus on the complex problem of determining the content of the maturity model.
The Delphi method has known limitations, such as the use of non-randomized samples, subjectivism and bias introduced by the composition of the expert panel, and the lack of commonly accepted recommendations regarding the number of participants and rounds, the way consensus is defined, or poorly defined reporting criteria [43]. The most serious limitation in the use of this method, confirmed in this study, concerns the design of the research tool itself, the survey questionnaire: its complexity directly determines whether experts' interest can be maintained and how many will resign in subsequent rounds. In the presented study, a significant limitation was also the small sample size: the panel of experts in the second round consisted of only 10 people. This small panel size resulted, firstly, from the relatively narrow field of knowledge covered by the model, and thus difficulties in attracting more theoretician experts, and secondly, from the high dropout rate in the second round of the study (34% of experts did not join the second round). Low response rates in subsequent rounds are another important limitation to be taken into account when deciding on this method.
Overcoming this limitation requires great personal commitment from the participants [48], which, unless it stems from their high internal motivation, should be maintained through skillful motivation by the researcher. The high percentage of expert resignations in the described study may have resulted not only from the high difficulty of the questionnaire (evaluation of 96 practices), but also from too low an intensity of communication. Sending two automatic reminders two and four weeks after the questionnaire was dispatched, and, after another two weeks, a personalized e-mail from the author of the study, should be considered insufficient. This is supported by the fact that, alongside studies showing a significant reduction in response rates in subsequent rounds, there are also studies reporting an impressive 90% response rate over five Delphi rounds [54]. In that case, apart from the involvement of the participants, an important role was played by frequent and varied communication. Multiple e-mail reminders, and even phone calls and text messages, appear to be acceptable and tolerable in the e-Delphi method as a means of maintaining panelists' interest [55], especially if the time intervals between consecutive rounds are long (months).
Another limitation was the use of a panel composed only of Polish experts. Future research of this type should consider extending the panel to include people from different countries, thereby increasing its representativeness. In addition, the study used a modified version of Delphi: a structured questionnaire was used in the first round instead of the open round of the classic Delphi version. This approach is sometimes criticized for imposing a conceptual framework rather than developing it inductively [43]. However, the structured approach seemed appropriate in this case, given the complexity of the CR3M Corporate Reputation Management Maturity Model. The questionnaire sent to the panelists, developed on the basis of the CR3M model and requiring the assessment of 96 practices, turned out to be so complicated, and thus time-consuming, that it led to the resignation of a large number of experts after the first round. Increasing the difficulty of the study, by developing a model inductively from scratch, would further reduce the chances of completing it. This limitation was partly controlled by allowing panelists to comment on practices, suggest new practices, or modify those already in the model.
One fairly important element of the Delphi procedure is providing participants with information about the subject and course of the study [55]. Research authors very rarely check whether the panelists have read the information provided to them, and it can be assumed that they do not always review it. One study found that only 56% of panelists reviewed more than 75% of the information provided [52]. Turnbull et al. [54] analyzed how panelists perceive the burden of participation, how they use the background information provided, how they take into account and weigh feedback and voting from previous rounds, and how well they understand the subject of the study. These issues may directly affect the quality of the research results. In this context, a further limitation of the presented study is the lack of verification of the experts' preparation, as it was not confirmed whether they had read the description of and instructions for the study, the characteristics of the CR3M model, the glossary of terms, etc., presented on the website.
The practical conclusions of the study can be summarized as follows. First, excessive complexity of the questionnaire has a negative impact on experts' readiness to cooperate and leads to a high rate of resignation in subsequent rounds. Second, keeping experts engaged requires very intensive communication through various channels. Third, it is worth encouraging, and verifying, the experts' preparation for the study (familiarity with general information on its course and subject). Fourth, all of the above problems are easy to overcome when research participants have high internal motivation; recruiting motivated panelists is, in the author's opinion, one of the most important success factors in research based on expert opinions. Despite these reservations, the e-Delphi procedure appears to be a promising method for building models and validating complex theoretical concepts, contributing to the development of management theory and practice. The role of ICT here is difficult to overestimate. In this case, it served as a tool for developing a model that should in the future lead to the transformation and improvement of business, and thus ultimately also to improving the quality of life of society [1].
Data Availability Statement:
The data presented in this study are available on request from the author.
Conflicts of Interest:
The author declares no conflict of interest.
Appendix A
The table below presents a list of consensus indicators for question 1 in the questionnaire, regarding the validity of the presence of individual practices in the model. Practices that did not meet all three consensus conditions are marked in gray.

Appendix B

Table A2 shows the practices left in the model based on expert consensus (question 1).

CM22. Implementation of integrated communication (marketing and corporate): supporting PR activities through marketing communication in order to achieve consistency of the company's image in marketing campaigns, customer service programs, image advertising, etc.;
CM24. Publication of social and environmental results in social reports (e.g., in an integrated annual report according to the GRI standard), disclosing also mistakes made and inappropriate practices that need to be changed.
KMA: Corporate Social Responsibility Management
SM1. Formulating and formally accepting the goals and CSR strategy consistent with the company's values (along with the priorities and measures of actions towards key stakeholders) and integrating them with the business strategy;
SM2. Taking responsibility for the direct and indirect effects of one's own activity and that of business partners (enhancing the positive impact and limiting the negative one);
SM3. Active support of CSR initiatives/programs by the management board and managers at all levels (joining the implemented projects and encouraging employees);
SM4. Including CSR initiatives in the operational goals set for middle and senior managers (defining goals and measures, accounting for results);
SM6. Taking care of employees: providing them with good working conditions, good social welfare and health protection;
SM7. Emphasis on fair and equal treatment of employees (satisfactory remuneration, equal opportunities, a transparent system of evaluation, reward and promotion, etc.);
SM8. Consistent adherence by the company to ethical, social and environmental values when cooperating with business partners (developing long-term cooperation by keeping contract terms, not abusing bargaining advantage, etc.);
SM10. Training employees (especially managers) in understanding the idea of social responsibility or sustainable development and adapting it to the specificity and culture of the company;
SM12. Disseminating the ethical principles in force in the company by incorporating ethical elements into the employee training system (especially for new hires);
SM14. Appointment of an ethics spokesperson or ethics team to oversee the enforcement of the principles enshrined in the code of ethics;
SM15. High degree of structuring of CSR initiatives (mutual coherence within the overarching strategy or program, set goals and measures, allocation of resources, accountability of responsible persons, etc.);
SM17. Use of multiple tools for sustainable supply chain management (e.g., supplier codes of conduct, regular audits, risk assessment, environmental life cycle assessment (LCA));
SM18. Developing and formalizing the company's values in an ethical code (or a code of good practice), based on the principles contained in, for example, SA 8000, ISO 26000 or other international standards;
SM21. Implementation of environmentally friendly production processes, waste management and recycling, and application of the highest safety standards in operational procedures.
Appendix C
List of consensus indicators for question no. 2 in the questionnaire, regarding the validity of practices included in the model. Those practices for which all three consensus criteria have not been met are marked in gray.
"year": 2021,
"sha1": "8c59ddabfd250b6637a3c15d9d2c120ae4a13adc",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/13/21/12019/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "37eac63d50b745873e8135a0736ea5a664b71d24",
"s2fieldsofstudy": [
"Business",
"Computer Science"
],
"extfieldsofstudy": []
} |
268012610 | pes2o/s2orc | v3-fos-license | Rates of change of pons and middle cerebellar peduncle diameters are diagnostic of multiple system atrophy of the cerebellar type
Abstract Definitive diagnosis of multiple system atrophy of the cerebellar type (MSA-C) is challenging. We hypothesized that rates of change of pons and middle cerebellar peduncle diameters on MRI would be unique to MSA-C and serve as diagnostic biomarkers. We defined the normative data for anterior–posterior pons and transverse middle cerebellar peduncle diameters on brain MRI in healthy controls, performed diameter–volume correlations and measured intra- and inter-rater reliability. We studied an Exploratory cohort (2002–2014) of 88 MSA-C and 78 other cerebellar ataxia patients, and a Validation cohort (2015–2021) of 49 MSA-C, 13 multiple system atrophy of the parkinsonian type (MSA-P), 99 other cerebellar ataxia patients and 314 non-ataxia patients. We measured anterior–posterior pons and middle cerebellar peduncle diameters on baseline and subsequent MRIs, and correlated results with Brief Ataxia Rating Scale scores. We assessed midbrain:pons and middle cerebellar peduncle:pons ratios over time. The normative anterior–posterior pons diameter was 23.6 ± 1.6 mm, and middle cerebellar peduncle diameter 16.4 ± 1.4 mm. Pons diameter correlated with volume, r = 0.94, P < 0.0001. The anterior–posterior pons and middle cerebellar peduncle measures were smaller at first scan in MSA-C compared to all other ataxias; anterior–posterior pons diameter: Exploratory, 19.3 ± 2.6 mm versus 20.7 ± 2.6 mm, Validation, 19.9 ± 2.1 mm versus 21.1 ± 2.1 mm; middle cerebellar peduncle transverse diameter, Exploratory, 12.0 ± 2.6 mm versus 14.3 ±2.1 mm, Validation, 13.6 ± 2.1 mm versus 15.1 ± 1.8 mm, all P < 0.001. The anterior–posterior pons and middle cerebellar peduncle rates of change were faster in MSA-C than in all other ataxias; anterior–posterior pons diameter rates of change: Exploratory, −0.87 ± 0.04 mm/year versus −0.09 ± 0.02 mm/year, Validation, −0.89 ± 0.48 mm/year versus −0.10 ± 0.21 mm/year; middle cerebellar peduncle transverse diameter rates of change: Exploratory, −0.84 ± 0.05 mm/year versus −0.08 ± 0.02 mm/year, Validation, −0.94 ± 0.64 mm/year versus −0.11 ± 0.27 mm/year, all values P < 0.0001. Anterior–posterior pons and middle cerebellar peduncle diameters were indistinguishable between Possible, Probable and Definite MSA-C. The rate of anterior–posterior pons atrophy was linear, correlating with ataxia severity. Using a lower threshold anterior–posterior pons diameter decrease of −0.4 mm/year to balance sensitivity and specificity, area under the curve analysis discriminating MSA-C from other ataxias was 0.94, yielding sensitivity 0.92 and specificity 0.87. For the middle cerebellar peduncle, with threshold decline −0.5 mm/year, area under the curve was 0.90 yielding sensitivity 0.85 and specificity 0.79. The midbrain:pons ratio increased progressively in MSA-C, whereas the middle cerebellar peduncle:pons ratio was almost unchanged. Anterior–posterior pons and middle cerebellar peduncle diameters were smaller in MSA-C than in MSA-P, P < 0.001. We conclude from this 20-year longitudinal clinical and imaging study that anterior–posterior pons and middle cerebellar peduncle diameters are phenotypic imaging biomarkers of MSA-C. In the correct clinical context, an anterior–posterior pons and transverse middle cerebellar peduncle diameter decline of ∼0.8 mm/year is sufficient for and diagnostic of MSA-C.
Introduction
Multiple system atrophy (MSA) is a sporadic, adult-onset neurodegenerative synucleinopathy characterized by the combination of autonomic neuropathy and either parkinsonism (MSA-P)1 or cerebellar ataxia (MSA-C), although phenotypic overlap is not uncommon.2,3 [4-7] The diagnosis of Definite MSA is confirmed at autopsy, characterized by α-synuclein-positive oligodendroglial cytoplasmic inclusions and neuronal loss in the basal ganglia, brainstem and cerebellum.4,8 Diagnosis during life is challenging because there is no single confirmatory test.
The differentiation of MSA-C from other causes of late-onset sporadic or non-familial ataxia5,6 can be vexing, and MSA-P is difficult to distinguish from Parkinson's disease (PD) and other forms of atypical parkinsonism.9 In the 2008 consensus criteria,4 Possible versus Probable MSA are differentiated by clinical severity of autonomic dysfunction only, while suggestive features of Possible MSA (not required for Probable MSA) include the imaging findings of pontocerebellar atrophy, putaminal rim hyperintensity and the hot cross bun sign (HCBS) on anatomic MRI, and increased diffusivity of the putamen and middle cerebellar peduncle (MCP) on diffusion-weighted imaging.10 However, none of these findings is specific to MSA.[11][12][13][14] Concerns over clinical heterogeneity and insensitivity to detection of early disease3 prompted the Movement Disorders Society to propose new criteria that include imaging features (not further defined), but this applies only to the highest probability diagnosis of Clinically Established MSA.15 In our previous study of 65 patients with Possible, Probable and subsequently proven Definite MSA-C,6 we concluded that a sporadic-onset, insidiously developing cerebellar syndrome in midlife, with autonomic features of otherwise unexplained bladder dysfunction with or without erectile dysfunction in males, and atrophy of the cerebellum, brainstem and MCP pointed strongly to a diagnosis of MSA-C. Other clinical features of REM sleep behaviour disorder (RBD) and postural hypotension confirmed the diagnosis, while extrapyramidal findings, corticospinal tract signs and pathologic laughing and crying were helpful but not necessary for diagnosis. Inherent in our conclusions, therefore, was the statement that imaging findings of cerebellar and brainstem volume loss accompany the clinical presentation.
Here, we test our hypothesis, derived from clinical observation over the past 20 years, that these imaging biomarkers are diagnostic of MSA-C. Cerebellar atrophy in MSA-C is rapid and dramatic,16,17 resulting principally from loss of deep and folial white matter,17 but it is not practicable to measure these neuroimaging changes in the routine clinical setting. We therefore focused on pons and MCP dimensions, as these are readily identifiable and measurable at the point-of-care by healthcare providers using conventional clinical MRI. We predicted that the rate of change of the diameters of these structures, as a proxy for volumetric analysis, could be used clinically to diagnose MSA-C with certainty during life.
Methods
This study was approved by the Mass General Brigham Institutional Review Board (IRB).
Study cohorts

Healthy controls
We acquired normative imaging data from 73 healthy individuals in the 1000 Functional Connectomes Project, as described below.
Exploratory cohort
We performed a retrospective review of patients seen in the Massachusetts General Hospital (MGH) Ataxia Center between January 2002 and December 2014. Two cohorts were studied.
MSA cohort
We identified 88 patients with MSA-C: 74 met consensus criteria4 for Possible/Probable MSA, and 14 patients who died during the Exploratory phase met autopsy criteria for Definite MSA-C. Nine of the 74 Possible/Probable MSA-C patients died after the completion of the Exploratory phase of the study, underwent autopsy, and were confirmed as Definite MSA-C.
Validation cohort
We prospectively studied three cohorts of patients in the MGH Ataxia Center between January 2015 and December 2021. Demographic data collected included age, sex, diagnosis, age of motor symptom onset and age at which each scan was performed.
MSA cohort
There were 49 patients with MSA-C (30 Probable, 19 Possible). Although 13 patients died during this period (12 Probable, 1 Possible), none came to autopsy, and therefore we could not make the consensus criteria designation of Definite MSA. We also included 13 patients with MSA-P (12 Probable and one who died, with Definite MSA-P confirmed at autopsy).
Cerebellar ataxia cohort-not MSA
We studied 99 patients with a range of acquired, genetic and sporadic cerebellar ataxias, as listed in Table 2.
Movement Disorders Cohort without cerebellar ataxia
To test the specificity of our putative imaging biomarker, we studied pontine measures in patients followed in a general movement disorders clinic who did not have cerebellar ataxia. This cohort comprised 79 patients with Parkinson's disease (PD); 40 with non-MSA atypical parkinsonism [progressive supranuclear palsy (PSP), corticobasal syndrome, dementia with Lewy bodies]; 31 with essential tremor; 30 with idiopathic and genetic dystonia; 9 with other primary movement disorders (degenerative chorea, downbeat nystagmus without ataxia, hereditary spastic paraplegia); 57 with other neurological disorders (multiple sclerosis, normal pressure hydrocephalus, anterior horn cell disease, multifactorial gait disorder, myelopathy); 79 with functional neurological disorder (FND); 11 with drug-induced movement disorders and 18 with non-neurological conditions (see Table 2).
Clinical measure of ataxia in the MSA cohorts
To determine the rate of clinical change over the course of each patient's trajectory, and to compare this with the brainstem measures, we scored ataxia severity using the Brief Ataxia Rating Scale (BARS). The BARS19 is a five-item rating scale that scores the canonical manifestations of the cerebellar motor syndrome: gait, heel-to-shin, finger-to-nose, speech and oculomotor performance. The scale is scored from 0 to 30: higher scores indicate greater motor impairment. The BARS is tightly correlated with the Scale for the Assessment and Rating of Ataxia20 and the International Cooperative Ataxia Rating Scale,21 and has been replicated and validated.22,23 [24-31] In late MSA-C, parkinsonism can become severe and mask ataxia. We use maximum BARS scores when tasks are no longer possible, regardless of whether this is related to parkinsonism or ataxia.
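A minimal sketch of this scoring convention. The per-item maxima shown here, which sum to the published total of 30, are an assumption for illustration; the maximum-score rule for untestable tasks follows the text:

```python
# Hypothetical per-item maxima summing to the published BARS total of 30.
BARS_MAX = {"gait": 8, "heel_to_shin_left": 4, "heel_to_shin_right": 4,
            "finger_to_nose_left": 4, "finger_to_nose_right": 4,
            "speech": 4, "oculomotor": 2}

def bars_total(scores: dict[str, float], untestable: set[str] = frozenset()) -> float:
    """Sum the BARS items; tasks that are no longer possible (whether
    from ataxia or parkinsonism) score at the item maximum."""
    return sum(BARS_MAX[item] if item in untestable else scores.get(item, 0.0)
               for item in BARS_MAX)

# e.g., a patient who can no longer walk at all:
print(bars_total({"speech": 2.0, "oculomotor": 1.0}, untestable={"gait"}))  # 11.0
```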
Normative data in healthy controls
We acquired normative data for the brainstem measurements using the 1000 Functional Connectomes Project in the neuroimaging data repository of the Neuroimaging Tools and Resources Collaboratory, Project ID: fcon_1000 (www.nitrc.org). We studied 10 subjects each in the 2nd, 4th, 7th and 8th decades, 12 in the 3rd, 11 in the 5th, nine in the 6th, and one individual in the 9th decade (see Table 3). Images were viewed in MRIcron, a cross-platform NIfTI-format image viewer on the NITRC platform. At the time these measures were taken in 2015, the platform did not yet have a tool for line measurement. We developed a method to derive linear measures of the pons in the axial and sagittal planes and the diameter of the MCPs. We identified the points of interest on the pons and MCP in the sagittal and axial planes (sagittal plane, y + z; axial plane, x + y), determined the voxel space between them translated into mm space, and calculated the hypotenuse representing the diameters.
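A minimal sketch of that calculation: two points of interest are marked in one image plane, the voxel offsets are converted to mm using the voxel dimensions, and the diameter is the hypotenuse. The coordinates and voxel sizes below are hypothetical:

```python
import math

def diameter_mm(p1: tuple[int, int], p2: tuple[int, int],
                voxel_mm: tuple[float, float]) -> float:
    """Euclidean distance in mm between two in-plane voxel coordinates
    (e.g., y+z in the sagittal plane, x+y in the axial plane)."""
    d0 = (p2[0] - p1[0]) * voxel_mm[0]
    d1 = (p2[1] - p1[1]) * voxel_mm[1]
    return math.hypot(d0, d1)

# e.g., 1 mm isotropic voxels, points 23 voxels apart anteroposteriorly:
print(diameter_mm((60, 40), (83, 40), (1.0, 1.0)))  # 23.0
```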
Images analysed in the patient cohorts
Brain MRI studies were available on all patients in the MGH electronic health record (Amicas PACS viewer until April 2016; thereafter, eUnity PACS viewer v6.10.2.489, Client Outlook, Waterloo, Canada). We performed the measurements on standard MGH desktop computers available to practitioners in the clinic setting, using the ruler tool in the viewing programs in Amicas and eUnity. We analysed MPRAGE or SPGR sequences when available because these 3D T1-weighted images have high spatial resolution and grey matter-white matter differentiation that makes them ideally suited for anatomical study.32 If these were not available, we used standard T1-weighted and T2-weighted sequences. We avoided fluid attenuated inversion recovery (FLAIR) sequences, in which the brainstem boundaries are not as crisply defined.
In the Exploratory cohort, we analysed scans at every available time-point in all patients. Measures were performed with 0.1 mm accuracy.
In the Validation cohort, we analysed scans at every available time-point in the MSA-C/MSA-P cohort. For all other patients, the first and last scans were assessed from symptom onset through December 2021. These measures were performed with 0.5 mm accuracy, reflecting differences in the accuracy of the measurement tools in the different viewing programs.
Quantitative outcome measures
Measurements were derived as described below and as shown in Fig. 1, which includes representative images of sequential MRI scans performed in a single individual with MSA-C over a period of 9 years.
Anteroposterior (AP) Pons diameter
We determined that it is possible to measure the pontine AP diameter accurately by paying careful attention to the imaging parameters in each case. The axial and sagittal images are viewed side-by-side in the imaging program, with the cross-referencing tool a valuable guide.
AP Pons diameter measured using the axial view
The mid-pons in the rostro-caudal dimension is identified in the axial plane with reference to the corresponding sagittal image. The midline AP diameter of the pons is then measured on this image as follows.
A line is drawn with the ruler tool, starting at the posterior/dorsal boundary of the pons demarcated by the reliable indentation of the 4th ventricle, and extending forwards to the anterior/ventral boundary of the pons, targeting the midline where there is a reliable concavity between the two sides of the pons.

Fig. 1 legend: In subsequent scans in which the axial plane was not perpendicular to the long axis of the brainstem, the more accurate measure was derived from the sagittal image at or close to the midline, as seen in the corresponding axial sections. The parasagittal views of the cerebellar hemisphere show the progressive cerebellar atrophy, with shrinkage of the entire hemisphere, loss of the corpus medullare and prominence of the cerebellar folia. AP, anteroposterior; MCP, middle cerebellar peduncle; MSA-C, multiple system atrophy of the cerebellar type. Measurements in small text from screen shots of the patient's MRI are shown in black font on white background for readability.
AP Pons diameter measured using the sagittal view
The pontine midline is identified in the sagittal plane using the corresponding axial view as a guide. A line is then drawn on the mid-sagittal section, anchored at a point on the posterior boundary of the pons that forms the apex of an obtuse angle in the curvature of the pons as one descends from superior to inferior. The AP diameter of the pons is measured from this posterior location to a point on the anterior boundary of the pons, ensuring that the line is perpendicular to the long axis of the brainstem.
Transverse diameter of the MCPs
The axial view is used to identify the MCPs. The optimal section is between the 5th and 7th/8th cranial nerves, where the MCP merges with the pons, forming an obtuse angle of ∼100° between the posterior boundary of the pons and the medial border of the MCP. The line is drawn obliquely forward from the apex of this angle, perpendicular to the long axis of the MCP.
Practical considerations
We recognized and addressed technical considerations in the measurements performed on these clinically derived scans. First, the mid-sagittal plane is not always exactly mid-sagittal. The ventral aspect of the pons is indented in the midline, so measurement of the AP Pons diameter on an off-centre sagittal image produces a spuriously large result. Second, the axial slice is not always aligned perpendicular to the long axis of the brainstem, so measurement on such an axial image also produces a spuriously large AP Pons diameter. While axial sequences are preferred, if we encounter either of these measurement issues, we use the AP diameter in the sagittal or axial plane that most closely approximates what we determine to be the true AP diameter, i.e., the AP diameter at the mid-rostro-caudal level of the pons, perpendicular to the long axis of the brainstem, exactly at the midline. When measuring the MCPs, we prefer high-resolution scans at 2 mm slice intervals. On 5 mm slices, there are usually two levels in which the MCPs are well defined, and we use the more rostral level, where the MCP transverse diameter is wider. We maintain consistency in how we measure the brainstem in the same patient over time.
Statistical analyses
Statistical analysis was conducted using R 4.
Intra-rater and inter-rater reliability
We investigated the accuracy and reproducibility of these measurements by testing intra-rater and inter-rater reliability among the senior author and three additional trained raters.
Fifteen scans (five each for MSA-C, non-MSA cerebellar disease and controls) were evaluated twice by each of the four evaluators, who were blinded to the results of the other raters. Variability for the sagittal and axial pons and the right and left MCP measurements was decomposed into between-subject, between-evaluator (within-subject) and residual variance components using mixed-model regression analysis. The results were summarized using intraclass correlation coefficients (ICC) and by comparing the percent variability attributable to subject and evaluator.
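A minimal sketch of an equivalent ICC computation on long-format data, here using the pingouin package rather than the mixed-model decomposition used in the study; the file name and column names are assumptions:

```python
import pandas as pd
import pingouin as pg

# One row per scan x evaluator; columns are hypothetical.
df = pd.read_csv("reliability_measurements.csv")  # scan_id, rater, axial_pons_mm

icc = pg.intraclass_corr(data=df, targets="scan_id", raters="rater",
                         ratings="axial_pons_mm")
print(icc[["Type", "Description", "ICC"]])
```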
Correlation of pontine AP diameter with pontine volume
We used the PACS imaging system and the Vitrea Web program to derive pontine volumes in 22 ataxia patients (18 MSA-C, two ILOCA, one SCA6, one FA). We outlined the pons in a series of axial and sagittal slices within the same plane, calculated the volume and correlated this with the corresponding sagittal and axial AP Pons measures.
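A minimal sketch of the correlation and the diameter-to-volume regression reported in the Results. The arrays hold illustrative values only, and the fitted coefficients here are not those of the study:

```python
from scipy import stats

# Illustrative values only; the study used 22 paired measurements.
axial_ap_pons_mm = [23.5, 21.0, 19.2, 17.8, 16.5]
pons_volume_ml = [15.1, 12.3, 10.4, 8.9, 7.6]

res = stats.linregress(axial_ap_pons_mm, pons_volume_ml)
print(f"r = {res.rvalue:.2f}")
print(f"volume ~= {res.slope:.2f} * diameter + {res.intercept:.2f}")
```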
Rates of change over time
We analysed pons and MCP diameters (right and left MCP measures averaged) at baseline and determined their rate of change over time, comparing them across Possible, Probable and Definite (autopsy-confirmed) MSA-C and all other ataxias. In the Exploratory cohort, we included all 88 patients, 60 (68.2%) of whom had between two and six total scans over the course of their disease. In the Validation cohort, 44 of 49 patients (89.8%) had repeat scans, and we determined the rate of change by comparing the first to the last scans. We also assessed for differences within the synucleinopathies by comparing measurements in Possible/Probable MSA-C versus Probable MSA-P, and MSA-P versus PD. We used mixed-model regression for analyses with repeated measures, with AP Pons and MCP diameter as a function of years since symptom onset and subject as a random intercept, adjusting for age and sex. The SD of the rates of atrophy in MSA for both AP Pons and MCP was estimated using random-slope mixed-model regression. We considered both linear and higher-order polynomial models for years since onset.
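A minimal sketch of this model in statsmodels; the column names are assumptions, and the random slope on years since onset mirrors the SD estimate described above:

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per scan; column names are hypothetical.
df = pd.read_csv("scans_long.csv")  # subject, years_since_onset, ap_pons_mm, age, sex

model = smf.mixedlm("ap_pons_mm ~ years_since_onset + age + sex",
                    data=df, groups="subject",
                    re_formula="~years_since_onset")  # random intercept + slope
fit = model.fit()
print(fit.params["years_since_onset"])  # estimated mean atrophy rate, mm/year
```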
Receiver operating characteristic (ROC) curves, and the areas under these curves (AUC), were used to illustrate the performance of the pons and MCP measures as diagnostic markers, as functions of threshold.
Prognosis in MSA-C: survival time, AP Pons diameter at time of death, residual life
We determined the length of survival in the 86 MSA-C patients who died during the course of this two-decade study in both the Exploratory (n = 73) and Validation cohorts (n = 13), with disease onset measured from the first motor symptom (gait ataxia, imbalance, dysarthria, dysgraphia). We also explored the predicted AP Pons diameter at time of death, and we used a linear regression model to test whether it is possible to predict residual life/survival from duration of disease and axial pons measurements.
Intra-rater and inter-rater reliability
For the axial AP Pons measure, the intra-rater reliability (ICC) was 0.92 and the inter-rater reliability 0.89; for the sagittal AP Pons measure, the intra-rater reliability was 0.86 and the inter-rater reliability 0.76. For the MCP measure, the intra-rater reliability was 0.83 (L-MCP) and 0.78 (R-MCP), and the inter-rater reliability was 0.78 (L-MCP) and 0.71 (R-MCP).
Diameter versus pontine volume correlations
There was strong correlation between the AP Pons diameter and the volume of the pons in both the sagittal (r = 0.94) and axial planes (r = 0.94), both P < 0.0001. There was a similarly strong correlation between the mean MCP measures and pons volume (r = 0.96, P < 0.0001), such that the volume of the pons can be predicted from the pons axial measurement.

Mean MCP diameter at initial scan in Possible/Probable/Definite MSA-C was 12.0 ± 2.6 mm (n = 82; MCP transverse diameters could not be measured with certainty in six cases related to the plane of section), significantly smaller than in healthy controls (n = 73, 16.4 ± 1.4 mm), ILOCA (n = 15, 15.1 ± 2.2 mm) and all other cerebellar ataxias (n = 77, 14.3 ± 2.1 mm; MCP transverse diameter could not be measured with certainty in one case), all P < 0.0001.
In the large, unselected movement disorders subset of the Validation cohort, excluding MSA-C, MSA-P, other ataxias and atypical parkinsonism (n = 314), there was no consistent direction of difference in AP Pons and MCP measurements at first visit. The AP Pons was smaller in patients compared to controls (22.6 ± 1.5 versus 23.6 ± 1.6 mm, P < 0.0001), whereas MCP transverse diameters were larger (17.1 ± 1.2 versus 16.4 ± 1.4 mm, P = 0.0001). The same held true for a subset including only FND/drug-induced/non-neurological cases (n = 108): at first visit, the AP Pons was smaller in patients than in controls (22.6 ± 1.6 versus 23.6 ± 1.6 mm, P < 0.0001), and MCP transverse diameters were larger (17.0 ± 1.3 mm versus 16.4 ± 1.4 mm, P = 0.006) (Supplementary Tables 1 and 2).
Rate of change over time of AP Pons diameters and MCP diameters
Rates of change could not be estimated separately for the individual non-MSA diagnoses because of the small numbers of individuals with repeat scans.
We compared the rate of change in AP Pons and MCP diameter in early versus late stages of MSA-C. There was no difference in the rate of change of the AP Pons diameter between the early phase of the disease, years 0-3 (−0.81 mm/year, n = 31 patients), and the late stage of disease, years 10-13 (−0.75 mm/year, n = 2 patients) (Fig. 3A).
Given the critical clinical need specifically to differentiate MSA-C from ILOCA, both of which are sporadic ataxias without a family history, we re-fit the mixed-model regression used in the above rate comparison, using Exploratory cohort cases with diagnoses of MSA-C (Possible/Probable, Definite, N = 88) and ILOCA (N = 15). The estimated average annual rate of pons atrophy from this analysis was −0.11 mm/year for ILOCA versus −0.86 mm/year for MSA, a difference that is highly statistically significant (T = −13.4 with 200 df, Cohen's-d effect size 0.95, P < 0.0001) and sufficiently large to be of obvious clinical importance. For the MCP, the estimated mean annual rates of decline were −0.08 and −0.89 mm/year for ILOCA and MSA, respectively (T = −11.3, 211 df, Cohen's-d effect size 0.78).
We evaluated Cohen's-d for the mean difference in rates of atrophy between MSA-P (n = 13) and MSA-C (n = 49). The effect sizes are of the form t-statistic/√(df + 1), where df + 1 is the effective sample size for the regression coefficient. The effect size for the pons was 0.35 and for the MCP (averaging left and right) 0.16. Cohen considered an effect size of 0.5 to be moderate, so these are all small effects, although the effect size is substantially larger for the pons than for the MCP and is near the threshold for a moderate effect size. Note that these are average differences between the groups and do not address the question of whether this measure would be useful as a diagnostic tool in an individual patient.
Evolution of BARS scores over time
The range of BARS scores throughout the disease course in patients with Possible/Probable/Definite MSA-C in the Exploratory cohort was 3-25. This range reflected very early disease in some, in whom the diagnosis was made with clinical certainty as the course evolved, and near-maximal severity in others. The BARS rate of change (mean ± SD) was 1.4 ± 0.17 points/year. The slope of change was non-linear, however, flattening out as the disease became more severe (Fig. 3B). The estimated rate of BARS worsening at the beginning of the course (in years 0-3, n = 31 individuals) was 2.4 BARS points/year, but much slower in the two individuals evaluated in years 10-13, who worsened by 0.56 BARS points/year. This apparent slowing of progression late in the course reflects the ceiling effect of the BARS once patients are severely affected. This relationship holds true for other rating scales that are complicated by floor and ceiling effects at both ends of the scales.34
BARS score as a reflection of AP Pons measure
Performance on the BARS correlated with the axial AP Pons diameter at the time of the first scan (r = −0.66, P < 0.0001). The correlation of BARS scores with AP Pons measures over time was somewhat less robust (r = −0.57 [95% CI −0.33, −0.74]), consistent with the slower rate of BARS score change late in the disease.
Sensitivity/specificity analyses for cut-off rates of AP Pons and MCP decline
With an average annual rate of pons diameter decrease of −0.87 mm in MSA-C, approximately half of the MSA-C patients had slower rates of atrophy than this, but their rate of change was still much faster than that in ataxia patients who did not have MSA-C. A cut-off of −0.87 mm is therefore not appropriate for separating MSA-C from other ataxias because it would have very high specificity but much lower sensitivity, in the range of ~50%. Using both the Exploratory and the Validation cohorts to analyse sensitivity/specificity and ROC curves with specific cut-offs for AP Pons and MCP decline in MSA-C compared to all other ataxias (Fig. 4), we were able to increase sensitivity for the detection of MSA-C considerably, with only a modest decrease in specificity. Thus, in the Exploratory cohort, using a threshold of AP Pons decline of −0.4 mm/year, we achieved sensitivity 0.92 and specificity 0.87 (AUC = 0.94). Similarly, for the MCP, a threshold decline of −0.5 mm/year yielded sensitivity 0.85 and specificity 0.79 (AUC = 0.90). These results were replicated in the Validation cohort, using the same pons decline cut-off of −0.4 mm/year (sensitivity = 0.87, specificity = 0.91, AUC = 0.95) and MCP cut-off of −0.5 mm/year (sensitivity = 0.70, specificity = 0.91, AUC = 0.89). This demonstrates that even when using pons/MCP diameter rates of decline that are far from our mean values, the sensitivity/specificity for predicting MSA-C versus other ataxias remains high.
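As an illustration of how this cut-off analysis could be reproduced, the sketch below classifies synthetic per-patient annual AP Pons rates; the group means, the MSA-C SD and the threshold come from the text, while the other-ataxia SD is inferred from the reported standard error (our assumption).

```python
# Sensitivity/specificity at a fixed cut-off plus ROC AUC on synthetic rates.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
msa = rng.normal(-0.87, 0.28, 88)      # MSA-C rates: mean and SD from the text
oth = rng.normal(-0.09, 0.18, 78)      # other ataxias: SD assumed as SE * sqrt(n)
rates = np.concatenate([msa, oth])
y = np.concatenate([np.ones(88), np.zeros(78)])   # 1 = MSA-C

cut = -0.4                              # mm/year threshold from the text
pred = rates <= cut                     # faster decline classified as MSA-C
sens = pred[y == 1].mean()
spec = (~pred[y == 0]).mean()
auc = roc_auc_score(y, -rates)          # more negative rate = more MSA-like
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, AUC={auc:.2f}")
```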
Prognosis in MSA-C: survival time, AP Pons diameter at time of death, residual life
The mean length of survival in the 86 patients who died across the Exploratory and Validation cohorts, from first motor symptom onset to death, was 8.3 ± 3.1 years (range 3.2-16.1 years). For these patients, the time between last MRI and death (mean ± SD) was 3.7 ± 2.5 years (Exploratory) and 2.7 ± 1.6 years (Validation). AP Pons measures at the time of the most recent scan were 17.4 ± 2.4 mm (Exploratory) and 17.3 ± 1.9 mm (Validation). These differences between the cohorts were not significant (time interval MRI to death, P = 0.08; AP Pons measure on last MRI, P = 0.87).
With a mean linear rate of decline of the AP Pons diameter of −0.87 mm/year, the mean predicted AP Pons diameter at time of death was 13.6 ± 3.2 mm (25th percentile 11.5 mm, 75th percentile 15.8 mm).
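For reference, the linear extrapolation implied here takes the form (our notation, purely illustrative)

$$D_{\text{death}} \approx D_{\text{last scan}} + r\,\Delta t,$$

so, applied to the Exploratory cohort means, 17.4 mm − 0.87 mm/year × 3.7 years ≈ 14.2 mm, of the same order as the reported mean of 13.6 mm; the published value was presumably computed per patient, which would account for the difference.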
The large SD (3.2 mm) in the predicted mean AP Pons diameter at time of death made it impractical to use this value to predict residual life for an individual patient. This was confirmed by the finding that, in the 86 patients who died, there was a poor correlation between actual residual life and predicted residual life calculated from the AP Pons measure (0.18, P = 0.1).
The annual rate of change of the MCP:pons ratio could be calculated in a subset of patients. Whereas all 88 patients in the Exploratory cohort were included in the mixed-model regression analysis for rate of change of AP Pons and MCP measures, for the MCP:pons ratio there were 53 of 60 patients with repeat scans suitable for analysis. Of the 78 patients with other ataxias, 19 had first and last scans suitable for this analysis. These images revealed a minimal annual change in the MCP:pons ratio over time: MSA-C (n = 53, −0.037 ± 0.030/year), other ataxias (n = 19, −0.004 ± 0.012/year), P < 0.0001. This minimal change was also seen in the Validation cohort: MSA-C (n = 44, −0.018 ± 0.030/year), other ataxias (n = 49, −0.0021 ± 0.012/year), P < 0.002.
Midbrain:pons ratio
In the Validation cohort, the mean ± SD midbrain:pons ratio at first scan was higher in Possible/Probable MSA-C (n = 47; 2 scans did not have corresponding sagittal sequences), at 0.80 ± 0.11 (range 0.66-1.0), than in other ataxias (n = 99, 0.72 ± 0.09, P < 0.0001) and MSA-P (n = 13, 0.67 ± 0.05, P < 0.0001). This difference was exaggerated in late-stage MSA-C when measured on the last available scan, at which time the ratio had increased to 0.90 ± 0.14 (range 0.6-1.19), reflecting preservation of the midbrain in the face of progressive atrophy of the pons.
Discussion
The hypothesis of this study was that clinical brain MRI can advance the diagnosis of MSA-C during life, distinguishing MSA-C with certainty from all other causes of cerebellar ataxia. We demonstrate for the first time the critical observation that the rate of decrease in the diameter of the pons and MCPs in MSA-C is uniquely rapid, faster than in any other ataxia or neurological disease in our cohorts. We therefore introduce this measure as a simple and powerful imaging biomarker for the diagnosis and progression of MSA-C.
We tested our hypothesis by measuring progressive loss of diameter of the AP Pons and MCPs in MSA-C as our primary outcome measures in large Exploratory and Validation cohorts (total MSA-C, n = 137 patients) at a single centre over 20 years, and compared the findings with those in other cerebellar and non-cerebellar neurological disorders. We also conducted an exploratory analysis of these measures in a small number of MSA-P patients compared to PD.
We determined normative values for the AP diameter of the pons and the transverse diameter of the MCPs in a healthy cohort, consistent with previous findings.16 We confirmed the validity of diameter measurements by showing tight correlations with volumetric assessments of the pons, and demonstrated intra-rater and inter-rater reliability in the measurements of these cardinal parameters.
In the Exploratory and Validation cohorts, we show that at first MRI in a patient with MSA-C, the pons and MCP diameters are significantly smaller than in all the other ataxias we investigated, collectively and individually, including SCA types 1, 2, 3, 5, 6, 7, 8 and 17, ILOCA, FA and FXTAS. In some SCAs, such as SCA2, marked olivopontocerebellar atrophy develops as the disease progresses, so comparison of imaging findings at any single time-point in an individual patient must be made at equivalent stages of the clinical course. With this caveat, the conclusion from this study about pons and MCP dimensions in MSA-C holds true.
Further, while Possible, Probable and Definite MSA-C are defined by clinical features, we show that these stages of MSA-C are indistinguishable from each other when measuring rate of change over time of the AP Pons and MCP transverse diameters. The AP Pons diameter declines at an average rate of 0.87 mm/year. This key radiological feature of volume loss in the pons and MCPs in MSA-C is seen in other cerebellar ataxias, previously designated olivopontocerebellar atrophies, but here we show that the rate of decline of the AP Pons and MCP diameters is uniquely faster in MSA-C. The estimated BARS rate of clinical decline in MSA-C is 1.4 BARS points/year when averaged over the course of the disease, and correlates with the brainstem measures, providing clinical validation of the radiological assessments. The BARS rate of worsening slows in the late stages of the disease, a consequence of the ceiling effect of the clinical scoring system in severely compromised patients.34 The AP Pons diameter measurement is therefore an accurate assessment tool throughout the course of disease.
In our cohorts, the duration of MSA-C from first motor symptoms to death was 8.3 ± 3.1 years (range 3.3-16.1 years). The range was narrower (4.9-14.7 years) if we excluded nine outliers. Seven patients died between 3.3 and 4.8 years, of whom four had a catastrophic emotional reaction to the illness and elected early hospice care, one died from septicaemia, one from traumatic brain haemorrhage, and one had developed severe autonomic symptoms 10 years prior to death. There are many potential factors contributing to earlier death in MSA, as occurs in patients with stridor, who are more likely to succumb to sudden death.35 Two of our patients had long survival times (15.9 and 16.1 years); both were elite athletes who noticed motor changes very early in their course.
In our Validation cohort, we show a trend for the pons and MCP diameters to distinguish MSA-P from PD, as suggested previously.13,36 The small numbers in this subset of the cohort may have precluded the determination of statistical significance, and this finding will need further evaluation. In addition, at the group level, there were small to moderate differences between MSA-C and MSA-P in diameter changes, more marked for the pons than the MCP. Annual decline in pontine/MCP measures may prove useful as a metric to distinguish MSA-C from MSA-P, but given that the number of MSA-P cases in our cohort was small, that the measurements were assessed at only two time-points, and that some MSA-P cases were purely parkinsonian whereas others had both parkinsonism and ataxia from the outset, this will need to be addressed in greater detail in a larger study of MSA-P.
A sex difference in AP Pons and MCP diameters was seen in the general movement disorder patients in the Validation cohort, but not in the healthy controls or the other patient cohorts in this study. We did not measure total brain volume or correlate brainstem measures with cranial size and brain volume, as this was outside the scope of this study, but these ratios would be pertinent in investigations that focus on sex differences in these measures in healthy controls. The finding of a sex difference in brainstem measures needs to be replicated but, critically for the purposes of our study and clinical question, the rate of change of the pons and MCP measures was independent of sex and of the brainstem diameters at onset.
Other imaging findings of note in MSA-C
There is a small decline in the MCP:pons ratio over time. The MCPs convey the axons of the pontine nuclei to the cerebellum, and as the pontine neurons and their axons degenerate, the volume loss in these structures occurs largely in tandem. The small decline in the ratio may reflect less atrophy of the pontine tegmentum compared to the pontine base, which contains the neurons of origin of the MCP fibres. In contrast, and in agreement with Peralta et al.,13 the midbrain:pons ratios are higher in MSA-C than in other ataxias and they increase over time, a consequence of the progressive pontine atrophy compared to preservation of the midbrain.
Comparison with the literature
Our findings are in line with recent studies that have focused on the role of imaging in the diagnosis and assessment of progression of MSA-C. Kim et al.37 compared MSA-C to SCAs and found that the HCBS and MCP hyperintensities were largely confined to MSA-C, with high individual and combined positive predictive values, but the differential value of these MRI signs decreased over time. In other studies comparing MSA-C and MSA-P, the HCBS and hyperintense putaminal rim signs were infrequently observed.38 The Movement Disorders Society Neuroimaging study group noted that specific features on conventional MRI suggestive of MSA-C include an increased midbrain:pons ratio; a decreased Magnetic Resonance Parkinsonism Index (pontine area/midbrain area × MCP width/superior cerebellar peduncle width),39 typically <5; the HCBS; cerebellar atrophy; MCP atrophy and T2 hyperintense signal in the MCPs; putaminal atrophy with bilateral hyperintensity in the posterior half of the putamen on susceptibility-weighted imaging and hyperintensity on apparent diffusion coefficient maps; and reduced MCP diameter <8 mm (more accurately, the MCP height).13,40 Dopamine transporter scans with reduced striatal uptake (not useful to differentiate from PD) and positron emission tomography with decreased metabolic activity in the basal ganglia, putamen, pons and cerebellum were also felt to be suggestive.13,40 Nicoletti et al.41 compared MSA to PD by measuring MCP diameter in the sagittal plane, and determined that mean MCP width (more accurately, height) differed from that in PD and healthy controls. Gama et al.42 compared MRI features in PD, MSA-C, MSA-P and PSP, assessing midbrain area, pons area and MCP and superior cerebellar peduncle (SCP) dimensions. Median MCP diameter (measured axially) was 17.1 mm in PD, 14.5 mm in PSP, 9.7 mm in MSA-C and 11.7 mm in MSA-P, consistent with our findings. SCP width was significantly reduced in PSP patients, and in MSA-C a pons area below 315 mm² showed good specificity (93.8%) and positive predictive value (72.7%). Kim et al.37 noted that MCP widths (assessing MCP height in the sagittal plane) were smaller and showed a greater decrease in MSA-C than in SCAs. Carré et al.16 assessed MRI in 80 patients with ataxia (26 with MSA-C) at baseline and one-year follow-up. Hyperintensity of the MCP and the HCBS was more frequent in MSA-C and had the highest specificity (98.5%) and positive predictive value (91.7%) for MSA-C. The AP Pons diameter differed in MSA-C (20.15 ± 2.22 mm) versus other ataxias (22.00 ± 2.62 mm), as did MCP diameters in MSA-C [12.46 ± 2.77 mm (5-18 mm)] versus other ataxias (14.50 ± 1.68 mm). At one-year follow-up, pons AP diameter dropped to 18.27 ± 2.68 mm in MSA-C but remained static in the other ataxias, while the MCP size decreased to 10.57 ± 2.88 mm in MSA-C versus 14.53 ± 1.81 mm in other ataxias. There was no significant change in the midbrain diameter at baseline/follow-up. These findings are fully consistent with our present observations.
In the SPORTAX registry of sporadic ataxia patients with onset >40 years,43 imaging findings were studied together with fluid biomarkers to distinguish patients with MSA-C from those with non-MSA sporadic adult-onset ataxia (SAOA). Cerebellar white matter, pons volume and a composite pons and MCP abnormality score (PMAS), together with the level of plasma neurofilament light chain, separated MSA-C from SAOA at baseline. In MSA-C, the pons-MCP score increased faster than in SAOA, pons volume had the highest sensitivity to change, and the PMAS was a predictor of faster progression. These results are also fully concordant with our extended longitudinal observations.
Limitations
In the normative dataset we used ~10 data points for each decade for the measurements of the AP Pons and transverse MCP diameters. The concern that this may not be sufficient data to provide an accurate estimate across the lifespan is offset by the fact that the values are strikingly stable.
The method of assessing the AP Pons and MCP diameters on clinical MRI has inherent challenges. These relate to the angle of the axial plane of section and the degree to which the mid-sagittal images are truly midline. We were aware of these difficulties in real time as the study progressed, which afforded us the opportunity to develop an approach to overcome this ubiquitous issue in clinical imaging, as described in the 'Methods' section. Imaging software now available can resample images in true axial and sagittal planes, but this depends on the resolution of the images, the thickness of section and the sophistication of the imaging centres. Thus, while it may be optimal to reorient or rotate MRI images to ensure standardized head orientation for manual annotation tasks such as those in this study, this facility may not always be available in the clinical setting.
We note that whereas high-resolution 3D T1-weighted imaging, exemplified by SPGR and MPRAGE sequences, is ideal for measurements, standard T1-weighted and T2-weighted images are entirely satisfactory for this approach. We avoid FLAIR images because of the indistinctness of the boundaries of the pons and MCP in this sequence. Cognizant of these measurement challenges, we show that this approach provides reliable and reproducible results.
In the Exploratory cohort, measurements were performed to an accuracy of 0.1 mm. In the Validation cohort, measurements were performed with less granularity, to the nearest 0.5 mm, to account for differences in measurements across different conventional imaging viewers. For example, some clinical radiology applications have 1 mm accuracy, as in previously accessible clinical radiology viewing software at our institution. This notwithstanding, the agreement in the measurements between the two cohorts was extremely tight, attesting to the internal consistency of the approach.
Intra-rater and inter-rater reliability for determination of the AP Pons and transverse MCP measures was assessed using 15 MRI scans reviewed by four independent raters. The measurements withstood statistical analysis and showed high intra-rater and inter-rater reliability. The numbers we used are also in line with previous studies developing imaging metrics for analysis, such as the development of the midbrain:pons ratio in PSP by Massey et al.,33 which included a single rater reviewing 21 pathological scans.
Our available data indicate that the rates of atrophy of the AP Pons and MCP are linear throughout the course of the illness. We had access to sequential imaging in only a small number of patients in late-stage disease (n = 2), so the strength of this conclusion must necessarily be tempered by this numerical asymmetry.
In the movement disorder cohort that did not include patients with ataxia or MSA, a sex difference was present in brainstem measures, with females smaller than males. This difference was not present in the remainder of the patient cohorts, and it was not reported in the 1000 Functional Connectomes healthy control dataset. Sex differences in regional brain volumes are often explained by differences in head size, but we did not have head size measurements in the clinical studies and therefore cannot comment on this further. It is conceivable that controlling for head size may shrink the standard deviation and improve diagnostic sensitivity in the future.
Since the completion of this study, we have encountered the exceedingly rare JC virus granule cell neuronopathy (JCV-GCN), which produces very rapid volume loss in the pons and MCPs, similar to, and perhaps even more aggressive than, that in MSA-C, without the typical appearance of JC virus-associated progressive multifocal leukoencephalopathy.44 The clinical constellation of JCV-GCN is entirely different from MSA-C and the two disorders should not be confused with each other, but this occurrence emphasizes the pitfall of considering a single finding to be pathognomonic of a medical condition.
Future directions
Quantitative morphometric analyses of the pons and MCP that provide more granularity are likely to detect smaller changes over shorter periods. The measurement interval may then decrease from ≥1 year (as in this study) to a matter of months, enhancing the real-time utility of this approach for clinical care and research.
The morphometric changes in the pons are not confined to its loss of diameter and volume. The shape of the pons becomes severely distorted, with anterior pontine beaking and asymmetry in the rostral-caudal dimension, reflecting our observation that the pons does not shrink uniformly. Novel imaging techniques may shed light on the structural disintegration underlying the pontine atrophy and link the known pathology of axons and their neurons of origin with the evolution of the disease.
Our observations of a diminished AP Pons diameter already notable at the first visit after motor symptom onset, together with the stable rate of change of AP Pons diameter over time, have implications for even earlier diagnosis of MSA-C. RBD is a harbinger of neurodegenerative synucleinopathy,45 and otherwise unexplained urinary urgency is also a common early symptom. By comparing the morphometric findings in a patient with RBD against normative databases, and documenting an annual AP Pons rate of decline of −0.87 mm/year, or the equivalent volumetric change in quantitative morphometry, we predict that it will be possible to diagnose MSA-C even before the onset of motor symptoms and the inexorable progression of the remainder of the syndrome, opening the way to future prevention of the disorder.
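A minimal sketch (ours, not a clinical tool) of how a single case might be screened against normative values and the rate threshold: the normative mean/SD below are the control values quoted earlier in this paper, and the −0.4 mm/year cut-off comes from the ROC analysis above.

```python
# Screen one case: Z-score against normative values and annual rate of decline.
NORMS = {"ap_pons": (23.6, 1.6), "mcp": (16.4, 1.4)}   # mm: (mean, SD) from controls

def z_score(measure: str, value_mm: float) -> float:
    mean, sd = NORMS[measure]
    return (value_mm - mean) / sd

def annual_rate(first_mm: float, last_mm: float, years: float) -> float:
    return (last_mm - first_mm) / years

# Example: AP pons 21.0 mm at baseline and 19.6 mm two years later.
print(f"baseline Z = {z_score('ap_pons', 21.0):.2f} SD")      # about -1.63
print(f"rate = {annual_rate(21.0, 19.6, 2.0):.2f} mm/year")   # -0.70, beyond the -0.4 cut-off
```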
Conclusions
In this 20-year longitudinal clinical and imaging study, we show that AP Pons and MCP transverse diameters are phenotypic imaging biomarkers in MSA-C. In the correct clinical context, AP Pons diameter decline of >0.4 mm/year has a sensitivity and specificity of >90% for the diagnosis of MSA-C. Further, a rate of decline of 0.87 mm/year is sufficient for the definitive diagnosis of MSA-C in an individual patient during life. This simple and powerful approach has deep implications for diagnosis, prognosis and therapy in MSA-C. It also implies the opposite conclusion: an individual with adult-onset sporadic ataxia who does not show the prerequisite annual change in AP Pons and MCP diameter does not have MSA-C, and the search for the cause of the clinical syndrome should continue. The power of our innovation is that it enables the clinician to make the diagnosis of MSA-C with certainty, and to do so earlier than the current diagnostic criteria15 allow. Our method capitalizes on a simple, ubiquitous imaging tool, without the need for morphometric analysis or complex algorithms. It may nevertheless be useful to combine these imaging features with biomarkers such as neurofilament light chain46-48 and novel assays of α-synuclein on skin biopsy49 to aid diagnosis and further characterize severity and rate of neurodegeneration. Future studies may be able to characterize and quantify these imaging observations with finer granularity and over shorter time frames.
Figure 1
Figure 1 Pons and MCP measures in MSA-C. Top row: sagittal measurements (left image) and axial measurements (middle and right images) of the AP Pons diameter and transverse MCP diameters in a healthy control. Bottom four rows: sequential images from 2011 to 2020 in a patient with MSA-C showing progressive pons and cerebellar atrophy. Note that the AP Pons diameter was ascertained from the axial plane of section only in 2011. In subsequent scans, the axial plane was not perpendicular to the long axis of the brainstem, so the more accurate measure was derived from the sagittal image that was at or closest to the midline, as seen in the corresponding axial sections. The parasagittal views of the cerebellar hemisphere show the progressive cerebellar atrophy, with shrinkage of the entire hemisphere, loss of the corpus medullare and prominence of the cerebellar folia. AP, anteroposterior; MCP, middle cerebellar peduncle; MSA-C, multiple system atrophy of the cerebellar type. Measurements in small text from screenshots of the patient's MRI are shown in black font on a white background for readability.
Exploratory cohort
Whisker plots for rate of change in AP Pons and MCPs in MSA-C versus other ataxias, and line graphs showing predicted initial dimensions and change over time, are shown in Fig. 2. There was no difference in annual rate of change in AP Pons and MCP diameters between Possible/Probable (n = 74) and Definite (n = 14) MSA-C cases. Using a mixed-model regression analysis, with subject as a random effect and fixed effects including time since onset and diagnosis, the rate of decline (mean ± standard error, SE) in axial AP Pons diameter in Possible/Probable/Definite MSA-C (n = 88) was −0.87 ± 0.04 mm/year, different from all other ataxias as a group (n = 78, −0.09 ± 0.02 mm/year), P < 0.0001, and individually. The mean ± SE MCP change per year for Possible/Probable/Definite MSA-C (n = 88) was −0.84 ± 0.05 mm/year, also different from all other ataxias as a group (n = 78, −0.08 ± 0.02 mm/year), P < 0.0001, and individually. The SD of the change of diameter in the MSA-C cohort was 0.28 mm/year for the AP Pons measure and 0.35 mm/year for the MCP. It was not possible to compute the SD for the non-MSA diagnoses because of the small numbers of individuals with repeat scans.
Figure 2
Figure 2 Rates of decline of pons and MCP diameters in MSA-C versus other ataxias. Estimated change over time in the Exploratory cohort for AP Pons and MCP diameters, derived from a mixed-model regression with subject as a random effect. (A) Whisker plot for AP Pons. (C) Whisker plot for MCPs, averaged across both MCPs for each diagnosis. (B) Line graphs for AP Pons. (D) Line graphs for MCPs. AP, anteroposterior; Ctrl, healthy controls; MCP, middle cerebellar peduncle; MSA-C, multiple system atrophy of the cerebellar type; ILOCA, idiopathic late-onset cerebellar ataxia; SCA, spinocerebellar ataxia; FXTAS, fragile X-associated tremor/ataxia syndrome. The number of patients in each diagnostic category is listed in Table 1.
Figure 3
Figure 3 Scatter plots showing change over time of pons diameter and BARS scores in MSA-C. Scatter plots of rate of change over time of (A) AP Pons diameter and (B) BARS scores in the Exploratory cohort. There were 88 patients with MSA-C in the Exploratory cohort. Of these, 31 were in the early stage of the disease (0-3 years), and two patients were in the late stage of the disease (10-13 years). AP, anteroposterior; BARS, Brief Ataxia Rating Scale; MSA-C, multiple system atrophy of the cerebellar type.
Figure 4
Figure 4 ROC curves for predicting the diagnosis of MSA-C. ROC curves for the prediction of MSA-C in (A, B) the Exploratory cohort and (C, D) the Validation cohort using a threshold of −0.4 mm/year for the AP Pons and −0.5 mm/year for the MCPs. AP, anteroposterior; MCP, middle cerebellar peduncle; MSA-C, multiple system atrophy of the cerebellar type; ROC, receiver operating characteristic; AUC, area under the curve.
Table 3 Normative values for the diameters of the AP Pons measured in the sagittal and axial planes, and the transverse MCP diameters
Normative values for the AP Pons and MCP diameters derived from the 1000 Functional Connectomes Project in the neuroimaging data repository of the Neuroimaging Tools and Resources Collaboratory, Project ID: fcon_1000 (www.nitrc.org), and a Z-score guide to the degree of atrophy for a single case using standard deviations below the mean. In the Z-score guide, the MCP value of 16.4 mm represents the average of the right and left MCP measures. SD, standard deviation; L, left; R, right; AP, anteroposterior; MCP, middle cerebellar peduncle. The mean values for the entire Normative cohort are highlighted in the bottom row in bold for emphasis. | 2024-02-27T18:27:26.727Z | 2023-12-28T00:00:00.000 | {
"year": 2024,
"sha1": "a1cc5ad23a53b4a2f73a69018937ccfc8b6aacb5",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "26d429c214872ee729570b9463f67e40939b983c",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
6446109 | pes2o/s2orc | v3-fos-license | Adult nasopharyngeal hairy polyp presenting with middle ear effusion
Received 29-12-2014, Accepted 09-01-2015, Available Online 22-01-2015 Ozel Van Istanbul Hospital, Department of Otorhinolaryngology, Van Turkey Celal Bayar University, Faculty of Medicine, Department of Otorhinolaryngology, Manisa Turkey Ozel Iskenderun Gelisim Hospital, Department of Otorhinolaryngology, Iskenderun Turkey Ozel Iskenderun Gelisim Hospital, Department of Pathology, Iskenderun Turkey *Corresponding Author: Burak Ulkumen E-mail: drburak@gmail.com Adult nasopharyngeal hairy polyp presenting with middle ear effusion
Introduction
Hairy polyp (HP) is a congenital benign mass which was first described by Brown-Kelly in 1918 [1]. It typically consists of mature ectodermal and mesodermal elements [2]. Owing to this composition and its improbable location, it has also been named a choristoma [3]. It is mainly seen at birth or in infancy and may originate from any sub-region of the naso-oropharynx [4,5]. HP of infancy commonly presents with respiratory distress, feeding difficulties and, less frequently, middle ear effusion. The intensity of the symptoms depends on the site of involvement and the size of the lesion [5,6]. Adult presentation of HP is very rare and, to our knowledge, only five cases have been reported so far. In adults, the main reported symptoms were epistaxis, nasal obstruction and dysphagia [3]. Middle ear effusion has not been reported in any of the adult HP cases to date. We present the sixth adult case, in which the main symptom was hearing impairment due to middle ear effusion.
Case
A 69-year-old woman was referred with a history of hearing loss. She also reported a fullness sensation in her left ear for 2 weeks. Oral amoxicillin had been prescribed by a general practitioner with the diagnosis of acute otitis media. However, her symptoms worsened, and hearing loss with pain in the left ear during swallowing had begun 5 days prior to admission. When thoroughly questioned, the patient also complained of snoring and left-sided nasal obstruction. She stated that the snoring had started 6 months earlier and had progressed up to the time of referral. She was otherwise healthy, with no other major medical problems. On oropharyngeal examination, a pale, smooth and pedunculated mass hanging from the nasopharynx just posterosuperior to the left palatopharyngeal arch and uvula was detected (Fig. 1).
Right otoscopic examination was normal. On left otoscopic examination, the tympanic membrane was hyperemic, with effusion in the cavum tympani. Transnasal rigid endoscopic nasopharyngeal evaluation revealed a pedunculated, skin-covered mass originating from the lateral nasopharyngeal wall. Both nasal passages were otherwise normal. Pure tone audiometry confirmed a mild conductive hearing loss in the ipsilateral ear.
Tympanometry was Type B for the left and Type A for the right ear, compatible with the otoscopic findings. Magnetic resonance imaging (MRI) with intravenous gadolinium diethylenetriaminepentaacetic acid (Gd-DTPA) was performed. It revealed a well-circumscribed mass extending from inside the left eustachian tube to the nasopharynx and oropharynx, measuring 30×7×18 millimeters (Fig. 2). Histopathology demonstrated a 35×15×10 mm polypoid lesion covered by stratified squamous epithelium with associated seromucinous glands and lymphoid follicles, compatible with the diagnosis of HP.
After 1 year of follow-up, there was no sign of recurrence on either endoscopic evaluation or MRI. The middle ear effusion had also totally resolved.
Discussion
Although the classification of germinal cell-derived tumors of the naso-oropharynx was first proposed by Arnold in 1870 [7], the term 'hairy polyp' was first used by Brown-Kelly in 1918 for a benign nasopharyngeal congenital mass having both ectodermal and mesodermal components [1,2]. It is called hairy because of its outer layer, composed of mature epidermis that frequently has a hairy appearance. In Arnold's classification, HP was defined as a 'choristoma', which describes a lesion mistakenly separated from its mother tissue [3,7]. He also described it as a type of dermoid owing to its mature bigerminal composition [7].
HP is relatively rare, with an incidence of 1 in 40,000 live births, and has a tendency to occur in female newborns [8,9]. Accordingly, HP is typically defined as a disease of early infancy and is exceptional after the first year of life [10]. Only five adult cases have been reported up to now, the oldest being 71 years [4,11]. We believe that our case is the sixth adult HP and the second oldest reported so far.
HP of the naso-oropharynx mostly originates from the lateral pharyngeal wall, followed by the tonsils, palatal arches and soft palate [4,5]. Almost two-thirds of lateral pharyngeal wall HPs originate from the eustachian tube [3]. In our case, it also originated from the eustachian tube, resulting in middle ear effusion. HP of infancy usually presents with feeding difficulties, drooling, respiratory distress, hemoptysis, coughing, otorrhea, hearing loss, vomiting and recurrent ear infections [5], while in adults it commonly presents with snoring, recurrent epistaxis, dysphagia and cough [3]. In our case, the main symptoms were hearing loss and otalgia, presumably caused by the middle ear effusion. Among the adult cases, middle ear involvement has not been reported before; in the literature, adult HPs presented mainly with symptoms of nasal obstruction and swallowing difficulties [3]. The differential diagnosis of HP, including teratoma, hamartoma and dermoid cyst, can sometimes be challenging because of similar histopathological findings. Teratomas can be differentiated by their trigerminal origin and the observation of endodermal derivatives, while hamartomas can be identified by the presence of a single germ cell layer [11]. Dermoid cysts, for their part, contain the typical keratin flakes. In the present case, none of these findings was seen; instead, there was a bigerminal structure consisting of ectodermal and mesodermal components with an epidermal lining, compatible with HP. The mesodermal inner core, mainly composed of fat, can also be seen on MRI (Fig. 2). Teratomas, by contrast, exhibit a more blended appearance of germ cell layers and may sometimes contain bone or teeth fragments that can be hyperdense on imaging, unlike in our case.
The main treatment modality for HP is total surgical excision. Malignant potential, metastasis or recurrence after complete removal has not been reported so far [8]. Although it can be removed trans-nasally or trans-orally, in our opinion the best approach is a combined naso-endoscopic and trans-oral approach, which provides better visualization and control [12]. Thus, we used the combined endoscopic approach, in which we first excised the peduncle from the eustachian tube under endoscopic view and then removed the mass trans-orally.
Naso-oropharyngeal HPs are typically seen in female neonates, with a left-sided predominance, and are extremely rare in the adult population [10,13]. We present the sixth case of adult nasopharyngeal HP, with an uncommon presentation. We achieved total removal of the lesion by a combined naso-endoscopic and trans-oral approach. We believe that endoscopic guidance is essential for total removal to prevent recurrence in adult HPs.
Figure 1:
Figure 1: Oropharyngeal view revealing the hanging polypoid mass just behind the uvula and the palatopharyngeal arch.
Figure 2:
Figure 2: MRI with intravenous contrast. (a) Coronal view revealing a well-circumscribed mass (arrow) that is peripherally enhanced after intravenous contrast. (b) The same lesion (arrow) in axial view, in which dilatation of the orifice of the left eustachian tube can be noticed. Surgical excision was performed under endoscopic view by dissecting the peduncle from the orifice of the eustachian tube under general anesthesia. After freeing the peduncle, the mass was taken out trans-orally. A concomitant paracentesis of the left ear was also performed. There was no major bleeding. | 2017-09-10T04:31:50.282Z | 2015-01-10T00:00:00.000 | {
"year": 2015,
"sha1": "a4eff8ede16fc97bbf2762943a418c1fb2091247",
"oa_license": "CCBY",
"oa_url": "https://dergipark.org.tr/tr/download/article-file/183885",
"oa_status": "GREEN",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "a4eff8ede16fc97bbf2762943a418c1fb2091247",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
96461632 | pes2o/s2orc | v3-fos-license | Parametric Study on Wave Interaction with a Porous Submerged Rubble Mound Breakwater Using Modified N-S Equations and Cut-Cell Method
Article History: Received: 29 Aug. 2016 Accepted: 13 Feb. 2017 In this paper, wave transformation over a submerged sloped breakwater and its hydraulic performance were simulated by developing a numerical model in Fortran. The code was established by combining porous flow with a two-phase model using the VOF method. Modified Navier-Stokes and k-ε equations were implemented in the model to simulate the flow in porous media. The cut-cell method was modified to simulate flow across the sloped boundary of the porous medium more accurately and was then applied in the governing equations to increase the accuracy of the model. The validity of the present program was investigated through comparisons with the available experimental data. The results showed that an increase in the inertia coefficient or wave period, as well as a reduction in porosity, leads to phase lags between the incident and transmitted waves. Furthermore, parametric studies were performed on the effect of submerged porous breakwater crest widths and heights on the transmitted waves, leading to useful results for design criteria.
Introduction
Breakwaters are usually used to provide a calm area for the loading and offloading of ships, or to protect the shoreline by forcing waves to break and release their energy. These structures have different geometries and shapes related to their position and environmental conditions. The effect of wave motion on breakwaters and the hydraulic performance of these barriers have been studied over the past few decades. Vilchez et al. [1] developed a method to evaluate the hydraulic performance resulting from the interaction of perpendicularly impinging water waves with various types of breakwater. They used data obtained from physical tests in a wave flume with irregular waves. Jensen et al. [2] re-examined the porous media equations and considered new calibration cases to achieve a better understanding of the variation of the resistance coefficients. In their work, constant values of the resistance coefficients were recommended for a broad range of flow conditions. Liu and Li [3] used velocity potential decompositions and matched eigenfunction expansions in the porous media to obtain a new analytical solution for wave reflection and transmission by a surface-piercing porous breakwater. They used complex dispersion relations in the porous breakwater and avoided finding complex wave numbers and handling a non-self-adjoint eigenvalue problem. Yang et al. [4] used projection schemes and a finite element method on unstructured grids to solve the Euler/Navier-Stokes equations for incompressible fluid in an Arbitrary Lagrangian-Eulerian (ALE) frame. They studied the effects of rubble types on the wave dissipation and wave overtopping of a rubble mound breakwater. In their research, small solid blocks were used in the domain to simulate the porous medium. Wu and Hsiao [5] implemented a numerical model based on the Volume-Averaged Reynolds-Averaged Navier-Stokes (VARANS) equations coupled with a non-linear k-ε turbulence closure to simulate the propagation of solitary waves over a submerged permeable breakwater. The porous medium, consisting of uniform glass spheres, was mounted on the seafloor. They used this model to estimate the reflection, transmission and dissipation of waves using the energy integral method, varying the aspect ratio and the grain size of the permeable obstacle. Hieu and Vinh [6] used a VOF-based two-phase flow model to study the interactions of waves and a seawall protected by a submerged porous structure with a permeable terrace. They concluded that the overtopping rate was strongly dependent on the energy dissipation due to the drag force. Mendez et al. [7] analyzed the influence of wave reflection and energy dissipation in submerged porous media. For this purpose, they obtained analytical expressions for mean quantities such as mass and energy fluxes in terms of shape functions. Nevertheless, their model seems impractical for arbitrary breakwater geometries. Karim et al. [8,9] modeled wave motion in porous structures using the VOF method to treat the free surface. In their model, the equations were applicable only to rectangular porous cells at the wall boundaries. Zhang et al. [10] developed an integrated model based on VARANS and Biot's poro-elastic theory to investigate the interaction of waves with a submerged permeable structure. They studied wave behavior through a parametric study on wave and structure characteristics.
Guta and Sundar [11] simulated progressive waves over a rectangular porous structure using a time-dependent incompressible Navier-Stokes-Brinkman system. They used a finite volume technique to discretize the governing equations. With their numerical model, in the case of sloped boundaries, they had to use smaller grids and other techniques, which led to smaller time-steps. Zhao et al. [12] simulated breaking waves with a multi-scale turbulence model, focusing on turbulent production and dissipation in the wave-breaking process. Their focus was on the performance and accuracy of turbulence models in the simulation process. Gracia et al. [13] presented a numerical model to study wave propagation above a low-crested permeable breakwater. In their work, the wave elevation was recorded at different points above the structure in the breaking zone to investigate flow motion in the porous media. Their model had some limitations, as it was applicable only to low-crested structures. Hieu et al. [14] simulated the breaking of linear waves interacting with a porous submerged breakwater. They investigated the hydraulic performance of this structure in interaction with waves and the effect of porosity on the reflection, transmission and dissipation coefficients. The interaction of solitary and random waves with porous structures has been studied by other researchers in recent years. For example, Lynett et al.
[15] presented a numerical model for solitary wave interaction with vertically walled porous structures. In their research, they focused on the reflection, transmission and diffraction of solitary waves by the porous structure. For random waves, Lara et al. [16] applied RANS equations to model wave interaction with submerged permeable structures. They obtained good results for the height envelopes, mean level and spectral shape of the free surface displacement, as well as the dynamic pressure inside the breakwater. The broad literature review on wave-porous media interaction in this section shows that there is no clear approach to modeling the sloped boundaries of a porous medium with sloped cells. Therefore, in this research, a VOF-based numerical solution is presented to model unsteady flow using the Navier-Stokes equations. A new algorithm, based on the cut-cell method, was developed to simulate flow through the porous media and increase the performance of the model. This model was employed to simulate wave motion in the porous medium of a submerged breakwater. The response of the breakwater to incident waves with different heights and periods was investigated. Special attention was then paid to the effect of the rubble mound breakwater characteristics on its performance in terms of damping rate and transmitted wave characteristics. It should be noted that with the present model, in which the cut-cell method and porous-flow equations are combined, there is no need, unlike other numerical works in this area, to use small grids for the boundary cells on the sloped side of the porous breakwater. The slope of the boundary can be modeled directly with sloped cells. This approach leads to a more efficient and faster computer code, which does not require smaller time steps or additional near-wall equations such as the wall-function method.
Governing Equations
In this study the fluid was assumed to be viscous and incompressible, so the Navier-Stokes and continuity equations were used as governing equations, extended within the porous medium following [9]. In these equations, t is the time, v and w are the velocity components in the y and z directions, respectively, ν is the kinematic molecular viscosity, ν_t is the kinematic eddy viscosity, g is the gravitational acceleration, φ = p/ρ + gz, in which p is the mean pressure and ρ is the fluid density, and R_y and R_z are the resistance forces exerted by the porous media. The resistance forces are defined in terms of the drag and inertia coefficients and the volumetric and superficial porosities. For a cell outside the porous medium, the volumetric and superficial porosities are unity, while for a cell inside the porous medium these coefficients are between zero and unity.
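The equations themselves did not survive the text extraction. For orientation only, a commonly used form of the extended continuity and momentum equations for porous media, in the spirit of the Karim et al. formulation [9] on which this model builds, is sketched below; the exact terms and coefficients in the original paper may differ.

$$\frac{\partial(\gamma_y v)}{\partial y}+\frac{\partial(\gamma_z w)}{\partial z}=0$$

$$\lambda_v\frac{\partial v}{\partial t}+\frac{\partial(\gamma_y v v)}{\partial y}+\frac{\partial(\gamma_z w v)}{\partial z}=-\gamma_v\frac{\partial \phi}{\partial y}+\frac{\partial}{\partial y}\left[\gamma_y(\nu+\nu_t)\frac{\partial v}{\partial y}\right]+\frac{\partial}{\partial z}\left[\gamma_z(\nu+\nu_t)\frac{\partial v}{\partial z}\right]-R_y$$

$$R_y=\frac{C_D}{2\,\Delta y}\,(1-\gamma_y)\,v\sqrt{v^2+w^2},\qquad \lambda_v=\gamma_v+(1-\gamma_v)\,C_M$$

with an analogous z-momentum equation containing the gravity term and R_z; here γ_y and γ_z are the superficial porosities and γ_v the volumetric porosity.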
For turbulent flow, a k-ε method modified for the porous medium was used [17]. The kinematic eddy viscosity ν_t is defined in terms of k and ε, where k is the turbulent kinetic energy and ε is its dissipation rate. In this method, two model transport equations are used to describe the dynamics of the turbulence.
The first equation governs the generation and transport of the turbulent kinetic energy, and the second the rate of viscous dissipation, with the production term G_s expressed as a function of the velocity derivatives. In this paper, the behavior of the free surface was simulated using the VOF method. This method is based on an advection equation for a volume fraction F, whose value in each cell indicates the cell's position relative to the free surface. Youngs' VOF method was implemented in the present model to track the free surface. In this method, the advection of the free surface due to the velocity field is reconstructed with sloped lines. The first step of the method is to find the angle of the free-surface cell with the horizontal axis (β), which is estimated from the information of the eight neighboring cells (Figure 1). Estimation of the advection in this method is based on four essential configurations (Figure 2), while another twelve configurations can be obtained from combinations of these essential ones. In this process, another angle (α) must be calculated for each cell.
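Neither the k-ε relations nor the VOF advection equation survived the extraction. For orientation, the standard forms that the porous-media modifications in [17] and the description above adapt are sketched here; the constants shown are the conventional ones and are our assumption, so the paper's exact expressions may differ.

$$\nu_t=C_\mu\frac{k^2}{\varepsilon},\qquad
\frac{\partial k}{\partial t}+u_j\frac{\partial k}{\partial x_j}=\frac{\partial}{\partial x_j}\left[\left(\nu+\frac{\nu_t}{\sigma_k}\right)\frac{\partial k}{\partial x_j}\right]+G_s-\varepsilon$$

$$\frac{\partial \varepsilon}{\partial t}+u_j\frac{\partial \varepsilon}{\partial x_j}=\frac{\partial}{\partial x_j}\left[\left(\nu+\frac{\nu_t}{\sigma_\varepsilon}\right)\frac{\partial \varepsilon}{\partial x_j}\right]+\frac{\varepsilon}{k}\left(C_{1\varepsilon}G_s-C_{2\varepsilon}\varepsilon\right),\qquad
G_s=\nu_t\left(\frac{\partial u_i}{\partial x_j}+\frac{\partial u_j}{\partial x_i}\right)\frac{\partial u_i}{\partial x_j}$$

with C_μ = 0.09, σ_k = 1.0, σ_ε = 1.3, C_1ε = 1.44 and C_2ε = 1.92. In the y-z, (v, w) notation of this paper, the VOF advection equation would commonly read

$$\frac{\partial F}{\partial t}+\frac{\partial(vF)}{\partial y}+\frac{\partial(wF)}{\partial z}=0,$$

with F = 1 in a cell full of fluid, F = 0 in an empty cell, and 0 < F < 1 in a free-surface cell.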
Cut-Cell Method Implementation to the Porous Media
In this paper, the partial cell method was applied to the porous medium. This method has been used by several researchers to handle sloped and arbitrary meshes at the boundary between the fluid and a solid structure (Figure 3) when discretizing the equations. For example, in solving the Navier-Stokes equations with the simplified marker and cell (SMAC) method, applying these coefficients in the discretization of the continuity equation allows the sloped geometry to be represented on the staggered grid. If one side of a cell is solid, its wall coefficient is zero, while for a fully open face it is unity. This idea was implemented here to represent the slope of the boundaries of the porous medium (Figure 4): wall coefficients were assigned to each face of a cell, and a volume coefficient was obtained from them. Using these coefficients for a porous cell, the advection equation was modified with corrected coefficients accordingly. This new formulation allows one to simulate free-surface motion through cut-cell porous cells. We call this approach the Cut-Cell Porous method; it was applied in the numerical model in this research and is described in the next sections.
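The coefficient definitions did not survive extraction. As an illustration of the idea only (the face notation below is ours, not necessarily the paper's), a discrete continuity equation with face-openness coefficients on a staggered cell can be written as

$$\frac{\gamma_e v_e-\gamma_w v_w}{\Delta y}+\frac{\gamma_n w_n-\gamma_s w_s}{\Delta z}=0,$$

where each face coefficient γ ∈ [0, 1] is the open fraction of the east, west, north or south face (0 for a fully solid face, 1 for a fully open one), and the cell volume coefficient is the open fraction of the cell area, e.g. an average of the face coefficients.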
Solution Method
The staggered grid system was applied to discretize the governing equations with a finite difference technique. In this grid system, pressure and viscosity are located at the cell centers and the velocity components are defined on the cell faces (Figure 5). To solve the Navier-Stokes and continuity equations, the SMAC method was used [23]. In this method, a predictor velocity field is first computed from the momentum equations; a Poisson equation is then solved for the pressure correction p′ (where d denotes the discrete gradient operator); and finally the velocity and pressure are updated for the new time step. The convection fluxes in the horizontal and vertical directions and the diffusion terms are evaluated on the staggered grid, and the allowable time step is limited by stability conditions; the minimum of these time-step limits was used in the numerical model at each step.
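The discrete relations are absent from the extracted text; a generic SMAC projection cycle matching the description above can be sketched as follows (our notation):

$$v^{*}=v^{n}+\Delta t\left[-\,\mathrm{CONV}(v^{n})+\mathrm{DIFF}(v^{n})-\gamma_v\,d\phi^{n}-R\right]$$

$$\nabla\cdot\left(\gamma\,d\,p'\right)=\frac{1}{\Delta t}\,\nabla\cdot\left(\gamma\,v^{*}\right)$$

$$v^{n+1}=v^{*}-\Delta t\,\gamma_v\,d\,p',\qquad \phi^{n+1}=\phi^{n}+p'$$

where CONV and DIFF are the discrete convection and diffusion operators, R is the porous resistance term, and the stability-limited time step is taken as the minimum over the convective and diffusive limits.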
In the computational domain, still water level was considered as the initial condition; therefore, hydrostatic pressure was the initial pressure in the computational cells at t = 0. At the top and bottom walls, zero velocity was applied for the vertical velocities, while a Neumann boundary condition was assumed for the tangential velocities. To generate a linear wave with amplitude A and angular frequency ω, a wavemaker boundary condition was imposed on the left-hand side of the numerical wave tank. All outgoing waves are absorbed at the open boundary, so there is no reflection at the right-hand side of the NWT. It should be noted that the damped waves mostly behave in a linear way in this system. Figure 6 shows the monitored wave profile at the right-hand side of the flume and its comparison with linear wave theory; it is seen that it is very close to the linear wave.
Results
In order to assess the accuracy and validity of the numerical model, results of generated linear waves and their interaction with a vertical porous structure were compared with the available data.
Wave Verification
In this section, a numerical wave tank with a length of 15 m, a height of 0.81 m and a depth of 0.5 m was considered. The waves generated by a piston-type wavemaker had an amplitude of 0.03 m and a period of 1.2 s. Three grids in the x and y directions were selected (Table 1) for wave generation in the numerical model. The results are presented in Figure 8. It is evident from this figure that there is good agreement between the numerical results with the three different grids and the linear wave. Therefore, Grid I was applied in this study for all wave numerical modeling.
Validation of Wave-Structure Interaction Results
The height of damped waves at the rear wall of a vertical porous breakwater was estimated by Karim et al. (2003) using a numerical model [8]. In their study, the extended Navier-Stokes equations and the VOF method were implemented assuming viscous flow. In the present study, their results were used to validate the model for wave and porous medium interaction. Wave height damping was studied for different breakwater widths for an incident wave of 0.07 m height and 1.6 s period in a NWT. Figure 9 shows this test schematically. The medium porosity, drag and inertia coefficients were selected as 0.6, 3.5 and 0.5, respectively, to be consistent with those of the Karim et al. tests.
Results of Wave Interaction with a Submerged Breakwater
A schematic of the submerged breakwater in the numerical wave tank is shown in Figure 11. The inertia and drag coefficients were assumed to be 1.2 and 2.5, consistent with Hieu and Tanimoto [14], with a porosity of 0.5. The incident wave had a period of 1.2 s and an amplitude of 0.06 m. The breakwater crest width was selected as a multiple of the wave length λ. The wave elevations were recorded at four different gauges and the results are presented in Figure 12. This figure shows that the wave height increases as the wave reaches the crest of the submerged breakwater. In the first step, the transmitted wave time histories were considered for different incident wave amplitudes. Figure 13 shows the non-dimensional amplitude of the transmitted waves at G4 against non-dimensional time for different incident wave amplitudes. It is evident from this figure that as the incident wave height increases, the transmitted wave amplitude decreases. In the next step, the effect of the incident wave period on the transmitted waves was considered. Figure 14 shows the non-dimensional amplitude of the transmitted waves at point G4 against non-dimensional time for different incident wave periods at a constant height of 0.06 m.
It can be seen that as the incident wave period increases, the phase shift between the transmitted waves also increases in the form of a time lag.
In order to study the effect of the drag coefficient on wave damping, three different drag coefficients were selected. Figure 15 shows the effect of the drag coefficient on transmitted wave damping. As shown in this figure, for the three drag coefficients of 2.1, 2.5 and 2.9 there is no major difference in the damping of the transmitted waves. To consider the effect of the inertial term, different inertia coefficients were applied in the numerical model. Figure 16 shows the time histories of the transmitted waves for CM = 0.8, 1.2 and 1.6. It is clear from this figure that the wave attenuation is almost the same for the different inertia coefficients; however, a phase shift exists between them.
To consider the effect of the porosity of the structure on the transmitted waves, porosities of n = 0.4, 0.5 and 0.6 were selected. The results are presented in Figure 17. This figure shows that there is a direct relationship between the phase delay of the transmitted waves and the porosity of the structure.
It is also evident in these figures that the waves passing over the breakwater have a somewhat fluctuating nature, showing that the attenuation needs more time to become established. In Figure 16 the variable parameter is the inertia force, which generates a negative acceleration against the propagating wave, so that more time is required to reach the same wave profile. In Figure 17, with decreasing porosity, more wave blockage occurs on the left-hand side of the breakwater, and therefore the variation of the wave height on the right-hand side of the breakwater is more considerable. This is due to the attenuation of the wave's shock, which damps faster as the porosity decreases, and vice versa.
The effect of crest height on the damping of waves was studied using three different crest heights of 38.25 cm, 44.25 cm and 50.25 cm. The corresponding crest water depths were 12.75 cm, 6.75 cm and 0.75 cm, respectively (see Figure 18). Considering a constant base width for the breakwater, the slope of the structure changed according to its height.
Discussion and Conclusions
In this paper, a modified numerical model was developed to simulate the interaction of waves with a porous structure. On the basis of a viscous flow assumption and using the modified Navier-Stokes equations, the hydrodynamic performance of the porous structure was assessed using the cut-cell method. The present model was validated by comparing the damped wave heights at the rear wall of a vertical porous structure with available data. Further tests showed that as the waves pass over the submerged breakwater, the wave height first increases at the middle of the structure; however, as the waves propagate further through the structure, the wave height decreases. This phenomenon can be explained by the dramatic decrease in the inflow area of the structure. The numerical results also show that an increase in the height or period of the incident waves can lead to a time lag between the incident and transmitted waves. This is reasonable because, as the incident wave height or period increases, the porous medium dissipates the wave more efficiently, which gives rise to more delay in the wave propagation. Studies on the drag and inertia coefficients revealed that the drag coefficient does not play an important role in the damping of the transmitted waves, whereas an increase in the inertia coefficient is associated with greater time lags for the incident waves passing the breakwater. This can be related to the extra momentum needed to accelerate a given volume of water inside the porous medium relative to pure water. Decreasing the porosity causes more attenuation of the incident wave, leading to a larger phase shift between the incident and transmitted waves. This result can be explained by the sudden reduction of the inflow area in the breakwater; a reduction of porosity also raises the drag and inertia forces opposing the propagation of the incident waves. The studies on the effect of the submerged breakwater crest width on the transmitted waves show that doubling the crest width decreases the transmitted wave height by about 40%, while a 30% increase in crest height decreases the transmitted wave height by 80%. | 2019-01-02T07:07:49.768Z | 2016-09-15T00:00:00.000 | {
"year": 2016,
"sha1": "7b1b043f92caf320f3962ac0d4669a808721970f",
"oa_license": null,
"oa_url": "https://doi.org/10.18869/acadpub.ijmt.6.31",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0856f5c98d48d731f0caf70493ed80d8b49b60d5",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
264951172 | pes2o/s2orc | v3-fos-license | A Comparative Study of Knee Joint Proprioception Assessment in 12-Week Postpartum Women and Nulliparous Women
Introduction Proprioception is one's capacity to perceive bodily position, alignment, and movement. Proprioceptive sensory receptors are found in several connective tissues of the body, such as skin, ligaments, joint capsules, and muscles. Joint laxity results from hormonal variations, notably the peak in the hormone relaxin during pregnancy, which also affects proprioceptive receptors. The musculoskeletal system may be affected by the hormonal and anatomical changes brought on by pregnancy, including joint laxity and modifications to posture and gait. The capacity to perceive joint position and movement, i.e., proprioception, may be impacted. To understand the effects of pregnancy on joint function and the rehabilitation options for postpartum women, this study compares knee joint proprioception in women 12 weeks after childbirth with that of nulliparous women. The study aims to assess and compare the degree of alteration in knee joint proprioception in 12-week postpartum females. Methodology A total of 160 participants were assessed during the entire study. Women from 18 to 35 years of age were included. Women with any current knee joint injury, multiparity, or relevant surgical history were excluded. The procedure was performed under the author's surveillance at the Department of Community Health Physiotherapy. The knee joint reposition test was used to assess the knee joint proprioceptive error in two groups (80 each): nulliparous women and 12-week postpartum women. The ImageTool software (version 3.0), provided by the University of Texas Health Science Center at San Antonio (UTHSCSA) as a computer application for handling medical images and associated data, was used to determine the angular variation between the targeted and achieved positions during the test. Result A significant proprioceptive error was observed among 12-week postpartum women compared to the nulliparous group. The mean error of knee joint reposition among 12-week postpartum women was 0.80±6.08 (P=0.0001), and among nulliparous women it was 0.09±0.72 (P=0.0001). Conclusion Pregnancy affects postpartum women's joint function and their risk of fall injuries through altered proprioception. Compared to nulliparous women, the proprioceptive error for the dominant knee joint was significant among 12-week postpartum females. The hormonal changes during pregnancy, especially the surge in relaxin, affect the proprioceptive receptors and result in joint laxity, which may impair joint position sense and increase the risk of falls. To better understand the effects of pregnancy on joint function and the rehabilitation options for postpartum women, this study compared knee joint proprioception in postpartum and nulliparous women and confirmed altered proprioception after childbirth. The results of this study might aid medical practitioners in creating successful rehabilitation plans and interventions to prevent falls in postpartum women.
Introduction
One's capacity to perceive bodily position, alignment, and movement is known as proprioception. Accordingly, proprioception is commonly referred to as the sense of joint position, kinesthesia, movement sensation, sense of effort, and sense of force. Along with pain, tactile, and thermal sensation, kinesthesia is a part of the bodily sensory system. It is known as an interoceptive system because its sensory input originates from changes to internal structures [1]. Several connective tissues, such as skin, ligaments, joint capsules, and muscle tissue throughout the limbs, trunk, and neck, contain proprioceptive sensory receptors. Capsular, ligamentous, and cutaneous mechanoreceptors are thought to complement spindle input for position and movement awareness throughout most motions, with the muscle spindles being the primary source of proprioceptive information [2].
During pregnancy, the female body goes through numerous hormonal and anatomic modifications that affect the musculoskeletal system. These changes can cause musculoskeletal disorders, raise the risk of injury, or change the course of existing conditions [3]. Pregnancy affects the soft tissues, joints, posture, and gait. Joint laxity develops during pregnancy due to hormonal fluctuations and lasts even beyond six weeks postpartum [4]. Hormonal alterations throughout pregnancy have been shown to affect the equilibrium of the labyrinthine fluids, which has a direct impact on enzyme processes and neurotransmitter activity. Relaxin peaks during pregnancy, resulting in increased laxity. Raised levels of estrogen and relaxin affect the neuromuscular system and alter the mechanics of the tendons and ligaments in which proprioceptive receptors lie [5]. Variations in a childbearing woman's static stability may be caused by weight gain during pregnancy and its uneven distribution, especially in the anterior belly area, together with the postural modifications needed to readjust the anteroposterior position of the center of gravity and the increased joint laxity [6]. Prevalent bodily alterations during pregnancy include knee hyperextension, collapse of the medial longitudinal foot arch, anterior tilting of the pelvis, lumbar lordosis, and an increase in the volume, length, and breadth of the foot [7].
A 27% fall rate has been observed during pregnancy, particularly around the third trimester, due to a decrease in balance that lasts six to eight weeks after childbirth. The loss of proprioception in pregnant women has not been examined, even though balance difficulties and visual dependency have been reported. Proprioception is considered the sixth sense of the human body. During pregnancy, hormone levels fluctuate markedly; relaxin in particular causes laxity of the ligaments and joints, resulting in a loss of proprioception. Proprioceptive dysfunction in the lower extremities, particularly in the knee and ankle (mortise) joints, is one of the major causes of the higher risk of fall injuries. A previous investigation revealed that knee proprioception changes during pregnancy [8]. It is unclear whether these alterations revert to baseline during the postpartum period. We therefore investigated knee joint proprioception in women 12 weeks after delivery and compared it with that of non-pregnant women.
Materials And Methods
Ethical authorization was received from the Institutional Ethical Committee of Datta Meghe Institute of Higher Education and Research (DU), Ref. No. DMIMS(DU)/IEC/2022/919. The research was conducted in local communities and at Acharya Vinobha Bhave Rural Hospital, Sawangi, Wardha, Maharashtra. The study randomly chose 80 women aged between 18 and 35 for each group: one group comprised women in the 12th week postpartum, and the other nulliparous females. Women with any knee contracture or deformity, a history of arthritis, instability of the knee joint, a history of knee surgery, or any neurological disorder were excluded from participating in the research. The procedure described below was conducted on 80 age-matched nulliparous women with regular menstrual cycles.
Procedure
All individuals provided informed consent. The subjects were instructed to support their weight by standing against a wall with no more than two fingers of support. Markers were positioned 5 cm above the lateral femoral condyle, at the lateral malleolus, and at the lateral femoral condyle of the dominant lower extremity. The individuals were asked to keep their eyes closed. The knee was tested by flexing and extending ten times and then attaining a specific position, which was regarded as the target angle. The subjects were asked to hold this position for 15 seconds and remember it. A footstool with a high-resolution camera perpendicular to the knee was placed 60 cm from the subject's feet, and the target position was captured on camera. Afterward, the subjects were instructed to move the knee through full flexion and extension 10 times before positioning it at the target angle. This position was again captured, and the photos were uploaded to a computer. The University of Texas Health Science Center at San Antonio (UTHSCSA) ImageTool 3.0 was used to assess the images and determine the difference between the target and achieved angles, as seen in Figures 1, 2. UTHSCSA developed and makes available this software for collecting, processing, and maintaining medical images and related data. The reposition error, the difference between the target and achieved angles, was recorded. According to the researchers, the reliability of the knee joint reposition test is 0.81 [9]. The knee joint reposition test after evaluation is shown for a nulliparous subject in Figure 1 and for a 12-week postpartum woman in Figure 2.
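To make the angle computation concrete, here is a minimal sketch of how a reposition error could be derived from digitized marker positions like those described above. It is illustrative only: the pixel coordinates are invented, and the authors used the UTHSCSA ImageTool rather than custom code.

```python
import numpy as np

def knee_angle(thigh_pt, knee_pt, ankle_pt):
    """Knee angle (degrees) at the knee marker, computed from 2-D image
    coordinates of the thigh, knee, and ankle markers."""
    v1 = np.asarray(thigh_pt, float) - np.asarray(knee_pt, float)
    v2 = np.asarray(ankle_pt, float) - np.asarray(knee_pt, float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical pixel coordinates from the target and reposition photographs
target = knee_angle((102, 40), (100, 200), (180, 340))
achieved = knee_angle((102, 40), (100, 200), (172, 342))
reposition_error = abs(target - achieved)
print(round(target, 1), round(achieved, 1), round(reposition_error, 1))
```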
FIGURE 3: Graphical presentation of distribution of volunteers in two groups according to their age in years
As shown in Table 1, the mean knee joint reposition error among 12-week postpartum women was 0.80±6.08 and among nulliparous women was 0.09±0.72; the comparison is presented in Figure 4.
FIGURE 4: Graphical presentation of the comparison of knee angle in the two groups throughout the repositioning test
Overall, the measurements showed a marked proprioceptive error among women in their 12th week postpartum, i.e., 0.80±6.08 (P=0.24), whereas a minimal proprioceptive error of 0.09±0.72 (P=0.24) was seen in the nulliparous group.
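For readers who want to reproduce the group comparison from the summary statistics reported above, the sketch below runs an independent-samples (Welch) t-test from the means, SDs, and group sizes. The paper does not state which test the authors used, so the choice of test is an assumption, and the resulting statistic need not match the reported P values.

```python
from scipy.stats import ttest_ind_from_stats

# Group summary statistics as reported above (mean, SD, n per group)
t, p = ttest_ind_from_stats(mean1=0.80, std1=6.08, nobs1=80,
                            mean2=0.09, std2=0.72, nobs2=80,
                            equal_var=False)  # Welch's t-test (assumption)
print(f"t = {t:.2f}, p = {p:.3f}")
```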
Discussion
In this study, two groups of 80 participants each, one comprising postpartum women in their 12th week and the other nulliparous women, all aged 18-35 years, were assessed; a significant proprioceptive error was found even 12 weeks after delivery compared with nulliparous women of the same age group. A gain of 20% of one's body weight throughout pregnancy elevates the stress on the major joints of the lower extremities, which may persist even after delivery and can affect a woman's quality of life. In a study, Ritchie et al. claimed that soft tissue swelling is among the major physiological variations during pregnancy. Eighty percent of childbearing women experience oedema at some point during their pregnancy, leading to musculoskeletal disorders that persist even after delivery. The musculoskeletal system is considerably affected during pregnancy, which might result in novel injuries or lower the threshold for many prevalent disorders [10]. Blecher et al. concluded that the hormonal changes during pregnancy are responsible for a transitory increase in laxity of the ACL of the knee joint, which is typically observed in patients one year after ACL reconstruction [11]. Gupta et al. assessed 36 pregnant women throughout all trimesters of the antenatal period and recorded knee joint proprioception using a digital inclinometer; they concluded that many changes take place during pregnancy, curtailing knee joint proprioception and increasing the risk of falls [4]. Li et al. assessed knee joint proprioception in 30 young men using three different methods: joint angle reset, motion minimum threshold measurement, and force sense reproduction [12]. Ramachandra et al. stated that ankle proprioception was considerably impacted during the last trimester of pregnancy and did not revert to baseline even six weeks after childbirth, noting that proprioceptive input from the ankle joint is important for maintaining postural stability. It is well established that the ankle plantar flexors are crucial for maintaining postural stability in pregnant women.
That study involved 70 pregnant women assessed during their third trimester and six weeks postpartum for ankle joint repositioning and kinesthetic sense; it concluded that ankle joint proprioception was altered and had not recovered six weeks postpartum. The present study, in contrast, assessed the proprioceptive error of the knee joint among postpartum women 12 weeks after delivery compared with nulliparous women of a similar age group [13].
The study's findings indicate that women who are 12 weeks postpartum have much higher proprioceptive inaccuracy than women who have not been pregnant. This might be owing to the altered proprioceptive input from the lax ligaments around the knee joint. Relaxin levels have been shown to rise up to 10-fold during pregnancy, predisposing to ligament and joint laxity, which may compromise the receptors' capacity to detect movement [14].
Physical therapy rehabilitation should begin in the second and third trimesters and continue after delivery for early recovery from the physiological disturbances that occur throughout this period. These interventions may include proprioceptive training, balance training, and tai chi exercises, which benefit childbearing women the most and are safe for them [15]. To lower the rate of falls caused by postural instability, this study supports the need for proprioceptive training programs in pregnant women, especially immediately following delivery.
The results apply only broadly to primiparous women in the research area who are 12 weeks postpartum. Furthermore, future research is required in multiparous women, who would need a different study design and were not included in the current investigation.
Conclusions
This study has illuminated the connection between pregnancy and postpartum alterations in knee joint proprioception. According to the results, proprioceptive alterations in postpartum women are retained even 12 weeks after delivery. The research indicates that the rising incidence of falls among postpartum women can be caused by alterations in knee joint proprioception. The effects of hormonal and musculoskeletal modifications go beyond the knee joint, possibly affecting the functioning of the entire musculoskeletal system. This study highlights the need for further research to fully comprehend these alterations and their long-term consequences, with possible implications for treatment and prevention strategies. To promote optimal musculoskeletal health and general well-being, healthcare practitioners must be aware of these changes and adapt treatment and exercise regimens accordingly, especially for postpartum women. The study also emphasizes the need for proactive healthcare approaches that target the special requirements of postpartum women in order to improve their physical recovery and quality of life.
FIGURE 1: Knee joint reposition test in a nulliparous subject | 2023-11-03T15:08:18.924Z | 2023-11-01T00:00:00.000 | {
"year": 2023,
"sha1": "718029042b5b3cc98ead5f8e818688ae47ee5caf",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/original_article/pdf/156713/20231101-28894-bc6ebs.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "278bab79971950762cb2e81d0e515d24bb13c170",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
37566188 | pes2o/s2orc | v3-fos-license | A 21-year-old female patient with dyspnoea, chest pain and pleural thickening on chest radiographs
A 21-year-old female patient was referred to the current authors' hospital in January 2002 with dyspnoea and pleuritic chest pain.
In addition, thoracic computed tomography (CT) was performed and is shown in figure 2.
Thoracic ultrasonography revealed a minimal right effusion, and pulmonary function tests showed a restrictive pattern. On fibreoptic bronchoscopy, there was no endobronchial lesion. There were no acid-fast bacilli in the bronchoalveolar lavage fluid (BAL), and cultures were also negative. Cytology of the BAL showed 95% polymorphonuclear leukocytes and 2% lymphocytes. Later on, an open pleural biopsy was performed and a histopathological analysis carried out, the results of which are shown in figure 3.
Answer 1
Chest radiography revealed a right hemithorax retraction and blunting of the right costophrenic angle.
Answer 2
Thoracic CT showed volume loss of the right hemithorax and pleural thickening of the inferior and middle zones of the right lung.
CASE PRESENTATION
A 21-year-old female patient with dyspnoea...
Discussion
Bromocriptine is an ergot derivative with dopaminergic activity. It is used in the treatment of Parkinson's disease and hyperprolactinaemia; higher doses are required to cross the blood-brain barrier in these diseases. Reported side-effects of this drug include headache, nausea and orthostatic hypotension. Rarely, pleural effusion and pleural thickening have been reported as a result of long-term, high-dose treatment, although others have denied a causal relationship [1][2][3][4][5][6][7][8][9][10][11][12][13].
Pleuropulmonary changes during bromocriptine treatment have been reported by Rinne [9], who reviewed the charts of 123 patients with Parkinson's disease receiving long-term bromocriptine treatment. Seven of these patients developed pleuropulmonary disease, which included pleural effusions, pleural thickening and pulmonary infiltrates. The patients received bromocriptine therapy alone or in association with levodopa at doses of 20-90 mg daily for 6-27 months prior to the development of symptoms. No specific cause was determined for the pleuropulmonary changes and, with the withdrawal of bromocriptine, clinical and radiological improvement was observed in two patients.
Tornling et al. [11] reported four cases of pleuropulmonary disease during treatment with bromocriptine at doses of 20-50 mg·day⁻¹. Three patients who received ≥50 mg·day⁻¹ of bromocriptine developed respiratory symptoms, an elevated ESR, pulmonary infiltrates and pleural fibrosis with associated effusions, with partial resolution following withdrawal of the drug or reduction in dosage. However, pleural changes were not completely reversible in all patients.
Kinnunen and Viljanen [5], McElvaney et al. [8], Vergeret [14], Wiggins and Skinner [15], and Le Witt and Calne [7] also described 14 cases with bromocriptine-dependent pleuropulmonary changes. The patients had Parkinson's disease and were all treated with bromocriptine in a dosage range of 22-100 mg·day⁻¹. Physicians observed pleuropulmonary changes 9-48 months after the beginning of treatment. They reported clinical and radiological improvement after the discontinuation of bromocriptine treatment, but progression at 2 years of follow-up in cases where therapy was continued.
Morelock and Sahn [16] identified more than 20 cases of pleuropulmonary disease attributed to bromocriptine reported in the literature between 1966 and 1998. Radiographical changes reported in these patients included pleural effusion, pleural thickening and interstitial infiltrates. The onset occurred 12-48 months after institution of therapy.
While pleural fluid eosinophilia (12-30%) has been noted in two patients [8], the majority of pleural fluid analyses have revealed a lymphocyte-predominant exudate (51-99%) without eosinophilia. Drug withdrawal leads to resolution of pleural effusions, but pleural thickening and interstitial parenchymal changes do not resolve completely in all patients.
McElvaney et al. [8] previously reported 23 patients receiving bromocriptine therapy who developed pleuropulmonary disease. All were male, and the majority had a history of long-term cigarette smoking. The authors also proposed that age could be a risk factor, since the majority of cases have been reported in patients >60 years, but this may also be related to the age at which Parkinson's disease occurs.
The mechanism of bromocriptine-induced pleuropulmonary disease is unclear, but is probably an idiosyncratic or hypersensitivity reaction. Similar pleuropulmonary changes can occur with other dopamine agonist ergot derivatives (ergotamine), because their molecular structures and pharmacological properties are comparable. Furthermore, bromocriptine, methysergide and ergotamine have all been associated with the development of retroperitoneal fibrosis [8].
Pleuropulmonary changes have been observed with high doses of bromocriptine and long-term usage; thus, the cumulative dose of bromocriptine may be responsible for these effects [2]. In some patients, radiological and clinical improvement has been observed after the discontinuation of bromocriptine. Prednisolone therapy has been used in some patients, but its usefulness is not established [13].
The patient presented here received bromocriptine at a dose of 60 mg·day⁻¹ for 10 months. There was no other identified cause of pleural thickening. The open pleural biopsy specimen displayed chronic inflammation. It was accepted that all of these findings were attributable to bromocriptine, and the therapy was stopped. Subsequently, steroid treatment was initiated, and the patient's symptoms regressed. She was discharged and followed in the outpatient clinic for radiological remission.
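To put the cumulative-dose argument above in numbers for this patient, here is a rough, illustrative calculation; the 30.4 days-per-month figure is an assumption, and the result is only an order-of-magnitude estimate.

```python
# Back-of-the-envelope cumulative exposure for the patient described above
# (60 mg/day for 10 months); 30.4 days/month is an assumed average.
daily_dose_mg = 60
months = 10
cumulative_g = daily_dose_mg * months * 30.4 / 1000
print(f"~{cumulative_g:.0f} g total bromocriptine")  # ~18 g
```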
In conclusion, bromocriptine-dependent pleural thickening is rare, and other usual causes of pleural disease should first be excluded. However, physicians should remember that pleuropulmonary changes can occur with high-dose bromocriptine therapy. | 2019-02-15T14:19:22.697Z | 2004-12-01T00:00:00.000 | {
"year": 2004,
"sha1": "bd7263bab9f506039861d0cb0aa4e4218334fbf5",
"oa_license": "CCBYNC",
"oa_url": "http://breathe.ersjournals.com/content/1/2/165.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "e7451902baacc1f2a9b4a50d7216f425a62b56a8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
34340904 | pes2o/s2orc | v3-fos-license | Krtap16, Characterization of a New Hair Keratin-associated Protein (KAP) Gene Complex on Mouse Chromosome 16 and Evidence for Regulation by Hoxc13*
Intermediate filament (IF) keratins and keratin-associated proteins (KAPs) are principal structural components of hair and encoded by members of multiple gene families. The severe hair growth defects observed upon aberrant expression of certain keratin and KAP genes in both mouse and man suggest that proper hair growth requires their spatio-temporally coordinated activation. An essential prerequisite for studying these cis-regulatory mechanisms is to define corresponding gene families, their genomic organization, and expression patterns. This work characterizes eight recently identified high glycine/tyrosine (HGT)-type KAP genes collectively designated Krtap16-n. These genes are shown to be integrated into a larger KAP gene domain on mouse chromosome 16 (MMU16) that is orthologous to a recently described HGT- and high sulfur (HS)-type KAP gene complex on human chromosome 21q22.11. All Krtap16 genes exhibit strong expression in a narrowly defined pattern restricted to the lower and middle cortical region of the hair shaft in both developing and cycling hair. During hair follicle regression (catagen), expression levels decrease until expression is no longer detectable in follicles at resting stage (telogen). Since isolation of the Krtap16 genes was based on their differential expression in transgenic mice overexpressing the Hoxc13 transcriptional regulator in hair, we examined whether bona fide Hoxc13 binding sites associated with these genes might be functionally relevant by performing electrophoretic mobility shift assays (EMSAs). The data provide evidence for sequence-specific interaction between Hoxc13 and Krtap16 genes, thus supporting the concept of a regulatory relationship between Hoxc13 and these KAP genes.
The distinct functional properties of diverse epithelial cell types are largely determined by their cytoskeletal architecture, which includes keratins and keratin-associated proteins (KAPs) as essential components. In vertebrates, there exist two major classes of keratins, acidic and basic, commonly known as type I and type II keratins, respectively, which are encoded by more than 50 genes in mouse and man (1)(2)(3). Specific pairs of complementary type I and type II keratins are usually co-expressed and have an intrinsic capacity to form α-helical coiled-coil heterodimers; these heterodimer units are known to self-assemble into 10-nm intermediate filaments (IFs) through both end-to-end association and anti-parallel alignment (4). The cytoplasmic IFs build an intracellular fibrous network that extends from the cell membrane to the nucleus, and its properties are greatly influenced by interactions with keratin-associated proteins, also known as KAPs (1,5). The keratin and KAP gene expression profiles change with keratinocyte differentiation status, and the progression of hair follicle differentiation is characterized by the sequential activation of distinct sets of hair-specific keratin and KAP genes (5)(6)(7)(8)(9)(10)(11). Hair follicle development is initiated prenatally through mesenchymal-epithelial interactions that result in the formation of multilayered cylindrical structures (12). Two major outer layers of functionally distinct keratinocytes, known as the outer root sheath (ORS) and inner root sheath (IRS), surround the hair shaft, which is composed of the cuticle, cortex, and medulla. Differentiation of the cells contributing to the formation of these compartments progresses along the proximal-distal axis of the follicle and originates in the lower bulbous portion, which harbors largely proliferating and undifferentiated cells. In terminally differentiated cells of the hair shaft, the bulk of the structural molecules includes hair-specific IF keratins and KAPs, which combined are estimated to include up to 100 different proteins (5). In these cells, IFs are embedded into a dense matrix of KAPs that, according to their biochemical composition, have originally been classified into 3 major groups: high sulfur (HS), ultra-high sulfur (UHS), and high glycine-tyrosine (HGT). Due to the growing complexity among these KAPs, the three main groups have subsequently been divided into structurally distinct subgroups encoded by at least 23 hair KAP gene families (9-11). The essential role of keratins in the structural organization of hair is underscored by the linkage of severe hair disorders to mutations in human hair keratins, as exemplified in monilethrix (13). Furthermore, aberrant expression of keratins and KAPs in transgenic mice has been shown to affect hair structure and growth (14-20). Combined, these data suggest that structural integrity and tight regulation of both groups of genes are essential for proper hair growth.
Based on differential gene expression analysis in postnatal skin of mice overexpressing Hoxc13, we have previously identified a novel subset of HGT-type KAP genes that we collectively termed Krtap16-n, whose expression was uniformly down-regulated in the skin of this GC13 transgenic mouse; we mapped these genes to a distal region of mouse chromosome 16 (MMU16) that is of conserved linkage with human chromosome (HSA) 21q22.11 (19). Here we demonstrate that these genes are integrated into a larger KAP gene domain of about 0.82 Mb in size and that this region is homologous to a recently described domain of HGT- and HS-type KAP genes in humans (10). We examined the Krtap16 gene expression patterns in both developing and cycling hair follicles by in situ hybridization. Furthermore, we show that a HOXC13 consensus binding motif, 5′-TT(A/T)ATNPuPu-3′, implicated in the regulation of human IF hair keratin genes (21), matches HGT-2, a conserved motif that had previously been speculated to play a role in the transcriptional regulation of certain HGT-type KAP genes (15). Our in vitro DNA binding studies provide evidence for Hoxc13-dependent regulation of Krtap16 genes involving the HGT-2 motif. The Krtap16 gene complex described here might thus serve as a model for examining regulatory interactions of a Hox factor with clustered downstream target genes.
Genomic Map, Protein Sequence Comparison, and Phylogenetic Tree Analysis
A genomic map of the distal region of MMU16 harboring the Krtap16 genes was derived from the NCBI mouse genome data base (www.ncbi.nlm.nih.gov/genome/guide/mouse). A region of ~0.82 Mb, stretching roughly from position 90,094 to 90,912 kb of the MMU16 draft sequence, was used for establishing this map. All KAP genes and KAP-related ESTs with continuous open reading frames (ORFs) present in this region were included in the map, and the corresponding protein sequences were derived from conceptual translation of the ORFs starting with an ATG start codon positioned in a sequence context that resembles a common consensus motif for eukaryotic translational initiation (22). Alignment of deduced KAP protein sequences was achieved by using the AlignX program included in the Vector NTI software package. Phylogenetic tree analysis was performed using the same software tool.
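As an illustration of the conceptual-translation step described above, the sketch below scans the three forward reading frames of a DNA string for an ATG-initiated ORF and returns its translation. It uses Biopython and a made-up input sequence; it is not the NCBI/Vector NTI pipeline the authors used, and it ignores the Kozak-context check mentioned above.

```python
from Bio.Seq import Seq

def conceptual_translation(dna):
    """Translate the first ORF beginning with an ATG start codon,
    mimicking the single-exon conceptual translation described above.
    Frame handling is deliberately simplified for illustration."""
    dna = Seq(dna.upper())
    for frame in range(3):
        sub = dna[frame:]
        sub = sub[: len(sub) - len(sub) % 3]  # trim to whole codons
        prot = str(sub.translate(to_stop=False))
        for i, aa in enumerate(prot):
            if aa == "M":                      # candidate start codon
                stop = prot.find("*", i)
                if stop > i:
                    return prot[i:stop]
    return None

# Invented test sequence; yields a short HGT-like peptide
print(conceptual_translation("ccATGTCTTACTATGGAAACTATTAAgg"))  # MSYYGNY
```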
Expression Analysis-For determining Krtap16 gene expression patterns in skin of 5 day postnatal FVB mice, the animals were euthanized in an atmosphere of CO2. Skin from the scapular region was fixed in 4% paraformaldehyde in phosphate-buffered saline at 4°C overnight, then transferred to 30% sucrose solution in phosphate-buffered saline and incubated at 4°C overnight, and finally frozen in OCT compound and stored at −80°C. In situ hybridization was performed with 10-μm cryosections essentially as described (23,24). Briefly, plasmids containing Krtap16 cDNAs (19) were linearized with appropriate restriction endonucleases to generate templates for the synthesis of digoxigenin (Roche Applied Science)-labeled antisense and sense (control) probes using SP6 and T7 RNA polymerases. Prior to hybridization, sections were treated with proteinase K (1 μg/ml) for 10 min and acetylated with acetic anhydride in 0.1 M triethanolamine. Hybridization with 5 μl of probe (1/20 of the in vitro transcription reaction product) in 150 μl of hybridization solution containing 10 mM Tris, pH 7.5, 600 mM NaCl, 1 mM EDTA, 50% formamide, 10% dextran sulfate, and 200 μg/ml yeast tRNA was carried out at 65°C overnight. To reduce background, sections were treated with RNase A at 37°C for 30 min after the hybridization. Post-hybridization washes were performed in 50% formamide with descending concentrations of SSC, starting at 2× SSC down to 0.2× SSC, at 65°C. Hybridization signals were visualized by using the standard NBT/BCIP detection system (Roche Applied Science).
For expression analysis in cycling hair, synchronized follicle growth in dorsal skin of FVB mice was induced by depilation as described (25). Fresh skin samples at 9 days, 17 days, and 23 days post-depilation were used for the preparation of RNA according to Chirgwin et al. (26). RNA samples were used for the synthesis of complex cDNA probes and employed in reverse Northern hybridizations to Krtap16 cDNA arrays as described (27,19); parallel skin samples were embedded into OCT compound for the preparation of frozen sections. In situ hybridization to these sections was performed with 35S-labeled antisense and sense (control) RNA probes synthesized from linearized plasmids containing Krtap16 cDNAs (see Ref. 19 for a description of all Krtap16 cDNAs used here, except for Krtap16-10; primers used for synthesizing Krtap16-10 cDNA were 5′-TCACGGCAACTACTATGGT-3′ [forward] and 5′-AGGGACTGAAGTAGCCATAA-3′ [reverse]). A detailed description of the in situ hybridization protocol utilizing 35S-labeled probes, including probe synthesis, probe adjustment, section pretreatment, hybridization, and washing conditions, has been published (28). Hybridized sections were exposed to Kodak NTB2 autoradiographic emulsion for 24 h, and developed slides were counterstained with hematoxylin and eosin.
Annealed oligonucleotides were end-labeled with [γ-32P]ATP and T4 polynucleotide kinase, and purified oligonucleotides (≈10,000 cpm) were incubated with nuclear extracts (≈2 μg of protein) from rat kangaroo kidney cells (PtK2 cells) transiently transfected with a human HOXC13 expression vector; nuclear extracts from non-transfected cells were used as a control. Protein/DNA complexes were resolved by electrophoresis in 6% polyacrylamide gels in 0.5× TBE. Binding specificity was ascertained by adding a 100-fold excess of unlabeled oligonucleotides (see Fig. 7) in parallel reactions. For supershift assays, 1 μl of HOXC13-specific antiserum (21) was incubated with nuclear extracts prior to the incubation with labeled oligonucleotides.
For immunohistochemical detection of Hoxc13 in 10-μm frozen skin sections, HOXC13-specific antiserum (21) was used at a 1:1000 dilution; bound antibody was visualized by incubating the slides with biotinylated goat anti-guinea pig IgG, and the Vector Elite ABC kit was used for substrate reaction and color development.
Genomic and Structural Characterization of Krtap16 Genes
Based on differential gene expression analysis in 5 day skin of normal versus Hoxc13-overexpressing transgenic mice, we recently identified a subset of new HGT-type KAP genes originally designated Krtap16-1 to -10, and by using a mouse-hamster radiation hybrid panel we mapped them to the distal portion of MMU16 (Ref. 19; the sequence for Krtap16-10 was submitted separately to the GenBank™ data base under accession number AF477980). The subsequent publication of a draft sequence of MMU16 and the entire mouse genome (29,30) facilitated a closer analysis of the corresponding genomic region (see mouse genome data base, www.ncbi.nlm.nih.gov/genome/guide/mouse). This analysis revealed redundancy in the original assignment of these Krtap16 cDNAs to separate genes, thus reducing the actual number of Krtap16 genes isolated by us to 8 (see Table I). A map of the region harboring the Krtap16 and neighboring genes is presented in Fig. 1A. Whereas all 8 Krtap16 genes are located on the minus strand within a subregion of about 0.54 Mb of DNA, 6 of them are clustered within less than 100 kb; the seventh (Krtap16-8) and eighth (Krtap16-7) genes are located at distances of ~90 and 450 kb upstream of the 6-member subcluster, respectively. The Krtap16 genes are integrated into a larger domain of 0.82 Mb of genomic DNA harboring predominantly KAP genes (Fig. 1A). Within this domain, the Krtap16 genes are found among 16 annotated KAP genes in addition to 12 ESTs and cDNAs with similarities to known KAP genes. A conceptual translation of the ORFs of these 28 KAP genes and KAP gene-related cDNA sequences revealed considerable heterogeneity with regard to glycine/tyrosine and cysteine content, as well as the presence of conserved amino acid sequence motifs (Fig. 2 and Table I). The glycine/tyrosine content of the predicted Krtap16 proteins ranges from 40.7 mol% (Krtap16-3) to 65.4 mol% (Krtap16-8) (Table I), thus placing them into the HGT-type class (5), which is known to include several structurally distinct subfamilies in both mouse and human (5,10). Remarkably, there is a more than 4-fold variation in cysteine content among the Krtap16 proteins, ranging from 1.76 mol% for Krtap16-10 to 18 mol% for Krtap16-7 (Table I). Interestingly, the two presumptive proteins with the highest cysteine content of 18 and 13 mol%, i.e. Krtap16-7 and Krtap16-8, are also those with the highest glycine/tyrosine content of 61.8 and 65.4 mol%, respectively. These differences in cysteine and, perhaps to a lesser degree, in glycine/tyrosine content between individual Krtap16 proteins are associated with distinct amino acid repeats and terminal peptide motifs (Fig. 2), as well as distinct patterns of spatial segregation and genomic clustering of the corresponding genes (Fig. 1A).
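A hedged sketch of the composition metric used above, the mol% of selected residues in a predicted protein, follows. The example peptide is invented for illustration and does not correspond to any actual Krtap16 product.

```python
from collections import Counter

def composition_mol_percent(protein, residues):
    """Mol% of the given residues in a protein sequence, as used to
    classify KAPs as HGT-type (Gly/Tyr) or HS-type (Cys)."""
    counts = Counter(protein.upper())
    return 100.0 * sum(counts[r] for r in residues) / len(protein)

# Hypothetical HGT-type fragment, for illustration only
seq = "MSYYGSYYGGLGYGYGSGYGCGYGFSSFY"
print(f"Gly/Tyr: {composition_mol_percent(seq, 'GY'):.1f} mol%")
print(f"Cys:     {composition_mol_percent(seq, 'C'):.1f} mol%")
```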
To gain a better understanding of the structural and phylogenetic relationships between Krtap16 and surrounding KAP genes, we performed a phylogenetic tree analysis based on the protein sequences predicted for these genes. This analysis revealed the existence of distinct clusters of phylogenetically related genes, thus permitting a subdivision of the 28 KAP genes listed in Table I into 8 different groups and subgroups (Fig. 1B). Overall, these groups correlate well with the distribution of specific amino acid motifs and peptide repeats (Fig. 2), as well as the genomic clustering of the corresponding genes (Fig. 1A). The three members of group I all encode proteins with a cysteine content of about 12 mol% that consequently fall into the class of HS-type KAPs. These proteins are structurally highly conserved and share an N-terminal M(A/V)YSCCSGNFSS motif (Fig. 2A). The genes of this group are located at the proximal end of the KAP domain (Fig. 1A) and are followed by two members of group II, Krtap14 (pmg-1; Refs. 31 and 32) and Krtap15 (pmg-2; Ref. 33). We assigned two additional KAP genes, including Krtap11-1 (originally known as Hacl-1, Ref. 34) and a Krtap11-related cDNA (GenBank™ accession number: AK017387) that mark the distal end of this domain, to group II (Fig. 1, A and B). A characteristic of all four genes is their overall heterogeneity and lack of extended conserved sequence motifs (Fig. 2B). In addition, the relatively low glycine/tyrosine and cysteine content of their products (Table I), averaging around 18 and 8 mol%, respectively, makes it difficult to assign them either to the HGT or HS class, although Krtap11-1 might be an exception with a cysteine content of close to 12 mol% (Table I).
These heterogeneous pmg-related genes flank larger clusters of HGT-type KAPs that were assigned to groups III, IV, and V. Group III is the largest group, including 12 genes that are organized into 3 structurally and spatially distinct sub-clusters, III-A, -B, and -C. In addition to Krtap8-2, group III-A includes 5 of the 8 Krtap16 genes, i.e. Krtap16-1, -16-3, -16-4, -16-5, and -16-9; except for Krtap16-3, all of these KAPs encode a consensus MS(Y/H)YX(S/G)Y(Y/S)GGLG N terminus and, Krtap16-3 included, a conserved YGFSXFY C terminus (Fig. 2C). Embedded within the group III-A cluster are the two members of group III-C, Krtap16-10 and a Krtap16-10-related gene (GenBank™ accession number: AY026312), which are located just downstream of the III-A distal-most member, Krtap16-3. The group III-C genes encode proteins that share a distinct MSYY(H/Y)GNYYGG N-terminal domain (Fig. 2E). Members of subgroup III-B form a loose cluster stretched over 100 kb near the middle of this 0.83 Mb domain (Fig. 1A) and contain previously unknown KAP gene sequences. The presumptive proteins encoded by these sequences share a characteristic MCYY(R/G)(G/S)YYGGLG N terminus (Fig. 2D). Interspersed between groups III-A and III-B is a tight cluster of 5 highly conserved Krtap6-1-related KAPs assigned to group IV-A, which includes, in addition to Krtap6-1, Krtap16-8, as well as 3 additional KAP genes (GenBank™ accession numbers: D86419, D86421, and AK003924). Conceptual translation of the ORFs of all 5 genes indicates that their protein sequences are nearly identical throughout and start with a distinct MCGYYGNYYGGRGYG N terminus (Fig. 2F). The two members of subgroup IV-B, Krtap16-7 and Krtap6-2, are located more than 180 kb distal to the III-B subcluster. Their conceptual protein structures show strong similarities between the N-terminal halves, and both have extensive -CGYGSGYG- repeats; however, their C termini are unique (Fig. 2G). Finally, two cDNAs (GenBank™ accession numbers: AK004025 and D86423) were found to encode structurally heterogeneous HGT-type KAPs; since they showed little similarity to members of the other groups, they were assigned to a separate group V (Figs. 1 and 2H). These two genes are located between groups III-A and IV-A (AK004025), as well as between group IV-B and the two distal members of group II (D86423).

TABLE I legend: The first column lists known KAP genes under their originally published name; corresponding GenBank™ accession numbers are listed in the second column. ESTs or cDNAs that have previously not been reported in the literature are identified only by their GenBank™ accession numbers. The third column indicates gene location on the centromeric (+) or telomeric (−) strand of the chromosome. The remaining columns specify characteristics of the conceptual translation product, including peptide size, molecular weight, glycine-tyrosine content, and cysteine content.

FIG. 2 legend (fragment): Optimal alignment was achieved by grouping the sequences according to the clustering observed upon phylogenetic tree analysis (Fig. 1B).
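To illustrate distance-based grouping of KAP proteins in the spirit of the phylogenetic analysis above (the authors used AlignX/Vector NTI, not this code), the sketch below clusters a few toy N-terminal peptides, loosely modeled on the motifs in Fig. 2, by crude pairwise similarity.

```python
import numpy as np
from difflib import SequenceMatcher
from scipy.cluster.hierarchy import linkage, dendrogram

# Toy N-terminal peptides standing in for the group motifs discussed above
peps = {
    "III-A": "MSYYGSYYGGLG",
    "III-C": "MSYYHGNYYGG",
    "IV-A":  "MCGYYGNYYGGRGYG",
    "I":     "MAYSCCSGNFSS",
}
names = list(peps)
n = len(names)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        sim = SequenceMatcher(None, peps[names[i]], peps[names[j]]).ratio()
        dist[i, j] = dist[j, i] = 1.0 - sim  # crude distance, not an alignment score

# Condensed distance vector for scipy's average-linkage clustering
condensed = dist[np.triu_indices(n, k=1)]
tree = linkage(condensed, method="average")
print(dendrogram(tree, labels=names, no_plot=True)["ivl"])  # leaf order
```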
We have pointed out earlier that the distal region of MMU16 where we originally mapped the Krtap16 genes is of conserved linkage with HSA 21q22.11 (19), and recently a corresponding human KAP gene domain has been defined in this region (10). A comparison between the organization of the genes and subclusters included in the KAP gene domain on MMU16 and those found in the corresponding human KAP gene domain is schematically depicted in Fig. 3 and discussed below. Overall, the results reveal a great degree of similarity with respect to the order and arrangement of subclusters and the orientation of these domains in mouse and human.
Krtap16 Expression Patterns-The circumstance that all Krtap16 genes described here were isolated as differentially expressed cDNAs in 5 day postnatal skin of our GC13 hair mutant (19) implies that they are transcriptionally active in skin at that stage of development. Since the Krtap16 genes bear structural similarities to previously characterized hairspecific HGT-type KAP genes, we focused on examining expression in hair. To define spatial patterns of Krtap16 expression, we performed in situ hybridization analysis in 5-day postnatal skin that is known to harbor fully differentiated hair follicles (35). Furthermore, to determine whether Krtap16 genes might continue to be active also in cycling hair, we examined expression in adult skin containing depilation-induced anagen phase hair follicles.
Expression analyses in 5 day postnatal skin were performed using digoxigenin-labeled RNA probes specific for each of the Krtap16 genes, except for Krtap16-10, whose expression was only examined in cycling hair (see below). Hybridization in longitudinal follicle sections yielded similar hair-specific patterns for all genes examined (Fig. 4). In most cases, expression was restricted to a relatively narrow region of the hair shaft with a proximal boundary near the neck of the bulb, a region known to harbor cells undergoing terminal differentiation (36); for Krtap16-7, however, the proximal expression boundary was shifted slightly more distal compared with the remaining genes (Fig. 4F). Hybridization signals were observed in all differentiated hair follicles found in scapular skin samples, and the patterns suggested expression to be restricted to the cortical region of the hair shaft (Fig. 4, A-D and F-I). Further evidence for the cortically restricted expression was obtained by hybridization to follicular cross sections. In all cases, expression was found to be restricted to a circular layer of uniformly shaped cells surrounding the medulla, which itself lacked detectable hybridization signals (Fig. 5).
For five of the six members of group III, we determined expression in skin at different stages of the hair cycle, including the progressive growth phase (anagen), the phase of follicle regression (catagen), and the resting phase (telogen). This was done by either reverse Northern (Krtap16-1, -16-3, -16-5, -16-9) or in situ hybridization (Krtap16-10). The results show a drastic decline in expression levels during follicle regression (catagen), and in skin containing telogen follicles expression was no longer detectable (Fig. 6). In addition, we examined expression of the remaining Krtap16 genes in cycling anagen hair follicles by in situ hybridization with radioactive probes; this revealed expression patterns that were very similar to those observed upon hybridization with digoxigenin-labeled probes in 5-day postnatal hair (data not shown). Combined, these findings are consistent with the hair cycle-dependent expression of other Krtap16-related HGT-type KAP genes in mouse skin previously reported (37).
Combined, the expression data show that all Krtap16 genes described here are transcriptionally active in both developing and cycling hair follicles and that their expression is restricted to cortical keratinocytes. The reduction and lack of expression demonstrated for a subset of Krtap16 genes during catagen and telogen, respectively, suggests the existence of hair cycle-dependent control mechanisms for the regulation of transcriptional activity. The majority of Krtap16 genes, including Krtap16-1, -16-3, -16-4, -16-5, and -16-9, have, together with the previously isolated Krtap8-2 gene (GenBank™ accession no. D86422, Ref. 37), been assigned to group III-A based on our phylogenetic tree alignment (Fig. 1B), and consequently the great similarities in expression (Fig. 4) to the cortical pattern reported for Krtap8-2 (37) are not surprising.
Furthermore, the cortical patterns of group III-A genes seem to be remarkably similar to the spatially restricted patterns of expression of certain members of the KAP19 subfamily of human HGT-type KAP genes in human beard hair, including KAP19.2, -19.3, and -19.7 (10). Group III-A genes are structurally most closely related to KAP19 genes located in a domain of HGT-type KAP genes on human chromosome 21q22.11 (Fig. 3; Ref. 10). However, in contrast to the relative uniformity in Krtap16 expression patterns, some members of the human KAP19 family (i.e. KAP19.1, -19.4, and -19.6) exhibit rather heterogeneous patterns that may either include or be restricted to cuticular cells of beard follicles (10).
DNA Binding Studies-The fact that all Krtap16 genes were found to be down-regulated in the skin of Hoxc13-overexpressing transgenic mice (19) might suggest a regulatory relationship between Hoxc13 and these KAP genes. As a first step toward examining potential direct interactions with Hoxc13, we searched for the presence of presumptive Hoxc13 binding sites in Krtap16 flanking regions. The consensus binding sequence (5′-TT(A/T)ATNPuPu-3′) for human HOXC13 has previously been defined (21), and given the high degree of overall similarity of 98% between the mouse Hoxc13 and human HOXC13 proteins, as well as the structural identity of the DNA-binding homeodomains in both proteins, it is to be anticipated that murine Hoxc13 will interact specifically with the same consensus sequence. A systematic search for bona fide Hoxc13 binding sites within a genomic interval of 23 kb harboring a subcluster of four tandemly arranged Krtap16 genes, i.e. Krtap16-9, -16-1, and two neighboring genes (Figs. 1A and 7A), identified 34 perfect matches for the HOXC13 consensus binding sequence (5′-TT(A/T)ATNPuPu-3′) in this region alone (Fig. 7A). In most cases, one or two presumptive binding sites were closely associated with a TATA box (Fig. 7A). To obtain evidence for interaction of HOXC13 with these presumptive binding sites, we selected the two most proximal sites upstream of Krtap16-5 for EMSAs. The experiments were performed with nuclear extracts from HOXC13-transfected and non-transfected PtK2 cells and 32P-labeled double-stranded oligonucleotides containing a bona fide HOXC13 binding sequence and a mutated version of this sequence.
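The motif search described above is straightforward to emulate. The sketch below scans both strands of a DNA string for the HOXC13 consensus 5′-TT(A/T)ATNPuPu-3′ (Pu = purine) with a regular expression; the toy input embeds the 5′-TTAATGAG-3′ site used in the EMSAs. This is an illustration, not the authors' actual search tool.

```python
import re

MOTIF = re.compile(r"TT[AT]AT[ACGT][AG][AG]")  # 5'-TT(A/T)ATNPuPu-3'
COMP = str.maketrans("ACGT", "TGCA")

def hoxc13_sites(seq):
    """Start positions of consensus matches on both strands of an
    uppercase DNA string (forward-strand coordinates)."""
    hits = [(m.start(), "+", m.group()) for m in MOTIF.finditer(seq)]
    rc = seq.translate(COMP)[::-1]  # reverse complement
    for m in MOTIF.finditer(rc):
        hits.append((len(seq) - m.end(), "-", m.group()))
    return sorted(hits)

# Toy upstream fragment containing the 5'-TTAATGAG-3' EMSA site
print(hoxc13_sites("GGCTATAAAAGGTTAATGAGCCTTGA"))
```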
The results show band shifts only for reactions performed with nuclear extracts from HOXC13-transformed cells in the presence of oligonucleotides that contain an intact 5′-TTAATGAG-3′ sequence matching the HOXC13 consensus binding site 5′-TT(A/T)ATNPuPu-3′, in both cases indicating the formation of sequence-specific HOXC13/DNA complexes (Fig. 7, B and C; compare lanes 1, 2, and 3). In the presence of excess cold competitor oligonucleotide containing either the authentic sequence or an alteration from 5′-TTAATGAG-3′ to 5′-TGCCGGAG-3′, the shifted bands were either abolished or maintained, respectively (Fig. 7, B and C; compare lanes 4 and 5). Supershift assays performed by adding HOXC13-specific antiserum (21) produced a supershifted complex (Fig. 7, B and C, lane 6). Immunohistochemical staining confirmed the typical pattern of Hoxc13 expression (Fig. 7D) previously reported in both mouse and human hair follicles at the RNA (19,38) and at both the RNA and protein levels (21), respectively. Furthermore, this apparent Hoxc13 expression is shifted toward the keratogenous zone of the cortex in GC13 mutant hair follicles (Fig. 7E).

DISCUSSION

Our data show that the Krtap16 genes are embedded within a larger KAP gene domain that occupies ~0.82 Mb of DNA and includes 28 KAP genes (Fig. 1); 20 of these genes, including the Krtap16 genes, have previously been reported in the literature, on occasion under alternative names attributed by different authors. The remaining 8 KAP genes were listed in the mouse genome data base as ESTs and cDNAs with KAP gene similarities (www.ncbi.nlm.nih.gov/genome/guide/mouse). In addition, 3 ESTs for unknown genes without similarities to KAP genes have been mapped to this region (Fig. 1A). Outside of this 0.82 Mb region, no further KAP genes or KAP-related ESTs were identified, thus suggesting that the map presented in Fig. 1 defines the limits of a murine KAP gene complex in the distal portion of MMU16.
Like all KAP genes reported to date (see Refs. 5, 9-11), the Krtap16 genes have a simple structure consisting of a single exon with a short ORF, in this case ranging from 165 to 423 bp. Although all Krtap16 genes may be classified as HGT-type KAPs based on the high glycine/tyrosine content of their presumptive products, ranging from 40.7 to 65.4%, they do belong to diverse groups as revealed by phylogenetic tree analysis (Fig. 1B), and they are found scattered throughout much of the 0.82 Mb KAP gene complex. Interestingly, the putative proteins encoded by Krtap16-7 and -16-8, the two Krtap16 proteins with the highest glycine/tyrosine content of 61.8 and 65.4%, respectively, also have the highest cysteine content of 18 and 13% (Table I). The same applies to other members of the Krtap16-8 and -16-7 subgroups IV-A and -B, respectively, as well as members of subgroup III-B (Table I, Fig. 1). This circumstance might illustrate that the traditional dichotomy of KAP classification into HS (and UHS) and HGT-type KAPs is insufficient for adequate categorization, as has been recognized previously (5).

FIG. 4 legend (fragment): Probes specific for Krtap16-1, -16-3, -16-4, -16-5, -16-7, -16-8, and -16-9 (A, B and D-H) and a sense (SE) control probe specific for Krtap16-3 (C) were hybridized to 10-μm frozen sections of scapular skin derived from 5-day postnatal FVB mice. Hybridization signals were visualized by the NBT/BCIP color reaction that resulted in a dark precipitate; no signal was obtained with the control probe. A schematic representation of the sections depicting the individual follicle layers is shown in the scheme at the lower right (I). Expression of all genes is restricted to the cortex (Ctx), while no hybridization signal was detected in the medulla (M) and surrounding cuticle (Cu). Space bar, 20 μm.

FIG. 6. Krtap16 gene expression during the hair cycle. A, reverse Northern hybridization to arrayed cDNAs specific for Krtap16-1, -16-3, -16-5, and -16-9 as indicated, with complex cDNA probes derived from adult mice whose hair growth was at different stages of the hair cycle. The hair cycle had been synchronized by depilation (25), and skin samples were taken at days 9, 17, and 21 post-depilation (p.d.), corresponding to anagen, catagen, and telogen phases, respectively. Please note the dramatic reduction in signal and loss of signal for all four genes in catagen and telogen, respectively. S, total mouse genomic DNA used as a standard. B/B′, D/D′, E/E′, in situ hybridization with 35S-labeled Krtap16-10-specific antisense probe to frozen sections of adult skin that was depilated for synchronized entry into the anagen, catagen, and telogen phases of the hair cycle as described above; C/C′, hybridization with Krtap16-10-specific sense control probe. Hybridized sections were counterstained with hematoxylin/eosin as shown in brightfield (B-E), and silver grains corresponding to hybridization signal were visualized by light scattering (bright grains) using darkfield microscopy (B′-E′). Please note Krtap16-10 expression in the lower cortex (Ctx) of the growing hair shaft during anagen (B/B′) and the reduced expression in catagen hair (D/D′); no specific hybridization signal was detectable in the shaft of telogen hair (E/E′), and no signal was detectable in anagen follicles upon hybridization with the sense probe (C/C′) used as a control. DP, dermal papilla; HS, hair shaft; M, medulla. Space bar, 50 μm.
By and large, the clustering of the described KAP genes into distinct groups and subgroups as demonstrated by phylogenetic tree alignment corresponds well with their spatial clustering in the genome (Fig. 1, A and B). This overall genomic organization is conserved between mouse and human, where a corresponding KAP gene domain has recently been defined on HSA 21q22.11 (10). In both cases, the domains start at the centromeric end with a group of Krtap13-related (HS-type) genes that is followed by Pmg-related KAPs, which precede a subcluster of HGT-type KAP genes related to murine Krtap8-2, i.e. groups III-A and KAP19 in mouse and human, respectively (Fig. 3). This is followed by groups IV-A and KAP6 in mouse and human, respectively, which in both cases include genes with similarities to Krtap6-1. Subsequent to this, the density of KAP genes decreases and includes group III-B and IV-B genes in mouse and their respective human KAP20 and KAP21 equivalents. Finally, the complexes terminate distally with heterogeneous Pmg-like genes in both cases. This remarkable conservation in gene order on a relatively small scale reflects, on a much larger scale, the conserved synteny between distal MMU16 and HSA21 corresponding to segments of 22.37 and 28.42 Mb of the mouse and human chromosomes, respectively (29). The order of 111 genes in this MMU16 segment has been reported to follow exactly the order of corresponding human genes in this region (29), and consequently the overall conservation of the murine KAP gene domain is of no surprise. Although the mouse and human genomes have been extensively shuffled by chromosomal rearrangements since their phylogenetic separation from a common ancestor ~65-75 million years ago, the gene order in syntenic segments has been found to be largely intact (30). However, this does not exclude isolated local rearrangements and gene family expansions that are likely to have occurred during the evolution of the murine KAP gene domain on chromosome 16 and its human counterpart and resulted in paralogous genes.
The expression patterns of the Krtap16 genes in the keratogenous zone of the hair shaft most closely resemble the patterns of two previously isolated members of their domain, including Krtap8-2 (originally isolated under the name HGTp-type Ia, see Ref. 37) and Krtap11-1 (originally known as Hacl-1, see Ref. 34). Although these two genes are structurally heterogeneous (Fig. 2) and belong to different groups according to our phylogenetic alignment analysis (Fig. 1B), their reported expression patterns are very similar. This relative uniformity contrasts with a considerable diversity in expression patterns among members of the human equivalent of the MMU16 KAP gene domain. In that case, several genes exhibited expression in both cortical and cuticular cells with remarkable variation in proximal-distal expression domains, which for certain Pmg-like KAP genes, including hKAP13.1 and -15.1, initiated in the matrix of beard follicles (10). Furthermore, several of the human KAP genes showed striking cylindrical asymmetries in expression, a feature that previously had been observed also for Krtap6-1 expression in hair follicles of sheep and rabbit (39). These asymmetries are likely to reflect differential expression in local subpopulations of cortical keratinocytes and might be a contributing factor in determining the texture and crimp of hair (39). Accordingly, the greater diversity in human KAP gene expression patterns compared with their counterparts on MMU16 is likely part of the molecular signature determining textural differences between human beard and murine coat hair.

FIG. 7 legend (fragment): ... (21). B and C, EMSAs were performed with two separate double-stranded oligonucleotides, K16-5 (1) (B) and K16-5 (2) (C), that contained the presumptive binding sites, as well as two corresponding control oligonucleotides, K16-5 (1) M (B) and K16-5 (2) M (C), that contained mutated versions of these sites (changes in the HOXC13 consensus binding sequence from 5′-TTAATGAG-3′ to 5′-TGCCGGAG-3′). For a description of gel lanes, see the key to the right. Specific HOXC13-DNA complexes are only seen in lanes 2 and 5 for either oligonucleotide, as indicated by the arrows at the left. Lane 1, reaction with nuclear extracts from non-transfected PtK2 cells; lane 6, supershift assay with HOXC13-specific antibody (21), as marked by the asterisk. D and E, immunohistochemical detection of the Hoxc13 protein expression pattern (brown staining) with anti-HOXC13 antiserum in normal (D) and GC13 mutant (E) hair follicles of 5-day postnatal skin counterstained with hematoxylin. Follicles depicted in D and E have been aligned and divided into 5 horizontal zones of 75 μm, comparable to the zones seen in Fig. 4. Hoxc13 expression originates in the matrix (Mtx) of the bulb and reaches into precortical/cortical regions above the apex of the dermal papilla (P). In GC13 hair (E), this expression reaches into the 4th zone (red arrow); in the same zone, no cortical staining is visible in the normal follicle (D), as marked by the red arrow, although specific staining is seen in the medulla (M). Mtx, matrix. Space bar, 50 μm.
The down-regulation of the Krtap16 genes in Hoxc13-overexpressing (GC13) mice led us to speculate about a regulatory relationship between Hoxc13 and these KAP genes. In support of this idea, we found multiple putative Hoxc13 binding sites associated with members of the MMU16 KAP gene domain (Fig. 7A); EMSAs carried out for two of these sites located 5′ of Krtap16-5 indicated sequence-specific interaction with human HOXC13, whose DNA-binding homeodomain is identical to the murine Hoxc13 homeodomain. Furthermore, both Hoxc13 and the Krtap16 genes are expressed in the same lineage of cortical keratinocytes, although at different stages of differentiation. While the Krtap16 genes are expressed in the keratogenous zone of the cortex, Hoxc13 is expressed in the matrix and the precortical region, which means that there is little overlap between the Krtap16 and Hoxc13 expression domains in normal hair follicles. Taking this circumstance into account, one may consider at least two alternative mechanisms for explaining the down-regulation of KAP genes in skin of GC13 mice. The first is based on indirect regulation mediated by another transcription factor, whose expression might directly be controlled by Hoxc13. A good candidate for this is Hoxc12, whose expression domain in the cortex and precortex of differentiated hair follicles conspicuously overlaps with both the Hoxc13 domain (38) and the Krtap16 expression domains (Fig. 4). During axial patterning, Hox genes frequently restrict the activity of their next downstream neighbor within the same cluster, which usually occupies a more anterior expression domain along the longitudinal embryonic axis, a phenomenon known as posterior prevalence (see Refs. 40 and 41). A similar principle might apply to the regulation of Hox gene activities along the longitudinal axis of the differentiating hair follicle (42). In our case, the concept of a regulatory relationship between Hoxc13 and Hoxc12 is supported by the presence of multiple bona fide Hoxc13 binding sites upstream of Hoxc12 (as well as Hoxc13; data not shown), which itself might positively regulate Krtap16 expression. In this context it is worth mentioning that Hoxc13 might control also its own expression via negative autoregulation, as has been postulated based on the reduced expression of a Hoxc13-lacZ reporter gene construct in Hoxc13-overexpressing mice (19). The presence of putative Hoxc13 binding sites found in genomic regions flanking Hoxc13 is consistent with this idea.
The second mechanism for explaining the down-regulation of Krtap16 genes in hair follicles overexpressing Hoxc13 is based on an expansion of the Hoxc13 expression domain toward the keratogenous zone, thus resulting in Hoxc13 expression in cells that normally express Krtap16 genes but not Hoxc13. Under the premise that Hoxc13 normally acts as a negative regulator of Krtap16 genes to prevent their premature activity, this would result in their reduced expression. Our comparative immunohistochemical analysis of Hoxc13 expression in normal and GC13 hair follicles indeed suggests a distal expansion of the Hoxc13 domain into a zone where Hoxc13 and Krtap16 gene expression overlap (compare Figs. 7 and 4). In summary, the phylogenetically conserved KAP gene complex on mouse chromosome 16 might provide a useful paradigm for studying mechanisms that regulate the coordinated expression of clustered hair-specific genes in a cyclical manner. | 2018-04-03T02:37:14.921Z | 2004-12-03T00:00:00.000 | {
"year": 2004,
"sha1": "5e150df2a3771e3435c284ecb284f15df08f87f4",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/279/49/51524.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "7eb6a436d7f956053632bdaabb8fdd3a5c938ee3",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
17959313 | pes2o/s2orc | v3-fos-license | The neurovascular relation in oxygen-induced retinopathy.
Purpose Longitudinal studies in rat models of retinopathy of prematurity (ROP) have demonstrated that abnormalities of retinal vasculature and function change hand-in-hand. In the developing retina, vascular and neural structures are under cooperative molecular control. In this study of rats with oxygen-induced retinopathy (OIR) models of ROP, mRNA expression of vascular endothelial growth factor (VEGF), semaphorin (Sema), and their neuropilin receptor (NRP) were examined during the course of retinopathy to evaluate their roles in the observed neurovascular congruency. Methods Oxygen exposures designed to induce retinopathy were delivered to Sprague-Dawley rat pups (n=36) from postnatal day (P) 0 to P14 or from P7 to P14. Room-air-reared controls (n=18) were also studied. Sensitivities of the rod photoreceptors (Srod) and the postreceptor cells (Sm) were derived from electroretinographic (ERG) records. Arteriolar tortuosity, TA, was derived from digital fundus images using Retinal Image multi-Scale Analysis (RISA) image analysis software. mRNA expression of VEGF164, semaphorin IIIA (Sema3A), and neuropilin-1 (NRP-1) was evaluated by RT–PCR of retinal extracts. Tests were performed at P15–P16, P18–P19, and P25–P26. Relations among ERG, RISA, and PCR parameters were evaluated using linear regression on log transformed data. Results Sm was low and TA was high at young ages, then both resolved by P25–P26. VEGF164 and Sema3A mRNA expression were also elevated early and decreased with age. Low Sm was significantly associated with high VEGF164 and Sema3A expression. Low Srod was also significantly associated with high VEGF164. Srod and Sm were both correlated with TA. NRP-1 expression was little affected by OIR. Conclusions The postreceptor retina appears to mediate the vascular abnormalities that characterize OIR. Because of the relationships revealed by these data, early treatment that targets the neural retina may mitigate the effects of ROP.
High oxygen has long been associated with pathologic retinal vascular abnormalities [1][2][3][4], the clinical hallmark of retinopathy of prematurity (ROP) [5]. But persistent dysfunction of the neural retina is increasingly recognized as an essential component of the ROP disease process. Persistent deficits in rod and rod-bipolar cell sensitivity are detectable years after acute ROP has resolved [6][7][8][9][10][11]. The severity of these neural deficits varies with the degree of the antecedent vascular disease. The abnormalities in retinal blood vessels that characterize ROP appear within a narrow preterm age range when the developing rod outer segments are elongating rapidly, accompanied by an increase in the rhodopsin content of the retina and burgeoning energy demands in the photoreceptors [12].
In rat models of ROP, rod photoreceptor dysfunction antedates [13] and predicts [14] the subsequent retinal vascular abnormalities, and persists after their resolution [14,15]. The mechanisms that underpin these phenomena remain to be elucidated.
The postreceptor retina, too, is affected by ROP. Moreover, postreceptor sensitivity and the vascular abnormality recover hand-in-hand [14,15]. Indeed, the retinal vasculature and the postreceptor neural retina are in close physical proximity, are immature at the same ages, and develop together by processes termed angiogenesis and neurogenesis, respectively. It stands to reason that there must be "remodeling" [16] mechanisms that mediate the neurovascular congruency. Molecules, called growth factors, that cooperatively control both angiogenesis and neurogenesis [17] are abundant in the developing retina and, thus, are candidate mediators of the neurovascular interplay documented in ROP. We studied mRNA expression of neurovascular growth factors in rat models of ROP.
From the angiogenesis pathway, we selected vascular endothelial growth factor (VEGF). VEGF is essential for normal blood vessel growth in the developing retina [18,19] and is implicated in the pathogenesis of vasoproliferative retinal diseases like ROP [20][21][22]. From the neurogenesis pathway, we selected semaphorin because it acts as an axon growth cone guidance molecule [23] involved in postreceptor retinal development and likely in plasticity and stabilization (as during recovery from an insult) of postreceptor signaling [24]. We also assayed neuropilin-1 (NRP-1), a coreceptor for both VEGF [25,26] and semaphorin [27,28]. NRP-1 is expressed both in vascular endothelial cells and in retinal neurons [29], including in the progenitors of photoreceptors [30]. That NRP-1 mediates both neural and vascular development by the competitive binding of two disparate ligand families, VEGF and semaphorin, supports the hypothesis that retinal neurogenesis and angiogenesis are inseparably linked [31,32]. This is further supported by the observation that VEGF, semaphorin, and NRP-1 are expressed in temporally and spatially overlapping domains during retinal development [24,33]. In addition, semaphorins play a direct role in angiogenesis not mediated by neuropilin [34][35][36]. Thus, as has been documented in oncogenesis where semaphorins have a demonstrated role in the development of vascular supply [37], semaphorins likely play a role in the development of retinal vasculature as well as retinal neurons.
We selected specific isoforms of VEGF and semaphorin based upon the degree of specificity in VEGF/neuropilin and semaphorin/neuropilin binding affinity and activity. NRP-1 is specifically sensitive to the VEGF164 isoform (VEGF164) [26], the ortholog of primate VEGF165, and of the semaphorin family of ligands, has highest affinity for semaphorin IIIA (Sema3A) [38]. Herein, we studied the mRNA expression of these growth factors in rats with oxygen-induced retinopathies (OIR) that model the gamut of severity of human ROP [14]. In every rat, we also obtained numeric measurements of rod photoreceptor and postreceptor neural function and of blood vessel abnormality.
Subjects:
This study followed a cross-sectional design and used 54 Sprague-Dawley albino rats (Charles River Laboratories Inc., Worcester, MA) from nine litters. Rats were assigned to one of three groups (n=18), either of two OIR paradigms or controls. Tests of neural function, vascular abnormality, and growth factor mRNA expression were performed at postnatal day (P) 15-P16, P18-P19, or P25-P26. P15-P16 is immediately after the induction of ROP. At P18-P19, the vascular abnormalities are quite marked [14,15]. At P25-P26, vascular abnormalities are rapidly resolving [14,15]. All experiments were conducted according to the ARVO Statement for the Use of Animals in Ophthalmic and Vision Research with the approval of the Animal Care and Use Committee at Children's Hospital Boston. Induction of retinopathy: As described previously in detail [14], OIR was induced by placing pups and dam in an OxyCycler (Biospherix Ltd., Lacona, NY) and exposing them to one of two different oxygen regimens designed to produce a range of effects on the retinal vasculature and the neural retina. The first OIR regimen, the 50/10 model, involved exposure to alternating 24 h periods of 50±1% and 10±1% oxygen from P0 (the day of birth) to P14 [39]. The second OIR regimen, the 75 model, was exposure to 75±1% oxygen from P7 to P14. Controls were reared in room air (21% oxygen). While the 50/10 model reliably produces peripheral neovascularization [39] and tortuosity of the posterior retinal arterioles [14,15], rat models created by exposure to continuous high oxygen, similar to our 75% model, reliably produce tortuosity of the central retinal vasculature [14], but infrequently produce peripheral neovascularization [40,41]. Both OIR paradigms target the ages at which the rod outer segments are forming and when the rhodopsin content of the retina is rapidly increasing. Of note, at birth, about 50% of the eventual rod photoreceptors are differentiated; however, only a small number of the second-order bipolar cells are present. By P9, differentiation of both rod photoreceptor and postreceptor neurons is essentially complete [42]. But rod outer segment development and vascular coverage remain incomplete.
Analyses of neural function:
Calibration of stimuli-Function of rod photoreceptor and postreceptor retinal neurons was assessed by electroretinography (ERG). The stimuli were delivered using an Espion e2 with ColorDome Ganzfeld stimulator (Diagnosys LLC, Lowell, MA). The rate of photoisomerization per rod (R*) for the green LED flash was calculated by measuring the flux density incident upon an integrating radiometer (IL1700; International Light, Newburyport, MA) positioned at the location of the rats' cornea, and following the procedures detailed by Lyubarsky and Pugh [43]. The LED was treated as monochromatic with λ equal to 530 nm. The intensity of the flash was given by

Φ = Q(λ) · T(λ) · (a_pupil / a_retina) · a_rod(λ),

where Φ is the number of photoisomerizations per rod (R*) elicited by the flash, Q(λ) is the calculated photon density at the cornea, T(λ) is the transmissivity of the ocular media and pre-receptor retina (approximately 80% at 530 nm [44]), and a_pupil, a_retina, and a_rod(λ) are respective estimates of the area of the dilated pupil (approximately 20 mm² [45]), the area of the retinal surface (approximately 50 mm² [46]), and the end-on light collecting area of the rod photoreceptor (approximately 1.5 µm² at 530 nm). a_rod(λ) takes into account the length of the outer segment, the absorption spectrum of the rod, and the optical density of the photopigment, as well as the radius of the photoreceptor [47]. Since several of these parameter values are unknown for the rat rod that is affected by OIR, stimuli are expressed as the expected values in adult control rats. We calculated Q(λ) by

Q(λ) = Pλ · λ / (h · c),

where Pλ is the radiant flux (watts), h is Planck's constant, and c is the speed of light [48]. To evaluate the intensity of "white" xenon arc flashes, we recorded an intensity series with interspersed green and white flashes and estimated the equivalent light based on the shift of the stimulus/response curves for the scotopic b-wave.
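The calibration above is a chain of unit conversions, so a short numeric sketch may help. The Python below is our illustration, not the authors' code; it plugs in the approximate adult-rat values quoted in the text and, as an assumption on our part, treats the flash as a 1-second exposure at the stated flux density.

```python
# Minimal sketch of the flash calibration described above (our code).
H = 6.626e-34   # Planck's constant (J*s)
C = 2.998e8     # speed of light (m/s)

def photoisomerizations_per_rod(radiant_flux_w_per_mm2, wavelength_m=530e-9,
                                transmissivity=0.80, a_pupil_mm2=20.0,
                                a_retina_mm2=50.0, a_rod_mm2=1.5e-6):
    """Estimate R* per rod, assuming a 1-s flash at the given flux density."""
    # Photon density at the cornea: energy flux divided by energy per photon.
    q = radiant_flux_w_per_mm2 * wavelength_m / (H * C)  # photons/mm^2
    # Dilute over the retina, attenuate by the ocular media, then collect
    # with the rod's end-on collecting area (1.5 um^2 = 1.5e-6 mm^2).
    return q * transmissivity * (a_pupil_mm2 / a_retina_mm2) * a_rod_mm2

print(photoisomerizations_per_rod(1e-9))  # e.g., 1 nW/mm^2 at the cornea
```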
Preparation-Prior to ERG testing, rats were dark adapted for a minimum of 2.5 h. Preparations were made under dim red illumination. Subjects were anesthetized with a loading dose of approximately 75 mg·kg −1 ketamine and approximately 7.5 mg·kg −1 xylazine, injected intraperitoneally. This was followed, if needed, by a booster dose (50% of loading dose) administered intramuscularly. The pupils were dilated with a combination of 1% phenylephrine hydrochloride and 0.2% cyclopentolate hydrochloride (Cyclomydril; Alcon, Fort Worth, TX). The corneas were anesthetized with one drop of 0.5% proparacaine hydrochloride (Alcon). A Burian-Allen bipolar electrode (Hansen Laboratories, Coralville, IA) was placed on the cornea and the ground electrode was placed on the tail.
Analysis of rod function-Sample ERG responses are shown in Figure 1A. The a-wave results from the suppression of the circulating current of the photoreceptors.
Rod function was evaluated by ensemble fitting the Hood and Birch formulation [49] of the Lamb and Pugh model [50,51] of the activation of phototransduction to the leading edge of ERG a-waves elicited by five bright white flashes (Figure 1B). The model takes flash intensity, i (R*), and elapsed time, t (sec), as its inputs:

P3(i, t) = {1 − exp[−i · Srod · (t − td)²]} · RmP3, for t > td,

where Srod is the sensitivity measure (R*⁻¹·sec⁻²), td is a delay of approximately 3.5 ms, and RmP3 is the saturated amplitude of the photoreceptor response (µV). Fitting was restricted to the a-wave trough. Srod summarizes the amplification time constants involved in the activation of phototransduction.
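The model equation above had to be reconstructed from the surrounding definitions, so a small executable sketch may make its shape concrete. The function below is our illustration; the parameter values in the example call are arbitrary, not fitted values from this study.

```python
import numpy as np

# Sketch of the reconstructed Hood-and-Birch a-wave model (our code).
def p3(i, t, s_rod, rm_p3, t_d=0.0035):
    """Photoreceptor response (uV) at time t (s) to a flash of i R*."""
    t = np.asarray(t, dtype=float)
    resp = rm_p3 * (1.0 - np.exp(-i * s_rod * (t - t_d) ** 2))
    return np.where(t > t_d, resp, 0.0)  # defined only after the delay t_d

# Illustrative call: 10^4 R* flash, Srod = 5 R*^-1 s^-2, RmP3 = 400 uV.
print(p3(1e4, [0.005, 0.010, 0.020], s_rod=5.0, rm_p3=400.0))
```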
Analysis of postreceptor function-The amplitude of the b-wave was measured from the trough of the a-wave to the peak. At low intensities, under dark-adapted conditions, the b-wave reflects mainly the activity of the rod bipolar cells (Figure 1C) [52,53]. Sensitivity of the b-wave is defined in the linear range of the response/intensity relationship as the amplitude of the b-wave scaled by stimulus intensity [54]. The b-wave amplitude increases in linear proportion to stimulus intensity over a narrow range of dim flash intensities [55]. Therefore, we calculated the sensitivity at threshold by scaling the amplitude of each b-wave by the intensity used to elicit it and fitting

Sf(i) = Sm / (1 + i/i1/2)

to the resulting sensitivities, where Sf (µV·R*⁻¹) is the fractional sensitivity of the b-wave response to a flash of intensity i, Sm (µV·R*⁻¹) is the sensitivity of the postreceptor retina at threshold, and i1/2 (R*) is the stimulus intensity at which sensitivity has fallen to half that at threshold.

Analysis of retinal vessels: In the same experimental sessions, digital photographs of the fundi of both eyes were obtained (RetCam; Clarity Medical Systems Inc., Dublin, CA). Several images were assembled into a composite (Photoshop CS3; Adobe Systems Inc., San Jose, CA) to create a complete view of the posterior pole, defined as the region within the circle bounded by the vortex veins and concentric to the optic nerve head (ONH). The arterioles were identified and their tortuosity measured using Retinal Image multi-Scale Analysis (RISA) software as previously described in detail [14]. In summary, arterioles were cropped from the main image and segmented individually. The segmented image was manually edited to remove extraneous features such as the background choroidal vasculature. RISA constructed a vessel skeleton from which the integrated curvature of the arteriole was measured. Integrated curvature, the sum of angles along a vessel normalized by its length, has demonstrated good agreement with human observer assessments of arteriolar tortuosity [56]. Arteriolar tortuosity, TA (radians·pixel⁻¹), was calculated for each subject as the mean integrated curvature of all measurable arterioles in both eyes (median 9).
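The integrated-curvature definition at the end of the vessel analysis is easy to express in code. The sketch below is our reading of that definition (RISA's exact implementation may differ in detail), and the skeleton coordinates are purely illustrative.

```python
import numpy as np

# Sketch of integrated curvature: sum of turning angles along a vessel
# skeleton, normalized by the skeleton's length (our interpretation).
def integrated_curvature(points):
    pts = np.asarray(points, dtype=float)    # skeleton as (x, y) pixel coords
    seg = np.diff(pts, axis=0)               # successive segment vectors
    angles = np.arctan2(seg[:, 1], seg[:, 0])
    turning = np.abs(np.diff(np.unwrap(angles)))  # angle change at each joint
    length = np.sum(np.hypot(seg[:, 0], seg[:, 1]))
    return turning.sum() / length            # radians per pixel

# A straight vessel scores 0; a wiggly one scores higher.
print(integrated_curvature([(0, 0), (10, 0), (20, 0)]))
print(integrated_curvature([(0, 0), (10, 5), (20, 0), (30, 5)]))
```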
Analysis of growth factor expression:
Tissue preparation-Rats were euthanized with approximately 100 mg·kg −1 pentobarbital administered intraperitoneally. Their corneas were incised and both retinas extracted. These were pooled, flash frozen in liquid nitrogen, and stored at −80° C.
RT-PCR-RNA was extracted using an RNeasy Mini Kit (Qiagen, Valencia, CA). The quantity of RNA extracted was determined spectrophotometrically (SmartSpec 3000; Bio-Rad Laboratories Inc., Hercules, CA). cDNA was reverse transcribed from each RNA sample in triplicate to mitigate the effects of noise and error. In each of three sample tubes, a quantity of RNA solution containing 300 ng of RNA was added to a solution of 8 µl 5X iScript Reaction Mix and 2 µl iScript Reverse Transcriptase (iScript cDNA Synthesis Kit; Bio-Rad). Nuclease-free water was then added to obtain a final volume of 40 µl per sample. Reverse transcription was achieved by incubating the mixtures at 25 °C for 5 min, 42 °C for 30 min, and 85 °C for 5 min. cDNA was stored at −20 °C.
PCR was performed on all three cDNA products employing primers (Table 1) for VEGF164, NRP-1, and Sema3A and using an appropriate temperature gradient. In addition, glyceraldehyde 3-phosphate dehydrogenase (GAPDH) served as the internal control.
Each reaction contained 3.0 µl cDNA and the following reagents (Bio-Rad): 5.0 µl 10x iTaq buffer, 1.5 µl 50 mM MgCl2, 1.0 µl 10 mM dNTP mix, 0.25 µl iTaq DNA polymerase, 15 µl 10 mM forward primer, 15 µl 10 mM reverse primer, and 9.25 µl sterile water for a total volume of 50 µl per reaction. The linear range of each target was determined empirically by increasing the number of cycles and resolving the products on a 2% agarose gel (Bio-Rad). The 12 products of RT-PCR (three each for VEGF164, NRP-1, Sema3A, and GAPDH) and a molecular ruler (EZ-Load 100 bp; Bio-Rad) were resolved on a single 2% agarose gel. Gel imaging was performed using a GEL Logic 100 imaging system with Stratagene Transilluminator 2040 EV (Kodak Scientific Imaging Systems, New Haven, CT). The optical density of each band was determined (ImageJ version 1.38x; NIH, Bethesda, MD). The optical density of each growth factor band was divided by the optical density of its corresponding control gene (GAPDH) band. The two ratios in closest agreement for each growth factor were averaged in subsequent analyses.
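The replicate-handling rule just described (normalize each band to GAPDH, then average the two of three ratios in closest agreement) can be sketched in a few lines. The function and the sample optical densities below are illustrative, not the study's measurements.

```python
from itertools import combinations

# Sketch of the densitometry normalization described above (our code).
def normalized_expression(target_od, gapdh_od):
    ratios = [t / g for t, g in zip(target_od, gapdh_od)]
    # Pick the pair of replicate ratios in closest agreement, then average.
    a, b = min(combinations(ratios, 2), key=lambda p: abs(p[0] - p[1]))
    return (a + b) / 2.0

print(normalized_expression([1.20, 1.35, 0.90], [1.00, 1.05, 1.02]))
```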
Selection of control gene-Though commonly employed as an internal standard in experimental OIR, GAPDH expression is reportedly altered in severe hypoxia [57]. To assure the suitability of GAPDH as a control gene in the OIR models, we ran a pilot study on three 50/10 model, three 75 model, and three control rats (aged P15-P16). β-Actin mRNA and 18S rRNA expression, as well as GAPDH, were measured in this study, and the ratio of each of the former genes' expression to GAPDH was taken. The expression ratios were nearly constant across groups (<0.1 log unit maximum difference). Specifically, in 75 model rats the expression of β-Actin and 18S were each approximately +0.01 log unit relative to controls; between 50/10 model and control rats, the change was approximately +0.1 log unit for β-Actin and −0.1 log unit for 18S. Thus, GAPDH appears to be an appropriate control gene in our experiment.

Data analyses: All data were expressed as ΔLogNormal, that is, the difference between the log of each observed value and the log of the mean value for the P25-P26 controls: ΔLogNormal = log10(observed value) − log10(mean of P25-P26 controls). Group and age effects were evaluated by analysis of variance (the source of the F statistics reported below), and post hoc testing was performed using Tukey's honestly significant difference (q) statistical test. Relations between parameters were evaluated by Pearson product moment correlation. The significance level (α) for all tests was p<0.01.

RESULTS

Figure 2 shows representative fundus images and the retinal arterioles as segmented by RISA from a 50/10 model, a 75 model, and a control rat imaged at P25-P26. The average tortuosity of the arterioles, TA (radians·pixel⁻¹), was highest in the 75 model rat and lowest in the control.

Figure 3 plots mean±SEM ΔLogNormal rod photoreceptor sensitivity, Srod, postreceptor sensitivity, Sm, and arteriolar tortuosity, TA. Srod was not significantly affected by OIR or age. However, Sm was significantly affected by OIR (F=32.2; df=2,45; p<0.001), being more than 0.6 log unit below control values in both OIR models at early ages (P15-P16, P18-P19), but recovered significantly by P25-P26 (F=19.5; df=2,45; p<0.001). The results for TA mirrored those for Sm, with TA high when Sm was low and TA becoming lower when Sm increased. TA was high in OIR rats (F=73.4; df=2,45; p<0.001) at early ages but became significantly more normal by P25-P26 (F=29.0; df=2,45; p<0.001). Of note, Sm remained markedly low even at P25-P26 in 50/10 model rats, while TA remained markedly high in 75 model rats.

Figure 4 plots the three interrelations between these parameters (Srod versus Sm, Srod versus TA, and Sm versus TA) across age and group. All three parameters were correlated. Postreceptor sensitivity depends in part upon photoreceptor sensitivity, and these parameters (Srod, Sm) were positively correlated. Consistent with previous findings in OIR rats, high TA correlated with low Srod. However, the value of the correlation coefficient was larger between Sm and TA than between Srod and TA. Indeed, 33 of the 36 OIR rats tested had reduced postreceptor sensitivity and increased vascular abnormalities relative to the mean for P25-P26 controls, and 29 fell outside the range of values observed in the controls at any age.

As shown in Figure 5, growth factor mRNA expression was significantly altered in OIR rats. In both OIR models, VEGF164, mainly associated with angiogenesis, was elevated at early ages (P15-P16, P18-P19) and decreased significantly with age (F=7.5; df=2,45; p=0.002) to levels well below normal at P25-P26. In control rats, VEGF164 changed little with age. Sema3A, mainly associated with neurogenesis, was significantly elevated in 50/10 model rats (F=8.4; df=2,45; p=0.001); 75 model rats did not significantly differ from controls.
NRP-1, a receptor for both VEGF164 and Sema3A, displayed no significant effect of group or age (nor a group by age interaction).
As shown in Figure 6, across group and age, deficits in postreceptor sensitivity (Sm) were negatively correlated with VEGF164 and Sema3A mRNA expression. NRP-1 receptor mRNA was weakly negatively associated with postreceptor sensitivity (p=0.011, not shown). Overexpression of VEGF164 is documented in OIR models and is also known to promote pathological angiogenesis [58,59]. High expression of VEGF164 mRNA was a significant predictor of high TA. Srod was, in turn, negatively correlated with VEGF164 expression; that is, poor rod function was associated with high VEGF164 mRNA.
DISCUSSION
In these rats with OIR, the function of the postreceptor neural retina was significantly altered. Postreceptor sensitivity, Sm, was low in both OIR models at P15-P16 but recovered with age ( Figure 3). These age-related improvements in neural function were accompanied by improvements in the vascular parameter, TA. In addition, we found that Sm, which measures postreceptor sensitivity at threshold, was correlated with TA in these OIR rats (Figure 4). Rod photoreceptor sensitivity, Srod, was also correlated with TA, though the strength of the correlation was weaker. The mRNA expression of growth factors mediating both angiogenesis and neurogenesis, expressed relative to normal, was also altered ( Figure 5). We found that mRNA of VEGF164 was elevated in OIR rats, in agreement with other reports [58,59]. To our knowledge, this is the first report of alterations in a "neural" growth factor in OIR; Sema3A mRNA expression was upregulated in 50/10 model rats. In ROP, the severity of consequent vascular abnormality depends upon the extent of earlier rod photoreceptor dysfunction, although the rods and the retinal blood vessels are anatomically separated. We replicated the finding [14] that low rod sensitivity (Srod) predicts abnormal blood vessels (TA; Figure 4). Srod was, in turn, negatively correlated with VEGF164 expression (Figure 6). The postreceptor neural retina is driven by rod photoreceptors in the outer retina, and is supplied by the retinal vasculature that traverses the inner retinal surface, sending capillaries deep into the inner neural layers. Thus, the postreceptor neural retina may be the bridge between rods and retinal vasculature and is in a position to mediate the rod and retinal vascular relation (Figure 4). Consistent with this position, in these data, postreceptor sensitivity was significantly associated not just with rod function and vascular abnormality (Figure 4), but also with mRNA expression of VEGF164 and Sema3A ( Figure 6).
There are at least two explanations for the strong negative relation between Sm and TA: 1) the pathologic vasculature associated with high values for TA presumably adversely affects the retinal circulation, while low values for TA putatively represent a salubrious postreceptor environment favorable to Sm; 2) the distressed postreceptor neural retina may signal the need for improved circulation by upregulating mRNA of proangiogenic growth factors such as VEGF or semaphorins. An excess of these signals may induce the pathologic vasculature that resulted in high TA. These explanations are not necessarily mutually exclusive. However, although Sema3A expression was markedly altered by OIR, it was not a significant predictor of blood vessel tortuosity.
Since rod photoreceptor dysfunction antedates and predicts consequent vascular abnormality [14], rod dysfunction may underpin both postreceptor neural dysfunction and retinal vascular abnormality. Deficits in postreceptor sensitivity in OIR are presumably due, in part, to diminished rod signaling, but may be worsened by direct insult to the rod bipolar cell, such as from hypoxia consequent to an avascularized inner retina and exacerbated by oxygen-demanding neighboring rods. Natural responses in the postreceptor retina would be to: 1) remodel neuron-to-neuron connections to enhance sensitivity; and 2) promote vascular development. Both outcomes could be achieved through the mediation of growth factors such as VEGFs and semaphorins expressed by intermingled glia [60,61].
Two retinal targets for intervention are suggested by these data, the molecular crosstalk between postreceptor neurons and their vasculature and the immature rod photoreceptors themselves. Mitigation of VEGF, either by the reduction of its expression or by antagonism of its receptors, has garnered much attention [62][63][64][65][66]. Successful treatment of the retinal vasculature would result in enhanced vascular supply to the postreceptor neurons and presumably improve visual function. However, in our data, at early ages when postreceptor sensitivity was low, VEGF expression was elevated. As postreceptor sensitivity recovered, VEGF expression plummeted. Endogenous VEGF is required for visual function [67]. Possibly, the high VEGF expression instigated successful receptor remodeling. Indeed, anti-VEGF pharmaceuticals have the potential for adverse effects on developing neurons [67][68][69], and thus any anti-VEGF therapy in ROP calls for caution. Moreover, since neural dysfunction antedates the vascular abnormalities [13], it seems unlikely that anti-VEGF therapy alone can fully normalize retinal function in ROP.
If the rods mediate the neurosensory dysfunction in ROP, then treatments that protect the rods may also protect the postreceptor neurons and would possibly reduce VEGF expression. Thus, treatments designed to relieve the burgeoning aerobic demands of the immature rods during the ages when the rod outer segments are elongating should be among the treatments considered for ROP. Simple approaches, such as treatment with light to suppress the circulating current [70], have shown small beneficial effect in OIR rats [71]. Pharmaceutical protection of the immature rods therefore represents a promising, though untested, approach to the management of ROP [72].
It should be noted that several recent approaches may derive some of their efficacy from neuroprotective effects. For example, omega-3 polyunsaturated fatty acids (ω3-PUFAs) increased vessel regrowth after the induction of retinopathy in a mouse model of ROP, reducing consequent pathologic vascularization [73]. One action of ω3-PUFAs is to lower inflammatory cytokine production in retinal microglia in the inner retina, directly mediating angiogenesis. However, ω3-PUFAs are also neuroprotective during ischemia [74], suggesting that some of their effect on the retinal vasculature may, in fact, be due to preservation of rod function. Unfortunately, we are unaware of any assessments of retinal function that have been applied, to date, in an antiangiogenic treatment in OIR models, leaving the role of the rods and postreceptor neurons equivocal. | 2014-10-01T00:00:00.000Z | 2008-12-26T00:00:00.000 | {
"year": 2008,
"sha1": "55f2203951e1a7c505649c1197ea14425f1e3d2c",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "4ff4e334cb3f36aea955a301b6b952ecaf7ee220",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
237230326 | pes2o/s2orc | v3-fos-license | Preliminary Observations on the Pathogenesis of a Virulent Strain of Newcastle Disease Virus in Chickens
Development of Newcastle disease, after experimental and natural infection with the virulent strain VLT of Newcastle disease virus, and its growth and distribution in some selected tissues as assayed by the enumeration of plaques are reported.
The plaque technique developed by Dulbecco (1) has been extensively used in studies of Newcastle disease virus (NDV; references 2, 6-8). Despite its potential value in yielding precise determination of virus, the technique was little utilized in studies of the pathogenesis of the disease in the domestic fowl or in epizootiological investigations.
Earlier workers (3,4,9) have reported successful isolation and enumeration of different strains of NDV in embryonated eggs from various tissues after experimental or natural infection of chickens. In this paper we describe the development of Newcastle disease (ND) in chickens after experimental and natural infection and present data on proliferation of the virus in some selected tissues, as enumerated by the plaque technique.
The virulent strain VLT of NDV, isolated during 1968 from an outbreak of highly fatal ND among chickens at Talamara, Lebanon, was plaque purified and used at its third chick embryo passage level for these experiments. Virus stock was prepared by inoculating approximately 10^4 plaque-forming units (PFU) in 0.1 ml of seed virus into the allantoic cavity of 9-day-old chick embryos. Infected allantoic fluid was stored in 1-ml amounts at −60 °C. Primary chick embryo cell cultures were prepared from minced 9-day-old decapitated chick embryos subjected to repeated trypsinization. The cells were grown in Eagle's minimum essential medium (MEM) containing 5% calf serum and 8% Tryptose phosphate broth. Approximately 5 × 10^6 cells per 2-oz prescription bottle were seeded, and monolayers were overlaid after infection with 6 ml of overlay medium, which consisted of Hanks balanced salt solution without phenol red, 0.5% lactalbumin hydrolysate, 1.0% Noble agar (Difco), 3% horse serum, 1.5% of a 1:1,000 dilution of neutral red, 5% of 4.4% sodium bicarbonate solution, 100 units of penicillin and 100 µg of streptomycin per ml. Monolayers were used for virus assays 3 days after seeding (10). Seventy White Leghorn six-week-old chicks were divided into seven groups of 10 each. They were caged and placed in a room in which birds had never been kept before. To avoid the risk of transmission of infection through feed and water supplies, each group was provided with its own feed and water. One group was maintained as contact controls, and the birds in the remaining groups were inoculated intramuscularly (pectoral muscles) with 0.5 ml of serial 10-fold dilutions of the VLT strain of NDV. Chicks were observed twice daily. At postmortem examination, brain, spleen, trachea, and lung tissues were removed aseptically with separate, sterile instruments to avoid cross contamination from dead chickens as well as from sick ones killed by cervical dislocation. A 10% (w/v) suspension of tissue was made in MEM containing 1,000 units of penicillin and 1,000 µg of streptomycin per ml. Tissue suspensions were kept frozen at −60 °C until tested. All end points were calculated by the Reed and Muench method (5).
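The Reed and Muench method cited above reduces to cumulative pooling plus a linear interpolation to the 50% endpoint. The sketch below is our code implementing the textbook procedure, not code from this paper; the dilution series and counts are illustrative only.

```python
# Sketch of the Reed-Muench 50% endpoint method (our implementation).
def reed_muench_log_ld50(log_dils, dead, alive):
    """log_dils: log10 dilutions ordered most to least concentrated."""
    n = len(log_dils)
    # Pooling convention: deaths accumulate from the dilute end upward,
    # survivors accumulate from the concentrated end downward.
    cum_dead = [sum(dead[i:]) for i in range(n)]
    cum_alive = [sum(alive[:i + 1]) for i in range(n)]
    pct = [100.0 * d / (d + a) for d, a in zip(cum_dead, cum_alive)]
    # Interpolate between the two dilutions whose mortality brackets 50%.
    for i in range(n - 1):
        if pct[i] >= 50.0 >= pct[i + 1]:
            pd = (pct[i] - 50.0) / (pct[i] - pct[i + 1])
            return log_dils[i] + pd * (log_dils[i + 1] - log_dils[i])
    raise ValueError("50% endpoint not bracketed by the dilution series")

# Example: the 50% endpoint falls between the 10^-6 and 10^-7 dilutions.
print(reed_muench_log_ld50([-5, -6, -7, -8],
                           dead=[10, 8, 2, 0], alive=[0, 2, 8, 10]))
```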
The embryo-propagated virus stock had a titer of 10^9.6 50% chicken lethal doses per 0.5 ml, indicating that the virus strain is highly virulent for susceptible chickens. The incubation period varied from 3 to 4 days, but the majority of birds showed symptoms on the third day. Respiratory symptoms with rales were pronounced in almost all birds. Most birds also showed typical nervous symptoms. Death generally occurred within 2 to 4 days after the onset of the symptoms. Torticollis and lateral movement of the head were commonly observed. A few birds developed paralysis of both legs. The gross pathological lesions consisted of extensive involvement of the proventricular submucosa and intestinal follicles. Severe hemorrhagic necrotic lesions adjacent to lymphoid plaques were also common. Infection spread easily to the contacts as signs appeared 4.8 days after probable exposure (Table 1).
A summary of virus titers in various tissues is presented in Table 2. The VLT strain multiplied extensively in the tested tissues, namely, spleen, trachea, brain, and lungs. Virus titers varied between 10^3 and 10^8.1 PFU/g of brain tissue, indicating the relationship between the concentration of virus in the brain and the occurrence of nervous symptoms. Large amounts of virus were present in the tested tissues even on the day the chickens first showed symptoms, indicating that generalization of NDV in chickens probably occurred before the onset of clinical signs. Virus may therefore be excreted 1 or 2 days preceding the appearance of clinical signs. From the practical standpoint, it seems that the trachea is probably more suitable than other tissues for virus recovery. Bird 1848, which had a moderate quantity of virus in its trachea but not in the spleen, brain, or lung, remained apparently healthy, whereas its cagemates died on the sixth day. In bird 1844, which almost recovered after showing respiratory and nervous signs (paralysis of extremities), a high concentration of virus was found in the trachea and brain when killed on the 10th day. It is possible that such birds can become effective carriers. From our study it can be concluded that the strain VLT is highly pathogenic for 6-week-old chickens. In our experience it spread readily and multiplied extensively in various tissues after experimental and natural infection. Critical organs, damage of which reflects the occurrence of symptoms, were the brain, trachea, and lungs. The virus seemed to generalize before the onset of clinical signs, and we consider that it could disseminate during this period. It has been observed that some birds can recover and may possibly become carriers after infection. The potential value of the plaque technique in primary chick embryo cell culture for the study of pathogenesis of ND has been indicated. | 2020-12-10T09:04:11.248Z | 1971-05-01T00:00:00.000 | {
"year": 1971,
"sha1": "b500eb974f99d2dbd4691fe5972c8e813fff6d90",
"oa_license": null,
"oa_url": "https://aem.asm.org/content/aem/21/5/946.full.pdf",
"oa_status": "GOLD",
"pdf_src": "ASMUSA",
"pdf_hash": "cf4743f200c01ce777f9c4a0b40b9128ab61bdc0",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
237631303 | pes2o/s2orc | v3-fos-license | A prospective observational study depicting role of lung ultrasound in pediatric pneumonias
INTRODUCTION
In everyday pediatric practice, a large number of acute respiratory infections are encountered. These infections can be bacterial, viral, or fungal. Pneumonia is infection and inflammation of the airspaces in the lungs, which leads to activation of the inflammatory cascade. This leads to leakage of plasma and exudates and loss of surfactant, resulting in air space loss and consolidation. Globally, pneumonia is the leading cause of pediatric morbidity and mortality, particularly in children younger than 5 years of age. 1 In India, pneumonia accounts for 15.6% of under-5 deaths, and the estimated incidence of clinical pneumonia in India is 0.37 episodes per child-year. 2,3 Pneumonia is broadly classified into community-acquired pneumonia, hospital-acquired pneumonia, and aspiration pneumonia. Community-acquired pneumonia is the major form of pediatric pneumonia and is broadly classified further into lobar pneumonia and bronchopneumonia. Lobar pneumonia is an acute bacterial infection with resultant hepatisation of the lung parenchyma. There is confluent consolidation involving part or the whole of a lobe of the lung. In bronchopneumonia there is patchy consolidation involving one or more lobes, in the same or both lungs.
In infants, pneumonia is usually caused by organisms like S. pneumoniae, S. aureus, and H. influenzae, the most common being S. pneumoniae. 4 In children aged between 2 and 5 years, the main causative agents are viruses such as respiratory syncytial virus (RSV), along with bacteria such as Haemophilus influenzae. [5][6][7] Streptococcus pneumoniae is a common bacterial agent in this age group. In older children, S. pneumoniae and Mycoplasma pneumoniae are the main causative agents. 8,9,10 In pneumonia, the first-line imaging modality is chest x-ray. Imaging can also include CT in some cases for detecting complications and differentiation from other pathologies. The sensitivity of CT is higher; however, chest x-ray continues to be the main diagnostic modality for pneumonia as the cost and radiation hazard of CT are very high. 11,12 A chest x-ray delivers 0.1 mSv, while a chest CT delivers 7 mSv, which is 70 times as much radiation as an x-ray. 13 The risk from x-ray imaging is small when compared with CT, but repeated x-rays accumulate radiation exposure. So, efforts should be made to minimize radiation risks by reducing unnecessary exposure to ionizing x-rays.
Ultrasound is fast, non-ionizing and widely available imaging modality which can be used as a possible alternative to x-ray. In a normal subject the pleura is the only visible structure and due to high acoustic impedance of the air, ultrasound waves are almost completely reflected. Due to repetitive reverberation A-lines are generated that are parallel to the pleura and are seen as a series of echogenic parallel lines distally, equidistant from one another. 14 However lung parenchyma can become accessible to ultrasound beams if air content is replaced by pathological process. Pneumonia leads to consolidation and if the consolidation reaches the pleural surface, it is possible to visualize the lesion by ultrasound.
On USG, consolidation appears as a hypoechoic area that contains multiple echogenic lines that represent air bronchograms. 15 It is important to differentiate consolidation from obstructive atelectasis. The presence of a dynamic air bronchogram, which shows branching echogenic linear lines moving to and fro with breathing, helps to differentiate consolidation from obstructive atelectasis. 16 In cases of whole-lobe involvement, the consolidation appears well defined. But in cases of small lobar consolidations, the deeper borders of the consolidation demonstrate an irregular interface due to underlying aerated lung.
The objective of this study was to compare the diagnostic ability of lung ultrasound with chest x-ray in pediatric pneumonia to assess whether lung ultrasound can be used as an alternative imaging modality and avoid long term effects of radiation using X rays.
METHODS
This was a prospective observational study, conducted in department of radio-diagnosis at Indira Gandhi medical college and hospital, Shimla (Himachal Pradesh) from 1 st July 2018 to 30 th June 2019.
There were 70 patients with a clinical diagnosis of pneumonia who reported to the department of radiodiagnosis for CXR, out of whom 55 were enrolled in the study. The remaining patients had severe pneumonia and therefore were excluded from the study. The inclusion criteria were age of children <18 years and indoor or outdoor patients with clinical suspicion of pneumonia. Children having severe pneumonia, respiratory distress requiring oxygen, or septic shock were excluded from the study.
The research procedure was in accordance with the approved ethical standards of institute. An informed written consent was taken from the parents/guardians of the admitted children. Each child with clinical suspicion and blood investigations suggestive of pneumonia was taken for chest x-ray. Chest x-ray AP supine or PA view were done depending on age of child. Patients with positive finding for pneumonia on chest x-ray were subjected to lung ultrasound. The lung ultrasound was performed on GE LOGIQ P6 ultrasound machine, using convex probe with frequency range of 4-5.5 MHz and linear probe with frequency range of 10-13 MHz transducer. During LUS, longitudinal and transverse scans of the anterior, lateral and posterior aspects of thorax were performed. Thorax was scanned in supine and seated positions. The anterior chest wall was delineated from the parasternal to the anterior axillary line. The lateral area was delineated from the anterior to the posterior axillary line. The posterior area was considered as the zone beyond the posterior axillary line. Lung ultrasonography findings were described as either normal pattern with A-lines or subpleural lung consolidations. Lobar and patchy consolidations were separately described. Air bronchogram and dynamic air bronchogram were subsequently looked for in every consolidation patch detected by ultrasound. Data was compared for determining diagnostic accuracy of lung ultrasound with chest x-ray. The sensitivity, specificity, positive predictive value, negative predictive value, positive likelihood ratio, negative likelihood ratio with 95% confidence interval were calculated using software Epi info version 7.
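For reference, the diagnostic-accuracy measures named above can be computed by hand from a 2×2 table; the study used Epi Info 7, so the Python below is only an equivalent illustration. The example call assumes, as our reading of the Results, that the 23 bronchopneumonia cases served as the negatives for the lobar-pneumonia analysis.

```python
# Sketch of standard 2x2-table diagnostic statistics (our code, not Epi Info).
def diagnostic_metrics(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_plus": sens / (1 - spec) if spec < 1 else float("inf"),
        "lr_minus": (1 - sens) / spec,
    }

# Lobar pneumonia vs. chest x-ray: 29 of 32 detected, no false positives.
print(diagnostic_metrics(tp=29, fp=0, fn=3, tn=23))
```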
RESULTS
This study was performed on 55 children fulfilling the inclusion criteria, of whom 33 (60%) were male and 22 (40%) female. The mean age was 3.7 years. The maximum number of patients was in the age group of 0-1 year (n=26), comprising 47.2% of the total (Table 1). On chest x-ray, abnormalities were detected in all 55 cases. Out of 55 cases, on the basis of chest x-ray, a final diagnosis of lobar pneumonia was made in 32 (58.20%) cases and bronchopneumonia in 23 (41.80%) cases. Among the 32 cases of lobar pneumonia, LUS detected consolidation in 29 (90.6%) cases; 3 (9.37%) cases were not picked up by LUS. So, LUS was found to be an effective imaging modality for lobar pneumonia, with high sensitivity (90.63%) and specificity (100%) (Table 2).
Among the 23 cases of bronchopneumonia diagnosed on chest x-ray, consolidation patches were detected in 20 (86.9%) cases by LUS. The remaining 3 (13.04%) cases were not detected by LUS, while 3 cases were false positives for patchy consolidation. LUS was found to be slightly less sensitive (86.96%) and specific (90.63%) for bronchopneumonia in comparison with its performance in lobar pneumonia (Table 3).
Dynamic air bronchograms were seen consistently, with high sensitivity and specificity, in lobar pneumonia consolidation patches; however, the sensitivity of the dynamic air bronchogram sign was lower in bronchopneumonia, as shown in Tables 4 and 5.
DISCUSSION
Mortality due to pneumonia is strongly linked to factors like undernutrition, air pollution, and lack of adequate health care. UNICEF and NFHS data attribute 15% of under-five deaths to pneumonia. 17,18 Awasthi et al, in a study conducted on 3351 children, found that children in the age group of 2-11 months have a 3 to 5 times higher incidence of community-acquired pneumonia than those in the 12-59 months age category. 19 Our study showed that the maximum number of patients was in the age group of 0-1 year, i.e., infants, who constituted 47.2% of all cases. Children up to 5 years of age constituted 85.3% of all cases in our study.
Chest x-ray is the primary imaging modality for diagnosing pneumonia in children. Owing to radiation risks in children, chest ultrasound was tried as an alternative imaging modality. Reali et al, in a study comparing chest x-ray with lung ultrasound, showed that ultrasound has a sensitivity and specificity of 94% and 96%, respectively, for detection of consolidation in pneumonia. 20 Balk et al, in a similar study, reported that LUS has a sensitivity of 95.5% and specificity of 95.3% for detection of pneumonia. 21 Caiulo et al and Maria et al showed a similarly high diagnostic capability of lung ultrasound for lobar pneumonia. 22,23 Our study showed that LUS has high sensitivity and specificity for detection of lobar pneumonia as well as bronchopneumonia. Two cases with lobar consolidation and similarly three cases of bronchopneumonia were not detected by LUS, as the consolidation patches were lying deep to the scapula and had no pleural contact. Bronchopneumonia was found to be comparatively hard to detect on LUS owing to small patchy consolidations and motion artefacts while performing LUS on a child.
Dynamic air bronchograms on LUS are linear or dot-like hyperechoic artefacts in consolidation that are visualized propagating with respiration. They are specific for consolidation because their dynamic nature proves that the small bronchi in the consolidation are in communication with the main bronchi and are not just air trapped in atelectasis. Lichtenstein et al described that the dynamic air bronchogram has a sensitivity of 61% and specificity of 94% for detection of consolidation on LUS. 16 Bitar et al studied 73 cases with consolidation on LUS and found a dynamic air bronchogram in 58 patients, a sensitivity of 73.41%. 24 Our study is in accordance with these studies reported in the literature, showing the dynamic air bronchogram sign in every case of lobar consolidation detected by LUS. In bronchopneumonia, the sensitivity of the dynamic air bronchogram was lower compared with lobar pneumonia. This was due to the fact that some infants and toddlers were uncomfortable during the scan, so the dynamic nature of the air bronchogram was difficult to record. Furthermore, the dynamic air bronchogram was difficult to record in small subpleural patches of consolidation. The limitations of the study were its small sample size and the nonavailability of a portable ultrasound unit. Statistical analysis could have been better with a larger sample size, and a portable ultrasound unit could have helped in recruiting more patients into the study.
CONCLUSION
In India, pneumonia is a major cause of morbidity and mortality in the pediatric age group. Imaging of pediatric pneumonia relies mainly upon chest x-ray. The present study was undertaken to evaluate lung ultrasonography as an alternative to chest x-ray in pneumonias. In our study, lung ultrasound showed high accuracy, sensitivity, and specificity in the detection of lobar consolidation as well as bronchopneumonia. The only drawback of ultrasound in comparison with chest x-ray was that it was unable to detect deep-lying consolidation patches having no pleural contact. The dynamic air bronchogram sign, which differentiates consolidation from atelectasis, was found consistently in all lobar pneumonia cases detected by LUS. In bronchopneumonia, dynamic air bronchograms were missed in a few cases, resulting in a lower sensitivity of 73.91% owing to the difficulty of detecting the sign in small consolidation patches. On the basis of the findings of this study and the non-ionizing nature of ultrasound, it is recommended that lung ultrasonography be considered as an alternative to chest x-ray in pediatric pneumonias. | 2021-09-25T16:23:27.657Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "ccec27740be7bc42b1094a4bce526fee583d41ef",
"oa_license": null,
"oa_url": "https://www.ijpediatrics.com/index.php/ijcp/article/download/4394/2820",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "79f072988aa6fae3c30b930a1fdc39a0aaaaadbe",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
4039142 | pes2o/s2orc | v3-fos-license | Maternal Prepregnancy Body Mass Index and Gestational Weight Gain on Pregnancy Outcomes
Objective The aim of the present study was to evaluate the single and joint associations of maternal prepregnancy body mass index (BMI) and gestational weight gain (GWG) with pregnancy outcomes in Tianjin, China. Methods Between June 2009 and May 2011, health care records of 33,973 pregnant women were collected and their children were measured for birth weight and birth length. The independent and joint associations of prepregnancy BMI and GWG based on the Institute of Medicine (IOM) guidelines with the risks of pregnancy and neonatal outcomes were examined using logistic regression. Results After adjustment for all confounding factors, maternal prepregnancy BMI was positively associated with risks of gestational diabetes mellitus (GDM), pregnancy-induced hypertension, caesarean delivery, preterm delivery, large-for-gestational age infant (LGA), and macrosomia, and inversely associated with risks of small-for-gestational age infant (SGA) and low birth weight. Maternal excessive GWG was associated with increased risks of pregnancy-induced hypertension, caesarean delivery, LGA, and macrosomia, and decreased risks of preterm delivery, SGA, and low birth weight. Maternal inadequate GWG was associated with increased risks of preterm delivery and SGA, and decreased risks of LGA and macrosomia, compared with maternal adequate GWG. Women with both prepregnancy obesity and excessive GWG had 2.2–5.9-fold higher risks of GDM, pregnancy-induced hypertension, caesarean delivery, LGA, and macrosomia compared with women with normal prepregnancy BMI and adequate GWG. Conclusions Maternal prepregnancy obesity and excessive GWG were associated with greater risks of pregnancy-induced hypertension, caesarean delivery, and greater infant size at birth. Health care providers should inform women to start the pregnancy with a BMI in the normal weight category and limit their GWG to the range specified for their prepregnancy BMI.
Introduction
Improvements of maternal, fetal, and child health are key public health goals. In recent years, maternal prepregnancy body mass index (BMI) has increased among the childbearing age women in developed countries [1]. It has been shown that women who are overweight or obese at the start of pregnancy are at increased risks of poor maternal and child health outcomes. Several recent studies reported that prepregnancy BMI was positively associated with infant birth weight [2,3]. Furthermore, women who gain weight excessively or inadequately during pregnancy are at increased risks of poor maternal and child health outcomes [4][5][6]. Weight gain during pregnancy within the recommended range (11 to 40 pounds) remained constant during the last 10 years [7]. Several studies have shown that maternal excessive gestational weight gain (GWG) was associated with increased risks of pregnancy-induced hypertension, gestational diabetes mellitus (GDM), caesarean delivery and large for gestational age infant, and maternal inadequate GWG was associated with increased risks of low birth weight and small for gestational age infant [4][5][6]. The Danish National Birth Cohort found that excessive GWG increased risks of caesarean delivery and large for gestational age infant, and inadequate GWG increased the risk of having a small baby [3].
In 2009, the Institute of Medicine (IOM) published new recommendations for weight gain during pregnancy [8]. A recent US study reported that 73% of pregnant women had excessive GWG according to 2009 IOM guidelines [9]. The IOM guidelines based on different prepregnancy BMI are not only suitable for women in developed countries, but also suitable for Chinese women [10]. It has been shown that being prepregnancy overweight or obese and having an excessive GWG, as well as being underweight and having an inadequate GWG, were associated with increased risks for adverse pregnancy outcomes in women from China and other countries as well [11]. However, few studies estimated the joint associations of maternal prepregnancy BMI and GWG with pregnancy outcomes [3,9]. Therefore, the aim of the present study was to evaluate the single and joint associations of maternal prepregnancy BMI and GWG with pregnancy outcomes in Tianjin, China.
Study Sample
Tianjin is the fourth largest city with over 12.9 million residents in northern China, and 4.3 million residents live in six central urban districts. Tianjin consists of 16 county-level administrative areas, including six central urban districts, one new urban district, and nine counties that govern towns and rural areas. The prenatal care and children health care in six central urban districts are a routine of a three-tier care system consisting of approximately 65 primary hospitals, 6 district-level Women's and Children's Health Centers (also including secondary hospitals), and a city-level (Tianjin) Women's and Children's Health Center (also including tertiary hospitals). In Tianjin, all pregnant women are registered at the primary hospitals, and in the 32 nd gestational week, they are referred to a secondary hospital or a tertiary hospital for management till delivery. All children are given health examinations in the postnatal period, infancy, and at preschool. Tianjin Women and Children's Health Center is the leader of the 3-tier care system and responsible for organization, co-ordination and implementation of women and child health care, research and promotion projects.
Health care records for both pregnant women and their children have been collected and available in electronic form since 2009 [12,13]. Pregnant Women Health Records start within the first 12 weeks of pregnancy, and include general information (age, occupation, education, date of first visit, numbers of pregnancy/ infants, last menstrual period, expected delivery date, smoking habits, etc), history of diseases, family history of diseases, clinical measurements (height, weight, blood pressure, gynaecological examinations, ultrasonography, GDM screening test and other lab tests), complications during pregnancy, pregnancy outcomes (delivery modes, labor complications, etc), and postnatal period examinations (,42 days after delivery) [13]. Children Health Records include information from newborns (date of birth, sex, gestational weeks of birth, birth weight, birth recumbent length, Apgar score, etc), postnatal period (,42 days after birth) (names of the child and his/her parents, family history of diseases, feeding modalities, weight, and recumbent length) [13]. We collected 43,854 records of both mothers and their infants who were born in the central urban districts between June 2009 and May 2011. The present study included 33,973 mother-child pairs (77.5%) with all information and clinical measurements after excluding multiple births (n = 987), stillbirth (n = 143), multiparous women (n = 2), and mother-child pairs missing any variables required for this analysis (n = 8,749). Compared with mothers excluded in the present study, the mothers included were younger (27.6 vs. 27.8 years old) and had a lower prepregnancy BMI (22.1 vs. 22.6 kg/ m 2 ). The study and analysis plan was approved by the Tianjin Women's and Children's Health Center Institutional Review Board. Tianjin Women's and Children's Health Center has agreed to waive the need for written informed consent from all participants involved in our study because we use the electronic dataset from health care records.
Measurements
Mothers' anthropometric data were collected during the pregnancy by specially trained gynecologists in the primary hospitals by using the same devices. Weight and height were measured in light clothing and no shoes using a beam balance scale (RGZ-120, Jiangsu Suhong Medical Instruments Co., China). Blood pressure was measured using a standardized mercury sphygmomanometer (XJ11D, Shanghai Medical Instruments Co., China). Weight was measured to the nearest 0.01 kg using a digital scale (TCS-60, Tianjin Weighing Apparatus Co., China). Length was measured to the nearest 0.1 cm using a recumbent length stadiometer (YSC-2, Beijing Guowangxingda, China). We have done a validity study to compare the electronic data of measurements of birth weight and hospitals' measurements of birth weight among 454 children in six major hospitals. The correlation between the two measurements is 0.991. We have also done a validity study to compare the electronic data of measurements of height and weight with the same visit's measurements of height and weight by trained health workers among 200 pregnant women in four different local health centers. The correlations between electronic data and measurement data are 0.998 for body weight and 0.997 for height in these pregnant women.
Body mass index (BMI) was calculated by dividing weight in kilograms by the square of height in meters. Prepregnancy BMI was categorized as underweight (BMI <18.5 kg/m²), normal weight (18.5 ≤ BMI < 24 kg/m²), overweight (24 ≤ BMI < 28 kg/m²), or obese (BMI ≥28 kg/m²) using the standard of the Working Group on Obesity in China [14]. The Chinese BMI classification standard was used because of its superior sensitivity and specificity for identifying risk factors including hypertension, type 2 diabetes, and dyslipidemia in the Chinese population [15-17]. Prepregnancy BMI was calculated using the weight and height recorded at the first prenatal visit within the first 12 weeks of pregnancy. A previous study reported a high correlation between self-reported prepregnancy weight and weight recorded at the first visit [18]. Maternal weight gain during pregnancy was calculated as the difference between prepregnancy and delivery weight. Adequacy of GWG was defined according to Chinese maternal prepregnancy BMI status and the 2009 IOM GWG recommendations: 12.5-18 kg (prepregnancy BMI <18.5 kg/m²), 11.5-16 kg (BMI 18.5-23.9 kg/m²), 7-11.5 kg (BMI 24.0-27.9 kg/m²), and 5-9 kg (BMI ≥28 kg/m²) [8]. We applied the US IOM GWG recommendations to the Chinese BMI categories because no official recommendations exist in China.
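The classification rules above are mechanical enough to state directly in code. The following is a minimal sketch (not the authors' code; the function and variable names are hypothetical) of the Chinese BMI categories and the IOM GWG adequacy ranges as applied in this study:

```python
def bmi_category(weight_kg: float, height_m: float) -> str:
    """Chinese BMI classes: <18.5, 18.5-23.9, 24.0-27.9, >=28 kg/m2 [14]."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 24.0:
        return "normal"
    if bmi < 28.0:
        return "overweight"
    return "obese"

# 2009 IOM recommended total GWG ranges (kg), mapped onto the Chinese
# prepregnancy BMI categories as described in the text.
IOM_RANGE = {
    "underweight": (12.5, 18.0),
    "normal": (11.5, 16.0),
    "overweight": (7.0, 11.5),
    "obese": (5.0, 9.0),
}

def gwg_adequacy(prepreg_weight_kg: float, delivery_weight_kg: float,
                 category: str) -> str:
    """GWG = delivery weight minus prepregnancy weight, judged against IOM."""
    gwg = delivery_weight_kg - prepreg_weight_kg
    low, high = IOM_RANGE[category]
    if gwg < low:
        return "inadequate"
    if gwg > high:
        return "excessive"
    return "adequate"

# Example: a 1.62 m woman entering pregnancy at 58 kg ("normal") who
# delivers at 75 kg has gained 17 kg, i.e., excessive GWG.
cat = bmi_category(58, 1.62)
print(cat, gwg_adequacy(58, 75, cat))  # -> normal excessive
```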
We considered the risks of GDM, pregnancy-induced hypertension, caesarean section, preterm birth (preterm delivery), large-for-gestational-age infant, small-for-gestational-age infant, macrosomia, and low birth weight as pregnancy complications and pregnancy outcomes. GDM was diagnosed based on a 75-g 2-hour oral glucose tolerance test (OGTT) at 24-28 weeks of pregnancy [19]. Women with impaired glucose tolerance (IGT) (fasting glucose <126 mg/dL and 2-hour glucose ≥140 and <200 mg/dL) and diabetes (fasting glucose ≥126 mg/dL or 2-hour glucose ≥200 mg/dL) were classified as having GDM according to WHO diagnostic criteria [20]. Pregnancy-induced hypertension was diagnosed by a systolic blood pressure ≥140 mmHg or diastolic blood pressure ≥90 mmHg in the 3rd trimester or the use of antihypertensive drugs [21]. Preterm delivery was defined as delivery at <37 gestational weeks. Z scores for birth weight for gestational age and birth length for gestational age were calculated using our own study population means and standard deviations. A small-for-gestational-age (SGA) infant was defined as an infant with a standardized birth weight <10th percentile, whereas a large-for-gestational-age (LGA) infant was defined as an infant with a standardized birth weight >90th percentile. Neonatal outcomes also included low birth weight (birth weight <2500 g) and macrosomia (birth weight ≥4000 g).
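A sketch of these birth-size definitions follows (again hypothetical, not the authors' code; the column names are assumed): z scores are computed within the study population by gestational age, SGA/LGA are flagged at the 10th/90th percentiles of the standardized birth weight, and LBW and macrosomia use the absolute cutoffs above.

```python
import numpy as np
import pandas as pd

def birth_size_flags(df: pd.DataFrame) -> pd.DataFrame:
    """Flag SGA/LGA/LBW/macrosomia; expects columns 'birth_weight_g'
    and 'gest_weeks' (column names are assumptions, not from the paper)."""
    out = df.copy()
    grp = out.groupby("gest_weeks")["birth_weight_g"]
    # z score within each gestational-age stratum, using the study
    # population's own means and standard deviations.
    out["bw_z"] = (out["birth_weight_g"] - grp.transform("mean")) / grp.transform("std")
    p10, p90 = np.percentile(out["bw_z"], [10, 90])
    out["sga"] = out["bw_z"] < p10            # <10th percentile
    out["lga"] = out["bw_z"] > p90            # >90th percentile
    out["low_birth_weight"] = out["birth_weight_g"] < 2500
    out["macrosomia"] = out["birth_weight_g"] >= 4000
    return out

# Tiny synthetic example.
rng = np.random.default_rng(1)
demo = pd.DataFrame({
    "gest_weeks": rng.integers(36, 42, 500),
    "birth_weight_g": rng.normal(3350, 450, 500),
})
print(birth_size_flags(demo)[["sga", "lga"]].mean())  # ~0.10 each
```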
Statistical analyses
The general characteristics of both mothers and children across categories of maternal prepregnancy BMI and GWG were compared using the General Linear Model and the chi-square test. Logistic regression was used to assess the single and joint associations of maternal prepregnancy BMI and GWG with the risks of pregnancy and neonatal outcomes. The analyses were adjusted for maternal age, maternal height, maternal education, smoking, family income, maternal occupation, gestational age, and birth weight (where appropriate). The significance of the trend over the categories of maternal prepregnancy BMI and GWG was tested in the same models by assigning an ordinal numeric value to each category. The criterion for statistical significance was <0.05 (two-sided tests). All statistical analyses were performed with PASW for Windows, version 20.0 (Statistics 20, SPSS, IBM, USA).
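As an illustration of the modeling step, a single-outcome logistic regression with the stated covariates might look as follows in Python/statsmodels (the study itself used PASW/SPSS; the data and variable names here are synthetic and hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the mother-child dataset (column names hypothetical).
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "pih": rng.integers(0, 2, n),  # pregnancy-induced hypertension (0/1)
    "bmi_cat": rng.choice(["underweight", "normal", "overweight", "obese"], n),
    "gwg_cat": rng.choice(["inadequate", "adequate", "excessive"], n),
    "age": rng.normal(27.6, 3.0, n),
    "height_cm": rng.normal(162, 5.0, n),
    "education": rng.choice(["low", "middle", "high"], n),
    "smoking": rng.choice(["no", "yes"], n),
    "income": rng.choice(["low", "middle", "high"], n),
    "occupation": rng.choice(["manual", "office", "other"], n),
})

# Logistic regression for one binary outcome on the BMI and GWG categories
# (reference levels: normal BMI, adequate GWG) plus the listed confounders.
model = smf.logit(
    "pih ~ C(bmi_cat, Treatment('normal')) + C(gwg_cat, Treatment('adequate'))"
    " + age + height_cm + C(education) + C(smoking) + C(income) + C(occupation)",
    data=df,
).fit(disp=0)

# Odds ratios with 95% confidence intervals, as reported in Tables 2 and 3.
or_table = np.exp(model.conf_int())
or_table["OR"] = np.exp(model.params)
print(or_table)
```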
Results
The general characteristics of both mothers and children based on maternal prepregnancy BMI and GWG categories are presented in Table 1. Mothers who were overweight or obese before pregnancy were older and had a lower education level and a lower family income compared with mothers of normal prepregnancy weight. Compared with mothers with adequate GWG, mothers with excessive GWG were younger, had a higher prepregnancy BMI, and reported a lower education level, while mothers with inadequate GWG reported a lower education level and a lower family income.

Table 2 shows the relative risks of maternal outcomes by single and joint effects of maternal prepregnancy BMI and GWG. Numbers of subjects for maternal outcomes by joint effects of maternal prepregnancy BMI and weight gain during pregnancy are presented in Table S1. After adjustment for all confounding factors, maternal prepregnancy BMI was positively associated with risks of GDM, pregnancy-induced hypertension, caesarean delivery, and preterm delivery. Maternal excessive GWG was associated with increased risks of pregnancy-induced hypertension and caesarean delivery and a decreased risk of preterm delivery, and maternal inadequate GWG was associated with an increased risk of preterm delivery, compared with maternal adequate GWG. In the joint analyses of maternal prepregnancy BMI and GWG with maternal outcomes, the positive associations of prepregnancy BMI with the risks of GDM, pregnancy-induced hypertension, caesarean delivery, and preterm delivery were consistent across subjects with different levels of GWG. Women with both prepregnancy obesity and excessive or adequate GWG had the highest (2.2- to 7.1-fold) risks of GDM, pregnancy-induced hypertension, and caesarean delivery compared with women with normal prepregnancy BMI and adequate GWG.

Table 3 shows the relative risks of neonatal outcomes by single and joint effects of maternal prepregnancy BMI and GWG. Numbers of subjects for neonatal outcomes by joint effects are also presented in Table S1. After adjustment for all confounding factors, maternal prepregnancy BMI was positively associated with risks of LGA and macrosomia and inversely associated with risks of SGA and low birth weight. Maternal excessive GWG was associated with increased risks of infant LGA and macrosomia and decreased risks of infant SGA and low birth weight, and maternal inadequate GWG was associated with an increased risk of infant SGA and decreased risks of infant LGA and macrosomia at birth, compared with maternal adequate GWG. The positive associations of maternal prepregnancy BMI with the risks of infant LGA and macrosomia, and the inverse associations with the risks of infant SGA and low birth weight, were consistent across mothers with different levels of GWG, except in obese mothers with inadequate or adequate GWG. Infants born to mothers with prepregnancy obesity and excessive GWG had the highest (4.0- to 4.1-fold) risks of LGA and macrosomia, and infants born to mothers with both prepregnancy underweight (BMI <18.5 kg/m²) and inadequate GWG had the highest (2.2-fold) risk of SGA, compared with children born to mothers with both normal prepregnancy weight and adequate GWG.
Discussion
The present study indicated that maternal prepregnancy obesity and excessive GWG were associated with greater risks of pregnancy-induced hypertension, caesarean delivery, and greater infant size at birth. Meanwhile, maternal prepregnancy underweight was associated with increased risks of infant SGA and low birth weight, and maternal inadequate GWG was associated with increased risks of preterm delivery and infant SGA.
Several studies have found that the risk of pregnancy-induced hypertension was greater among women who entered pregnancy overweight or obese and among those with excessive GWG [3,22-24]. The Avon Longitudinal Study of Parents and Children (ALSPAC) found that greater GWG in early pregnancy (up to 18 weeks) was independently associated with an increased risk of gestational hypertension, whereas GWG in midpregnancy (18-29 weeks) was not associated with blood pressure change in late pregnancy (29-36 weeks) [24]. Obesity is known to be an important risk factor for pregnancy-related hypertension and preeclampsia [25]. Frederick et al. found that every 1 kg/m² increase in prepregnancy BMI resulted in an 8% increased risk of preeclampsia (adjusted RR = 1.08; CI = 1.05-1.11) [26]. Obese women have been shown to have increased blood volume, cardiac output, and blood pressure during pregnancy [8,27]. Moreover, women who develop hypertension during pregnancy are more likely to experience edema than women who remain normotensive, and this in turn may result in greater GWG. In the present study, women who were obese before pregnancy and had excessive GWG showed an almost 6-fold risk of pregnancy-induced hypertension compared with women with normal prepregnancy BMI and adequate GWG. In addition, we also found that women with prepregnancy overweight or obesity and adequate GWG had a higher risk of pregnancy-induced hypertension. Our findings indicate that higher prepregnancy BMI might play an important role in the development of pregnancy-induced hypertension.
In the present study, the relative risks of GDM were higher in women with prepregnancy overweight and obesity. In the joint analyses of maternal prepregnancy BMI and GWG, women with prepregnancy overweight or obesity and adequate GWG had a 2.6- to 3.6-fold risk of GDM, and women with prepregnancy overweight or obesity and excessive GWG had a 1.6- to 2.2-fold risk of GDM, compared with women with normal weight and adequate GWG. Thus, our findings indicate that higher prepregnancy BMI plays an important role in the development of GDM. Previous studies reported that GDM was an adverse outcome of excessive GWG [28,29]. However, like some other studies [30], the present study did not find an association of excessive GWG with GDM risk. This might be because women diagnosed with GDM adopt more lifestyle interventions and control their weight gain during pregnancy. In addition, a previous study showed that insulin sensitivity might increase or decrease during early pregnancy depending on the woman's prepregnancy insulin sensitivity status. In very insulin-sensitive women, insulin sensitivity most often decreases and is accompanied by an increase in adipose tissue [31]. In contrast, among more insulin-resistant women (e.g., those who have GDM), insulin sensitivity often increases and is accompanied by a potential loss of adipose tissue [32]. These physiologic changes may help to explain, in part, why women with GDM did not gain relatively more weight during pregnancy.
The positive associations of higher maternal prepregnancy BMI and excessive GWG with larger infant birth weight were similar to those in previous studies [3,11,33]. A clear association exists between maternal obesity and infant size at birth. In recent years, researchers have recognized that excessive GWG is also associated with increased weight at birth [3]. In the present study, women with excessive GWG had a 2.32-fold risk of infant LGA compared with women with adequate GWG. Similarly, mothers with prepregnancy overweight or obesity had a 1.73- to 2.80-fold risk of infant LGA compared with mothers of normal prepregnancy weight. A recent study reported that the greatest difference in neonatal fat mass was observed among prepregnancy overweight women with excessive GWG compared with overweight women with adequate GWG [34]. Among women within the excessive GWG category, infants born to normal weight mothers had lower percent body fat (11.8%) than infants born to overweight mothers (13.7%) and obese mothers (14.2%). Infants born to mothers with excessive GWG had greater fat-free mass than infants born to mothers with adequate GWG [34]. This indicates that maternal excessive GWG might play as important a role as prepregnancy BMI in offspring overweight, and might contribute to the overweight epidemic among infants and children.
One important issue, reverse causality, should also be considered in the analyses of maternal GWG with infant LGA. It has been suggested that associations of maternal GWG with infant LGA do not result from GWG itself, but rather from underlying factors that influence both weight gain and the outcomes, such as maternal diet composition and physical activity level. In addition, it is important to determine whether these relationships are independent of prepregnancy BMI or differ by prepregnancy BMI. The present study indicated that the positive association between maternal GWG and the risk of infant LGA was consistent among women with different prepregnancy BMI levels and independent of maternal prepregnancy BMI.
In the present study, we also found that higher maternal prepregnancy BMI and excessive GWG were associated with caesarean delivery. This may be because the birth of a large baby can cause delivery complications leading to caesarean delivery. A US study reported that the rate of caesarean delivery was 27.2% in women who gained more than the IOM-recommended weight [9]. Another study reported that increased prepregnancy BMI was associated with an increasing incidence of caesarean section in a population of Chinese women in Hong Kong [35]. The rate of caesarean delivery in the present study (65.6%) was higher than in other studies from developed areas, but similar to a previous study in urban areas of China (64.1%) [36]. The higher rate of caesarean section in China may be influenced by socioeconomic factors such as education, household income, and access to health insurance. The introduction of the one-child policy in 1979 may have contributed indirectly to the rise. Parents who expect to have only one child may prefer birth by caesarean section to vaginal delivery because they think it is safer and free from pain and anxiety. The present study evaluated the single and joint associations of maternal prepregnancy BMI and GWG with maternal and neonatal outcomes. We found that maternal prepregnancy BMI plays a more important role than GWG in maternal outcomes, especially pregnancy complications. Pregnancy-induced hypertension and gestational diabetes are the two key common pregnancy complications. Previous studies have reported that maternal obesity is associated with increased risks of adverse pregnancy outcomes including gestational diabetes and pregnancy-induced hypertension [3,37]. Women with prepregnancy overweight or obesity tend to take more lifestyle interventions and control weight gain during pregnancy, and these two diseases themselves affect weight gain in pregnancy. However, the present study found that only women with prepregnancy underweight and adequate GWG had decreased risks of pregnancy-induced hypertension and caesarean section compared with women with normal prepregnancy weight and adequate GWG, and maternal prepregnancy underweight with excessive GWG was associated with an increased risk of caesarean section. It is therefore important to help women gain adequate weight during pregnancy based on their prepregnancy BMI to improve pregnancy outcomes. For neonatal outcomes, both higher prepregnancy BMI and excessive GWG could result in high maternal glucose, free fatty acid, and amino acid concentrations, thus leading to the risk of greater infant size at birth. Therefore, maternal prepregnancy BMI has effects similar to those of GWG on neonatal outcomes.
The major strength of our study is the use of GWG categories instead of net weight gain, according to the new IOM guidelines [8]. These new guidelines are formulated as a range of weight gain for each category of prepregnancy BMI. Our study assessed the single and joint associations of maternal prepregnancy BMI and GWG with the risks of pregnancy and neonatal outcomes. A limitation of our study is that all women in the present study live in urban areas; we did not include information on women who live in rural areas. However, the present study is an ongoing project, and we will obtain more information from both urban and rural areas. Another limitation is that the numbers of some pregnancy outcomes in several cells are low in the joint analyses of maternal prepregnancy BMI and GWG with pregnancy outcomes, which may limit statistical power in some subgroups.
In summary, our study indicated that pregnancy-induced hypertension, caesarean delivery, and greater infant size at birth were important outcomes of maternal prepregnancy overweight/obesity and excessive GWG. Health care providers should advise women to enter pregnancy with a BMI in the normal weight category and to limit their GWG to the range specified for their prepregnancy BMI. It is important to pay more attention to maternal influences during pregnancy to prevent the intergenerational cycle of obesity. Strategies to raise public awareness of the risks of maternal adiposity and weight gain during pregnancy for offspring's future health are required.
Supporting Information
Table S1 Numbers of subjects of maternal and neonatal outcomes by joint effects of maternal prepregnancy body mass index and weight gain during pregnancy. (DOC)
Author Contributions
Conceived and designed the experiments: GH. Performed the experiments: EQL JG LP BJL PW JL YW GSL. Analyzed the data: NL. Wrote the paper: NL AAB LFH GH. | 2018-04-03T03:42:07.390Z | 2013-12-20T00:00:00.000 | {
"year": 2013,
"sha1": "89c500977d7ceac60ec1b3b04dcc9ddd90a44b08",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0082310&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fb89ac7a685b67ce1f60334344684b75e680eec5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
26878080 | pes2o/s2orc | v3-fos-license | Spread of neuronal degeneration in a dopaminergic, Lrrk-G2019S model of Parkinson disease
Flies expressing the most common Parkinson disease (PD)-related mutation, LRRK2-G2019S, in their dopaminergic neurons show loss of visual function and degeneration of the retina, including mitochondrial abnormalities, apoptosis and autophagy. Since the photoreceptors that degenerate are not dopaminergic, this demonstrates nonautonomous degeneration, and a spread of pathology. This provides a model consistent with Braak’s hypothesis on progressive PD. The loss of visual function is specific for the G2019S mutation, implying the cause is its increased kinase activity, and is enhanced by increased neuronal activity. These data suggest novel explanations for the variability in animal models of PD. The specificity of visual loss to G2019S, coupled with the differences in neural firing rate, provide an explanation for the variability between people with PD in visual tests.
The discovery of genes associated with Parkinson disease offers the hope that genetic animal models would provide revolutionary advances, through novel insights into the molecular pathology and the mechanisms of cell death. Many mouse models have been disappointing and failed to capture the essential features of PD. However, work with that most tractable of genetic organisms, the fruit fly, has provided a range of insights. For example, several PD-related genes (park/parkin, Pink1 and Lrrk/LRRK2) belong to a common pathway. Again, fly models have emphasized mitochondrial dysfunction.

We have now extended the contribution made by Drosophila, showing that expression of the common PD-related mutation (Lrrk-G2019S) in dopaminergic neurons leads to spread of degeneration from neuron to neuron, due to the increased kinase activity of the G2019S mutant protein, and that this spread is exacerbated by increasing demands on neuronal activity. It was known that some people with PD had reported deficits in their sight, including contrast adaptation, and that both mammals and flies had dopaminergic neurons and receptors in their visual system. As loss of dopaminergic neurons is characteristic of PD, we therefore exposed flies to a 500-ms flash of light and recorded their response using electroretinograms (ERGs, Fig. 1A). We used the powerful GAL4-UAS system to express either the normal (wild-type) or the mutant (G2019S) form of human LRRK2 in dopaminergic neurons. Up until 10 d, both wild-type and G2019S flies respond consistently, but by 28 d the G2019S response has nearly flat-lined, while the wild-type flies still respond fully (Fig. 1B). The almost complete loss of ERG response indicates that the photoreceptors are no longer sensitive to light. The loss of photoreceptor function is accompanied by signs of degeneration throughout the visual lobes of the brain, including dilation and disorganization of the mitochondria in the G2019S photoreceptors, and stronger staining of the G2019S photoreceptors by markers of apoptosis and autophagy (Fig. 1C).
The first key development is that our data potentially model the "spreading pathology" of PD championed by Braak. This arises from the fact that fly photoreceptors are not dopaminergic but histaminergic, yet the transgene was expressed in the dopaminergic neurons, while visual function and degeneration were assessed in the photoreceptors. Thus, in this model of PD (unlike previous experimental systems), the loss of function and neuronal degeneration is not cell autonomous, but requires cell-cell transmission. When G2019S is expressed pan-neuronally, the loss of visual function is much less than when it is expressed in just the dopaminergic neurons, suggesting that an asymmetry in G2019S expression between adjacent cells enhances the spread of the degeneration. Cell-cell transmission of another PD-related protein, SNCA/α-synuclein, has been implicated in analyses of fetal grafts implanted in PD patients, in graft-host transmission in animal models, and in cell culture. It has always been said that flies have no SNCA homolog, but other possible signals include the transmitter dopamine, growth factors, or cytokines. Like SNCA, LRRK2 is secreted in exosomes, for example from the kidney, so another possibility is transmission by LRRK2-G2019S itself.
A second key point is that the model is highly specific to G2019S: mutations at other points along the human LRRK2 or Drosophila Lrrk gene do not induce loss of visual response. These include mutations affecting the GTPase domain of LRRK2. Equally, the kinase-dead form of LRRK2, G2019S-K1906M, does not induce degeneration. This high degree of specificity makes the model very suitable for "first in vivo" drug testing. It also provides an explanation for the varying data from visual tests on PD patients: the differences in visual dysfunction between patients may reflect the fact that only some of them carry the LRRK2-G2019S mutation. At the moment this remains a hypothesis, as most data on the visual responses of people with PD were obtained before the genetic era.
A third important aspect of this report is the finding that increasing the neural activity in the visual system (either by keeping the flies in a flashing "disco" chamber, or by genetically downregulating voltage-gated potassium channels) accelerates the loss of visual function. Neuronal function (action potentials, transmitter release and recycling) is energetically demanding, and the brain already consumes up to 20% of resting metabolism. Additional neural activity will lead to an increased demand for ATP, as membrane pumps are activated to maintain intracellular levels of signaling cations and transmitters. This demand for ATP will place an extra load on the mitochondria, leading to oxidative stress, apoptosis and autophagy. The accelerated decline of vision in flies constantly adapting their eyes to new light intensities led Hindle et al. to suggest a new explanation for discrepancies in dopaminergic neuron loss between fly models of PD. Previous explanations had focused on variation in microscopy or food; now it may be essential to take differences in energy demand (e.g., from changing illumination) into consideration. Furthermore, increasing the activity of the visual system of mammalian models may make their phenotype stronger and more consistent. Finally, in human populations, the penetrance of the G2019S disease ranges from 25-50% at age 70, and part of this variability may derive from differences in neuronal energy demand.
Disclosure of Potential Conflicts of Interest
No potential conflicts of interest were disclosed. | 2016-05-12T22:15:10.714Z | 2013-03-25T00:00:00.000 | {
"year": 2013,
"sha1": "2f6eed975d9dcf3e58d29dba10002aac47081f09",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.4161/auto.24397?needAccess=true",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "2f6eed975d9dcf3e58d29dba10002aac47081f09",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
214394269 | pes2o/s2orc | v3-fos-license | The use of natural fiber from oil palm empty fruit bunches for soft soil stabilization
The use of natural fiber as a construction material, especially for soil stabilization, continues to grow. This paper focuses on the use of natural fiber to increase the shear strength and bearing capacity of soft soil. The fiber used is obtained from oil palm empty fruit bunches (EFB), a by-product of palm oil mills. Soft soil was mixed with fiber at contents of 5, 6, 7, and 8% in the mixture. Tests including the standard compaction test, unconfined compression test, laboratory vane test, and California Bearing Ratio test were carried out. The results show that the soft soil used in this study can be compacted at fiber contents of 5% and higher. The maximum density obtained is 0.92 g/cm³ at 7% fiber content. The compacted soil-EFB mixtures successfully increase the shear strength and bearing capacity of the soft soil, as shown by the results of the UCT, laboratory vane, and CBR tests. The soil consistency changes from soft to medium. The maximum qu, su, and CBR obtained are 0.8 kg/cm², 0.65 kg/cm², and 6%, respectively, at an optimum fiber content of 6 to 7%.
Introduction
The use of fiber for construction materials has been widely explored, especially for increasing the strength of concrete [1,2,3]. The addition of steel fiber to concrete increases strength and maximum displacement and also reduces the number of cracks in concrete [2]. Fiber has also been developed for soil stabilization using either synthetic fibers such as shredded tires [4], nylon fiber [5], polypropylene fiber [6,7], glass fiber [8], and basalt fiber [9], or natural fibers such as coir fiber [10], wheat straw, barley straw, and wood shavings [11], and bamboo fiber [12].
The interaction between soil and fiber is interesting to study for improving soil engineering properties. [13] stated that the shear strength of fiber-reinforced soil has two components: the shear strength of the soil matrix and the tensile stress acting on the fibers. Besides that, [14] attributed the contribution of fiber to the increase in shear strength, through the bonding of soil and fiber, to the pull-out mechanism and the tensile strength of the fiber itself. This mechanism explains the interaction of soil and fiber in general, while other interactions may occur between soil and fiber, especially for natural fibers. [11] found that natural fiber absorbs more water than the soil. This behavior is needed, especially for the stabilization of soft soils that have high water content. Some other advantages of using natural fiber are that it is an environmentally friendly alternative, locally available, able to create composites with cement/lime, inexpensive, and biodegradable [5,17]. Therefore, this paper discusses the alternative use of natural fiber from oil palm empty fruit bunches (EFB) for stabilizing soft clay soils. The fibers are a by-product of the palm oil industry. The use of this fiber as an alternative for soft soil stabilization is rarely discussed. [18] succeeded in slightly increasing the strength of laterite brick by adding 3% EFB to a mixture of laterite, sand, and cement.
For applications in the field, fiber reinforcement can be used to repair slopes and strengthen thin layers of soil where synthetic materials such as geotextiles and geogrids are challenging to implement [13]. Fiber can also be used to stabilize subgrade soils ranging from sand to high-plasticity clay [20]. Therefore, this paper focuses on providing new information about the possibility of using natural fiber from oil palm empty fruit bunches as a soft clay reinforcement material.
Soil
The soft soil used was taken near the city of Banjarmasin, the capital of South Kalimantan. The soft soil characteristics are summarized in Table 1. The physical properties were determined using the tests based on ASTM standards [21]. The soft clay was classified as an organic soil with high plasticity (OH) based on USCS classification system.
Fiber
The fiber used is oil palm empty fruit bunch (EFB) fiber, a by-product of palm oil processing at PT. Perkebunan Nusantara XIII, Pleihari (Figure 1(a)). The fiber has a water content of 9.8% and a density of 0.45 g/cm³, as shown in Table 2. The fiber density is smaller than that reported by [18]. However, the value is very close to the realistic density of natural fibers such as coir and sisal (i.e., 0.67-1.07 g/cm³) as reported by [19]. The fiber has a diameter of 200-500 μm, measured from the SEM image (Figure 1(b)), and a rough surface, as shown in Figure 1(c).
A test was carried out to determine the water absorption of the EFB used by soaking it for one, two, and three days. The results are shown in Table 2. After soaking, the fiber moisture content increases to 384.99% and does not change significantly on the second and third days, with water contents of 422.71% and 415.87%, respectively. The result confirms previous findings that the fibers absorb more water than soil, with absorption increasing with increasing fiber content [11].
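The absorption figures above follow from the standard gravimetric moisture-content formula, w = (m_wet − m_dry)/m_dry × 100%. A minimal sketch (the masses below are illustrative, not measured values from the paper):

```python
def moisture_content(wet_mass_g: float, dry_mass_g: float) -> float:
    """Gravimetric water content in percent: w = (m_wet - m_dry) / m_dry * 100."""
    return (wet_mass_g - dry_mass_g) / dry_mass_g * 100.0

# Illustrative: a fiber sample weighing 4.85 g soaked and 1.00 g oven-dry
# gives w = 385%, close to the first-day absorption reported above.
print(moisture_content(4.85, 1.0))  # -> 385.0
```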
Techniques and procedures
In sample preparation, the size of fiber used varies depending on the test performed. For the compaction, CBR, and vane tests, the fiber length used was a maximum of 10 cm, smaller than the mold size used (i.e., 11.51 cm). For the UCT test, a 1 cm fiber length was used to suit the diameter of the sample (i.e., 4.77 cm).
Because the soft soil used had very high water content, an initial test was performed to determine the percentage of fiber at which compaction can be carried out. Based on trial tests, the mixture can be compacted at a fiber content of 5%. The fiber compositions used in this study were 5, 6, 7, and 8%. Accordingly, four types of specimens were prepared with different EFB contents (on a dry mass basis), namely 5% EFB, 6% EFB, 7% EFB, and 8% EFB. The specimens were dynamically or statically compacted depending on the test performed. As shown in Table 3, the fiber contents in the samples were evenly distributed, with averages of 5.07% and 5.99% for the 5% EFB and 6% EFB samples, respectively. Table 4 shows compaction test data using the standard Proctor method for samples with fiber contents of 5, 6, 7, and 8%. Compaction was performed in three layers with 25 blows per layer. There are two fundamental observations from Table 4: changes in water content and in sample density. The water content of the sample decreases with increasing fiber content up to 7% EFB, as shown in Figure 2(a). This reduction in water content is due to the added fiber filling the soil pores. The figure also shows that the samples that can be compacted have moisture contents smaller than the liquid limit. At high water content, the pore water pressure increases during compaction, so the fiber does not contribute much in this condition. This agrees well with the findings reported by [14]. Once the soil has become plastic (i.e., at water contents below the liquid limit), the bond between fiber and soil begins to form. The more fiber, the higher the soil-fiber bonding due to pull-out. At 8% fiber content, however, the interaction of soil and fiber decreases because the amount of fiber reduces the bonds between soil and fiber. In this condition, compaction is difficult to perform, and large pores appear in the sample. Visually, Figure 3 shows pictures of the compacted samples for each percentage of fiber.
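For reference, the compaction results in Table 4 relate bulk density, water content, and dry density through the usual relation ρ_d = ρ/(1 + w). A minimal sketch (the input numbers are illustrative assumptions, not the paper's data):

```python
def dry_density(bulk_density: float, water_content_pct: float) -> float:
    """Dry density from bulk density and gravimetric water content (%)."""
    return bulk_density / (1.0 + water_content_pct / 100.0)

# Illustrative only: a compacted specimen with an assumed bulk density of
# 1.38 g/cm3 at an assumed 50% water content would have a dry density of
# 0.92 g/cm3, the maximum reported in this study at 7% fiber content.
print(round(dry_density(1.38, 50.0), 2))  # -> 0.92
```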
Unconfined compression test
The unconfined compression test was performed on statically compacted specimens because of the presence of fiber. Samples with a diameter of 4.75 cm and a height of 9.24 cm were used. The result obtained from the test is the unconfined compressive strength (qu). Figure 4 shows qu as a function of fiber content. Consistent with the compaction results, qu increases with increasing fiber content. The peak qu (i.e., 0.8 kg/cm²) was obtained at a fiber content of 7%. The value decreases at 8% fiber content. The result reveals that the presence of fiber changes the soil consistency from very soft to medium based on [22] (i.e., medium soil 0.48-0.96 kg/cm²). Figure 5 shows CBR as a function of fiber content. Different from the compaction and qu results, the maximum CBR of 6% was obtained at a fiber content between 6 and 7% (i.e., 6.4%). The random distribution of fiber may cause the optimum fiber content to shift between 6 and 7%. Consistent with the UCT, the CBR decreases at 8% fiber content.
Laboratory vane test
To avoid the vane being obstructed by fiber, as reported by [17], a small vane of 1.85 cm was used. Figure 6 shows the undrained shear strength (su) obtained from the laboratory vane test at different compaction energies. As shown in the figure, the maximum su of soil compacted with 10 and 25 blows is 0.46 kg/cm² and 0.65 kg/cm², respectively, obtained at a fiber content of 6%. The energy of 25 blows corresponds to the standard Proctor compaction energy. The laboratory vane test thus gives the smallest optimum fiber content. The smaller vane used in this study may reduce the fiber contribution in the mixtures: the test measured the shear strength of the soil between fibers rather than entirely the shear strength of the soil-fiber mixture. By increasing the compaction energy (i.e., 56 blows), su increases to 0.71 kg/cm² at a fiber content of approximately 6.7%.
Conclusion
The effects of natural fiber made from oil palm EFB in enhancing soft soil shear strength were presented. Several important findings were obtained from the study. EFB functions not only as fiber to increase shear strength but also absorbs water, allowing the soft soil to be compacted. The soft soil used in this study can be compacted at fiber contents of 5% and higher. The maximum density obtained is 0.92 g/cm³ at 7% fiber content. The compacted soil-EFB mixtures successfully increase the shear strength and bearing capacity of the soft soil, as shown by the results of the UCT, laboratory vane, and CBR tests. The soil consistency changes from soft to medium. The maximum qu, su, and CBR obtained are 0.8 kg/cm², 0.65 kg/cm², and 6%, respectively, at an optimum fiber content of 6 to 7%.
"year": 2019,
"sha1": "27cb91bed4d92284694dcd921883a62e74af8f1b",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/669/1/012026",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "6c0e2983fd843aba4d2ef8faa2a2cc67b403ea6e",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Physics",
"Engineering"
]
} |
119647355 | pes2o/s2orc | v3-fos-license | The multi-time correlation functions, free white noise, and the generalized Poisson statistics in the low density limit
In the present paper the low density limit of the non-chronological multitime correlation functions of boson number type operators is investigated. We prove that the limiting truncated non-chronological correlation functions can be computed using only a sub-class of diagrams associated to non-crossing pair partitions, and thus coincide with the non-truncated correlation functions of suitable free number operators. The independent-in-the-limit subalgebras are found and the limiting statistics is investigated. In particular, it is found that the cumulants of certain elements coincide in the limit with the cumulants of the Poisson distribution. An explicit representation of the limiting correlation functions, and thus of the limiting algebra, is constructed in a special case through suitably defined quantum white noise operators.
INTRODUCTION
The reduced dynamics of a quantum open system interacting with a reservoir in certain physical regimes is approximated by Markovian master equations. These regimes include the weak system-reservoir interactions and dilute reservoirs and in the theoretical framework they are described by certain limits. For a weakly interacting system one considers the limit as the coupling constant goes to zero (Weak Coupling Limit, WCL) whereas for a dilute reservoir one considers the limit as the density of the reservoir goes to zero (Low Density Limit, LDL) and an appropriate time rescaling should be performed in order to get a non-trivial limit. The Markovian reduced dynamics in these limits is considered in the review papers by Spohn and Lebowitz 1,2 . The reduced dynamics in the LDL was considered in details later by Dümcke 3 using the method based on the quantum Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy.
The total dynamics in these limits is governed by various quantum stochastic equations. There is, up to now, a unique approach, called the stochastic limit method, which allows an efficient derivation of the stochastic equations in the WCL. This approach is based on the quantum white noise technique and was developed by Accardi, Lu, and Volovich 4 .
The convergence of the evolution operator of the total system in the LDL to a solution of a quantum stochastic equation was proved by Accardi and Lu 5 and by Rudnicki, Alicki, and Sadowski 6 . Recently the low density limit was investigated with the quantum white noise technique 7 ,8 . This technique, well developed for the WCL, was non-trivially modified to include the LDL and for this case was called the stochastic golden rule for the low density limit. This technique was applied to the derivation of the quantum stochastic equations in the LDL. An advantage of the obtained equations is that they, in contrast with the exact Schrödinger equation, are explicitly solvable. At the same time they provide a good approximation of the exact dynamics.
The approach of 7,8 uses the so called Fock-antiFock representation for the canonical commutation relations (CCR) algebra (this representation is unitary equivalent to the Gel'fand-Naimark-Segal representation). The difficulty with this approach is that the creation and annihilation operators in the Fock-antiFock Hilbert space do not describe creation and annihilation of physical particles and thus do not have direct physical meaning. To avoid this difficulty the investigation of the LDL directly in terms of the physical fields was performed 9 . Using this approach the chronological correlation functions in the LDL were found and the corresponding stochastic equations derived.
In the present paper we investigate the low density limit of the non-chronologically ordered correlation functions of boson number type operators. The investigation is related to ab initio derivations of quantum stochastic equations describing the quantum dynamics of a test particle interacting with a dilute gas. We find the limiting truncated correlation functions of the number type operators and show that they can be computed by representing the number operators through creation and annihilation operators and then considering only a sub-class of diagrams associated to non-crossing pair partitions.
This fact allows to represent the limiting truncated correlation functions as the nontruncated correlation functions of number operators of a free quantum white noise thus making a connection with the Voiculescu free probability theory. We find the limiting statistics and show that the cumulants of certain elements coincide in the limit with the cumulants of the Poisson distribution.
The free probability theory was developed by Voiculescu around 1985 as a way to deal with von Neumann algebras of free groups. The theory was then separated from this special context and began to develop as an independent field. In particular, applications of free independence to random matrices were found. The details of free probability theory and its applications to random matrices can be found, for example, in references 10, 11 .
Expectations of free random variables are characterized by diagrams associated to non-crossing pair partitions. The vanishing of crossing diagrams in the stochastic weak coupling limit for nonrelativistic QED and for the Anderson model was found in Refs 4 and 12 , respectively, thus making a connection between the WCL and free probability. The WCL is typically described by the quantum Boltzmann statistics 4 . In Ref 13 a generalized version of the Boltzmann commutation relations, the so-called entangled commutation relations, was found in the weak coupling limit for nonlinear interactions, and possible applications to photon splitting cascades were discussed.
The investigation of the multitime non-chronologically ordered correlation functions could have a connection with the behavior of fluctuations in certain asymptotic regimes. The latter is described in the review paper by Andries, Benatti, De Cock and Fannes 14 .
In that approach the limiting statistics is defined in terms of a ground state distribution determined by non-trivial pair partitions. The authors conjecture the appearance of exotic statistics in certain asymptotic regimes. The asymptotic fluctuations are the limiting correlation functions of appropriately centered elements, and thus the results of the present paper could be applied to study the fluctuations in the low density limit.
In Sec. II the truncated non-chronologically ordered correlation functions are defined and their low density limit is established (Theorem 1). In Sec. III the irreducible diagrams (pair partitions) which contribute to the limiting correlation functions are found (Theorem 2). In Sec. IV the limiting truncated correlation functions are represented as correlation functions of a suitable free white noise. In Sec. V we identify the independent in the limit subalgebras (Theorem 4) and calculate the limiting cumulants which for some elements coincide with the cumulants of the Poisson distribution (Theorem 5). In Sec. VI an explicit representation of the limiting correlation functions and thus of the limiting algebra is constructed for a special case by using suitable quantum white noise operators.
THE CORRELATION FUNCTIONS IN THE LDL
We begin this section with the construction of a general class of non-commutative probability spaces relevant for the investigation of the low density limit. The framework of a *-probability space is used. A relation between the objects defined in this section and the model of a test particle interacting with a dilute gas is given in Appendix A.
Definition 1 A *-probability space is a pair (A, ω), where A is a unital *-algebra over C and ω : A → C is a state, i.e., a linear, normalized (ω(1_A) = 1), and strictly positive functional.
Let H be a Hilbert space with inner product denoted by ⟨·,·⟩ (the one-particle Hilbert space), {S_t}_{t∈R} a one-parameter unitary group in H (the one-particle free evolution), n̂ a bounded positive operator in H (the density operator) such that S_{−t} n̂ S_t = n̂ for all t ∈ R, and B a countable set of real numbers.
Let Γ(H) be the symmetric Fock space over H. For any trace class self-adjoint operator T acting in H we denote by N(T) ≡ dΓ(T) its second quantization operator in Γ(H) and extend this definition by complex linearity to the set T(H) of all trace class operators. For any T ∈ T(H), ω ∈ B, and a positive number ε > 0 we define the following operator in Γ(H):

N_{T,ω,ε}(t) := e^{−itω/ε} N(S_{t/ε} T S_{−t/ε}). (1)

For any open subset Λ ⊆ R let S(Λ) be the set of functions from S(R) with support in Λ. We denote by A_{Λ,ε} the *-algebra generated by the operators N_{T,ω,ε}(φ) := ∫ dt φ(t) N_{T,ω,ε}(t) with T ∈ T(H), ω ∈ B, φ ∈ S(Λ), and denote A_ε := A_{R,ε}. Let A^±(g), g ∈ H, be the creation and annihilation operators in Γ(H) [we denote in the sequel A^−(g) ≡ A(g)] with the canonical commutation relations [A(f), A^+(g)] = ⟨f, g⟩, and let A_CCR be the algebra of polynomials in A^±(·). Any operator N(T) can be represented in terms of the creation and annihilation operators. For example, if T = |f⟩⟨g|, where we use Dirac's notation for elements f, g ∈ H, then N(T) = A^+(f)A(g). An arbitrary operator N(T) can be expressed in terms of A^± using the fact that any trace class operator T is a limit of finite rank operators. Thus the algebra A_ε is a subalgebra of A_CCR.
Let ω_n̂ be a Gaussian gauge-invariant mean-zero state on A_CCR with the two-point correlation function ω_n̂(A^+(f)A(g)) := ⟨g, n̂f⟩ (thus ω_n̂(N(T)) = Tr(n̂T); here we use the assumption that T is trace class). Denoting by the same symbol its restriction to A_{Λ,ε}, we finally have, for any ε > 0 and any open subset Λ ⊆ R, the *-probability space (A_{Λ,ε}, ω_{εn̂}).
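As a quick consistency check of these definitions (a worked example, not taken from the source), the two expressions for the expectation of a rank-one number operator agree:

```latex
T = |f\rangle\langle g| \;\Longrightarrow\;
\omega_{\hat n}\bigl(N(T)\bigr)
  = \omega_{\hat n}\bigl(A^{+}(f)\,A(g)\bigr)
  = \langle g, \hat n f\rangle
  = \operatorname{Tr}\bigl(\hat n\,|f\rangle\langle g|\bigr)
  = \operatorname{Tr}(\hat n\,T).
```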
Remark 1
The condition S_{−t} n̂ S_t = n̂ for all t leads to the invariance of the state ω_n̂ under the free evolution generated by S_t.
With the notations above we define the non-chronologically ordered multitime correlation functions as

W_{ε,n̂,T_1,ω_1,…,T_n,ω_n}(t_1, …, t_n) := ω_{εn̂}(N_{T_1,ω_1,ε}(t_1) ⋯ N_{T_n,ω_n,ε}(t_n)) (2)

and their averaged versions

W_{ε,n̂,T_1,ω_1,…,T_n,ω_n}(φ_1, …, φ_n) := ∫ dt_1 ⋯ dt_n φ_1(t_1) ⋯ φ_n(t_n) W_{ε,n̂,T_1,ω_1,…,T_n,ω_n}(t_1, …, t_n). (3)

We will use for the correlation functions (2) and (3) also the shorter notations W_ε(t_1, …, t_n) and W_ε(φ_1, …, φ_n). The reason for introducing the averaged operators N_{T,ω,ε}(φ) and the averaged correlation functions (3) is that, as we will show below, the non-averaged operators N_{T,ω,ε}(t) and the correlation functions (2) become singular distributions in the limit ε → 0.

Definition 2 The truncated correlation functions W^T_ε are determined by W^T_ε(t_1) := W_ε(t_1) and for n > 1 by induction through the relation

W_ε(t_1, …, t_n) = Σ_π Π_{B∈π} W^T_ε(t_i : i ∈ B), (4)

where the sum is over all partitions π of the set {1, …, n}.

The truncated correlation functions are often used in quantum field theory and in quantum kinetic theory 15 . They entirely determine the corresponding non-chronological correlation functions. Thus the investigation of the limit of the non-chronological correlation functions can be reduced to the investigation of the limit of the truncated correlation functions.
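The recursion (4) can be inverted numerically. The following sketch (illustrative only, not from the paper) recovers the truncated functions from a table of moments by subtracting the contributions of all non-trivial partitions, and checks the result on Poisson moments, for which every truncated function equals the expectation:

```python
from itertools import combinations
import math

def partitions(s):
    """Yield all set partitions of the tuple s as lists of blocks (tuples)."""
    s = tuple(s)
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for k in range(len(rest) + 1):
        for tail in combinations(rest, k):
            block = (first,) + tail
            remainder = tuple(x for x in rest if x not in tail)
            for p in partitions(remainder):
                yield [block] + p

def truncated(moment, indices):
    """W^T(indices) from a moment functional, solving Eq. (4):
    W = sum over partitions of products of W^T over the blocks."""
    indices = tuple(indices)
    total = moment(indices)
    for p in partitions(indices):
        if len(p) == 1:
            continue  # the one-block partition carries W^T itself
        prod = 1.0
        for block in p:
            prod *= truncated(moment, block)
        total -= prod
    return total

# Check on the moments of a Poisson(lam) variable, computed via the
# recurrence m_{n+1} = lam * sum_j C(n, j) m_j: every truncated function
# (cumulant) should then equal lam.
lam = 2.0

def poisson_moment(idx):
    m = [1.0]
    for n in range(len(idx)):
        m.append(lam * sum(math.comb(n, j) * m[j] for j in range(n + 1)))
    return m[len(idx)]

print(truncated(poisson_moment, range(1, 4)))  # -> 2.0 (= lam)
```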
We define the "projection" P_E := (2π)^{−1} ∫ dt S_t e^{−itE} [it has the property P_E P_{E′} = δ(E − E′) P_E] and for any k = 1, 2, …, n denote ω̄_k := ω_n + … + ω_k. The following theorem states the low density limit of the truncated correlation functions.

Theorem 1 One has the limit in the sense of distributions in the variables t_1, …, t_n:

lim_{ε→0} W^T_{ε,n̂,T_1,ω_1,…,T_n,ω_n}(t_1, …, t_n) = (2π)^{n−1} δ_{ω̄_1,0} δ(t_2 − t_1) ⋯ δ(t_n − t_1) ∫ dE Tr(n̂ P_E T_1 P_{E+ω̄_2} T_2 ⋯ P_{E+ω̄_n} T_n),

where Tr denotes the trace and δ_{ω̄_1,0} is the Kronecker delta symbol.
The theorem is a corollary of Theorem 2 from Section 3.
THE NON-TRIVIAL DIAGRAMS
In the present section we investigate the low density limit of the non-chronologically ordered correlation functions for the particular case of operators of the form T_l = |f_l⟩⟨g_l| and find the diagrams which are non-trivial in the low density limit. In order to simplify the notations we will use an energy representation for the creation and annihilation operators (a slightly different version of the energy representation was introduced in 7 ), in which

N_{T_l,ω_l,ε}(t_l) = e^{−it_l ω_l/ε} ∫ dE_l A^+_l A_l.

Notice that the operator A^+_l is not the adjoint of A_l; the symbols A_l, A^+_l are used only to simplify the notations below. A multitime correlation function can be expressed, using the Gaussianity of the state ω_n̂ and the energy representation, as

W_{ε,n̂,T_1,ω_1,…,T_n,ω_n}(t_1, …, t_n) = exp(−i Σ_{l=1}^n ω_l t_l/ε) Σ′ ⋯, (5)

where Σ′ is the sum over k = 1, …, n, 1 = i_1 < i_2 < … < i_k, j_{k+1} < … < j_n, i_l ≤ j_l for l = 1, …, k and j_l < i_l for l = k+1, …, n. The sum contains terms of the form (6). To each such term we associate a diagram by pairing, in the string A^+_1 A_1 A^+_2 A_2 ⋯ A^+_n A_n, the operators A^+_{i_l} and A_{j_l} for l = 1, 2, …, n.

Definition 3 We say that the expression (6) corresponds to a reducible diagram if there exists a nonempty subset I ⊂ {1, …, n} (strict inclusion) such that i_l ∈ I ⇔ j_l ∈ I. Otherwise we say that the expression (6) corresponds to an irreducible diagram.
An important property of the truncated correlation functions (Def. 2) is that they keep all and only the irreducible diagrams. The following are examples of irreducible (first) and reducible (second) diagrams for n = 2: the pairing of A^+_1 with A_2 and of A_1 with A^+_2, and the pairing of A^+_1 with A_1 and of A^+_2 with A_2, respectively (7). Given a reducible diagram, one can represent the set {1, …, n} as a union of several disjoint subsets I_1, …, I_l such that the diagram contains only pairings between operators with indices from the same subsets. In this sense a general reducible diagram can be represented as a union of mutually disjoint irreducible diagrams. Examples of the truncated correlation functions, the corresponding irreducible diagrams, and their limits as ε → 0 for n = 1, 2, 3 are given below.
Example 1 n = 1. The invariance of the state under the free evolution leads to the identity W^T_ε(t) ≡ W_ε(t) ≡ W_ε(0) = ⟨g_1, n̂f_1⟩.
Example 2 n = 2. The truncated correlation function W^T_ε(t_1, t_2) = W_ε(t_1, t_2) − W_ε(t_1)W_ε(t_2) (8) corresponds to the first (irreducible) diagram in (7), which is non-zero in the limit. Application of Lemma 1 (see Appendix B) to the r.h.s. of (8) gives

lim_{ε→0} W^T_ε(t_1, t_2) = 2π δ_{ω_1+ω_2,0} δ(t_2 − t_1) ∫ dE ⟨g_1, P_{E+ω_2} f_2⟩⟨g_2, n̂ P_E f_1⟩.

Example 3 n = 3. The truncated three-point correlation function corresponds to the sum of the two irreducible diagrams for n = 3. In this case only the first diagram is non-zero in the limit and Lemma 1 gives

lim_{ε→0} W^T_ε(t_1, t_2, t_3) = (2π)² δ_{ω̄_1,0} δ(t_2 − t_1) δ(t_3 − t_1) ∫ dE ⟨g_1, P_{E+ω̄_2} f_2⟩⟨g_2, P_{E+ω̄_3} f_3⟩⟨g_3, n̂ P_E f_1⟩.

The case of arbitrary n is described by the following theorem.
Theorem 2 Let T_l = |f_l⟩⟨g_l|, where f_l, g_l ∈ H for l = 1, 2, …, n. One has the limit in the sense of distributions in the variables t_1, …, t_n:

lim_{ε→0} W^T_{ε,n̂,T_1,ω_1,…,T_n,ω_n}(t_1, …, t_n) = (2π)^{n−1} δ_{ω̄_1,0} δ(t_2 − t_1) ⋯ δ(t_n − t_1) ∫ dE ⟨g_1, P_{E+ω̄_2} f_2⟩ ⋯ ⟨g_{n−1}, P_{E+ω̄_n} f_n⟩⟨g_n, n̂ P_E f_1⟩. (9)

For each n only the following irreducible diagram is non-zero as ε → 0: the diagram in which A_l is paired with A^+_{l+1} for l = 1, …, n − 1 and A^+_1 is paired with A_n. (10)

Proof. Case (a): ω_1 = ω_2 = … = ω_n = 0. Using the correlation functions one obtains the representation (11). Define the permutations p_i and p_j of the set (1, …, n) by p_i(l) = i_l and p_j(l) = j_l for l = 1, …, n, and let p_α = p_i p_j^{−1}. Consider the expression in the square brackets in the exponent in (11). The term proportional to t_l in this expression has the form t_l(E_l − E_{α_l}), where α_l = p_α(l). Thus (11) can be written as

(1/ε^n) exp(i[t_n(E_n − E_{α_n}) + … + t_1(E_1 − E_{α_1})]/ε) (ε^k F(E) + O(ε^{k+1}))

and, with the notations Ω_l(E) = E_n + … + E_l − E_{α_n} − … − E_{α_l} for l = 2, …, n, as

(1/ε^n) exp(i[(t_n − t_{n−1})Ω_n(E) + … + (t_2 − t_1)Ω_2(E)]/ε) (ε^k F(E) + O(ε^{k+1})). (12)

If the expression (6) corresponds to an irreducible diagram then the functions Ω_l(E) are linearly independent and, since they are linear in their arguments, the convolution δ(Ω_2(E)) ⋯ δ(Ω_n(E)) is well defined. In the case k > 1, since for any l = 2, …, n (see Lemma 1)

lim_{ε→0} (1/ε) e^{i(t_l − t_{l−1})Ω_l(E)/ε} = 2π δ(t_l − t_{l−1}) δ(Ω_l(E)) (13)

and k − 1 > 0, the limit of (12) equals zero. In the case k = 1 the expression (6) corresponds to the diagram (10) and one has the expression (14), where Ω_l(E) = E_l − E_1. Using (13) one finds that the limit of the r.h.s. of (14) is

(2π)^{n−1} δ(t_2 − t_1) ⋯ δ(t_n − t_{n−1}) δ(Ω_2(E)) ⋯ δ(Ω_n(E)) F(E).

Integration over E_1 … E_n gives the equality (9) in case (a). Case (b): arbitrary ω_1, …, ω_n. In this case the expression (14) in the decomposition (5) is multiplied by the factor exp(−i Σ_l ω_l t_l/ε). The product can be written as

(1/ε) e^{i(t_n − t_{n−1})(Ω_n(E) − ω̄_n)/ε} ⋯ (1/ε) e^{i(t_2 − t_1)(Ω_2(E) − ω̄_2)/ε} e^{−it_1 ω̄_1/ε} (ε^{k−1} F(E) + O(ε^k)).
If ω̄_1 = 0 then the statement of the theorem follows by the same arguments as in case (a). If ω̄_1 ≠ 0 then the limit of this term equals zero by the Riemann-Lebesgue lemma, due to the presence of the rapidly oscillating factor exp(−it_1 ω̄_1/ε).
THE FREE WHITE NOISE NUMBER OPERATORS
In the present section we show that the limiting truncated correlation functions coincide with the complete (i.e., non-truncated) correlation functions of the free white noise number operators.
Definition 4 Free white noise operators N_T(t) are the operators satisfying the multiplication rule

N_T(t) N_{T′}(t′) = δ(t − t′) N_{T∗T′}(t), (15)
where the ∗-product of any two operators T and T′ is defined by T ∗ T′ := 2π ∫ dE P_E T P_E T′.
Remark 2
We call the operators N_T(t) free (or Boltzmann) number operators since they can be constructed using creation and annihilation operators B^±_f(t) satisfying the free relations B_f(t) B^+_g(t′) = δ(t − t′)⟨f, g⟩, by setting for rank-one T = |f⟩⟨g| the operator N_T(t) := 2π ∫ dE B^+_{P_E f}(t) B_{P_E g}(t), and extending this definition by linearity to any T. Such defined operators satisfy the relation (15).
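As a short worked illustration (using the multiplication rule and the ∗-product in the form reconstructed above, so the normalizations here are assumptions), the two-point function of the free number operators is

```latex
\varphi_{\hat n}\bigl(N_{T_1}(t_1)\,N_{T_2}(t_2)\bigr)
  = \delta(t_1 - t_2)\,\varphi_{\hat n}\bigl(N_{T_1 * T_2}(t_1)\bigr)
  = \delta(t_1 - t_2)\,\operatorname{Tr}\bigl(\hat n\,(T_1 * T_2)\bigr)
  = 2\pi\,\delta(t_1 - t_2)\int dE\,\operatorname{Tr}\bigl(\hat n\,P_E T_1 P_E T_2\bigr),
```

which for rank-one T_l = |f_l⟩⟨g_l| reduces to 2π δ(t_1 − t_2) ∫ dE ⟨g_1, P_E f_2⟩⟨g_2, n̂ P_E f_1⟩, in agreement with the ω = 0 case of Example 2.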
Let A be the algebra generated by the free white noise operators N_T(t) and let φ_n̂ be the state on A characterized by φ_n̂(N_T(t)) = Tr(n̂T).
Theorem 3 One has the equality

lim_{ε→0} W^T_{ε,n̂,T_1,0,…,T_n,0}(t_1, …, t_n) = φ_n̂(N_{T_1}(t_1) ⋯ N_{T_n}(t_n)). (16)

Proof. By direct calculation using Eq. (4) and the relation (15). The existence of the representation of the limiting truncated correlation functions by the free white noise number operators is related to the fact that only a sub-class of the non-crossing irreducible diagrams survives in the low density limit. We emphasize, however, that the l.h.s. of Eq. (16) is the limit of a truncated correlation function whereas the r.h.s. contains the complete correlation function.
INDEPENDENCE AND THE GENERALIZED POISSON STATISTICS IN THE LDL
The fact that the limiting truncated correlation functions are distributions in the variables t_1, …, t_n with support at t_1 = … = t_n leads to the appearance of independent subalgebras in the low density limit. In the beginning of this section we recall the basic notions of independent subalgebras and of cumulants. Then we find the asymptotically independent subalgebras of A_ε and discuss the limiting statistics. We show that the cumulants and the moments of certain elements of the algebra A_ε coincide in the low density limit with the cumulants and the moments of the Poisson distribution.
Definition 5 Let (A, ω) be a *-probability space. A family of unital *-subalgebras {A_i}_{i∈I}, A_i ⊂ A, is called independent if ω(a_1 ⋯ a_n) = 0 whenever a_l ∈ A_{i_l}, ω(a_l) = 0, and k ≠ l implies i_k ≠ i_l.
Definition 6 Let (A, ω) be a *-probability space. Cumulants of the space (A, ω) are the multilinear functionals κ_n : A^n → C, n ≥ 1, uniquely determined by κ_1(a) := ω(a), a ∈ A, and for n > 1 by induction through the relation

ω(a_1 ⋯ a_n) = Σ_π Π_{B∈π} κ_{|B|}((a_1, …, a_n)|B),

where the sum is over all partitions π of the set {1, …, n} and (a_1, …, a_n)|B designates the set of a_i with i ∈ B.
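For orientation, unwinding this recursion for small n gives the familiar expressions (standard identities, not from the source):

```latex
\begin{aligned}
\kappa_1(a) &= \omega(a),\\
\kappa_2(a_1,a_2) &= \omega(a_1a_2) - \omega(a_1)\,\omega(a_2),\\
\kappa_3(a_1,a_2,a_3) &= \omega(a_1a_2a_3)
  - \omega(a_1)\,\omega(a_2a_3) - \omega(a_2)\,\omega(a_1a_3) - \omega(a_3)\,\omega(a_1a_2)\\
&\quad + 2\,\omega(a_1)\,\omega(a_2)\,\omega(a_3).
\end{aligned}
```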
For the analysis of independence in the low density limit we introduce the notion of asymptotically independent subalgebras for a *-probability space (A_ε, ω_{εn̂}).
Definition 7 A family of unital *-subalgebras {A_{i,ε}}_{i∈I}, A_{i,ε} ⊂ A_ε, is called asymptotically independent if lim_{ε→0} ω_{εn̂}(a_1 ⋯ a_n) = 0 whenever a_l ∈ A_{i_l,ε}, lim_{ε→0} ω_{εn̂}(a_l) = 0, and k ≠ l implies i_k ≠ i_l.

The next theorem identifies asymptotically independent subalgebras of A_ε.

Theorem 4 Let {Λ_i}_{i∈I} be a family of mutually disjoint open subsets of R. Then the subalgebras {A_{Λ_i,ε}}_{i∈I} are asymptotically independent.
The proof follows from the fact that the truncated correlation functions become, in the limit ε → 0, distributions in the variables t_1, …, t_n with support at t_1 = t_2 = … = t_n.

Now let us analyze the statistics which appears in the low density limit. From Theorem 1 and the relation between the cumulants and the truncated correlation functions it follows that the l-th cumulant for the element a = N_{T,ω,ε}(φ) has in the limit the form

κ_l(a, …, a) = lim_{ε→0} W^T_{ε,n̂,T,ω,…,T,ω}(φ, …, φ) = (2π)^{l−1} δ_{ω,0} ∫ dt φ(t)^l ∫ dE Tr(n̂ (P_E T)^l). (17)

We specify the further consideration to the case H = L²(R³). Consider n̂ = 1 and S_t = e^{itH_1}, where H_1 is the operator of multiplication by the function ω(k) = |k|², k ∈ R³. Let T_λ be an integral operator in H with the kernel T_λ(k, k′) = (2π|k||k′|)^{−1} χ_{[0,

Theorem 5 Let a_λ = N_{T_λ,ω,ε}(φ_0), where T_λ and φ_0 are defined as above. Then for any l ∈ N one has κ_l(a_λ, …, a_λ) = λ δ_{ω,0}; equivalently, the cumulants of the element a_λ with ω = 0 coincide in the low density limit with the cumulants of the Poisson distribution with expectation equal to λ.
Proof. The proof of the theorem is based on the direct calculation of the cumulants using Eq. (17). Evaluating the time integral and the energy integral separately shows that the r.h.s. of Eq. (17) equals λδ_{ω,0}. This proves the theorem.
Moments of the element a_λ with ω = 0 in the low density limit are equal to the sum over all partitions of the limiting cumulants and are given by Touchard polynomials:

lim_{ε→0} ω_{εn̂}(a_λ^n) = Σ_{k=1}^n S(n, k) λ^k,

where S(n, k) is a Stirling number of the second kind, i.e., the number of partitions of a set of size n into k disjoint non-empty subsets. The limiting moments coincide with the moments of the Poisson distribution with expectation equal to λ. For a_1 one has

lim_{ε→0} ω_{εn̂}(a_1^n) = B_n,

where B_n is the n-th Bell number, i.e., the number of partitions of a set of size n. The Bell numbers are the moments of the Poisson distribution with expectation equal to 1.
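These combinatorial identities are easy to check numerically; a small illustrative sketch:

```python
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def stirling2(n: int, k: int) -> int:
    """Stirling numbers of the second kind: S(n,k) = k*S(n-1,k) + S(n-1,k-1)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def touchard(n: int, lam: float) -> float:
    """n-th Poisson(lam) moment: the Touchard polynomial sum_k S(n,k) lam^k."""
    return sum(stirling2(n, k) * lam ** k for k in range(1, n + 1))

def bell(n: int) -> int:
    """Bell number B_n = number of set partitions of an n-element set."""
    return sum(stirling2(n, k) for k in range(1, n + 1))

# Cross-check against the Poisson moment recurrence m_{n+1} = lam*sum C(n,j)*m_j.
lam = 1.0
m = [1.0]
for n in range(5):
    m.append(lam * sum(math.comb(n, j) * m[j] for j in range(n + 1)))

print([bell(n) for n in range(1, 6)])           # [1, 2, 5, 15, 52]
print([touchard(n, lam) for n in range(1, 6)])  # same values for lam = 1
print(m[1:6])                                   # [1.0, 2.0, 5.0, 15.0, 52.0]
```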
AN OPERATOR REPRESENTATION OF THE LIMITING CORRELATION FUNCTIONS
In the present section we explicitly realize the limiting correlation functions as correlation functions of certain operators acting in a suitable Hilbert space. The presence of delta functions in the limiting correlation functions suggests that they can be represented as correlation functions of certain white noise operators. Here such a representation is constructed in a special case, using the results of Ref. 7.
Let g_0, g_1 ∈ H satisfy the condition ⟨g_0, S_t g_1⟩ = 0 for any t ∈ R. Define for n, m = 0, 1 the Hilbert space K_{nm} := L²(Spec H_1, dμ_{nm}), where Spec H_1 ⊂ R is the spectrum of H_1 and dμ_{nm} := ⟨g_n, P_E g_n⟩⟨g_m, P_E g_m⟩ dE. Let K := ⊕_{n,m=0,1} K_{nm} and let H_{WN} := Γ(L²(R, K)) be the symmetric Fock space over the Hilbert space of square-integrable K-valued functions on R (the abbreviation WN stands for White Noise). Using the natural decomposition H_{WN} = ⊗_{n,m=0,1} Γ(L²(R, K_{nm})) one can define the creation and annihilation operator valued distributions B^±_{m,n}(E, t) acting in H_{WN} and satisfying the canonical commutation relations
$$[B^{-}_{m,n}(E, t),\, B^{+}_{m',n'}(E', t')] = \delta_{m m'}\,\delta_{n n'}\,\delta(t' - t)\,\delta(E' - E). \qquad (18)$$
The operator valued distributions B^±_{m,n}(E, t) are called time-energy quantum white noise due to the presence of δ(t′ − t)δ(E − E′) in (18). Define the number operators … Let Ω ∈ H_{WN} be the vacuum vector.
"year": 2007,
"sha1": "870904a9b2554445c265953e72940a15aef9b088",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/math-ph/0701055",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "870904a9b2554445c265953e72940a15aef9b088",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
208032706 | pes2o/s2orc | v3-fos-license | Angina due to diffuse coronary artery disease in a patient with heart failure
A 64-year-old Caucasian female was referred to our Outpatient Clinic Center with a history of progressive, moderate angina (Canadian Cardiovascular Society Class II) and shortness of breath (New York Heart Association Class II) lasting for the past 3 months. Two years before, she had an acute myocardial infarction and underwent coronary bypass surgery (left internal mammary artery to left anterior descending artery + saphenous vein to the right coronary artery) at another facility. She had a long history of hypertension and hypercholesterolaemia, both irregularly treated. She works as a cook and is physically inactive.
On examination, she had a body mass index of 28.7 kg/m², a heart rate of 80 b.p.m., and a blood pressure (BP) of 132/82 mmHg. There was a mild holosystolic murmur (grade 2) best heard at the apex; fine bibasilar crackles were present, as was pitting oedema in both legs. Blood glucose level was 108 mg/dL, HbA1c 6.1%, total cholesterol 191 mg/dL, low-density lipoprotein (LDL) 112 mg/dL, high-density lipoprotein 38 mg/dL, triglycerides 205 mg/dL, and creatinine 0.9 mg/dL (glomerular filtration rate was 67 mL/min/1.73 m² as calculated by the Modification of Diet in Renal Disease Study equation). The resting ECG is shown in Figure 1, and a transthoracic echocardiogram (Figure 2) revealed a dilated left ventricle with an estimated ejection fraction of 28% (Teicholz), moderate left atrial enlargement, and mild mitral regurgitation. She was on aspirin 100 mg once daily, enalapril 10 mg twice daily, carvedilol 12.5 mg twice daily, spironolactone 25 mg once daily, and atorvastatin 20 mg once daily.
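For reference, the quoted eGFR can be reproduced with the classic 4-variable MDRD study equation; the sketch below (not from the case report) assumes the original 186-coefficient version of the formula, since the report does not specify which version was used.

```python
def mdrd_egfr(scr_mg_dl: float, age: float, female: bool, black: bool = False) -> float:
    """4-variable MDRD study equation (original 186 coefficient),
    returning eGFR in mL/min/1.73 m^2."""
    egfr = 186.0 * scr_mg_dl**-1.154 * age**-0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Patient in the vignette: creatinine 0.9 mg/dL, 64-year-old female
print(round(mdrd_egfr(0.9, 64, female=True)))  # ~67, matching the reported value
```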
At this stage, a diagnosis of stable angina in a patient with post-myocardial infarction heart failure was made.
Based on the clinical diagnosis, how would you further investigate this patient?
Would you consider a functional, non-invasive assessment of her ischaemic burden? Would you prefer a non-invasive assessment of the coronary arteries by computed tomography angiography? Would you rather proceed immediately with an invasive angiography?
Although the patient was still not on optimal medical therapy for angina control, her history of recent, progressive symptoms and the impairment in the left ventricular function prompted our Heart Team to consider an invasive coronary angiography.
Meanwhile, medical treatment had to be optimized for better symptom control. The patient was strongly advised to lose weight, and, accordingly, nutritional counselling was recommended. Atorvastatin was increased to 80 mg daily in an attempt to achieve an LDL level <50 mg/dL. Furosemide 40 mg once daily was added. However, the panel was divided between increasing the ACE inhibitor or the β-blocker dose, the main concern being the reduction in BP and, thus, tolerability. It was also proposed to switch the ACE inhibitor to the sacubitril/valsartan combination, but it was decided not to make this switch due to the risk of hypotension. Finally, we increased the dose of carvedilol to 25 mg twice daily.
One month later, she returned with the results of the coronary angiography (Figure 3). She mentioned a modest improvement in symptoms, especially the shortness of breath. She had lost about 2 kg. Angina was less frequent; last week she had a disagreement with a co-worker and angina occurred at rest, but it was relieved with a shortacting nitrate. Her heart rate was down to 72 b.p.m. and BP to 122/72 mmHg. Now we have to face the decision between further optimizing medical treatment or consider a myocardial revascularization procedure (percutaneous coronary intervention or redo coronary artery bypass grafting).
What would you do now?
The Heart Team convened again to discuss whether to proceed to coronary angioplasty of the obtuse marginal branch and the right coronary artery (chronic total occlusion). There was an overall consensus to first optimize medical treatment further, trying to tackle both the coronary artery disease (for symptom control) and the presence of heart failure with reduced ejection fraction (for prognosis). Ivabradine 5 mg twice daily was added to her treatment. Ivabradine was selected because the heart rate was still above 70 b.p.m. on the maximally tolerated dosage of β-blocker.
The BEAUTIFUL trial 1 showed that, in patients with stable coronary artery disease with a heart rate above 70 b.p.m., in sinus rhythm, and a left ventricular ejection fraction below 40%, ivabradine on top of maximally tolerated therapy decreased the risk of hospitalization for fatal/non-fatal myocardial infarction by 36%, and the need of revascularization by 30%. The SHIFT trial, 2 targeting patients with severe left ventricular dysfunction like the one we are discussing here, showed that the addition of ivabradine on top of optimal medical therapy led to a significant 26% decrease in both the risk of hospital admissions for worsening heart failure and deaths due to heart failure. Antianginal agents with BP-lowering effects (such as dihydropyridine calcium channel antagonists or long-acting nitrates) should not be used or used with caution; drugs with myocardial depressant effects (verapamil or diltiazem) should also not be used in patients with left ventricular dysfunction. 3 Trimetazidine has beneficial effects in patients with left ventricular dysfunction by decreasing the severity of angina and increasing left ventricular ejection fraction 4 and could be an option if needed, although its impact on long-term prognosis is less well documented than for ivabradine.
One month later, the patient reported a significant improvement in the severity and frequency of her angina attacks. She was enrolled in a cardiac rehabilitation programme. Her vitals now were a heart rate of 64 b.p.m. and BP 120/68 mmHg. Ivabradine was titrated to 7.5 mg twice daily. When last seen, she was quite pleased with her treatment, having experienced no angina during her daily activities.
Funding
The authors did not receive any financial support or honoraria from Servier for this article. | 2019-11-14T17:07:21.671Z | 2019-11-01T00:00:00.000 | {
"year": 2019,
"sha1": "f3e39ac96a28dc574bf0225a80e0c36591bc2a72",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/eurheartjsupp/article-pdf/21/Supplement_G/G23/30665484/suz197.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "8c58cdd0405a4e7abae0e47aa3b238d9ab7e47ca",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238839377 | pes2o/s2orc | v3-fos-license | Envisioning the challenges of the pharmaceutical sector in the Indian health-care industry: a scenario analysis
Purpose – This study aims to assess, analyze and highlight opportunities and problems of the Indian pharmaceutical sector within the broader national health-care industry. The recent changes in the field, at the institutional and corporate levels, have placed India in the spotlight of the global pharmaceutical market, but several threats and weaknesses could limit this expansion.
Design/methodology/approach – Descriptive and inferential analyses have been based on empirical data extracted from authenticated data sources. Subsequently, a narrative strengths, weaknesses, opportunities and threats (SWOT) analysis was performed based on the results of prior investigations and on qualitative data retrieved from a marketing intelligence examination to generate an overall scenario analysis.
Findings – Indian pharmaceutical companies have faced several challenges on various fronts. In the home market, drug prices are controlled by the Drug Price Control Order; therefore, there is strong pressure on revenues and subsequently on costs. In the international market, threats derived from pharmaceutical multinational companies are emerging as tough obstacles to overcome.
Practical implications – More focus on patents for innovative drugs is required, instead of concentrating primarily on generic drugs. There is a need for policymakers to work on the sustainability and development of the industry, while companies must redesign their orientation toward enhancing innovation capabilities. In addition, at the level of corporate strategy, firms should establish collaborations and alliances and expand their industrial marketing vision.
Originality/value – This study provides a global overview of the potential growth and development of the Indian pharmaceutical sector, comparing it with internal trends and external competition. The most relevant contribution of the research relies on the shift to innovative production that Indian companies must adopt (after years of focusing only on generic drugs); in this vein, appropriate industrial marketing solutions are indispensable.
Introduction
The Indian pharmaceutical sector is at the top among the country's science-based industries, having widespread competencies in the complex field of drug manufacturing and technology; prior to 1991, governmental policies focused on self-reliance, protectionism and lesser cooperation in trade and services, whereas the world was advancing technologically very quickly, particularly in the information technology sector (Sharma, 2016;Kraus et al., 2021). These factors affected the country in many ways; one of the most relevant impacts was less priority given to the health sector for growth and modernization, with smaller cities and remote areas covered with the inefficient national health-care service or small hospitals and practitioners (Abrol et al., 2011).
The subsequent development of the sector brought India to rank among the top health-care markets in the world by the end of 2020. Various factors have influenced this progress: an increase in per capita income, boosting public awareness of various diseases and preventive measures, decreasing costs of health-care services, effective research and development (R&D) activities and governmental policies to induce foreign investment (IBEF, 2016).
From a quantitative point of view, the Indian health-care sector is expected to surpass US$372bn by 2022 and by 2024-2025, India's biotech industry is estimated to increase to US$100bn (FICCI, 2018). From a qualitative point of view, the Indian health-care sector focuses on several critical pillars: preventive health care, accessible health care, building medical services and mission style strategies for maternal health, child health and the marked rise in the burden to fight transmissible and noncommunicable diseases (Venkatesh et al., 2019).
Although its economy is the third-largest in the world after China and the USA in terms of purchasing power parity (Imf.org), India is still characterized by insufficient health-care facilities, scarce physical and medical infrastructure, and insufficient specialized medical staff in smaller cities and rural areas, largely due to a lack of economic resources (Roy et al., 2019).
The Indian pharmaceutical industry is among the top producers in the world, supplying over 50% of the global demand for various vaccines, 40% of generic demand in the USA and 25% of all medicines in the UK (IBEF, 2020). Although the sector still shows more relevant values concerning production quantity than production turnover, pharmaceutical exports are expected to reach US$16.28bn in FY20 (ibidem).
Presently, there are more than 11,000 manufacturing units and over 3,000 pharma companies in India; although the industry is growing at an exceptional rate, it is highly fragmented; the top ten firms, including multinational companies (MNCs), account for only one-third of the total revenues from the sector (Gulaldavar, 2019). The market is dominated by generic products with 71% of the total market share and 20% of global exports in terms of volume, which makes India the largest supplier of generics globally (IBEF, 2016), with particular importance in the field of vaccines (Chattopadhyay and Bercovitz, 2020).
In this study, we tried to identify and analyze the current situation of the Indian pharmaceutical industry, attempting to track the marketing trajectories of the industry and emphasizing possibilities and opportunities for healthy growth and development at the domestic and international levels. The remainder of the paper is organized as follows: after a global overview of the industrial scenario, with specific attention paid to industrial marketing dynamics, a descriptive and inferential analysis has been developed that provides adequate information about trends in production and trade balance; subsequently, a content analysis and a narrative strengths, weaknesses, opportunities and threats (SWOT) analysis contribute to delineating a global scenario analysis intended as a fundamental instrument of industrial strategic marketing (de Kluyver, 1980;Pinchot, 2001;Kirchgeorg et al., 2010;Ha and Nam, 2016;Lew et al., 2019); the paper concludes with considerations of its theoretical and practical implications.
Institutional and scientific background
The Indian pharmaceutical sector has achieved a huge expansion in the last decade, although it is exceedingly uneven, with more than 20,000 registered units; the sector meets approximately 70% of the country's demand, consists of a highly fragmented market, and has seen increased price competition and governmental price control; consequently, the companies in the field show huge differences in innovative capabilities, and they can be grouped into three types (innovators, niche operators and manufacturers), each showing the need for different innovation policies to sustain their growth and development (Sampath, 2006). From an industrial marketing perspective, the most relevant aspect of the Indian pharmaceutical industry is that it is in a process of transformation, seeking to gain even more credibility at the national and international levels and particularly with the government, which is implementing the new compliant intellectual property regime (Prakash et al., 2018).
The current regulations are not yet designed to promote patent filings (Chaudhuri, 2019), and this situation strongly impacts the internationalization perspective. The Indian pharmaceutical industry has been facing problems such as declining exports and increasing prices in many instances and maintaining competitiveness and market share in this sector depends on the firms' ability to obtain patents (Tyagi et al., 2018), requiring substantial spending in R&D and knowledge building (Tyagi and Nauriyal, 2017).
The Indian government has provided several policies to support the marketing impact of the industry; although tax-related benefits would be useful in this regard (Abbott, 2017;Gautam and Sharma, 2019), the measure of most impact would be to recognize intellectual property rights (IPRs) as the natural factor for encouraging pharmaceutical R&D. From a financial point of view, a potential increase in working capital (tax benefits as deferred liquidity, for example) may improve the future performance of domestic companies (Vijayalakshmi and Srividya, 2015), but to maximize their enterprise value, there is a need for higher R&D investments and major production of cost-effective drugs, which are inevitably influenced by the financial structure of firms (Desai and Desai, 2018), with the current ratio that has a positive influence on R&D investment and the debt ratio that has a negative influence on R&D investment (Lee and Choi, 2015).
The analysis of market access (MA) for ethical drugs shows the fundamental public-private interaction among pharmaceutical firms and public stakeholders to generate effects on health system stability (Santos et al., 2019); institutional and industrial collaborations are essential for understanding the modalities through which to achieve health system sustainability (Schiavone and Simoni, 2019;Guercini et al., 2020). Moreover, considering that the public side sometimes, or often, neglects the private business setting's interactivity and interdependence, it is desirable to promote the involvement of independent actors capable of delivering business reactivity, cost efficiency and quality control while being subject to competitive pressures (Waluszewski et al., 2019).
An additional aspect of the national industry, directly associated with IPRs, concerns secondary patents, often adopted to extend the periods, which is a concern for competitors and for governments; in response, several countries have provided specific measures to control the granting of these patents, but they were ineffective in many ways, revealing that it is vital to have a monitoring function to evaluate the effects of these applications in developing countries particularly (Sampat and Shadlen, 2017). In addition, it was realized that the dynamics enabling strategic account management in the pharmaceutical industry can emerge as an interaction model of value cocreation selling, suggesting the presence, in the hospital-pharmaceutical connection (Lepore et al., 2018;Zhang et al., 2018), of two key dimensions that may allow for customer-specific value-added initiatives and relationship enhancers (Pilon and Hadjielias, 2017).
The question about patents concerns not only the business perspective but also the social perspective, considering that the major problems of the Indian health-care industryand secondarily the pharmaceutical industryare accessibility and affordability; increased purchasing power and epidemiological changes are expected to spur dramatic growth in pharmacy sales volumes, but India remains a price-sensitive market (Devarakonda, 2016). These companies are forced to deal with many challenges not only in the home country but also abroad (for example, obtaining approval from the local authorities for the innovative drug business), having problems with the availability of good affordable medicines for patients (Bains et al., 2010), mainly at the economic level domestically and at the innovative level internationally.
The Defense of India Act included price control orders that first initiated price controls over drugs in 1963 (Wankhar, 2015). The MNCs had great dominance over the sector during that period, and therefore price controls came to be considered; they were pioneers in the supply of drugs and they sold all types of medicines at a higher rate as there was no control over the prices, with many people being unable to buy such expensive drugs for the treatment of various ailments: henceforth, under Section 3 of the Essential Commodities Act of 1955, the Drug Price Control Order (DPCO) was formulated and later introduced in India in 1995 (Singh, 2017) to guarantee major access to health care for larger portions of the population due to different economic conditions.
In fact, according to the World Health Organization (WHO.int), spending on health care varies to a large extent in developing countries, transient economies and developed countries; developing nations spend 25%-66% on health care, while in transitional economies, nearly 15%-30% of money is spent on health care and associated activities. Paradoxically, but understandably, expenditures on pharmaceuticals and related products are relatively high in low-income countries; for example, India is said to be the country with the highest out-of-pocket expenditure in the health-care sector, largely because, according to the WHO, 65% of Indians are still unable to obtain the necessary medicines; there is a major tendency among Indian doctors to prescribe leading brands instead of cheaper alternatives; hence, there is an urgent need to make drugs accessible at more affordable prices (Nalinakanthi, 2014).
Finally, in a global consideration, poverty must be considered as one of the most important aspects of the Indian pharmaceutical market, most of all as concerns price control; approximately 42% of India's population lives below the poverty line, and more generally, South Asia has low per capita income; from this point of view, because of the ever-increasing population, the government cannot adequately support the pharmaceutical sector from a strict business perspective, having in mind the affordability of medicines and, ultimately, people's health (Parasiya et al., 2013). Thus, the Indian pharmaceutical sector seems to register two contrasting conditions: on the one hand, the impetuous evolution of the offer (with all the limitations concerning IPRs) and on the other hand, the inevitability of taking into careful consideration the economic situation of the demand, with the Indian population still unable to access health-care and medicine when necessary; in the face of this incongruity, which is nothing new in emerging countries, there are huge opportunities envisioned for the evolution of the Indian pharmaceutical industry, the theoretical and practical foundation for this research study, particularly in light of the COVID-19 pandemic, which has highlighted even more the indispensable economic and social value of the pharmaceutical supply chain.
Research objectives and methodology
Based on the above background, the following research questions have been formulated regarding the Indian pharmaceutical sector.
RQ1. "Are the expectations about the global value of the industry positive or negative?"
RQ2. "Are the expectations about the trade balance positive or negative?"
RQ3. "What are the most relevant SWOT for the future?"
These objectives of the investigation highlight the essential explorative nature of the study, which has been finalized as a scenario analysis, adopting a mixed approach of quantitative and qualitative methods. To empirically carry out the investigation, secondary data have been extracted from the authenticated databases of the Centre for Monitoring Indian Economy (Cmie.com) and the Reserve Bank of India (Rbi.org.in), particularly to respond to RQ1 and RQ2. After determining the most relevant coordinates of the field, specific reports and issues from other governmental and corporate institutions have been purposively retrieved and analyzed through a content analysis for generating a narrative SWOT analysis, particularly to respond to RQ3.
Results
The following investigation provides general elements for the examination of the current and expected scenarios of the Indian pharmaceutical sector to respond to RQ1 and RQ2. More specific data have been provided with reference to DPCO to respond to RQ3.
The scenario of the Indian pharmaceutical industry: descriptive and inferential analysis
In value terms, the sector, considering the total manufacturing of pharmaceuticals, medicinal chemicals and botanicals (Table 1), is worth Rs. 145,841.1m in 2018-2019. The largest market share concerns ayurvedic and homeopathic medicaments (AYUSH medicines) with 24.37%, followed by antibiotics (API and formulations) with 18.82%, and anti-retroviral drugs for HIV (Human Immunodeficiency Virus) treatment with 15.89%. These three pharmaceutical categories alone are worth approximately 60% of the entire Indian pharmaceutical production. In the above, API stands for Active Pharmaceutical Ingredient and AYUSH stands for Ayurveda, Yoga, Unani, Siddha and Homeopathy. Starting in 2014, a Ministry of AYUSH was established in India, indicating the extraordinary relevance of these specific items in the country. In Table 1, the following notation has been adopted:
A = Total production of pharmaceuticals, medicinal chemicals and botanical products
B = Vitamins (API and formulations)
C = Antibiotics (API and formulations)
D = Antidiabetic drugs (excluding insulin)
E = Antipyretic, analgesic/anti-inflammatory drugs (API and formulations)
F = Anti-retroviral drugs for HIV treatment
G = Capsules
H = Ayurvedic and homeopathic medicaments (AYUSH medicines)
I = Vaccines for veterinary medicine
J = Medical/surgical accessories
K = Other products
From the following analysis (Figure 1), it can be seen that the production of pharmaceuticals, medicinal chemicals and botanicals grew over the years from 2013-2014 to 2018-2019, although not constantly. The trend line of the values, derived from an ordinary least squares (OLS) calculation, shows a consistent upward trend in total production (compound annual growth rate [CAGR] = 1.07%), even though the strong impact of the 2018-2019 values is evident compared with the decreases of 2016-2017 and 2017-2018. For this reason, together with the very limited number of observations, the R² value is low (0.0329).
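This trend analysis can be reproduced in a few lines; in the sketch below, the six production values are placeholders standing in for the CMIE series (only the 2018-2019 total above comes from the paper), so the printed CAGR and R² will not exactly match the reported 1.07% and 0.0329.

```python
import numpy as np

# Placeholder annual production values (Rs. million), 2013-14 ... 2018-19;
# substitute the actual CMIE series to reproduce the paper's figures.
years = np.arange(6)
values = np.array([138_000.0, 141_000.0, 143_500.0, 139_000.0, 137_500.0, 145_841.1])

# Compound annual growth rate over the five year-to-year steps
cagr = (values[-1] / values[0]) ** (1 / (len(values) - 1)) - 1

# OLS trend line and its coefficient of determination R^2
slope, intercept = np.polyfit(years, values, 1)
fitted = slope * years + intercept
r2 = 1 - np.sum((values - fitted) ** 2) / np.sum((values - values.mean()) ** 2)

print(f"CAGR = {cagr:.2%}, R^2 = {r2:.4f}")
```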
These calculations allow us to respond positively to RQ1 ("Are the expectations about the global value of the industry positive or negative?").
Subsequently, the trade balance of the sector was analyzed to understand the state of the art and potential development trajectories of growth/development compared to the rest of the world. The data reported in Table 2 (with the details of various categories) and Table 3 (with the related balance of the same categories) provide evidence about the progress of the Indian pharmaceutical industry and its international business.
From the following analysis (Figure 2), it can be seen that the trade balance grew from 2014-2015 to 2018-2019, and in this case constantly. The trend line of the values, derived from an OLS calculation, shows a consistently increasing trend in the total trade balance (CAGR = 5.08%), with reliable evidence year after year. For this reason, notwithstanding the very limited number of observations, the R² value is high (0.8617).
With specific reference to single commercial relationships, the USA and other developed countries are major trade partners as far as exports are concerned, while India depends significantly on imports of APIs from China (Rbi.org.in). More broadly, the Indian pharmaceutical industry has major exports to North America and Europe and major imports from Asia and Europe. Furthermore, as reported in Table 4, the positive trade balance trend appears to be confirmed for the future, most likely by virtue of renewed attention of the global economies on India's pharmaceutical industry due to the COVID-19 pandemic.
From the following analysis (Figure 3), the trade balance, under the CMIE forecast, will also grow consistently along the abovementioned trend for the years from 2015-2016 to 2023-2024. The trend line of the values, derived from an OLS calculation, shows a consistently increasing trend in the total trade balance (CAGR = 4.95%), with reliable evidence year after year. For these reasons, notwithstanding the very limited number of observations, the R² value is high (0.9678).
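Extending the fitted OLS line yields a projection of the kind shown in Figure 3; the short series below is invented for illustration and is not the CMIE forecast data.

```python
import numpy as np

# Illustrative trade-balance series (index values), 2015-16 ... 2019-20
t = np.arange(5)
balance = np.array([10.1, 10.6, 11.2, 11.8, 12.3])

slope, intercept = np.polyfit(t, balance, 1)

# Extrapolate the fitted line four more years, out to 2023-24
future_t = np.arange(5, 9)
print((slope * future_t + intercept).round(2))
```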
These calculations, as emerging from Figures 2 and 3, allow us to respond positively to RQ2 ("Are the expectations about the trade balance positive or negative?"). From this initial overview of the current and future situation of the Indian pharmaceutical sector, a positive picture has emerged. However, several issues are not captured by the positive numbers analyzed so far; these weaknesses and threats for the industry emerged from a qualitative analysis that was deployed to provide the basis for the subsequent narrative SWOT analysis.
The scenario of the Indian pharmaceutical industry: contextual analysis
Taking into consideration some of the most authoritative reports and issues in the field at the institutional and corporate levels (e.g. Biospectrum Bureau, Fortis Healthcare, IBEF, McKinsey, etc.), a content analysis was performed to identify the most relevant issues for the sector. The investigation was executed manually, as it is more interpretive than computational, using standard office automation software for searching, retrieving, organizing, cataloging and processing the most relevant elements of the texts, focusing the global analysis on the final goal of the research and identifying SWOT in terms of both specific words and general concepts.
The most relevant results of this inquiry have been aggregated in the following seven "areas" detected by applying, after manual content analysis, the affinity diagram technique, similar to other research (Tuch et al., 2013;Zhang and Sun, 2017;Song et al., 2018). Next, the most significant evidence for each category is reported.
Pharmaceutical research hub.
Health-care special economic zones.
Manufacturing facilities for medical equipment.
Land/town planning for health-care facilities.
Food and Drug Administration (FDA) regulations.
Health insurance penetration.
Drug Price Control Order (DPCO).
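The affinity-diagram grouping behind this list was done by hand, but its bookkeeping can be mimicked in code; the keyword-to-area mapping below is invented for illustration and does not come from the reports the authors analyzed.

```python
from collections import Counter

# Hypothetical keyword-to-area mapping for grouping content-analysis hits
affinity = {
    "pharmaceutical research hub": ["r&d", "laboratory", "graduates"],
    "FDA regulations": ["fda", "approval", "inspection"],
    "health insurance penetration": ["insurance", "coverage", "premium"],
}

def tally_areas(snippets):
    """Count how many text snippets touch each area (rough substring matching)."""
    counts = Counter()
    for text in snippets:
        low = text.lower()
        for area, keywords in affinity.items():
            if any(kw in low for kw in keywords):
                counts[area] += 1
    return counts

sample = ["New FDA inspection rules delay approvals",
          "Insurance coverage remains thin in rural districts"]
print(tally_areas(sample))
```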
Pharmaceutical research hub (potential strength)
Innovation in the Indian pharmaceutical industry is weaker than its potential. Companies still prefer generic drug businesses and conduct less research. After the 1990s, India emerged as an information technology and information technology-enabled services hub for the world due to trained manpower, a very high number of computer engineers, and a cheaper workforce that could speak foreign languages, mostly English. Similarly, pharmaceutical research and development can be carried out in India by global pharmaceutical MNCs. India has many pharmaceutical, bioscience and chemistry colleges that produce large numbers of high-quality graduates every year. The skilled labor force is available in Indian cities at very competitive salaries compared to most other destinations. Having research labs in India will be a win-win situation for MNCs.
Health-care special economic zones (potential opportunity)
The medical tourism segment in the global health industry is increasing rapidly because the cost of health-care is significantly lower in India than that in Korea, Malaysia, Thailand and many other countries in the region. Improved medical facilities, modernization of hospitals and lower and affordable treatment costs for most developed and developing countries' nationals are all reasons for this boom (Fortis Healthcare, 2019).
Special economic zones (SEZs) are popular because they provide tax benefits for companies. To create new business opportunities for the pharmaceutical industry, SEZs can be a powerful option, especially if established near airports, stations and ports.
Manufacturing facilities for medical equipment (potential strength)
Currently, China is a manufacturer for the whole world. India's manpower is large, with a huge number of graduates. The earlier Indian business environment was not considered suitable for doing business, per the survey results of the International Finance Corporation (IFC.org), but in recent years, the climate has improved in terms of the ease of setting up companies, winding up, infrastructure development, etc. In India, production factors are cheaper than those in the developed world and can be associated with trained manpower. Hence, there is an opportunity to set up manufacturing facilities to produce medical equipment.
Land/town planning for health-care facilities (potential opportunity)
India has witnessed rapid population and economic growth in the past two decades, and this expansion has created a demand for larger and better infrastructure in cities and industrial parks. India's limited development of service platforms is a result of the scarce functioning of the bureaucracy, but conditions have improved in recent years. There is a need to reserve appropriate sizes of land for the health-care sector in cities and for pharmaceutical companies in industrial belts during the planning process. In India, the public health-care system is not strong, and most of the middle class and above prefer to visit private hospitals for treatment. Reservation or allocation of land for the health-care sector will boost the growth and development of the industry, resulting in employment and self-employment of health-care staff, even at the pharmaceutical level. Financially poor people use India's public health-care system and reserving land in cities for public facilities will serve the population's low-income earners.
"Food and Drug Administration (FDA) regulations (potential threat)
The USA represents a large pharmaceutical demand as companies obtain substantially higher prices for medicines sold in that market. FDA approvals are considered benchmarks across many countries; therefore, these authorizations are very important for Indian pharmaceutical industries for the access, export, presence and profit they represent at the global level. Although Indian companies have received the highest number of FDA approvals in the last decade and in 2019-2020 particularly, there is an ongoing need to obtain these authorizations. The FDA regulations are stringent and time-consuming, making it difficult for midsized Indian companies to access the North American market and other similarly developed markets. Thus, the Indian government should establish ongoing support to small-and medium-sized Indian companies in training and implementing FDA standards, as it provides easy global access to those and similar international markets.
Health insurance penetration (current weakness, potential opportunity)
The insufficient diffusion of the services associated with health insurance is a reason for pressure on prices and the slow business growth of the Indian health-care sector. Most of the population earns a very low per capita income that is considered a poverty-level wage by the standards of developed nations. In recent years, there has been some rise and subsequent awareness about health insurance, with an increase in the number of people taking health insurance policies. Many employers have started buying group health insurance policies for employees and families, but insurance coverage is still inadequate if certain medical conditions are considered.
DPCO (real weakness or potential strength?)
4.2.7.1 Influence of price controls on producers
DPCO has been a drawback for drug companies because it caused a decrease in their profit margins. Developing a medicine involves many aspects and is quite costly; hence, reducing prices and profit margins has generated complex situations for pharmaceutical companies. Drugs registered under DPCO must be sold within the price range, which is mandatory for the company, and therefore the break-even point is sometimes barely reached (Paul, 2018).
4.2.7.2 Influence of price controls on drug wholesalers and retailers (chemists)
According to DPCO, the trade margins in the pharmaceutical supply chain should be reduced to allow better penetration of medicines under DPCO and allow a larger population to obtain access. The first-line sellers will receive different percentages of margins to apply, including distributors, wholesalers, retailers (chemists) and hospitals. If this measure proves to be successful, it will lead to a tremendous increase in the availability of essential drugs (Thacker, 2018), but at the same time, the pharmaceutical producers will have to take into careful consideration the business relationships with these operators, whose importance in the supply chain has been growing (IQVIA, 2018).
4.2.7.3 Influence of price controls on patients/consumers
DPCO is a blessing for those who use medicine and has boosted their hopes of the availability of cheaper and better medicines, leading to a great psychological impact on consumers (Venkiteswaran, 2013). For example, heart attacks and cardiac arrests are increasing in India, and treatments for these newly emerging diseases are very expensive. Because open-heart surgery and the stents needed for angioplasty are not affordable to all, DPCO could offer a potential solution for these and other medical treatments (Wadhera et al., 2017).
4.2.7.4 Influence of price controls on patented drugs
There was an enormous effect of price policies on brands and drugs that had similar contents. These branded drugs showed a variation in pricing, also known as inter-brand price variation. There was ample availability of multiple brands for identical drugs in India, and therefore, after DPCO was revised in 2013, a so-called tug of war between these branded drugs began. There is a concrete risk of confusion in the minds of patients/consumers regarding which drug should be chosen due to this price variation. Therefore, price controls were effective at addressing this price variation and are attempting to decrease the difference even more (Jhanwar and Sharma, 2018).
4.2.7.5 Influence of price controls on innovation management
The pharmaceutical business function of R&D has been significantly affected by the DPCO. Due to price controls, companies have not dedicated themselves to inventing and developing new medicines. These operations need huge amounts of capital, but DPCO measures, which limit drug prices, do not contribute to establishing a favorable scenario concerning capital budgeting for R&D. In fact, if a developed drug comes under DPCO, then it is likely that sales will be hindered, affecting the expansion of the industry due to the regulations pertaining to that medicine (Biospectrum Bureau, 2016).
4.2.7.6 Influence of price controls on corporate development
When the DPCO was revised, many growing pharmaceutical companies were shaken. The main reason for their slump was the establishing of prices by the government. The ceiling for drug prices placed restrictions on the companies, and because of price controls, the sales of the drugs were also hindered and therefore unsatisfactory. Other studies have confirmed that sales decreased after DPCO was revised in 2013 (Sahay and Jaikumar, 2016), with a large amount of disparity in the industry, mainly due to the rise of two dynamics: certain companies showed growth in revenue, while some companies showed a recession. Growth was seen in companies where a majority of drug prices were under the ceiling price; in contrast, there was a decline in the growth of those companies where a majority of drug prices were above the ceiling price. Many differences were seen between DPCO and the concerned companies because of the adoption of market-based pricing (MBP). Drug prices were formerly established based on cost-based pricing, which took into consideration the different components necessary to produce a drug, including API, cost of labor and amortizations, the costs of which were used to decide on the ceiling price. However, in MBP, pricing was determined by market share, and according to demand, it was characterized into numerous categories of drugs. Thus, MBP has had an adverse impact on pharmaceutical companies (Narula, 2015).
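To make the cost-based versus market-based contrast concrete, here is a stylized sketch of an MBP-style ceiling-price computation; the simple-average rule and the 1% market-share filter follow the commonly cited DPCO 2013 methodology, while the 16% retailer margin and all brand figures are illustrative assumptions.

```python
def mbp_ceiling(brands, margin=0.16, share_threshold=0.01):
    """Stylized DPCO 2013 market-based ceiling price: simple average of the
    prices-to-retailer of brands with at least 1% market share, marked up
    by a notified retailer margin (assumed 16% here)."""
    eligible = [price for price, share in brands if share >= share_threshold]
    return sum(eligible) / len(eligible) * (1 + margin)

# (price_to_retailer, market_share) for one hypothetical formulation
brands = [(10.0, 0.40), (12.0, 0.30), (9.0, 0.05), (25.0, 0.005)]
print(round(mbp_ceiling(brands), 2))  # 11.99; the 0.5%-share brand is excluded
```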
4.2.7.7 Influence of price controls on exports and imports
DPCO is applicable only in India. Consequently, domestic businesses will be hindered to some extent, and exports may blossom. Thus, focusing on specific medicines to export will probably be vital. For example, India has been universally considered the pioneer in the export market for generic drugs. In the case of other medicines, if a drug is under DPCO in India, it may be possible to sell it under price control domestically and to collect maximum revenue from exports, thus helping to enhance the country's economy (Das, 2013). Nonetheless, DPCO policies had an adverse influence on the imports of medicines to India in the form of a fall in trade due to price regulations. One of the reasons for this decrease was the transition of the pricing strategy from cost-based to market-based policies, a noticeable cause due to the implications deriving from DPCO (PTI, 2013).
4.2.7.8 Positive and negative effects of price controls
A rise in the profit margin of businesses having product prices below DPCO was registered, which resulted in economies of scale (Venugopal and Jampala, 2019). The expenditure on health care in India has ultimately reduced the costs of medicines under DPCO, and essential drugs are coming under the ceiling price every day (Kuchey and Jan, 2018). Therefore, a better distribution of medicine among the middle-income groups, who need medicines for several treatments, has been observed. There was social and economic injustice because the low- and middle-income classes could not afford costly medicines, and DPCO resulted in socioeconomic justice for many poor people in India, a developing country. However, pharmaceutical companies may lose interest in the Indian market due to the fixation of prices, which can lead to economic uncertainty, since the Indian market is under rigorous pricing laws. Another major concern is that medicines are not available where needed due to an inadequate supply chain and to the absence of potential producers that may not be attracted by these restrictions. If there is a lack of proper supply of medicine to the needy, this would contradict the objectives of social justice (Mrinali, 2013). Thus, the Indian government should take effective measures to better balance the positive and negative effects of DPCO.
A narrative strengths, weaknesses, opportunities and threats analysis of the Indian pharmaceutical industry
After developing the above topics, it is possible to implement a potential framework to oversee the current and future conditions of the Indian pharmaceutical industry. Naturally, the peculiar situation created by the COVID-19 pandemic is considered, as its impact in the near and far future may greatly influence the scenario analysis. For these reasons, we opted for a narrative SWOT analysis, similar to other studies (Vandevelde and Halleux, 2017;Septinaningrum and Nugraha, 2019;Cowx et al., 2010).
Strengths
There is a robust low-cost manufacturing setup available in India, where the industry can produce drugs at a cost that is 40%-50% lower than the rest of the world and sometimes even as much as 90%. There is also a presence of good technical and technological expertise together with the availability of low-cost skilled human resources. Moreover, the penetration of modern medicine in India is less than 30%. Hence, there is a large untapped market available. The growth of the middle-class population is leading to a new lifestyle, providing a huge market for lifestyle drugs, which are currently the lowest contributor to revenues from the sector. The industry possesses excellent chemistry and process reengineering skills. This provides an added advantage to the nation, which assists in developing processes that are cost-effective (Mahajan, 2019).
Another strength concerns the AYUSH market. As previously mentioned, the AYUSH Ministry was formed in 2014 for the development and spread of Ayurveda, Yoga and Naturopathy, Unani, Siddha and Homoeopathy treatments. Earlier, it was known as the Department of Indian System of Medicine and Homeopathy (ISM&H), founded in 1995. In addition to modern medicine, Ayurveda and the other mentioned treatments are highly popular in India. Patanjali, Himalaya, Vicco Laboratories and Dabur are the major companies manufacturing in the field (Mehrotra et al., 2017). Due to the greater effectiveness of modern medicine, Ayurveda has been losing its importance from a technological point of view, although during the period from 2014-2015 to 2018-2019, AYUSH treatments have seen significant growth in production and exportation, as was seen from the data provided above, and the AYUSH Ministry is spreading awareness of immunity-boosting using AYUSH remedies with reference to COVID-19 (Chaturvedi et al., 2020;Priya and Sujatha, 2020;Ayush.gov.in). Although there seems to be even less demand for these treatments in a reasonable future in comparison to modern medicine, the AYUSH field still attracts investments (Joshi and Srivastava, 2013), and companies engaged in this specific subsector of the Indian pharmaceutical industry will probably see higher exports, in part because there is governmental support. Indian companies engaged in AYUSH production and distribution will have to consider a long-term perspective; in the near future, positive results are likely, but in long-range planning, with the tremendous increase of technology, focusing on this segment could be very risky. In fact, AYUSH represents a mass-niche market, requiring peculiar attention mostly in terms of competitive strategies that would have a focus orientation.
Weaknesses
There is less emphasis on R&D in pharmaceuticals, which have a major focus on generics. This is because in India, there is an inadequate R&D infrastructure and a lower industry-academia connection for research. As mentioned above, DPCO establishes the various pricing parameters according to which the price is to be decided, and this policy reduces the profitability of the companies that would invest in innovative drugs, which require huge capital. This sector has been hamstrung by a lack of product patents, due to which foreign companies do not introduce new drugs in the Indian market, discouraging innovation and drug discovery. Paradoxically, low entry barriers have led to fragmented industries that make the sector highly accessible, intensifying competition.
Opportunities
There are a large number of drugs that went off patent, providing many pharmaceutical companies with huge opportunities to enter the market. India is a country where good skills are available at a lower cost; thus, it is emerging as an attractive destination for contract research and manufacturing organizations due to the rapid growth of the domestic market, robust foreign direct investment (FDI) policies (which are attracting large amounts of green-field and brown-field investments in production and capacity-building from MNCs) and a steady (although not so quick) migration to a product patent-based regime. The increased inclination toward the health insurance sector and the growth of per capita income have expanded the purchasing capacity of patients/consumers, providing a long-term perspective for the development of the pharmaceutical sector. Hence, India can become a global outsourcing hub for pharmaceutical products due to its low-cost production ability combined with FDA-approved manufacturing plants (Vaidya et al., 2018).
Threats
In the present scenario, the changes in the patent regime may benefit MNCs, while domestic companies may face more challenges. Similarly, the threats possibly deriving from other low-cost countries (China above all) are real. The negotiations with MNCs, international rules and domestic regulations are imbalanced, while there are increasingly stringent regulations and nontariff barriers to generic drugs in developed countries (Dhar and Joseph, 2019).
Taking into account the above considerations, from a narrative and then qualitative point of view, a positive scenario for the Indian pharmaceutical industry can be highlighted, thus providing a response to RQ3 ("What are the most relevant SWOT for the future?"). More specifically, both attack and defense strategies have emerged at the same time. For example, an attack strategy seems realistic when combining strengths such as the industry's manufacturing ability and the rising economy of the country, thinking about a competitive strategy based on differentiation and/or niche orientation for innovative and patented drugs. Similarly, a defense strategy seems realistic when combining the experience with generic drug production with DPCO limits, thinking about a competitive strategy based on cost leadership, which may be sustainable considering the huge population of the country.
Theoretical and practical implications
The study seems to have an impact in at least three areas: growth of the sector, growth drivers and human capital in the patent-based regime. All have been associated with relevant implications.
Growth of the sector
The Indian pharmaceutical industry has experienced constant growth in recent years, as shown by the statistical analyses of the field at the descriptive and inferential levels. There are increased expectations regarding both the production volume and the trade balance value.
Although the sector must tackle several issues concerning internal (fragmented) and external (innovative) competition, drug price controls and patent regimes have a significant influence on its effective functioning, which is why the Indian pharmaceutical industry is still mainly concentrated on generic drugs. Although some Indian companies have invested significantly in R&D initiatives, most of them prefer remaining in the business of generic drugs. Because it does not require huge efforts to invent new molecules, there are still numerous small companies that produce the same generic drugs, creating tough competition in the domestic market (Pardhe, 2019).
Nonetheless, despite being a global market leader in generic drug formulations, the Indian pharmaceutical industry is highly dependent on China for raw material supply to produce pharmaceutical formulations and even medicines. India imports approximately 70% of its raw materials from China (Shreyan, 2020).
Thus, from a practical point of view, it is expected that entrepreneurs, managers, and professionals in the field would orient their attention mostly toward innovative and patented drugs to avoid, where possible, dangerous competition based on low prices, establishing alliances for legally influencing the government through forward-looking lobbying. From a theoretical point of view, there is a need to analyze and develop innovative strategies in "transition," which could assist companies in sustaining their financial performance based on the positive outlook that emerged from the field investigation, with a farsighted perspective at the national and international levels.
Growth drivers
In coming years, per capita health-care expenditures in India will grow exponentially, especially because of the higher penetration of health-care services and the increasing spending capacity of the population (Dogra and Dogra, 2018). Other influencing factors are governmental initiatives to support the sector, the intensification of R&D investments, and the expansion of foreign investments (Ganesan and Veena, 2018). This situation also affects the pharmaceutical industry, particularly with the following features:
The growing investment in R&D is a driving factor, together with mergers & acquisitions; Indian pharmaceutical companies invested approximately 9% of their revenues in R&D during 2018 (DIPP, 2018), while in 2017, there were 46 deals for US$1.47bn in India's pharmaceutical industry (Dhingra, 2019).
The low cost of production may support companies' competitiveness, contributing to profitable exports: the production cost in India is much lower than that in the USA, and thus India's capacity to produce high-quality and low-priced medicines represents an enormous business opportunity for the global and domestic industry (ICRA, 2019).
To support economic growth, it is necessary to improve medication affordability and increase health insurance coverage, boosting spending on health-care in general and more specifically on medicine; in rural India in particular, special attention must be paid to over-the-counter drugs (IBEF, 2019).
The Pharma Vision 2020 is the governmental project for assisting the global expansion of the industry to make India a world leader in end-to-end production of drugs, with huge investments by the Ministry of Health and Family Welfare; additionally, there is a favorable propensity for FDI in the sector under the automatic route.
India ranks among the industry leaders in clinical trials, having the availability of genetically diverse populations and qualified health-care professionals; moreover, the Contract Research and Manufacturing Services industry in India is projected to hit US$20bn by 2020 (McKinsey, 2020).
Thus, from a practical point of view, Indian pharmaceutical companies are somehow forced to continue to leverage their expertise in generic drugs, considering the positive financial impact of these productions. At the same time, according to an ambidextrous perspective, they should try to invest in R&D for innovative drugs to capture new opportunities arising from the demands for good health from the growing income classes. From a theoretical point of view, major attention should be paid to conceiving innovative formulas of collaborations at the private and public-private levels, given the rising weight of private companies in the field, and establishing correct and forward-looking alliances for the development of the sector at the national and international levels.
Human capital in the patent-based regime
In India, the IPR regime is important at all levelsstatutory, administrative, and judicial. Starting in 1995, an agreement on Trade Related Aspects of Intellectual Property Rights (TRIPS) was established after negotiation with the World Trade Organization, defining the minimum standards for the protection and enforcement of IPRs in member countries.
Several factors have an impact on pharmaceutical patents in India, such as an increasing level of income, a consistent growth rate of the domestic economy, and rapid growth in the diffusion of better economic conditions. In this respect, however, foreign companies, as well as national companies, have been reluctant to invest in R&D in India (Ghai, 2010). Indeed, compared to developed nations, Indian pharmaceutical companies concentrate less on innovation, to which they allocate a smaller budget. For all the above-described reasons, they find manufacturing generic drugs more lucrative, as the outcome is guaranteed. Consequently, although India produces a large number of graduates, due to the companies' lack of enthusiasm for R&D, graduates are pushed to change professions or tend to pursue careers in related areas because the average remuneration in pharmaceutical companies is lower than that in other industries (Schweitzer and Lu, 2018).
Thus, from a practical point of view, Indian pharmaceutical companies should invest more in R&D, attempting to attract talented human capital. Otherwise, they will lose the highly qualified national workforce that will favor MNCs in the field. From a theoretical point of view, the contribution of intellectual capital to the financial stability and economic prosperity of the pharmaceutical industry, most of all due to patents as immaterial assets deriving from human, structural and/or relational capital, seems indispensable (Festa et al., 2020).
Research limitations and future directions
The principal aim of this study is to conduct an explorative investigation of the current and future situation of the Indian pharmaceutical industry, with the development of a narrative SWOT analysis to generate an overall scenario analysis. The empirical inquiry has been fundamentally based on secondary data, at the descriptive and inferential levels, limiting the years under observation because of constraints concerning the volume of the calculations (six years for the total production value, five years for the total trade balance, and nine years for the total trade balance projections). This is the first limitation of the research; further investigations expanding the range of the data under examination would improve the reliability of the results.
However, the most critical limitation seems to concern the content analysis that enabled the elaboration of the SWOT analysis, particularly regarding the choice of the institutional reports and issues under investigation and the possible methods of examination. Further research involving other reports and even interviews with experts could mitigate this limitation, drawing on more numerous, shared, qualified, and reliable content and adopting additional methods.
Conclusion
The production of pharmaceuticals, medicinal chemicals, and botanicals for health-care has been growing yearly in India. In particular, the country exports generic medicines on a large scale, with a major impact on the American and European markets.
The infrastructure of the industry and the R&D capabilities of domestic businesses have improved considerably in recent years, but many challenges remain, mainly related to pricing regulation, sector fragmentation and intellectual property. Not surprisingly, all of them, directly or indirectly, concern patents, which are central issues of debate in the national industry.
Nevertheless, India has a massive population with low per capita income, and stricter patent rules would probably mean less access to medicine for a significant part of the population. The affordability of pharmaceuticals is a critical challenge in India and South Asia in general, raising questions of life and death.
Finally, the study has revealed, in a current and future scenario analysis, that the overall situation of the Indian pharmaceutical industry is positive at the economic, business, and commercial levels, albeit with many concerns. Most likely, however, the real challenge for the sector will entail a sustainable compromise between the legitimate expectations of innovative growth from the business point of view and the fundamental exigencies of affordable health from the social point of view.
"year": 2021,
"sha1": "a8401d0185d60c394be5a0f2cd9e367218ff7fdb",
"oa_license": "CCBY",
"oa_url": "https://www.emerald.com/insight/content/doi/10.1108/JBIM-07-2020-0365/full/pdf?title=envisioning-the-challenges-of-the-pharmaceutical-sector-in-the-indian-health-care-industry-a-scenario-analysis",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "dfab1643c255ebee0783e49d0ae3b6a62708c4b3",
"s2fieldsofstudy": [
"Business",
"Economics",
"Medicine"
],
"extfieldsofstudy": [
"Business"
]
} |
Characterization of an eco-friendly active packaging film for food with ultraviolet light blocking ability
An eco-friendly active packaging film for food with ultraviolet (UV) light blocking ability was prepared using nano-magnesium oxide (MgO), nano-zinc oxide (ZnO), nano-cellulose (NCC), and poly(lactic acid) (PLA). The results revealed that the four nanomaterials were evenly dispersed in the PLA films, but no chemical bonds formed according to infrared spectroscopy and scanning electron microscopy. Compared with the other PLA films, the PLA films with ZnO were endowed with excellent UV absorption, and their surface hydrophilicity was decreased. In addition, the PLA films with MgO, ZnO, and NCC had improved mechanical strength, better antimicrobial activity, and lower oxygen permeability (OP), although the water vapor permeability (WVP) increased. The PLA film with nanoparticles is an excellent active packaging material with improved physical, mechanical, and barrier properties, which can also protect food or the active ingredients in packaging from damage by UV radiation, and it has broad application prospects for the preparation of multilayered composite active packaging materials for food.
At present, a series of natural active ingredients (antioxidants and antimicrobials) have been added to packaging materials to prepare active and intelligent food packaging (Chavoshizadeh et al., 2020; Guo et al., 2020; Janani et al., 2020; Kong et al., 2020; Sharma et al., 2021). However, the stability of active ingredients against ultraviolet (UV) light, temperature, and oxygen is important for active films in practical application (Vilela et al., 2017; Zhang et al., 2019). In particular, UV light has a stronger influence on the stability of natural active ingredients than the other factors. Therefore, it is essential to improve the stability of natural active ingredients (Mohr et al., 2019; Yang et al., 2021; Yuan et al., 2019).
Currently, encapsulation technology (Pickering emulsions, hydrogels, β-cyclodextrin, and microfibers) (Chen et al., 2019; Dammak et al., 2019), multilayer technology (Biswal & Saha, 2019; Konuk Takma & Korel, 2019; Oudjedi et al., 2019), or both (Estevez-Areco et al., 2020; Li et al., 2020; Yang et al., 2021) are used to protect the active ingredients in packaging materials from environmental factors. Encapsulation uses tiny physical structures to protect the active substances directly, while multilayer technology uses one or more layers of composite membranes to protect the active substances indirectly. Composite membranes generally include multiple active layers and protective layers. The protective layer must have some good characteristics, such as a high barrier, good mechanical properties, high chemical resistance, and UV protection (Yang et al., 2021).
To match the above requirements, a series of protective layers based on PLA and nanoparticles were prepared in this study. The application of nanoparticles in active packaging is considered to be very promising. Nanoparticles are incorporated into food-contact polymers to enhance mechanical and barrier properties. In addition, they are also effective antimicrobial agents and UV absorbers. Currently, nanomaterials such as Ag+ (Nur Amila Najwa et al., 2020; Yalcinkaya et al., 2017), CuO (Peighambardoust et al., 2019), nano-zinc oxide (ZnO) (Sun et al., 2020; Yadav et al., 2021), nano-magnesium oxide (MgO) (Swaroop & Shukla, 2018), and TiO2 (Riahi et al., 2021) are used in active and intelligent packaging. They are generally recognized as safe compounds for the food industry by the Food and Drug Administration.
In this study, a variety of nanoparticles were used in the preparation of PLA active packaging materials. The aims of this study are to enhance the barrier and mechanical properties of PLA film and to give it excellent UV absorption capacity. Then, based on the PLA film, a variety of composite active packaging materials can be developed in the future, and the stability of active ingredients can be effectively guaranteed.
Preparation of nano-modified PLA film
First, the PLA and the nanomaterials (MgO, ZnO, and cellulose) were dried at 60°C for 24 h. Subsequently, 4 g of PLA and 100 mL of trichloromethane were added to a beaker and stirred magnetically for about 3 h until the PLA was completely dissolved. Next, the nanomaterials (MgO, ZnO, nano-cellulose (NCC), and MgO/ZnO, at 1, 2, 3, and 4 g/100 g) were added to the PLA solution, which was then subjected to ultrasound for 15 min at 40 kHz and stirred for about 1 h. The PLA solution was then coated onto a clean glass plate at room temperature and allowed to evaporate naturally for 2-3 h to form a film. Finally, all films were stored at 45°C for 24 h (Swaroop & Shukla, 2018).
Thickness
A total of five points were randomly selected on each film and the thickness was measured using a micrometer (Mitutoyo, Kawasaki, Japan). Five replicate measurements were made per sample and the average value was reported.
Transmittance
Each film was cut into a 2 × 4 cm rectangle and the transmittance was measured using a UV-visible spectrophotometer (Mettler Toledo, Zurich, Switzerland). The wavelength range was set to 200-800 nm and five replicates were measured per sample (Arrieta et al., 2015).
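As a rough illustration of how such spectrophotometer output can be summarized, the sketch below averages the measured transmittance over the UV (200-400 nm) and visible (400-800 nm) bands. The spectrum in the example is synthetic, not measured data; only the 200-800 nm scan range follows this paper.

```python
import numpy as np

def band_mean_transmittance(wavelengths_nm, transmittance_pct, lo, hi):
    """Average %T over the wavelength band [lo, hi] nm."""
    wl = np.asarray(wavelengths_nm, dtype=float)
    t = np.asarray(transmittance_pct, dtype=float)
    mask = (wl >= lo) & (wl <= hi)
    return t[mask].mean()

# Example with a synthetic 200-800 nm scan (the instrument range used here)
wl = np.arange(200, 801)
t = np.clip(0.12 * (wl - 200), 0, 90)  # placeholder spectrum, not measured data
print(band_mean_transmittance(wl, t, 200, 400))  # UV region
print(band_mean_transmittance(wl, t, 400, 800))  # visible region
```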
Water vapor permeability and oxygen permeability
First, each film was cut into a circle with a diameter of 6 cm, and then 15 mL of water was added to each test cup (4.5 cm in diameter and 3 cm in height) to maintain the humidity at 90%. The membrane was fixed to the mouth of the cup with paraffin. The test cup was placed in a desiccator (relative humidity = 90%), weighed after 2 h, and then taken out once every 3 h for weighing. The weight loss from each cup was measured as a function of time for 12 h. The test was done in duplicate and the mean value was reported.
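The WVP formula is not stated explicitly in the text; a common gravimetric definition is WVP = (Δm/Δt) · d / (A · Δp), with Δm/Δt the steady-state slope of the weight-loss curve, d the film thickness, A the exposed film area, and Δp the water-vapor partial-pressure difference across the film. The sketch below implements that definition under this assumption; all numerical inputs are illustrative only.

```python
import numpy as np

def water_vapor_permeability(times_h, masses_g, thickness_m, diameter_m, delta_p_pa):
    """Gravimetric WVP, g*m/(m^2*s*Pa), from cup weight-loss readings."""
    t_s = np.asarray(times_h, dtype=float) * 3600.0
    m_g = np.asarray(masses_g, dtype=float)
    loss_rate = -np.polyfit(t_s, m_g, 1)[0]       # g/s; mass decreases over time
    area_m2 = np.pi * (diameter_m / 2.0) ** 2     # exposed film area
    return loss_rate * thickness_m / (area_m2 * delta_p_pa)

# Illustrative numbers: 4.5 cm cup mouth, 50 um film, ~2.8 kPa driving force
times = [2, 5, 8, 11]                   # h, the weighing schedule above
masses = [30.00, 29.82, 29.64, 29.46]   # g, placeholder readings
print(water_vapor_permeability(times, masses, 50e-6, 0.045, 2800.0))
```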
A VAC-V1 OP tester (Industrial Physics, Boston, USA) was used in the experiment. The sample was stored at 25°C and 50% RH for 2 h and then cut into a circle with a diameter of 9.7 cm. The test piece was placed in the tester with a test area of 38.46 cm² and the test time was 8 h at an oxygen pressure of 0.5 MPa.
Tensile properties
A TA-XT plus texture analyzer (Stable Micro Systems Ltd., Godalming, UK) was used in the experiment. The A/TG stretching die was selected and calibrated with a 5 kg weight. The initial distance of the holder was set to 50 mm, the test speed was fixed at 1 mm/s, and the data were processed using Texture Exponent 32. The tensile strength (TS), modulus of elasticity (EM), and elongation at break (EAB) were calculated from the stress curve. Each sample was tested eight times, with four parallel specimens of 10 mm × 150 mm taken per sample.
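Assuming the texture analyzer exports a force-displacement record that ends at specimen break, TS, EM, and EAB can be recovered as sketched below. The 50 mm gauge length matches the initial holder distance given above, while the 1% strain window used for the modulus fit is an assumption, not part of the original protocol.

```python
import numpy as np

def tensile_metrics(force_n, displacement_mm, width_mm, thickness_mm, gauge_mm=50.0):
    """TS (MPa), EM (GPa), and EAB (%) from a force-displacement record
    that is assumed to end at specimen break."""
    stress = np.asarray(force_n, dtype=float) / (width_mm * thickness_mm)  # N/mm^2 = MPa
    strain = np.asarray(displacement_mm, dtype=float) / gauge_mm
    ts = stress.max()                        # tensile strength
    eab = strain[-1] * 100.0                 # elongation at break
    elastic = strain <= 0.01                 # assumed initial linear region
    em = np.polyfit(strain[elastic], stress[elastic], 1)[0] / 1000.0  # MPa -> GPa
    return ts, em, eab
```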
Scanning electron microscopy
An SEM FEI Quanta 200FEG (Hillsboro, OR, USA) was used to characterize the surface structure of the PLA films; the surface was coated with an Au/Pd alloy before the measurement using an E5 150 SEM coater (Polaron Equipment Ltd., Doylestown, PA, USA). The accelerating voltage was set to 10 kV (Dashipour et al., 2015).
Statistical method
Statistical analyses were carried out with ANOVA using IBM SPSS Statistics Version 23.0 and the differences between the trials were detected using the LSD test (P < 0.05).
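A minimal reproduction of this pipeline, one-way ANOVA followed by Fisher's LSD pairwise comparisons, is sketched below with SciPy; the groups passed in would be the replicate measurements for each film formulation.

```python
import numpy as np
from scipy import stats

def anova_lsd(groups, alpha=0.05):
    """One-way ANOVA followed by Fisher's LSD pairwise comparisons.
    `groups` is a list of 1-D arrays of replicate measurements."""
    f_stat, p_value = stats.f_oneway(*groups)
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    # Pooled within-group mean square error, df = N - k
    mse = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups) / (n_total - k)
    t_crit = stats.t.ppf(1 - alpha / 2, n_total - k)
    significant = []
    for i in range(k):
        for j in range(i + 1, k):
            lsd = t_crit * np.sqrt(mse * (1 / len(groups[i]) + 1 / len(groups[j])))
            diff = abs(np.mean(groups[i]) - np.mean(groups[j]))
            significant.append(((i, j), diff > lsd))
    return f_stat, p_value, significant
```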
Scanning electron microscopy
SEM images of the film surfaces are shown in Figure 1. The results showed that the surfaces of the pure PLA film and the 1% and 2% NCC films were smooth and uniform, with no obvious aggregation of particles, while the 3% NCC film showed obvious particle aggregation. The 4% NCC film had obvious convexity, which might be caused by the aggregation of NCC (Sun et al., 2020; Yadav et al., 2021). Similarly, the surfaces of the films with 1-4% MgO and ZnO showed some granular bulges, which increased with the addition of MgO and ZnO. The MgO/ZnO films with different additions differed significantly: the 1% film was the most uniform and the 2% film had some bumps and concave areas. The 3 and 4% films showed discontinuous surfaces and uniform holes, but the nano-agglomerated particles were still uniformly dispersed between the holes. In addition, there was no significant difference between the 1, 2, 3, and 4% films in terms of tensile and barrier properties. The pores on the PLA membrane surface may be caused by the different surface tensions of the two kinds of nanoparticles.
Transmittance
The transmittance of each PLA film was affected by the nanomaterials added during processing. The effect of the different groups on the transmittance is shown in Figure 2. Compared with the pure PLA film, the PLA/NCC film showed no significant change. The transmittance of the PLA/MgO, PLA/ZnO, and PLA/MgO/ZnO films decreased, indicating that ZnO and MgO could reduce the light transmittance of the PLA film. The transmittance of the PLA/ZnO and PLA/MgO/ZnO films in the UV spectral region (200-400 nm) decreased significantly (P < 0.05); the larger the amount of nanomaterials added, the lower the transmittance (Jiang et al., 2018). However, the light transmittance of the PLA/MgO film in the UV spectral region did not decrease significantly, indicating that ZnO has a strong UV absorption effect. Compared with the control, the transmittance of 2% PLA/ZnO was lower than 10% and it could absorb most of the UV rays. Furthermore, there was no significant difference between the 2, 3, and 4% PLA/ZnO films. This indicates that adding 2% ZnO to the film could achieve a good UV absorption effect. Therefore, ZnO has potential application in the packaging of UV-sensitive foods (Marra et al., 2016).
Tensile properties of the nano-modified PLA films
Mechanical strength is very important for the application of biodegradable materials in food packaging; therefore, the mechanical properties of the packaging films were studied (Wen et al., 2017). The main mechanical properties of the PLA films, namely TS, EM, and EAB, are shown in Figure 3.
As shown in Figure 3, the mechanical properties of the nano-modified PLA films were significantly improved compared with the pure PLA film. The TS, EM, and EAB of the pure PLA film were 48.12 MPa, 0.87 GPa, and 5.13%, respectively. The results show that the TS and EAB of the 1% NCC film increased significantly, by 31 and 23%, respectively, but the mechanical properties did not continue to improve with increasing addition. The TS, EM, and EAB of the 2% MgO film increased significantly, by 54, 48, and 13%, respectively. The TS and EM of the 4% ZnO film increased, but the EAB decreased significantly; the TS and EM of the other ZnO films did not change significantly, so ZnO had a comparatively poor effect. The PLA/MgO/ZnO film showed a significant increase in TS, EM, and EAB at 1% addition, by 21, 38, and 23%, respectively. Overall, the main mechanical properties of the PLA films (TS, EM, and EAB) were reinforced by the nanofillers (Wen et al., 2017).
The improvement of the mechanical properties of the PLA film by the nanofillers was mainly due to their large specific surface area, which promotes stress transfer between the polymer molecular chains and the nanofiller (Arrieta et al., 2015). The smaller the size of the nanofiller, the larger the specific surface area; therefore, the smaller the nanofiller in the PLA film (MgO 50 nm and ZnO 100 nm), the more obvious the improvement of the mechanical properties. The effect of nanofillers on the mechanical properties of PLA films is mainly determined by two competing aspects (Marra et al., 2016; Swaroop & Shukla, 2018). First, the addition of nanofillers to the polymer matrix provides a relatively high surface interaction between the filler and the polymer chains, which helps transfer stress from the polymer chains to the nanomaterial, leading to the improvement of mechanical properties. Second, nanomaterials tend to agglomerate because of their high surface energy, thus reducing the effective filler content in the matrix, contrary to the first effect. In addition, aggregated particles begin to behave like defects in the polymer network; thus, adding more nanomaterial does not further improve the mechanical properties of the films (Endres & Siebert-Raths, 2012; Shah et al., 2017).
Barrier properties of the nanomaterial-modified PLA films
Small molecular substances in the environment, such as water vapor and oxygen, can enter the internal environment through food packaging, leading to oxidation of foods and mass reproduction of microorganisms and causing food spoilage. Therefore, food packaging requires low permeability (Ciannamea et al., 2018). The barrier properties of packaging materials have a great impact on the shelf life of foods. Therefore, we measured the WVP and OP of the PLA films. The OP and WVP of all samples are shown in Figure 4.
The nano-modified PLA films had oxygen transmission rates lower than that of the pure PLA film. The OP of PLA/NCC decreased with increasing addition, and the 4% film decreased by 14%. The OP of PLA/MgO first decreased and then increased with addition, and the OP of the 2% film was the lowest (18% lower). The OP of the PLA/ZnO film first increased and then decreased, and the OP of the 1% film was the lowest (21% lower). The OP of the PLA/MgO/ZnO film decreased with addition, and the 4% addition produced the lowest OP value (21% lower). These results show that the added materials can significantly reduce the OP of the PLA film, that the effect of MgO and ZnO was better (P < 0.05), and that the OP of the PLA film can decrease by more than 20%. This may be because the nanofillers hinder the diffusion of oxygen molecules (non-polar molecules) in the polymer, forcing them to bypass the nanofillers; this greatly extends the average diffusion path of oxygen molecules in the membrane and reduces the OP of the PLA film (Fabra et al., 2015; Galus & Kadzińska, 2016). In addition, the PLA films prepared in this study had OP higher than that in a similar study, which is likely due to the evaporation of the solvent during solvent casting, which resulted in a larger free volume and in turn promoted diffusion of the gas molecules.
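The tortuous-path argument can be made quantitative with the classical Nielsen model for filled films, P_composite/P_matrix = (1 − φ) / (1 + (α/2)·φ), where φ is the filler volume fraction and α the filler aspect ratio. This model is offered here only as an illustration of the mechanism and is not part of the original analysis; for roughly spherical nanofillers α is close to 1, which predicts only a modest reduction, so interfacial effects presumably contribute as well.

```python
def nielsen_relative_permeability(phi, aspect_ratio):
    """Nielsen tortuosity model: P_composite / P_matrix for a filled film."""
    return (1.0 - phi) / (1.0 + (aspect_ratio / 2.0) * phi)

# Roughly spherical nanofillers (aspect ratio ~1) at a few volume percent
for phi in (0.01, 0.02, 0.04):
    print(f"phi={phi:.2f}: P/P0={nielsen_relative_permeability(phi, 1.0):.3f}")
```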
In Figure 4, different letters (a-c) indicate significant differences between groups. Compared with the WVP of the pure PLA film, the WVP of the nano-modified PLA films increased (Aydogdu et al., 2018). Among them, the WVP of the 2% PLA/NCC film increased by 76%, that of the 4% PLA/MgO film by 58%, that of the 2% PLA/ZnO film by 47%, and that of the 2% PLA/MgO/ZnO film by 36%. According to the theory of molecular diffusion, nanofillers in polymer networks reduce the diffusion of small molecules but increase the diffusion of water molecules. This may be because the diffusion of water molecules (polar molecules) is not affected by the path effects (Aydogdu et al., 2018). In contrast, the interfacial effect of the nanofiller changed the absorption and dissolution characteristics of the free regions in the polymer network, which in turn increased the permeability to water molecules (Dashipour et al., 2015).
CONCLUSION
In this study, the enhancement effects of ZnO, MgO, NCC, and the ZnO/MgO mixture on PLA films were analyzed. The results show that the nanomaterials were evenly distributed in the PLA film. The PLA/ZnO and PLA/MgO/ZnO films had a significant UV absorption effect. All of the nanomaterials could improve the mechanical properties of the PLA film, and the films with 2% MgO had the most remarkable enhancement effect (P < 0.05); all of the nanomaterials decreased the OP and increased the WVP. The oxygen barrier property of the films with 1% ZnO or 2% MgO improved remarkably (P < 0.05). Therefore, 2% MgO and ZnO enhanced the mechanical strength and oxygen barrier property of the PLA film and endowed it with UV absorption capacity, which is conducive to promoting the application of PLA film in food packaging.
Figure 1. SEM images of various PLA films.
Figure 2. The transmission of various PLA films in the range of 200-800 nm. (A) PLA films with MgO, (B) PLA films with NCC, (C) PLA films with ZnO, and (D) PLA films with MgO/ZnO.
"year": 2023,
"sha1": "21a4baaf578b57c3b91289dd5627997dd33804f4",
"oa_license": "CCBY",
"oa_url": "https://fstjournal.com.br/revista/article/download/217/123",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "96a94bc63fcdb4d2054b7886d1012e4b70187c1f",
"s2fieldsofstudy": [
"Environmental Science",
"Materials Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
Determining the parameters of a shunting locomotive taking into account the environmental component
The article considers the issues of determining the main technical and economic indicators of a shunting locomotive during its modernization with a hybrid power plant. Scientific and practical works on the impact of railway transport on the environment and on increasing the efficiency of shunting locomotives through design changes aimed at reducing emissions are analyzed. A model has been developed to determine the rational ecological and energy characteristics of a shunting locomotive that has been modernized with technical means for energy saving, taking into account the ecological component. A procedure, an algorithm, and a program for calculating locomotive parameters have been developed. The main parameters of a ChME3-type shunting locomotive modernized with a hybrid power plant, taking into account the ecological component, are determined, and the expediency of such modernization is assessed.
Introduction
With the growing trend of human mobility and increasing freight traffic, vehicles face the problem of shortage of primary energy resources. To date, the analysis of the locomotive fleet of Ukraine has shown the urgent need to update it, which can be done in two ways: the purchase of new locomotives, the price of which is very high, or the modernization of the existing rolling stock.
In recent years, active work has been carried out to modernize locomotives on railways and industrial enterprises of Ukraine. The need to increase the efficiency of rolling stock operation motivates the search for innovative solutions during the modernization process. Diesel locomotives need to be more efficient and better adapted to alternative energy sources, the use of which solves the issue of shortage of primary energy resources and increases the environmental performance of diesel traction.
Literature review and problem statement
Much attention is paid to the issue of ecology in transport in the works of scientists [1-7]. Thus, in [3] various aspects of the impact of rail transport on the environment were analyzed, and examples of its negative impact on the environment and human health are given.
Studies [4] on reducing the environmental load have shown the need to create a new comprehensive scheme for the treatment of waste oils and technological sludge of railways, the implementation of which will increase the efficiency of recovery and resource conservation and improve the environmental friendliness of oil circulation.
The authors in [5] considered the impact of rail transport on the environment. Ways to reduce the eco-destructive load of transport on the environment are given, and the necessity of the transition of railway transport to electric traction is substantiated. As a result, a procedure for calculating the economic damage from the impact of railway transport on the environment is proposed, which necessitates the introduction of an indicator that takes into account the operating modes of locomotives at different power levels, since these affect the mass of emissions.
The analysis conducted in [6] showed the relevance of improving the environment and the efficiency of natural resource use. The preconditions for the development of "green" logistics in railway transport are identified; in addition, the environmental issues in Ukraine that are currently most relevant, and the problems that hinder the development of "green" logistics, are considered.
In [7], the results of studies determining bacteria of the Escherichia coli group in railway track ballast are published. On this basis, conclusions were drawn about the level of pollution of railway ballast, and ways to radically solve this problem were described.
The issues of modernization of shunting locomotives with hybrid power plants were considered by scientists of the Ukrainian State University of Railway Transport in [8-10]. They provide methods for determining the main characteristics of a locomotive with a hybrid power plant, but the environmental component is not sufficiently taken into account.
According to research by scientists and transport workers, shunting locomotives run idle for more than 50% of the time, when the specific emissions of pollutants are the largest. Therefore, the introduction of modern power plants, especially those equipped with technical means for energy saving, in the modernization of locomotives is a topical issue in terms of ecology.
Unfortunately, however, some scientific issues of the use of hybrid transmissions in the modernization of locomotives, taking into account the environmental component, the choice of energy storage, and the region of operation, were either not considered at all by these scientists or were not fully considered. Therefore, based on the analysis, the purpose of the study was formulated and tasks were set to achieve it.
Objective and tasks
The purpose of the work is to increase the efficiency of modernized shunting locomotives by improving their design with modern technical means and technologies for energy saving and improving environmental performance. To achieve this goal, it is necessary to solve the following tasks: to analyze scientific and practical works on the impact of railway transport on the environment and on increasing the efficiency of shunting locomotives through design changes aimed at reducing emissions; to develop a model for determining the rational ecological and energy characteristics of a shunting locomotive modernized by technical means for energy saving, taking into account the ecological component; and to determine the main parameters of the ChME3-type shunting locomotive during its modernization with a hybrid power plant, taking into account the environmental component, and to assess the feasibility of such modernization.
Equations and mathematics
As a result of the analysis of different types of hybrid power plant schemes, taking into account the features of shunting locomotives and the conditions of their operation in shunting work, the scheme whose power circuit is presented in Figure 1 was chosen.
In general, the power of the hybrid power plant, Nheu, is presented as a function of the following quantities: Ndg - diesel power, kW; Nne - energy storage capacity, kW; Ene - energy consumption of the energy storage, kJ; lim_i - restrictions on the length of the energy storage, m; j - the number of elements of the energy storage; Kzav - the locomotive load factor. Based on the European experience of environmental taxation, the target function for determining the technical and economic indicators of the locomotive included compensation for environmental losses, in addition to the previously considered cost of the energy storage and the maintenance and repair costs. It should also be borne in mind that the parameters of the energy storage can be affected by its limitations in terms of mass and size. In general, the objective function is the total cost of modernization and operation, to be minimized, described through the following components: C0 - the cost of the old diesel generator, UAH; C1 - the cost of the new diesel generator, UAH; C2 - the cost of the energy storage devices, UAH; C3 - the fuel costs after modernization, UAH; C4 - the maintenance and repair costs, UAH; C5 - the costs of environmental penalties, UAH; ΔVlim - the underutilized free space of the locomotive available for the energy storage, m³, where l, w, h are respectively the length, width, and height of one element of the energy storage, m. The costs of environmental penalties, UAH, are determined from the pollutant emission masses and the corresponding tax rates over Trt10 - the total operating time of the locomotive for 10 years, hours.
The target function, taking into account the constraints on the energy storage, the power plant, and the environmental emissions, is presented explicitly through the following quantities: A, B, C - coefficients that characterize the diesel power plant; Nd - power of the diesel power plant, kW; u2 - specific cost of the energy storage, UAH/kWh; ct - fuel cost, UAH/kg; ge0 - specific fuel consumption by the diesel of the base locomotive, kg/kWh; Gi,j - fuel consumption by the hybrid locomotive, kg; Cb - costs for maintenance and repair of the base locomotive, UAH; k(Nengj) - the ratio of the maintenance and repair expenditure of the hybrid locomotive to that of the base one, depending on the selected diesel generator set power; kv - coefficient of underutilization of the free space of the locomotive to be occupied by the energy storage, m³/MWh. The coefficient is chosen depending on the type of locomotive, the overall dimensions of the elements of the energy storage, and its required energy consumption.
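The explicit formulas did not survive extraction here; purely as an illustration of the bookkeeping described above, the sketch below sums the listed cost components and models the environmental penalty as per-kilogram tax rates applied to the pollutant emission rates over ten years of operating hours. The simple-sum structure and all numeric values are assumptions, not results from the study.

```python
def environmental_penalty(emission_kg_per_h, tax_uah_per_kg, hours_10yr):
    """C5: penalties over ten years of operation, UAH (one tax rate per pollutant)."""
    return sum(m * r for m, r in zip(emission_kg_per_h, tax_uah_per_kg)) * hours_10yr

def total_cost(c1_diesel_gen, c2_storage, c3_fuel, c4_maintenance, c5_environment):
    """Objective value: total modernization and operating costs, UAH."""
    return c1_diesel_gen + c2_storage + c3_fuel + c4_maintenance + c5_environment

c5 = environmental_penalty([0.5, 0.1], [30.0, 120.0], hours_10yr=40_000)
print(total_cost(6e6, 4e6, 8e6, 3e6, c5))
```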
Parameter limits were set for the model: Em_max and Ev_max, the maximum values of energy consumption by mass and by volume for a particular shunting locomotive.
To determine the technical and economic characteristics of the shunting locomotive with a hybrid power plant, an appropriate procedure was developed (Figure 2), which consists of seven stages. At the first stage, the values of the parameters necessary for the calculation are defined. Then the calculation of the energy storage is performed. In the third stage, the main parameters of the hybrid transmission are determined. After that, the construction of the external characteristic of the traction generator, the characteristics of the electric transmission, and the traction characteristic is performed.
On the basis of the developed procedure, the algorithm of the program for calculating the technical and economic indicators of a shunting locomotive with hybrid power transmission was composed.
To determine the main parameters of power transmission of a shunting locomotive with a hybrid power plant for operation in shunting motion, the model described in [11] was improved.
The initial data for determining the parameters of the energy storage and power plant of the locomotive, taking into account the works [12-14], are: the power of the power plant Nf_i, kW, determined over the time interval Δτ, s; a matrix of energy storage parameters, kne; and the storage limits on mass, Mpred, kg, and volume, Vpred, m³, provided the storage is placed on the locomotive.
The proposed model differs from the existing ones in that, when the energy storage device is used, its power Nne supplies traction up to the 3rd position of the driver's controller. The storage is charged, if necessary, at the 6th position of the driver's controller. Above the 4th position of the driver's controller, the shunting locomotive operates according to the usual scheme. If necessary, the power of the energy storage Nne is added to the power of the diesel generator Ndg, so that the total power supplied to the traction engines, kW, is Ntey = Ndg + Nne. As a result of the calculation of this block we obtain: the dependence Ene(Neng) between the required energy consumption of the energy storage, MWh, and the power of the locomotive power plant, kW; the limit parameters of the energy consumption of the storage in the form of a matrix Enelim, MJ; and the optimal values of the diesel generator power, Nopt, kW, and of the energy consumption of the energy storage, Eopt, MWh.
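A minimal sketch of the dispatch rule just described: the storage alone supplies traction up to controller position 3, the diesel recharges the storage at position 6, the usual scheme applies above position 4, and the storage power is added to the diesel generator power when extra traction is needed (Ntey = Ndg + Nne). The state-of-charge thresholds and the charging fraction are illustrative assumptions.

```python
def hybrid_dispatch(notch, demand_kw, dg_max_kw, ne_max_kw, soc):
    """Split the traction demand between diesel generator (N_dg) and
    energy storage (N_ne), following the rule described in the text."""
    if notch <= 3 and soc > 0.2:          # low notches: storage alone
        return 0.0, min(demand_kw, ne_max_kw)
    if notch == 6 and soc < 0.9:          # recharge: diesel covers demand plus charging
        return min(demand_kw + 0.1 * ne_max_kw, dg_max_kw), 0.0
    n_dg = min(demand_kw, dg_max_kw)      # usual scheme above position 4
    n_ne = min(max(demand_kw - n_dg, 0.0), ne_max_kw)  # storage boost if needed
    return n_dg, n_ne                     # N_tey = n_dg + n_ne
```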
As a result of the analysis of works [2, 15-20], the initial data for determining the basic parameters of the electric transmission of the locomotive are expressed through an array Mpoch: Mpoch = {Ne, kdod, ηg, Ps, ψkr, ηvu, ηed, ηsl, c}, (6) where Ne - locomotive power, kW; kdod - percentage of costs for auxiliary needs, %; ηg - efficiency of the generator; Ps - coupling weight of the locomotive, kN; ψkr - thrust coefficient on the calculated rise; ηvu - efficiency of the rectifier; ηed - efficiency of the electric motor; ηsl - efficiency taking into account losses in the power circuit; c - number of traction motors, pcs.
The results of the calculation of this block are: Nd, Pg - diesel power delivered to traction, kW; ηep - efficiency of the power transmission; Fkr - the calculated thrust force, determined from the condition of realizing the thrust coefficient on the calculated rise, kN; vp - speed on the calculated rise, km/h; Ped - power at the terminals of the traction motor, kW; Pde - preliminary power on the shaft of the traction motor, kW. An array of parameters was used to construct the external characteristic of the traction generator, where Ugmax - maximum voltage of the traction generator, V; Cgu - voltage control factor of the traction generator; CgI - current control coefficient of the traction generator. The result of this block is the construction of the external characteristic of the traction generator, U(I), with all restrictions.
To build the control characteristic of the power transmission, vmax, the maximum speed of the locomotive, km/h, is also added. The result of the calculation of this block is the construction of the dependences Ig(v) and Ug(v) of generator current and voltage on the speed of movement.
To construct the traction characteristic of the locomotive, the dependence of the efficiency of the electric transmission on the current is introduced. As a result, the dependence F(v) is built with limitations on current and adhesion.
Based on the proposed algorithm, a program for calculating the technical and economic characteristics of a shunting locomotive with a hybrid power transmission was developed in the Mathcad software package. The developed model was verified for adequacy on the basis of the parameters of the ChME3 locomotive.
The parameters of the locomotive were calculated and its characteristics were constructed. Figure 3 shows the traction characteristics of the ChME3 locomotive: the real one and the one built with the model. Current and adhesion restrictions are also applied to the characteristics. Figure 3 shows that the characteristics are almost identical, but there is a need to determine the difference between them. For this purpose, the absolute error of the traction characteristic was determined according to the formula ΔF(v) = |Fp(v) − F(v)|, (9) where Fp(v) - the traction characteristic of the ChME3 locomotive built with the model; F(v) - the real traction characteristic of the ChME3 locomotive.
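The pointwise computation of the absolute error of equation (9), together with the relative error defined in the next paragraph, can be sketched as follows, assuming the modeled and real characteristics are sampled on a common speed grid.

```python
import numpy as np

def traction_errors(f_model_kn, f_real_kn):
    """Pointwise absolute (kN) and relative (%) errors, eqs. (9) and (10)."""
    f_model = np.asarray(f_model_kn, dtype=float)
    f_real = np.asarray(f_real_kn, dtype=float)
    abs_err = np.abs(f_model - f_real)    # eq. (9)
    rel_err = abs_err / f_real * 100.0    # eq. (10)
    return abs_err, rel_err, rel_err.max()
```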
The analysis shows that the maximum error is about 6.3%, which is satisfactory for calculations. Based on the absolute error, the relative error of the traction characteristic is calculated by the formula δF(v) = ΔF(v) / F(v) · 100%. (10) Figure 4 shows the change of the absolute and relative traction errors with speed in the construction of the traction characteristic of the ChME3 locomotive. Based on the developed model, the main technical and economic parameters for a six-axle shunting locomotive were determined, taking into account the selected modern energy-saving technologies.
According to scientific research and the operation of serial six-axle shunting locomotives, taking into account the limitations on mass and dimensions and the limited free space of the locomotive, the maximum energy consumption of various energy storage devices was calculated. The capacity of the energy storage is selected so that it is sufficient for operation of the locomotive equivalent to its operation at positions 1 to 3 of the driver's controller. The power dependences of the hybrid locomotive power plant were constructed for each position of the driver's controller.
The total costs associated with the modernization, Czag, UAH, were calculated. It was determined that, taking into account the restrictions imposed on the energy storage device, the minimum modernization costs are observed at a diesel generator power of 360 kW and an energy storage energy consumption of about 600 kWh. Calculations of the parameters of the modernized shunting locomotive with the hybrid power plant were performed for shunting work.
The calculations of life cycle costs of modernized locomotives by a hybrid power plant (two options) and the base locomotive ChME3 showed the following. The costs of emission fines in the base locomotive ChME3 are more than 40% higher than in the locomotive with the base engine and energy storage and almost 75% higher than in the modernized locomotive with a new power plant and energy storage.
Similarly, the life cycle costs for the modernized locomotives have shown that it is advisable to perform a deep modernization of the shunting locomotive with a new diesel engine and an energy storage device. However, at lower outlay, i.e., installing only the energy storage and repairing the base diesel, there will also be a positive effect: the costs will be lower by 12%.
According to the results of the calculations, the appropriate parameters of the modernized shunting locomotives were selected at an energy storage capacity of 600 kWh; the optimal power of the diesel generator is: for shunting operation, 250 kW; for export (transfer) work, 800 kW; for hump work, 300 kW. According to the results of traction calculations for export work, the fuel consumption of the hybrid locomotive was reduced by up to 30% compared to the base one.
The life cycle of the hybrid locomotive based on the ChME3 and of the basic locomotive was taken as 20 years: the time from modernization (or overhaul) of the locomotive to its complete decommissioning. Thus, when using a hybrid locomotive based on the ChME3, the total economic effect per locomotive over its operation will be UAH 3.5 million.
It is proposed to determine the efficiency of the shunting locomotive taking into account the technical, economic, and ecological components through a coefficient Ke that combines, with weights K1, K2, and K3: kn - the ratio of the numerical parameters of the new development to the parameters of existing objects, for rational and irrational categories; M(i) - a function that normalizes the weight of the parameters in the ranked sequence; i - the number of the shunting locomotive parameter; LCC_TB, LCC_TG - the life cycle costs of the basic and the modernized locomotive, respectively, UAH; A'z - an indicator of the relative activity of impurities of the z-th type; m_bz, m_gz - the average annual masses of the pollutant of the z-th type emitted into the atmosphere in year t during the operation of the basic and the modernized locomotive, respectively, kg/h per section. The weights of the efficiency components are determined by an expert method depending on the tasks at hand. For a locomotive in shunting operation this coefficient equals Ke = 1 for the base locomotive, Ke = 1.13 for the locomotive upgraded with the basic diesel engine and an energy storage capacity of 600 kWh, and Ke = 1.4 for the locomotive upgraded with a new 250 kW diesel and the same energy storage. This confirms the efficiency of modernization of six-axle shunting locomotives with a hybrid power plant of the proposed type.
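The full published formula for Ke did not survive extraction; the sketch below reproduces only the weighted-combination structure described above, with illustrative component definitions and default weights, and should not be read as the authors' exact formula.

```python
def efficiency_coefficient(tech_ratio, lcc_base, lcc_mod, emis_base, emis_mod,
                           k1=0.4, k2=0.3, k3=0.3):
    """Weighted efficiency coefficient Ke from technical, cost, and ecological
    components; the weights k1-k3 would be set by expert judgment."""
    technical = tech_ratio             # kn-type ratio of new vs. existing parameters
    economic = lcc_base / lcc_mod      # life cycle cost advantage of the upgrade
    ecological = emis_base / emis_mod  # emission-mass advantage of the upgrade
    return k1 * technical + k2 * economic + k3 * ecological
```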
Conclusions
Based on the results of the theoretical and experimental studies, the following conclusions were made.
1. Analysis of the directions of work of scientific organizations, rolling stock manufacturers, and individual scientists shows that solving the problem of determining the technical and economic indicators of locomotives with a hybrid power plant requires a comprehensive approach that links the technical parameters of the locomotive with performance and cost indicators, taking into account the environmental component. To substantiate the choice of the technical and economic indicators of locomotives with a hybrid transmission, an approach based on mathematical modeling was adopted. It made it possible to justify the choice of the main technical and economic indicators of the modernized locomotive at the lowest life cycle cost when it is used in shunting work.
2. A functional scheme of the power circuit of a shunting locomotive with a hybrid power plant is proposed, which made it possible to determine the functional connections between the elements of the power transmission with a hybrid drive.
3. A model has been developed to determine the rational design and energy characteristics of an upgraded shunting locomotive with a hybrid power plant. The functional dependences of the power of the diesel generator set on the energy consumption of the energy storage for shunting operation of the locomotive were obtained.
4. It is proposed to evaluate the efficiency of shunting locomotive modernization by an appropriate coefficient that takes into account the technical level of the locomotive, the life cycle costs, and the ecological component with corresponding weights. For a locomotive in shunting operation this coefficient equals Ke = 1 for the base locomotive, Ke = 1.13 for the locomotive upgraded with the basic diesel engine and a 600 kWh energy storage, and Ke = 1.4 for the locomotive upgraded with a new 250 kW diesel and the same energy storage. This confirms the efficiency of modernization of six-axle shunting locomotives with a hybrid power plant of the proposed type.
5. The procedure for determining the technical and ecological indicators of locomotives for a specific core structure can be used for similar applications in other types of transport.
"year": 2021,
"sha1": "8366c277c2b9cca25e0252c25df90c78df5e973c",
"oa_license": "CCBY",
"oa_url": "https://www.e3s-conferences.org/articles/e3sconf/pdf/2021/56/e3sconf_icsf2021_06001.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "6c8a30aa2ecf74e0ae859b569eff5990f86dcaeb",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Src phosphorylates the insulin-like growth factor type I receptor on the autophosphorylation sites. Requirement for transformation by src.
The insulin-like growth factor type I (IGF-I) receptor can become tyrosine phosphorylated and enzymatically activated either in response to ligand or because of the activity of the Src tyrosine kinase (Peterson, J. E., Jelinek, T., Kaleko, M., Siddle, K., and Weber, M. J. (1994) J. Biol. Chem. 269, 27315-27321). The goal of the present study was to analyze the mechanistic basis and functional significance of the Src-induced phosphorylation and activation of the IGF-I receptor. 1) We mapped the sites of IGF-I receptor autophosphorylation to peptides representing three different receptor domains: tyrosines 943 and 950 in the juxtamembrane region; tyrosines 1131, 1135, and 1136 within the kinase domain; and tyrosine 1316 in the carboxyl-terminal domain. The juxtamembrane and kinase-domain peptides were phosphorylated both in vivo and in vitro. The carboxyl-terminal site, although phosphorylated in vitro and in src-transformed cells, was not a major site of ligand-induced phosphorylation in vivo. 2) We determined that the sites of Src-induced phosphorylation of the IGF-I receptor are the same as the ligand-induced autophosphorylation sites and that the Src kinase can catalyze these phosphorylations directly. 3) We showed that cells cultured from mice in which the IGF-I receptor has been knocked out by homologous recombination are defective for morphological transformation by src. Thus, the Src kinase can substitute for the receptor kinase in phosphorylating and activating the IGF-I receptor, and this receptor phosphorylation and activation are essential for transformation by src.
Insulin and the insulin-like growth factor type I (IGF-I) 1 are peptide hormones that regulate distinct biological functions through interaction with their cognate receptors (Drop et al., 1991;Jacobs and Moxham, 1991;Soos et al., 1991;Treadway et al., 1991a;Adamo et al., 1992;Pessin, 1993;De Meyts, 1994;Faria et al., 1994;LeRoith et al., 1994;Soos et al., 1993;Baserga et al., 1995). The normal function of insulin is primarily to regulate metabolism in liver, fat, and muscle, whereas IGF-I acts to regulate growth and differentiation. Recently, there has been considerable interest in the IGF-I receptor because of its ability to inhibit apoptosis (Harrington et al., 1994a(Harrington et al., , 1994b and because of its central role in malignant transformation by various oncogenes (Baserga, 1995).
Normal activation of insulin family receptors occurs through ligand binding by the α-subunits, which results in activation of the receptor tyrosine kinase. Increased receptor phosphorylation on tyrosine occurs through intersubunit phosphorylation between the two β-subunits, and it is the phosphorylation of these tyrosines that regulates the activity of the receptor (Rosen et al., 1983; Cobb et al., 1989; Czech and Massague, 1982; Czech, 1989; Mooney et al., 1992; Begum et al., 1993; Frattali and Pessin, 1993; Lee et al., 1993; Pessin and Frattali, 1993; Hubbard et al., 1994; Van Obberghen, 1994). Signaling via the insulin and IGF-I receptors requires both a functional tyrosine kinase and also the phosphorylation of conserved tyrosines within the β-subunit of the receptors.
The sequence homology between the insulin and IGF-I receptor β-subunits is highest in the tyrosine kinase domain (85%), intermediate in the juxtamembrane region (61%), and lowest in their cytoplasmic tails (44%) (Ullrich et al., 1986). Ligand stimulation of the insulin receptor results in phosphorylation of tyrosines clustered in each of these three regions (Tornqvist et al., 1987; White et al., 1988a, 1988b). Table I presents a comparison of the tyrosine-containing tryptic peptides derived from the β-subunits of the IGF-I and insulin receptors. The major sites of insulin receptor autophosphorylation are tyrosines 953 and 960 in the juxtamembrane region (Peptide I, Table I) and tyrosines 1158, 1162, and 1163 within the kinase domain (Peptide V, Table I), with additional phosphorylation on tyrosines 1316 and 1322 in the carboxyl terminus (Peptide X, Table I) (Tornqvist et al., 1987; White et al., 1988a, 1988b). Interestingly, nearly all of the insulin receptor tyrosine phosphorylation sites important for signaling are conserved within the IGF-I receptor. This includes both tyrosines located in the juxtamembrane domain, the triplet of tyrosines in the kinase domain, and (although with less conservation of contextual sequence) one of the two tyrosines in the cytoplasmic domain. However, in spite of the central importance of the IGF-I receptor in growth and malignant transformation, no work prior to what is reported here has directly determined whether these conserved tyrosines in fact represent the major sites of IGF-I receptor phosphorylation on tyrosine.
It is well documented that phosphorylation on tyrosine is important for insulin and IGF-I receptor activation. Therefore, it is conceivable that a heterologous kinase capable of phosphorylating these tyrosines would also be capable of activating the receptor. Although IGF-I receptor activation normally requires the presence of IGF-I, there is some precedent for IGF-I receptor activation without its cognate ligand. For example, the insulin receptor can induce signaling by the IGF-I receptor through the formation of heterotetramers made up of one insulin receptor αβ dimer and one IGF-I receptor αβ dimer. Insulin binding to the insulin receptor leads to activation of the β-subunit of the IGF-I receptor through intersubunit phosphorylation within the hybrid receptor heterotetramer (McClain et al., 1990; Janicot et al., 1991; Treadway et al., 1991b; Frattali and Pessin, 1993; Takata and Kobayashi, 1994). Thrombin, perhaps via activation of pp60c-src, causes rapid tyrosine phosphorylation of the IGF-I receptor (Rao et al., 1995). Previous reports from this laboratory have shown that the transforming non-receptor tyrosine kinase Src induces the phosphorylation of the IGF-I receptor in vivo (Peterson et al., 1994). Src-induced phosphorylation of the receptor was correlated with an increase in the in vitro tyrosine kinase activity of the receptor, both toward itself and exogenous substrates (Peterson et al., 1994). The Src-induced increase in receptor activity was shown to be dependent on tyrosine phosphorylation, as treatment with a tyrosine-specific phosphatase lowered receptor activity (Peterson et al., 1994).
We hypothesized that the Src-induced phosphorylation of the IGF-I receptor might be functionally important for transformation, because phosphorylation of the IGF-I receptor was one of only a few phosphorylations out of 30 analyzed that correlated with phenotypic transformation in cells infected with a panel of partially transforming src mutants .
In the present study, we identify the sites of IGF-I receptor tyrosine phosphorylation in response to ligand stimulation in vivo and in vitro and show that they are homologous to regulatory sites in the insulin receptor. We also show that in vivo and in vitro, Src is capable of phosphorylating the same sites observed upon ligand-induced autophosphorylation and that this is likely due to direct phosphorylation by the Src kinase. Finally, we show that cells cultured from mice in which the IGF-I receptor has been knocked out by homologous recombination (Liu et al., 1993;Sell et al., 1995) are defective for transformation by src. Taken together, these data indicate that intracellular, ligand-independent phosphorylation and activation of the IGF-I receptor by the Src kinase occurs by a mechanism similar to ligand-induced autophosphorylation and that this interaction between Src and the IGF-I receptor is essential for transformation by this oncogene.
MATERIALS AND METHODS
Antibodies and Immunoprecipitations-α-Subunit antibodies αIR3 and CII 25.3 were purchased from Oncogene Science (Manhasset, NY) and used for immunoprecipitation of the IGF-I receptor and insulin receptor, respectively. An antipeptide monoclonal antibody to the β-subunit of the IGF-I receptor, Ab 1-2, was provided by Kenneth Siddle (University of Cambridge) (Soos and Siddle, 1989) and was used for Western immunoblotting. An alkaline phosphatase-conjugated antiphosphotyrosine antibody (RC20H) was purchased from Transduction Laboratories (Lexington, KY). The anti-Src antibody EC10 was provided by Sarah J. Parsons (Parsons et al., 1984). The anti-Src antibody 327 was provided by Joan S. Brugge (Lipsich et al., 1983).
Cell Culture-The creation and maintenance of cell lines that overexpress normal and mutant IGF-I receptors with or without temperature-sensitive Src were as described in Peterson et al. (1994). For experiments involving Src-induced receptor phosphorylation, cells were grown at 39°C and then shifted to 35°C for 2 h before lysis without ligand stimulation. For experiments involving ligand-induced receptor phosphorylation, cells were grown at 39°C and then shifted to 35°C for 2 h followed by ligand stimulation before lysis. Insulin receptor was purified from rat fibroblasts overexpressing the human insulin receptor (HIRc B cells). Cells cultured from mice in which the IGF-I receptor had been knocked out by homologous recombination were kindly provided by Renato Baserga (Jefferson Medical College).
IGF-I Receptor Phosphorylation-For IGF-I receptor preparations phosphorylated in vivo, cells were incubated with 5 mCi/ml of inorganic phosphate (32PO4) in phosphate-free medium supplemented with 1% calf serum and 1% (v/v) of spent Dulbecco's modified Eagle's medium for 6 h, which is sufficient to equilibrate the ATP pool at the γ-position (Weber and Edlin, 1971). Stimulation of cells occurred as described under "Cell Culture." Immunoprecipitates were washed once, then resuspended in cold kinase buffer (25 mM Hepes, pH 8.0, 10 mM MgCl2). Reactions were initiated by the addition of 1 μCi of [γ-32P]ATP in 10 μM unlabeled ATP. Kinase reactions were carried out for 20 min at 25°C, and reactions were terminated by the addition of 3× Laemmli sample buffer. For experiments involving purified Src (see below), reactions included 1 mM dithiothreitol with unlabeled ATP (50 μM) for 30 min at room temperature.
Baculovirus-expressed p60c-srcSD was purified from lysates of infected Sf-9 cells by column chromatography using an affinity column of the anti-Src monoclonal antibody 327, as described in Morgan et al. (1991).
Protein Chemistry-Cleavage of the IGF-I receptor β-subunit with trypsin was performed essentially as described (Gibson, 1974; Cooper et al., 1983; Aebersold et al., 1987; Contor et al., 1987; Kamps and Sefton, 1989). Following transfer of proteins to nitrocellulose and visualization by autoradiography, the nitrocellulose region corresponding to the 95-kilodalton β-subunit of the IGF-I receptor was excised and washed with ammonium bicarbonate. Samples were then digested with 10-100 μg of trypsin (Worthington Biochemical L-1-tosylamido-2-phenylethyl chloromethyl ketone-trypsin) for 24 h. Recoveries of 80% of counts/min were routinely obtained. Samples were dissolved in water and lyophilized repeatedly to remove ammonium bicarbonate. Finally, the tryptic peptides were resuspended in pH 1.9 chromatography buffer (see below) and separated by two-dimensional thin layer chromatography. Two-dimensional thin layer chromatography was performed essentially as described (Gibson, 1974; Cooper et al., 1983; Aebersold et al., 1987; Contor et al., 1987; Kamps and Sefton, 1989). Chromatography plates (cellulose, without fluorescent indicator), 20 × 20 cm square, were purchased from Eastman Kodak Co. For separation in the first dimension, plates were electrophoresed for 1 h at 700 volts in pH 1.9 buffer (acetic acid, formic acid, butanol, and water). For separation in the second dimension by ascending chromatography, plates were placed in a tank equilibrated with phosphochromo buffer (pyridine, butanol, and water) until the buffer front was within 5 cm of the plate edge. Plates were air-dried overnight, and the separated phosphopeptides were visualized by autoradiography.
Table I legend: tryptic peptides of the IGF-I receptor (top) and the insulin receptor (bottom). The numbering and alignment of the tryptic peptides are based on the amino acid sequences of the two receptors as described in Ullrich et al. (1986). Capital letters denote amino acid differences between the tryptic peptides of the two receptors, whereas dots (.) denote sequence identity. Dashed lines denote a gap in the amino acid sequence of the IGF-I receptor, with the divergent insulin receptor sequence in lower-case letters.
Phosphopeptides were eluted from thin layer plates in pH 1.9 buffer and transferred to Sequelon™ aryl amine membrane (Millipore) and subsequently dried at 55°C. After drying, the peptides were coupled to the membrane with carbodiimide (20 min at room temperature) according to the manufacturer's protocol. Following four washes of 1 ml with 27% acetonitrile, 9% trifluoroacetic acid, and two washes of 1 ml with 50% methanol, the membrane was applied to an Applied Biosystems 470A sequenator. Edman degradation of the peptides was performed as described (Shannon and Fox, 1995). Edman degradation resulted in recoveries of phosphate in the range of 75-90% of the bound radioactivity.
Phosphoamino acid analysis was performed as described (Jelinek and Weber, 1993).
Phosphopeptides from IGF-I Receptor Autophosphorylated in Vivo and in Vitro-Because it was possible to label IGF-I receptors to much higher specific activities with in vitro kinase assays than by in vivo 32Pi labeling, it was preferable to determine the phosphorylation sites by analyzing the in vitro phosphorylations. To validate this approach, we compared the tryptic phosphopeptide maps of the IGF-I receptor following in vitro autophosphorylation reactions with that generated by receptor from cells labeled in vivo with 32Pi and stimulated with ligand. In Fig. 1 the sites of ligand-induced in vivo (A) and in vitro (B) phosphorylation of the receptor have been superimposed (C), and a composite schematic was created depicting the in vitro tryptic phosphopeptides as well as the major in vivo tryptic phosphopeptides (D).
Although the pattern of tryptic phosphopeptides revealed by two-dimensional separation is complex, it is evident that many of the sites of receptor phosphorylation in vivo (A) were present on peptides that became autophosphorylated by the IGF-I receptor in vitro (B). This suggests that the IGF-I receptor autophosphorylates in vitro on sites that become phosphorylated in vivo, as is also the case for the insulin receptor.
Not all of the phosphopeptides were equally represented in the two preparations. For example, peptides 13 and 14 were labeled only slightly if at all in vivo. These sites may not become substantially phosphorylated in vivo in response to ligand, or they may be very sensitive to cellular phosphatases. Another possibility is that these peptides undergo some additional post-translational modification in vivo, which makes them migrate to another location on thin layer chromatography. On the other hand, tryptic phosphopeptides 15-19 appeared only in vivo. We suspected that these phosphopeptides may have been generated by serine or threonine phosphorylations, which would not have occurred during in vitro autokinase reactions that occurred exclusively on tyrosine (data not shown). As predicted, these phosphopeptides became labeled on phosphoserine or both phosphoserine and phosphothreonine (Table II). Peptides 16 and 18 did not display phosphotyrosine at all.
The Major Sites of IGF-I Receptor Tyrosine Phosphorylation Are Contained on Three Tryptic Peptides-To identify the sites of phosphorylation occurring on the IGF-I receptor phosphorylated in vitro, phosphopeptides were separated by two-dimensional thin layer chromatography and were then analyzed by Edman degradation. We were able to determine the site(s) of phosphorylation within each peptide by determining the cycle at which radioactivity was released. Reliable and reproducible data have been obtained for up to 15 cycles of degradation, after which the quality of the data is limited by nonspecific loss of label from the filter.
Edman degradation was performed on each of the phosphopeptides (1-14) from in vitro labeled IGF-I receptor (Fig. 1), and the conclusions are summarized in Table III. As a representative example, the Edman degradation data from phosphopeptide 6 are shown in Fig. 2. Based on the results of this procedure, we suggest that the major sites of receptor phosphorylation are contained on three peptides, I, V, and X (Table I). Our reasoning in making these assignments is as follows.
Edman degradation of phosphopeptides 1-5 revealed phosphorylated amino acids at positions 7 and 14 from the amino terminus of the peptide. These phosphorylations are consistent with this group of phosphopeptides corresponding to the dually phosphorylated form of tryptic peptide I on tyrosines 943 and 950 (Table I). As Edman degradation was carried out for only 15 cycles, a potential phosphorylation of tyrosine 957 was not determined. Comparable juxtamembrane phosphorylations have also been reported for the insulin receptor.
Peptides 6, 8, 9, 11, and 12 all show phosphorylation at a residue eight amino acids from the amino terminus of the peptide. There is only one tryptic peptide from the IGF-I receptor cytoplasmic domain that contains a tyrosine at position 8, and this is tryptic peptide V. Phosphopeptides 7 and 10 both contain phosphorylations at residues three and seven. Since there exists only one tryptic peptide with tyrosines located at positions 3 and 7 from the amino terminus, it is likely that these phosphopeptides also represent different trypsinized forms of the partially phosphorylated peptide V. Phosphopeptides 6 through 12 make up the majority of radioactivity incorporated into in vitro autophosphorylated IGF-I receptor. This is consistent with reports concerning insulin receptor phosphorylation, where this same peptide, which is completely conserved between the two receptors, contains the major sites of in vitro receptor autophosphorylation. Taken together, these results indicate that tyrosines 1131, 1135, and 1136 are major sites for IGF-I receptor phosphorylation, both in vivo and in vitro.
Although phosphopeptides 13 and 14 are not highly phosphorylated in vivo, they represent significant sites of in vitro receptor phosphorylation. Analysis of these peptides revealed phosphorylation at a residue three amino acids from the amino terminus, consistent with two different candidate tryptic peptides derived from the intracellular β-subunit of the IGF-I receptor, V and X. As discussed below, further evidence suggests that these phosphopeptides represent phosphorylation of tryptic peptide X and not V. This implicates tyrosine 1316, located in the carboxyl terminus of the receptor, as a site of in vitro autophosphorylation.
Taken together, these results suggest that the major sites of autophosphorylation occur on tyrosines located within all three regions of the IGF-I receptor: 943 and 950 in the juxtamembrane domain; 1131, 1135, and 1136 in the regulatory tyrosine kinase domain; and 1316 in the carboxyl-terminal domain. An additional candidate phosphorylation at tyrosine 957 has not been determined.
The number of phosphopeptides detected on thin layer chromatography (14) is considerably greater than the number of phosphopeptides predicted (3). There are two reasons for the complexity of the phosphopeptide maps. First, sequential lysines and/or arginines downstream from the site(s) of phosphorylation result in incomplete or "staggered" digestion of the receptor by trypsin, thus producing heterogeneity in the phosphopeptide pattern. Second, peptides containing more than one phosphorylation site, and which differ in stoichiometry of phosphorylation, will migrate differently during thin layer chromatography. The same problems also occur with all three tryptic phosphopeptides from the insulin receptor, yielding a similarly complex phosphorylation pattern.
Edman sequencing data can provide evidence that the phosphopeptide maps are rendered more complex by heterogeneity in phosphorylation. Ordinarily, if a peptide containing multiple phosphorylations at the same stoichiometry is sequenced, each successive cycle will display a decrease in the yield of counts/min as radioactivity is nonspecifically lost from the filter. In Fig. 2, such a decreased yield can be seen between cycles 3 and 7 in the sequencing of peptide 6. However, at cycle 8, the yield increases. This is most easily explained if peptide 6 is actually a mixture of peptides, one phosphorylated on amino acid residues three and seven, the other phosphorylated on residue eight, although we cannot exclude the possibility that this spot is actually a mixture of peptides singly phosphorylated at differing stoichiometries. Comparable analysis of the other peptides is able to account fully for all of the complexity of the peptide maps (data not shown).
Because of the considerable amino acid sequence similarity of the insulin and IGF-I receptors, we were able to confirm the identification of these phosphopeptides by comparing the two-dimensional thin layer chromatography patterns of tryptic phosphopeptides generated by the autophosphorylated IGF-I receptor to the pattern generated by the insulin receptor and identifying the insulin receptor peptides by Edman sequencing (data not shown). The results confirm unequivocally that phosphopeptides 6-12 correspond to forms of peptide V, as this sequence is completely conserved between the insulin and IGF-I receptors (Table I) and the two sets of peptides co-migrated. Phosphopeptides 1-5 migrated close to, but not identically with, the equivalent insulin receptor peptides, which contained a phosphotyrosine at position 10, consistent with the assignment to tryptic peptide I (see Table I). Finally, insulin receptor peptides that migrated in the lower left portion of the thin layer plates, near IGF-I receptor phosphopeptides 13 and 14, contained phosphotyrosine at positions 2 and 8, as expected if peptides 13 and 14 correspond to IGF-I receptor tryptic peptide X (see Table I). This strengthens the conclusion that the IGF-I receptor autophosphorylation sites are contained on three tryptic peptides and involve tyrosines 943 and 950 in the juxtamembrane domain, tyrosines 1131, 1135, and 1136 in the kinase domain, and tyrosine 1316 in the carboxyl-terminal domain. (Table footnotes: tryptic peptides are numbered as described in Figure 1D; the numbered designation of the phosphopeptides is based on the map from Figure 1D; IGF-IR peptides are numbered according to Table I.)
Phosphorylation of the IGF-I Receptor Induced by Src-Identification of the ligand-induced sites of IGF-I receptor phosphorylation made it possible to determine whether Src would induce IGF-I receptor phosphorylation on the same sites as ligand or on other sites. To obtain in vivo labeled IGF-I receptors, cells were grown at the Src-permissive temperature and incubated in the presence of 32PO4, without ligand stimulation, as described under "Materials and Methods." To obtain IGF-I receptors autophosphorylated in vitro, IGF-I receptor was purified from Src-transformed cells by immunoprecipitation and then subjected to an in vitro kinase reaction. Tryptic phosphopeptides from in vivo and in vitro phosphorylated receptor preparations were analyzed by two-dimensional thin layer chromatography, and the results of this analysis are shown in Fig. 3.
When the Src-induced sites of in vitro receptor autophosphorylation (D) were compared with the sites of ligand-induced in vitro autophosphorylation (B), it was evident that Src induced the receptor to autophosphorylate on the same sites as those phosphorylated in response to ligand, namely phosphopeptides 1-14, whose identities were determined previously (Fig. 1 and Table III). This suggests that autophosphorylation of the IGF-I receptor occurs similarly whether induced by ligand or Src.
As demonstrated above (Fig. 1), ligand-stimulated IGF-I receptor autophosphorylation occurred on many of the same sites in vivo as in vitro (A and B). A similar comparison revealed that tryptic phosphopeptides from in vivo labeled IGF-I receptor purified from Src-transformed cells co-migrated with a subset of the tryptic peptides phosphorylated during in vitro autokinase reactions (Fig. 3, C and D, respectively). Except for the constitutive phosphorylation of peptide 15 (discussed previously), the major sites of in vivo phosphorylation of the IGF-I receptor occurred on the major sites of in vitro IGF-I receptor autophosphorylation. These results indicate that Src is capable of inducing the IGF-I receptor to phosphorylate on sites phosphorylated upon ligand stimulation, namely tyrosines 943, 950, 1131, 1135, and 1136.
Although Src and ligand induced similar receptor phosphorylation patterns in vitro, it is interesting to note the relatively higher phosphorylation of tyrosine 1316 (peptides 13 and 14) from Src-stimulated cells labeled in vivo compared with the in vivo pattern from ligand-stimulated normal cells. This indicates that tyrosine 1316 may be more highly phosphorylated in Src-transformed cells than in ligand-stimulated cells.
Purified Src Can Directly Phosphorylate the Kinase-defective IGF-I Receptor in Vitro-To determine whether Src is capable of directly phosphorylating the IGF-I receptor, we analyzed the ability of the Src kinase to phosphorylate the receptor in vitro. Fig. 4 is an autoradiograph revealing the relative incorporation of [32P]ATP into the β-subunit of wild-type and kinase-defective IGF-I receptors following an in vitro kinase reaction. The differences in the intensity of the incorporation correspond to the relative differences in kinase activity of the two receptor types in vitro. To examine whether the Src-induced phosphorylation of the IGF-I receptor in vivo could be recapitulated in vitro using purified components, Src kinase was immunopurified from baculovirus-infected Sf9 cells and was added to an in vitro kinase reaction containing the kinase-defective IGF-I receptor (lanes 5 and 6). As a control, phosphorylation of wild-type (lanes 1 and 2) and kinase-defective (lanes 3 and 4) IGF-I receptor in the absence of added purified Src was also examined.
FIG. 2. Phosphopeptide 6 contains phosphorylated amino acids at positions 3, 7, and 8. Phosphopeptide 6 was analyzed by Edman degradation as described under "Materials and Methods." The fractions were collected, and the amount of radioactivity released at each cycle of degradation was measured by Cerenkov counting (counts/min). The support contained 71,704 cpm at the start of the Edman degradation cycles and retained 10,428 cpm at the end.
As expected, the wild-type IGF-I receptor autophosphorylated at a much higher level than the kinase-defective receptor. Although the kinase-defective mutant appears to retain a low level of autokinase activity, this residual activity may be due to the presence of endogenous wild-type IGF-I receptors present in the immunoprecipitates due to heterodimerization with the ectopically expressed human receptors. When purified Src was added to the kinase-defective IGF-I receptors in an immune complex kinase reaction, the phosphorylation of the kinase-defective receptors increased, indicating direct phosphorylation of the receptor by Src.
To identify the sites of the IGF-I receptor that are phosphorylated by purified Src in vitro, the samples depicted in Fig. 4 were digested with trypsin and the peptides separated by two-dimensional thin layer chromatography (Fig. 5). Although there are quantitative differences, the patterns are qualitatively similar for both ligand-stimulated wild-type receptor (A) and Src-phosphorylated kinase-defective IGF-I receptor (B). Therefore, Src is capable of directly phosphorylating the same sites on the IGF-I receptor whose phosphorylation is induced by ligand. A schematic map (C), indicating the likely identity of the tryptic phosphopeptides based on relative migration, has been included to allow a comparison with earlier experiments examining the pattern of tryptic peptides from IGF-I receptor phosphorylated in vitro.
Phosphorylation of Tyrosines 1131, 1135, and 1136 Is Required for Ligand-stimulated IGF-I Receptor Autokinase Activity-To confirm the importance of the ligand-induced sites of tyrosine phosphorylation for IGF-I receptor function, the activity of a mutant receptor lacking the triplet of tyrosines present in the kinase domain was examined. This mutant (DY) contains phenylalanines in place of tyrosines 1131, 1135, and 1136. The in vivo tyrosine phosphorylation and in vitro autokinase activity of the kinase-defective (K−) and tyrosine-to-phenylalanine (DY) IGF-I receptor mutants were compared with those of the wild-type (IGFR) receptor in normal rodent fibroblasts (R) and rodent fibroblasts expressing the temperature-conditional v-src mutant, LA29 (L) (Fig. 6).
A (Fig. 6, top and bottom) presents an autoradiograph of an in vitro autophosphorylation assay that measures the tyrosine kinase activity of the indicated receptor. B (Fig. 6, top and bottom) shows the in vivo state of tyrosine phosphorylation of the indicated receptor as determined by anti-phosphotyrosine Western blotting of immunoprecipitates from cells in culture. C (Fig. 6, top and bottom) is an anti-IGF-I receptor Western blot that reveals the relative level of the indicated receptor present in each immunoprecipitate.
When the wild-type IGF-I receptor was expressed in normal cells (RIGFR), we observed a ligand-dependent increase in receptor tyrosine phosphorylation concurrent with elevated autokinase activity. Similarly, wild-type IGF-I receptor in Src-transformed cells (LIGFR) was tyrosine-phosphorylated in response to ligand stimulation with an accompanying increase in kinase activity. However, in the cells co-expressing Src, there was a ligand-independent increase in both tyrosine phosphorylation and autokinase activity of the receptor. Since this occurred only in the cells co-expressing Src, not the normal cells, and only at the Src-permissive temperature (35°C), we conclude that Src expression and activity are required for this to occur (Peterson et al., 1994).
As expected, the kinase-defective IGF-I receptor mutant exhibited no detectable tyrosine phosphorylation or kinase activity in response to ligand stimulation when compared with the wild-type receptor. When the kinase-defective IGF-I receptor was expressed in Src-transformed cells (LK− at 35°C), the receptor was phosphorylated on tyrosine in vivo, although it still lacked in vitro autokinase activity. This confirms that the Src-induced phosphorylation of the IGF-I receptor does not require receptor kinase activity and is consistent with direct phosphorylation of the IGF-I receptor by Src in vivo (Peterson et al., 1994).
When the IGF-I receptor with tyrosines 1131, 1135, and 1136 changed to phenylalanine was expressed in normal cells (RDY), it did not exhibit the ligand-stimulated increase in tyrosine phosphorylation seen with wild-type receptor. Similarly, ligand was also incapable of stimulating the kinase activity of the mutant receptor, although this mutant did exhibit a measurable basal level of autokinase activity, in vitro, comparable with that obtained with wild-type receptor. In some cases, ligand stimulation appeared to slightly increase both the phosphorylation state and kinase activity of the mutant IGF-I receptors.
However, this represented only a fraction of what was seen upon ligand stimulation of wild-type IGF-I receptor and may be due to endogenous wild-type IGF-I receptors present in the immunoprecipitates. Nevertheless, it is apparent that removal of tyrosines 1131, 1135, and 1136 abolishes the elevated tyrosine phosphorylation and increased autokinase activity of the receptor normally seen in response to ligand. This is consistent with published reports on the IGF-I receptor as well as the corresponding mutant of the insulin receptor: loss of these phosphorylation sites impairs ligand-induced receptor kinase activity (Hubbard et al., 1994).
Unlike wild-type receptor, the in vitro kinase activity associated with the DY mutant was unresponsive to Src-stimulation when purified from cells expressing Src, although it retained the elevated level of ligand-independent basal kinase activity seen when it was purified from normal cells. However, the DY mutant was tyrosine-phosphorylated in cells expressing Src, but only at the permissive temperature. Thus, although this receptor mutant is constitutively tyrosine-phosphorylated in Src-transformed cells, it is not constitutively active, in vitro. This suggests that phosphorylation on one or more of tyrosines 1131, 1135, and 1136 is essential for receptor activation.
Phosphorylation of the DY receptor mutant in src-transformed cells implies that there exist sites of Src-induced tyrosine phosphorylation in addition to tyrosines 1131, 1135, and 1136. These phosphorylations are likely occurring on tyrosines 943 and 950 in the juxtamembrane domain and tyrosine 1316 in the carboxyl-terminal domain, since Src is capable of phosphorylating the IGF-I receptor on these sites. This suggests that phosphorylation of these sites, although potentially necessary, is not sufficient for receptor activation. IGF-I receptor mutants that lack these sites of tyrosine phosphorylation are currently being prepared.

FIG. 5. Src-induced phosphorylation of kinase-defective IGF-I receptors in vitro occurs on peptides phosphorylated during IGF-I receptor autophosphorylation. Receptor samples were prepared as described in the legend to Fig. 4. Following separation by two-dimensional thin layer chromatography, phosphopeptides from autophosphorylated wild-type IGF-I receptor (A) and kinase-defective IGF-I receptor phosphorylated directly by purified Src (B) were visualized by autoradiography. C is a schematic map; the phosphopeptides are identified based on their relative migration. Chromatography was performed as described previously; the anode is on the right, the cathode on the left.

FIG. 6. Phosphorylation of tyrosines 1131, 1135, and 1136 is required for ligand-stimulated IGF-I receptor autokinase activity. Analysis of wild-type (IGFR), kinase-defective (K−), and tyrosine-to-phenylalanine (DY) IGF-I receptor mutants in normal rodent fibroblasts (R, top panels) and rodent fibroblasts expressing a temperature-sensitive src mutant, LA29 (L, bottom panels), with (+) or without (−) prior stimulation with IGF-I. LA29 is nontransforming at the src-restrictive temperature (39°C) and transforming at the src-permissive temperature (35°C). A, autophosphorylation of the indicated IGF-I receptor in vitro. B, anti-phosphotyrosine Western blotting of the indicated IGF-I receptor. C, anti-IGF-I receptor Western blotting of the indicated IGF-I receptor.
The IGF-I Receptor Is Necessary for Transformation by src-To assess the functional significance of the ability of Src to phosphorylate and activate the IGF-I receptor, we transfected cells cultured from mice in which the IGF-I receptor had been knocked out by homologous recombination (Liu et al., 1993; Sell et al., 1995) with an expression vector encoding the mutant v-src, LA29 (Welham and Wyke, 1988). Fig. 7 shows that morphologically transformed colonies could not be obtained by transfection of LA29 into cultures of IGF-I receptor knock-out cells (R−). On the other hand, cells from wild-type mice (W) gave numerous transformed colonies. Transfection of a vector encoding the wild-type human IGF-I receptor into the R− cells gave cells (R−/R) that could be partially transformed by src. We do not know why the IGF-I receptor, when ectopically expressed in the R− cells, only partially restores a wild-type phenotype, but it could reflect either differences in expression level between the normal endogenous versus vector-driven receptors or secondary changes that have occurred in the R− cells. In either case, it is clear that morphological transformation by src does not occur in cells that lack an IGF-I receptor.
DISCUSSION
Autophosphorylation Sites of the IGF-I Receptor-In the present study we compared the sites of IGF-I receptor phosphorylation induced by Src with the sites of receptor phosphorylation induced by ligand. We have demonstrated that the major sites of ligand-induced in vitro phosphorylation occur on tyrosines located within all three regions of the IGF-I receptor: tyrosines 943 and 950 in the juxtamembrane domain, tyrosines 1131, 1135, and 1136 in the tyrosine kinase domain, and tyrosine 1316 in the carboxyl-terminal domain. Phosphorylations corresponding to the juxtamembrane and kinase domain peptides were also detected in receptors prepared from ligand-stimulated cells labeled in vivo with 32Pi, but phosphorylation of the carboxyl-terminal peptide was not evident. It is possible that this peptide is poorly phosphorylated in vivo or that it is subject to rapid dephosphorylation. It is also possible that the peptide is subject to ligand-induced post-translational modifications in addition to tyrosine phosphorylation and that this results in its migration to a different location.
To confirm the importance of these phosphorylation sites for receptor activity, a mutant receptor was constructed that replaced tyrosines 1131, 1135, and 1136 with phenylalanines. In contrast to the wild-type receptor, this mutant was unresponsive to ligand-stimulated increases in tyrosine phosphorylation and autokinase activity, confirming the importance of these sites for receptor activity. The DY mutant receptor was also incapable of mediating ligand-stimulated DNA synthesis (data not shown). Thus, tyrosines 1131, 1135, and 1136 are essential for receptor kinase activity and receptor-mediated mitogenesis, consistent with other reports concerning similar mutant receptors. Previous reports from others have demonstrated the importance of tyrosine 950 in IGF-I receptor signaling and internalization (Prager et al., 1994; Miura et al., 1995), although it had not been determined previously that this is a site of phosphorylation.
Phosphorylation of the IGF-I Receptor by the Src Tyrosine Kinase-We have demonstrated previously that the IGF-I receptor is constitutively tyrosine-phosphorylated and enzymatically activated when expressed in Src-transformed cells, and we hypothesized that direct phosphorylation of the IGF-I receptor by Src might be responsible (Peterson et al., 1994). Central to understanding this possibility is the identification of the sites of IGF-I receptor tyrosine phosphorylation in Src-transformed cells.
When the IGF-I receptor was purified from Src-transformed cells in the absence of ligand stimulation, it was constitutively active and capable of autophosphorylating on the same tyrosines that are the sites of ligand-induced receptor autophosphorylation. Likewise, the IGF-I receptor in src-transformed cells became constitutively tyrosine-phosphorylated on these same residues in the absence of ligand stimulation in vivo. Thus, constitutive phosphorylation of the receptor on these (and other) regulatory tyrosines in Src-transformed cells implies that Src is capable of functionally activating the IGF-I receptor in vivo (Prager et al., 1994; Miura et al., 1995; Rao et al., 1995).
In theory, this constitutive phosphorylation could be indirect, as a consequence of an autocrine mechanism or activation of another kinase. For example, it is conceivable that when Src becomes active, cells secrete IGF-I, resulting in autocrine stimulation of the receptor. However, several lines of evidence argue against this possibility. First, the time course of Src-induced receptor phosphorylation is too rapid to be occurring via autocrine stimulation by IGF-I synthesized in response to Src activation (Peterson et al., 1994). Second, we have been unable to detect autocrine production of IGF-I in Src-transformed cells. Finally, we have found that pp60v-src can cause tyrosine phosphorylation of a kinase-defective IGF-I receptor mutant when the two kinases are co-expressed in cells. Thus, an autocrine mechanism for Src-induced receptor phosphorylation is highly unlikely. Although we have not excluded the possibility that pp60v-src activates another tyrosine kinase, which in turn phosphorylates the IGF-I receptor, the most parsimonious explanation is that pp60v-src directly phosphorylates the IGF-I receptor in vivo.

FIG. 7. LA29 src cannot transform IGF-I receptor-negative cells. Cells were transfected by electroporation (400 V, 250 microfarads) with pBabe/LA29-hygro plasmid (3.4 μg/ml) and plated on 10-cm tissue culture dishes (3 × 10⁵ cells/dish) in Dulbecco's modified Eagle's medium with 10% fetal bovine serum. 48 h after transfection, the medium was changed, and transfected colonies were selected with 200 μg/ml hygromycin. Cells were fixed and stained with Giemsa after 18 days. Similar results were obtained in two independent transfections. W, cells from wild-type mice. R−, cells from mice in which the IGF-I receptor had been knocked out by homologous recombination (Liu et al., 1993; Sell et al., 1995). R−/R, R− cells in which expression of the IGF-I receptor has been restored by transfection with an IGF-I receptor expression vector, as described previously (Peterson et al., 1994). The ectopically expressed receptor restored IGF-I-responsive DNA synthesis to the cells and became tyrosine-phosphorylated in response to IGF-I, demonstrating its functionality (data not shown).
Consistent with this suggestion, purified Src is capable of directly phosphorylating the IGF-I receptor on the sites of ligand-induced autophosphorylation in vitro (Fig. 5). Moreover, since Src can phosphorylate mutant receptors lacking tyrosines at the primary sites of ligand-induced autophosphorylation (1131, 1135, and 1136), our data suggest that the Src-induced phosphorylations are not just "priming" reactions that precipitate subsequent receptor autophosphorylation. However, one should note that since the kinase-defective receptors heterodimerize with endogenous wild-type receptors, we cannot unequivocally exclude the possibility that some of the phosphorylations observed occur by a cascade mechanism, in which Src activates wild-type receptors, which then phosphorylate kinase-dead receptors.
However, in any case this does not imply that the in vivo signaling activities of IGF-I receptors from ligand-stimulated and Src-transformed cells are identical. It is noteworthy that IGF-I receptors from ligand-stimulated cells are not significantly phosphorylated at Tyr-1316, whereas phosphorylation at this site is quite evident in receptors from Src-transformed cells (Fig. 3 and Table III). Although this may be a quantitative rather than qualitative difference, it is intriguing that this region of the insulin receptor has been implicated in regulating mitogenesis and is a candidate site for binding of phosphatidylinositol 3-kinase (Zick et al., 1986; Begum et al., 1993; Thies et al., 1989; McClain et al., 1990; Takata et al., 1991, 1992; Liu et al., 1993; Pang et al., 1994; Faria et al., 1994; Kato et al., 1994; Surmacz et al., 1995). Thus, there may be marked differences in the physiology of IGF-I receptors expressed in normal and Src-transformed cells due to quantitative differences in the phosphorylation of this region of the receptor.
Role of the IGF-I Receptor in Oncogenic Transformation-The phosphorylation of the IGF-I receptor by Src shows great specificity for transforming mutants of Src: mutants that are defective in transforming activity fail to phosphorylate the IGF-I receptor, and all the transforming variants tested cause IGF-I receptor phosphorylation. Only two other proteins out of over 30 Src substrates examined showed a comparable correlation with phenotypic transformation. In particular, only one other glycoprotein (a 130-kDa protein of unknown identity) displayed transformation-dependent tyrosine phosphorylation. Thus, Src-induced phosphorylation of the IGF-I receptor correlates closely with transformation by this oncogene.
We suspected that src-induced phosphorylation of the IGF-I receptor is biologically significant because of the oncogenic potential of this receptor (White, 1985; Kaleko et al., 1990; Liu et al., 1992, 1993; Giorgetti et al., 1993). When overexpressed, the IGF-I receptor can induce ligand-dependent morphological transformation and growth in soft agar. Moreover, cells expressing high levels of the IGF-I receptor induce tumor formation in nude mice (Kaleko et al., 1990). Loss of ligand dependence by receptor truncation enhances the transforming potential of the receptor, which is accompanied by increased in vitro and in vivo tyrosine phosphorylation (Liu et al., 1992).
The importance of IGF-I receptor activation for both growth factor- and oncogene-induced proliferation is clearly demonstrated by the pioneering studies of Baserga and colleagues (Liu et al., 1993; Sell et al., 1994; Baserga, 1995), who have made use of fibroblasts from mice that have had the IGF-I receptor knocked out by homologous recombination. These cells grow more slowly in serum than fibroblasts from their wild-type littermates and are incapable of responding to IGF-I. Interestingly, cells derived from IGF-I receptor knockout mice cannot be transformed by overexpression of the EGF receptor, a transforming mutant of ras, or the large T antigen of SV40.
However, re-introduction of a functional IGF-I receptor restores their respective transforming abilities (Sell et al., 1994). More pertinent to this discussion is our observation that these cells are also unable to be transformed by the activated src mutant LA29. This provides the first direct evidence that a functional IGF-I receptor is important for transformation by src.
Baserga and colleagues have found that mutationally activated c-src also is incapable of transforming the IGF-I receptor knockout cells. 2 However, they find that wild-type v-src is capable of transforming these R Ϫ cells; presumably the fully activated and overexpressed v-src is able to function both as an oncogene and as a surrogate for the IGF-I receptor.
It is unclear what role the IGF-I receptor plays in transformation by src and other oncogenes. One hypothesis, based on the work of Evan and colleagues (Evan et al., 1992; Harrington et al., 1994a, 1994b), is that the IGF-I receptor can serve as a repressor of oncogene-induced apoptosis. It seems quite possible that in the absence of IGF-I receptor signaling, Src induces apoptosis. This would be consistent with our observation that fewer hygromycin-resistant colonies appear when a src expression vector is transfected into R− cells than into wild-type cells (Fig. 7) and that the colonies that do appear express only low levels of Src (data not shown); perhaps the cells that expressed higher, functionally significant levels of Src were killed. Another possibility (not mutually exclusive with the first) is that unscheduled activation of the IGF-I receptor directly contributes to mitogen-independent or anchorage-independent proliferation of the transformed cells. Current work is aimed at distinguishing between these possibilities. | 2018-04-03T04:56:42.384Z | 1996-12-06T00:00:00.000 | {
"year": 1996,
"sha1": "6910e1706f281e08d2c20f236742a472246debad",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/271/49/31562.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "7f27e2d783c9521464efd87535897bc58bfbbb36",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
246272224 | pes2o/s2orc | v3-fos-license | A Hybrid Science-Guided Machine Learning Approach for Modeling and Optimizing Chemical Processes
This study presents a broad perspective of hybrid process modeling and optimization, combining scientific knowledge and data analytics in bioprocessing and chemical engineering with a science-guided machine learning (SGML) approach. We divide the approach into two major categories. The first refers to the case where a data-based ML model complements and makes the first-principles, science-based model more accurate in prediction, and the second corresponds to the case where scientific knowledge helps make the ML model more scientifically consistent. We present a detailed review of scientific and engineering literature relating to the hybrid SGML approach, and propose a systematic classification of hybrid SGML models. For applying ML to improve science-based models, we present expositions of the sub-categories of direct serial and parallel hybrid modeling and their combinations, inverse modeling, reduced-order modeling, quantifying uncertainty in the process, and even discovering governing equations of the process model. For applying scientific principles to improve ML models, we discuss the sub-categories of science-guided design, learning and refinement. For each sub-category, we identify its requirements, advantages and limitations, together with its published and potential areas of applications in bioprocessing and chemical engineering. We also present several examples to illustrate different hybrid SGML methodologies for modeling polymer processes.
downscaling the complexity of physics-based models, generating data, quantifying uncertainty, and discovering governing equations of the data-based model.
The objective of this paper is to present a review and exposition of scientific and engineering literature relating to the hybrid SGML approach, and to propose a systematic classification of hybrid SGML models focusing on both science complementing ML models and ML complementing science-based models. Section 2 gives a review of the broad applications of the hybrid SGML approach in bioprocessing and chemical engineering. As the number of reported methodologies and applications continues to rise significantly, it is hard for a person unfamiliar with the subject to identify the appropriate approach for a specific application. This leads to our key focus in Sections 3 to 5, beginning with a systematic classification and exposition of hybrid SGML methodologies in Section 3. Section 4 explains different categories of applying ML to complement science-based models, discusses their requirements, strengths and limitations, suggests potential areas of applications, and presents illustrative examples from chemical manufacturing. Section 5 focuses on different categories of applying scientific principles to complement ML models, together with their requirements, strengths and limitations, as well as their potential applications and illustrative examples. Section 6 describes the challenges and opportunities of the hybrid SGML approach for modeling chemical processes. Section 7 summarizes our conclusions.
This work differentiates itself from several recent reviews of hybrid modeling in bioprocessing and chemical engineering through the following contributions: (1) presentation of a broader hybrid SGML methodology of integrating science-guided and data-based models, and not just the direct combinations of first-principles and ML models; (2) classification of the hybrid model applications according to their methodology and objectives, instead of their areas of applications; (3) identification of the themes and methodologies which have not been explored much in bioprocessing and chemical engineering applications, like the use of scientific knowledge to help improve the ML model architecture and learning process for more scientifically consistent solutions; and (4) illustrations of the use of these hybrid SGML methodologies applied to industrial polymer processes, such as inverse modeling and science-guided loss which have not been applied previously in such applications.
| APPLICATIONS OF HYBRID SGML APPROACH IN BIOPROCESSING AND CHEMICAL ENGINEERING
The integration of science-based models with data-based models has appeared in various fields like fluid mechanics 3 , turbulence modeling 4 , quantum physics 5 , climate science 6 , geology 7 and biological sciences. 8 This study focuses on applications of hybrid SGML methodologies in bioprocessing and chemical engineering. Among the earliest applications is the direct hybrid modeling involving the integration of first-principles model with data-based neural networks 9 . Psichogios and Unger 10 combine a first-principles model based on prior process knowledge with a neural network, which serves as an estimator of unmeasured process parameters that are difficult to model from first principle. They apply the hybrid model to a fed-batch bioreactor, and the integrated model has better properties than the standard "black-box" neural network models. In particular, the integrated model is able to interpolate and extrapolate much more accurately, is easier to analyze and interpret, and requires significantly fewer training examples. Thompson and Kramer 11 later demonstrate how to integrate simple process model and first-principles equations to improve the neural network predictions of cell biomass and secondary metabolite in a fed-batch penicillin fermentation reactor when trained on sparse and noisy process data.
Agarwal 12 develops a general qualitative framework for identifying the possible ways of combining neural networks with the prior knowledge and experience embedded in available first-principles models, and discusses direct hybrid modeling with series or parallel configurations to combine the outputs of the science-based model and the ML model. Asprion et al. 13 present the term grey-box modeling for optimization of chemical processes. They consider the case where a predictive model is missing for a process unit within a larger process flowsheet, and use measured operating data to set up hybrid models combining physical knowledge and process data. They report results of optimization using different grey-box models for process simulators applied to a cumene process. Actually, in a number of earlier studies, Bohlin and his coworkers have explored in detail the concepts of grey-box identification for process control and optimization, and Bohlin has summarized the concepts, tools and applications of grey-box hybrid modeling in an excellent book. 14 Over the years, we have seen a growing number of applications of hybrid modeling in bioprocessing and chemical engineering as part of the advances in smart manufacturing 15-17.
In their 2021 paper, Sansana et al. 16 discuss mechanistic modeling, data-based modeling, hybrid modeling structures, system identification methodologies, and applications. They classify their hybrid models into parallel, series, surrogate models (which are simpler mathematical representations of more complex models, similar to the reduced-order models that we discuss below), and alternate structures (which include the grey-box modeling mentioned above). In the alternate structures, they refer to some applications of semi-mechanistic model structures where the best hybrid model is selected using optimization concepts. They also classify the hybrid models, based on some of the chemical industry applications, into analysis of model-plant mismatch 17, model transfer, feasibility analysis and predictive maintenance, apart from the previously mentioned applications like process control, monitoring and optimization.
Von Stosch et al. 18 use the term hybrid semi-parametric modeling in their 2014 review, and summarize applications in bioprocessing for monitoring, control, optimization, scale-up and model reduction. They emphasize that the application of hybrid semi-parametric techniques does not automatically lead to better results, but that rational knowledge integration has the potential to significantly improve model-based process design and operation.
Qin and Chiang 19 review the advances in statistical machine learning and process data analytics that can provide efficient tools for developing future hybrid models. In a recent paper, Qin et al. 20 propose a statistical learning procedure integrated with process knowledge to handle the challenging problem of developing a predictive model of process impurity levels from more than 40 process variables in an industrial distillation system. Both studies highlight the power of statistical machine learning for developing future hybrid process models.
In a recent study, Zhou et al. 46 present a hybrid approach for integrating material and process design that holds much promise in process and product design. Cardillo et al. 47 […] Reference 63 gives an excellent exposition of the current state of development and applications of artificial intelligence in chemical engineering. The author highlights the intellectual challenges and rewards of developing the conceptual frameworks for hybrid models, mechanism-based causal explanations, domain-specific knowledge discovery engines, and analytical theories of emergence, and presents examples from optimizing material design and process operations.
In an excellent edited volume, Glassey and Stosch 64 discuss some of the key strengths of hybrid modeling in chemical processes, particularly the prediction of scientifically consistent results beyond the experimentally tested process conditions, which is crucial for process development, scale-up, control and optimization. They also identify some challenges. For example, incorrect fundamental knowledge in a science-based model could impose bias on predictions; thus, the underlying assumptions used in a model are important for analysis. Also, the time and accuracy of parameter estimation are critical when deciding on a hybrid modeling strategy. Kahrs and Marquardt 65 discuss the approach of decomposing complex hybrid models into a sequence of simpler problems, such as data preprocessing, solving nonlinear equations, parameter estimation and building empirical models using ML.
Herwig and Pörtner, in their latest book, showcase the applications of hybrid modeling in digital twins for smart biomanufacturing 124.
A recent patent by Chan et al. 66 presents Aspen Technology's approach to asset optimization using integrated modeling, optimization and artificial intelligence. In a later white paper, Beck and Munoz 67 describe Aspen Technology's current focus on hybrid modeling, combining AI and domain expertise to optimize assets. In particular, based on their application experience in the chemical industries, Aspen Technology classify hybrid models into three categories: AI-driven, first-principles driven and reduced-order models 67. They define an AI-driven hybrid model as an empirical model based on plant or experimental data that uses first principles, constraints and domain knowledge to create a more accurate model. Examples of AI-driven models are inferential sensors or online equipment models. They define a first-principles driven hybrid model as an existing first-principles model augmented with data and AI to improve the model's accuracy and predictability, which has seen many applications in bioprocessing and chemical engineering.
Lastly, they define a reduced-order model where we use ML to create an empirical data-based model based on data from numerous first-principles process simulation runs, augmented with constraints and domain expertise, in order to build a fit-for-purpose low-dimensional model that can run more quickly. With reduced-order models, we can extend the scale of modeling from units to the plant-wide models that can be deployed faster.
| CLASSIFICATION OF HYBRID SGML MODELS
As we have seen thus far, the majority of work in hybrid model applications in bioprocessing and chemical engineering focuses on the direct combination of science-based and data-based models. In this article, we portray a broad perspective of the combination of scientific knowledge and data analysis in bioprocessing and chemical engineering, as inspired by some of the applications in physics and other areas 1,2. We categorize these hybrid SGML applications in the chemical process industry into two major categories, namely, ML complements science and science complements ML, together with their sub-categories based on the methodologies and objectives of hybrid modeling, as illustrated in Figure 1. We also classify the applications in bioprocessing and chemical engineering according to our hybrid SGML approach. We present examples in several areas of SGML which have not been explored much thus far, and which have great potential for process improvement and optimization.
| ML COMPLEMENTS SCIENCE
We can integrate a first-principles scientific model with a data-based model to improve the model accuracy and consistency. In the following, we introduce the sub-categories of direct hybrid modeling, inverse modeling approach, reducing model complexity, quantifying uncertainty in the process, and discovering governing equations.
| Direct Hybrid Modeling
A direct hybrid model combines the output of a first-principles or science-based model with the output of a data-based ML model to improve the prediction accuracy of dependent variables.
These combinations could occur in a series configuration, a parallel configuration, or a series-parallel configuration. The direct hybrid modeling strategy is the most widely used approach in hybrid modeling in bioprocessing and chemical engineering.
| Parallel Direct Hybrid Model
Figure 2 illustrates the concept of a parallel direct hybrid model. The science-based model may use the initial conditions and boundary conditions as inputs to make a prediction (Ym), while the ML model uses dynamic, time-varying data to make its prediction (Yml). We then combine both outputs, directly or with assigned weights (w1, w2), to achieve higher prediction accuracy. We can determine the weights by least-squares optimization, minimizing the total sum of squared errors between the plant data and the hybrid model output. Galvanauskas et al. 68 directly combine data-based neural networks for kinetics and viscosity predictions with first-principles mass-balance ordinary differential equations to optimize the production rate of an industrial penicillin process. Chang et al. 33 showcase a parallel hybrid model for the dynamic simulation of a batch free-radical polymerization of methyl methacrylate. They combine an approximate rate function for the immeasurable initiator concentration with a black-box time-dependent or recurrent neural network model 9 of the dependent variables representing the mass and moment balance equations of the polymerization reactor. They use the resulting hybrid neural network and rate function (HNNRF) model to optimize the batch polymerization system, identifying its optimal recipe or operating conditions.
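As a minimal sketch of this weight-fitting step (our own illustration, not code from the cited studies; the data arrays are hypothetical stand-ins for plant and model outputs), we can determine w1 and w2 by ordinary least squares:

```python
import numpy as np

# Hypothetical example data: plant measurements and the outputs of the
# science-based model (y_m) and the ML model (y_ml) at the same time points.
y_plant = np.array([1.02, 1.10, 1.25, 1.31, 1.48, 1.55])
y_m = np.array([1.00, 1.08, 1.20, 1.30, 1.42, 1.50])   # first-principles prediction
y_ml = np.array([1.05, 1.12, 1.28, 1.33, 1.50, 1.58])  # data-based prediction

# Solve min_w || y_plant - (w1*y_m + w2*y_ml) ||^2 by ordinary least squares.
A = np.column_stack([y_m, y_ml])
(w1, w2), *_ = np.linalg.lstsq(A, y_plant, rcond=None)

y_hybrid = w1 * y_m + w2 * y_ml  # parallel hybrid prediction
print(f"w1 = {w1:.3f}, w2 = {w2:.3f}")
print("hybrid RMSE:", np.sqrt(np.mean((y_plant - y_hybrid) ** 2)))
```

In practice, the same least-squares fit would be run on historical operating data, and the weights re-estimated periodically as the plant drifts.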
Hybrid residual modeling, or the parallel direct hybrid residual model, is a class of the parallel direct hybrid model in which we use a first-principles or science-based process model and quantify the time-dependent prediction error, or residual, Yres, between the plant data Y(t) and the science-based model prediction Ym as a function of the process variables 41,69-71. Figure 3 illustrates this residual-modeling framework. We expect that hybrid models will generally perform better than standalone ML models for applications like process development, because hybrid models are better at extrapolation, while standalone ML models can be adequate for prediction in a steady running plant.
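Stated compactly (our notation, consistent with the variables above), the residual and the resulting hybrid prediction are:

    Yres(t) = Y(t) − Ym(t)
    Yhybrid(t) = Ym(t) + fML(X(t))

where fML is the data-based model trained to predict the residual from the process variables X(t).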
Tian et al. 69 develop a hybrid residual model for a batch polymerization reactor. First, they develop a simplified process model based on polymerization kinetics and mass and energy balances to predict the monomer conversion, number-average molecular weight MWN, and weight-average molecular weight MWW. This first-principles process model cannot predict these product quality targets accurately because it neglects the gel effect at high monomer conversion, among other factors. Next, the authors develop a parallel configuration of three data-based, time-dependent or recurrent neural networks 9, trained on process data, to predict the residuals of the monomer conversion, MWN and MWW predictions of the simplified first-principles process model. The predicted residuals are added to the predictions from the simplified process model to form the final hybrid model predictions. Because the focus in batch process control is on end-of-batch product quality targets, the use of time-dependent or recurrent neural networks can usually offer good long-range predictions. Therefore, the resulting hybrid residual model performs well in many batch process control and optimization applications 41,43,69-71.
Simutis and Lubnert 36 present another application of the direct hybrid modeling methodology, to state estimation for bioprocess control. This work combines a first-principles Kalman filter, based on mass balances of biomass, substrate and product, with an ML-based observation model for quantifying the relationship between less well-established variables and measurements. Recently, Ghosh et al. 72,73 apply the parallel hybrid modeling framework in process control, combining first-principles models with a data-based model built by subspace identification for better prediction of a batch polymer manufacturing process and a seeded crystallization system. Hanachi et al. 74 showcase the application of the direct hybrid modeling methodology to predictive maintenance, combining a physics-based model with a data-based inferential model in an iterative parallel combination for predicting manufacturing tool wear.
| Series Direct Hybrid Model
Chan et al. 66 discuss the advantages of data augmentation, combining simulation and plant data to generate a more accurate data-based analysis. In an application to crude distillation in petroleum refining, Mahalec and Sanchez 51 use a science-based model to calculate the internal reflux, which augments the other plant data used as inputs to an ML model that predicts the product true boiling point curves for quality analysis. The data augmentation in series hybrid models is most relevant when some feature measurements are missing in the original data, so we use a first-principles model to calculate those features and then add the calculated values to the ML model inputs to study the combined multivariate effects. The goal here is directed more toward capturing the causal effect of the added science-model features and less toward improving accuracy. If we find that some missing feature measurements cause a mismatch between a science-based model and the actual plant, data augmentation may improve the training performance of the hybrid model. Sharma and Liu 78 show how to use plant data to estimate kinetic parameters of first-principles models for industrial polyolefin processes. In a recent study, Bangi and Kwong 125 estimate process parameters in a hydraulic fracturing process using a deep neural network, and the estimated parameters are then input to a first-principles model. Finally, we note that, as illustrated in Figure 4, we can interchangeably use a science-based model or an ML model first in the hybrid framework, depending on whether we need to add more features to augment the data set or to estimate model parameters. This combined strategy is generally more useful for the case where the science-based model has unknown parameters: we could use ML to determine these unknown parameters and then apply a hybrid residual ML approach. By doing so, we could improve the model prediction accuracy as well.
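The parameter-estimation step can be sketched as follows (our own illustration with a hypothetical Arrhenius-type rate law and synthetic plant data, not the cited authors' models); the fitted parameters would then feed the first-principles model before any residual correction is applied:

```python
import numpy as np
from scipy.optimize import least_squares

R = 8.314  # gas constant, J/(mol K)

def rate_model(params, T, c):
    """Simple first-principles rate law r = k0 * exp(-Ea/(R*T)) * c
    with unknown pre-exponential factor k0 and activation energy Ea."""
    k0, Ea = params
    return k0 * np.exp(-Ea / (R * T)) * c

# Hypothetical plant measurements: temperature (K), concentration, rate.
T = np.array([340.0, 350.0, 360.0, 370.0, 380.0])
c = np.array([1.00, 0.90, 0.85, 0.80, 0.70])
r_obs = np.array([0.011, 0.018, 0.027, 0.040, 0.052])

def residuals(params):
    # Mismatch between the science-based model and the plant data.
    return rate_model(params, T, c) - r_obs

# Estimate the unknown kinetic parameters from plant data.
fit = least_squares(residuals, x0=[1.0e4, 5.0e4],
                    bounds=([0.0, 0.0], [np.inf, np.inf]))
k0_hat, Ea_hat = fit.x
print(f"k0 = {k0_hat:.3e}, Ea = {Ea_hat / 1000:.1f} kJ/mol")
```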
| An Application of Combined Direct Hybrid Modeling to Polymer Manufacturing
We apply the combined direct modeling strategy to an industrial polyethylene process for the prediction of melt index (MI). We build a first-principles steady-state model of a Mitsui slurry high-density polyethylene (HDPE) process by following the methodology and kinetic parameters presented in Supplement 1b of Sharma and Liu 78. For this application, it is easier to first estimate the complex multisite Ziegler-Natta polymerization kinetic parameters using steady-state production targets, and then convert the resulting steady-state simulation model based on Aspen Plus to a dynamic simulation model using Aspen Plus Dynamics. The resulting dynamic simulation model has similar independent process variables, including the feed flows and compositions and the reactor operating conditions. For less complex applications, dynamic data could be used for parameter estimation. To improve the accuracy of model predictions, we develop a regression model to predict the error residuals as a function of the independent process variables, using an ML method called the random forest algorithm 82 with Python. This leads to a hybrid model that predicts the MI value as the sum of the dynamic simulation model prediction (first-principles-based) and the predicted error residual (data-based) corresponding to a given set of independent process variable values, as illustrated in Figure 5. Figure 6 shows that the hybrid model predictions (with an RMSE value of 0.21) match the plant data much better than the first-principles dynamic simulation model alone.
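A minimal sketch of this residual-correction workflow follows (our own illustration; the synthetic data stand in for the plant measurements and the Aspen Plus Dynamics outputs, and all variable names are hypothetical):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic data set: X holds the independent process variables (feed flows,
# reactor conditions), mi_plant the measured melt index, and mi_sim the
# first-principles dynamic-simulation prediction for the same conditions.
n = 500
X = rng.uniform(size=(n, 6))
mi_sim = 2.0 + X[:, 0] + 0.5 * X[:, 1]
mi_plant = mi_sim + 0.3 * np.sin(4 * X[:, 2]) + 0.05 * rng.standard_normal(n)

# Train a random forest on the residual between plant data and simulation.
resid = mi_plant - mi_sim
X_tr, X_te, r_tr, r_te, sim_tr, sim_te, mi_tr, mi_te = train_test_split(
    X, resid, mi_sim, mi_plant, test_size=0.25, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, r_tr)

# Hybrid prediction = simulation output + predicted residual.
mi_hybrid = sim_te + rf.predict(X_te)
rmse = np.sqrt(mean_squared_error(mi_te, mi_hybrid))
print(f"hybrid RMSE = {rmse:.3f}")
```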
We note that a data-based model alone has similar accuracy, but it may give scientifically inconsistent results for predictions beyond the operating range of the data used to train it. Thus, the hybrid model is not only accurate, but also gives scientifically consistent results beyond the current operating range.
| Inverse Modeling
In inverse modeling, we use the output of a system to infer its corresponding input or independent variables; this is different from forward modeling, where we use the known independent variables to predict the output of the system 2. Figure 7 illustrates the inverse modeling framework. In the traditional data-based approach, we use process variable data (X) and quality target data (Y) to train and test an ML model. Because the plant does not measure most quality targets continuously, we can apply a science-based process model, developed from first principles and validated by plant data, to predict and augment the quality target data (Y) for given process variables (X). Raccuglia et al. 87 train an ML model using reaction data to predict reaction outcomes for the crystallization of templated vanadium selenites. They demonstrate the use of ML to assist material discovery using data from previously unsuccessful or failed material synthesis experiments. The resulting ML model outperforms traditional human strategies, and successfully predicts conditions for new organically templated, inorganic product formation with a success rate of nearly 90%. Significantly, they show that inverting the machine-learning model reveals new hypotheses regarding the conditions for successful product formation.
There is a growing interest in the inverse approach to material design, in which the desired target properties are used as input to identify the atomic identity, composition and structure (ACS) that exhibit such properties. Liao et al. 88 present a metaheuristic approach to material design that incorporates the inverse modeling framework. Venkatasubramanian 61 also notes the importance of inverse problems being solved by the application of artificial intelligence in chemical engineering.
Note that the inverse modeling approach may lead to non-unique solutions, which give a range of predicted input parameters within the operating range. By adding constraints on the input parameters (such as their operating ranges), we may obtain a unique solution.
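One common way to enforce such constraints is to invert a trained forward model numerically, minimizing the mismatch to the desired quality targets subject to bounds on the inputs. The sketch below is our own illustration, with a hypothetical forward_model standing in for a trained ML or simulation model:

```python
import numpy as np
from scipy.optimize import minimize

def forward_model(x):
    """Hypothetical stand-in for a trained forward model mapping
    operating conditions x to quality targets y (e.g., MI, density)."""
    return np.array([x[0] ** 2 + x[1], 0.9 * x[1] + 0.1 * x[0]])

y_target = np.array([1.5, 0.8])        # desired product quality targets
bounds = [(0.0, 2.0), (0.0, 1.0)]      # operating range of each input

def objective(x):
    # Squared mismatch between predicted and desired quality targets.
    return np.sum((forward_model(x) - y_target) ** 2)

# Bounded minimization restricts the inverse solution to the operating range.
res = minimize(objective, x0=np.array([1.0, 0.5]), bounds=bounds,
               method="L-BFGS-B")
print("predicted operating conditions:", res.x)
```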
| An Application of Inverse Modeling to Polymer Manufacturing
We illustrate the application of an inverse modeling approach that integrates steady-state and dynamic simulation models of a Mitsui slurry HDPE process, developed from first principles and validated by plant data, with a data-based ML model. The goal is to predict the operating conditions for producing new polymer grades, given the desired product quality targets, such as melt index (MI), polymer density (Rho), polydispersity index (PDI) and polymer production rate (P). The details of the steady-state simulation model are available in Supplement 1b of reference 78.
We first estimate the polymerization kinetic parameters from plant production targets in a steady-state model using Aspen Polymers, based on our reported methodology 78 . This results in a validated Aspen Polymers steady-state simulation model. Next, we convert the steady-state model to a dynamic model using Aspen Plus Dynamics. We use the dynamic model to simulate the product quality data for different process operating conditions, including the data characterizing the polymer grade transitions. Then, we use a Python-based ensemble machine learning regression model 89 to regress the simulated data, with the simulated product quality data as the input and the process operating conditions (flow rates of all input streams) as the output.
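The following Python sketch shows the essential inversion step: the regressor is trained with quality targets as features and operating conditions as outputs. The simulate_quality function is a hypothetical placeholder for the dynamic simulation model, and the variable names and toy relations are ours:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(1)
U = rng.uniform(0.1, 1.0, size=(400, 3))     # operating conditions (e.g., feed flows)

def simulate_quality(U):                     # placeholder for the simulation model
    mi = 10 * U[:, 0] / (U[:, 1] + 0.5)      # toy melt index relation
    rho = 0.94 + 0.03 * U[:, 2]              # toy density relation
    return np.column_stack([mi, rho])

Y = simulate_quality(U)                      # simulated quality targets

# Inverse model: quality targets are the INPUT, operating conditions the OUTPUT.
inv = MultiOutputRegressor(GradientBoostingRegressor(random_state=1)).fit(Y, U)

target = np.array([[8.0, 0.955]])            # desired quality for a new grade
print("suggested operating conditions:", inv.predict(target))

As noted above, the inverse map may be non-unique; restricting the training data to the feasible operating range is one simple way to regularize the predictions.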
Given the desired quality targets for a new polymer grade, we apply the trained ML model to predict the operating conditions for the new polymer grade. Figure 8 illustrates that the inverse modeling approach predicts the hydrogen feed flow rate with high accuracy (RMSE = 0.9, compared to a plant-data standard deviation of 20). Thus, if we want to produce a new polymer grade given its quality targets, we can predict the operating conditions required to produce that polymer grade using the inverse modeling approach. Figure 8. Hydrogen feed predictions from inverse modeling of product quality features
| Reduced-Order Models
Reduced-order models (ROMs) are simplified models that represent a complex process in a computationally inexpensive manner, while maintaining a high degree of predictive accuracy in simulating the process. In bioprocessing and chemical engineering, we can apply the ROM methodology to simulate complex processes and then use ML models to optimize the processes. See Figure 9. We can use ROMs to simulate different scenarios and sensitivities in order to generate process data, which in turn can be combined with ML models to build accurate soft sensors that predict quality variables. This approach helps ensure that the ML model is trained on process data spanning multiple variations, which is not possible to obtain from a steady plant run.
Hence, the data-based sensors will be accurate for any future process optimization, scale-up, etc., and it is also easier to deploy such models online. The authors of references 91 and 92 propose the use of surrogate models as reduced-order models that approximate the feasibility function for a process in order to evaluate the flexibility and operability of a science-based process model, since it is difficult to evaluate the feasibility directly due to black-box constraints.
In a recent study, Abdullah et al. 93 showcase data-based reduced-order modeling of nonlinear processes with time-scale multiplicity to identify the slow process state variables that can be used in a dynamic model. Agarwal et al. 94 use a ROM for modeling a pressure swing adsorption process, where they use a low-dimensional approximation of a dynamic partial differential equation model, which is more computationally efficient. In another study, Kumar et al. 45 use a reduced-order steam methane reformer model to optimize furnace temperature distribution. In a recent study, Shafer et al. 95 use a reduced-dimensional dynamic model for the optimal control of an air separation unit. The model combines compartmentalization to reduce the number of differential equations with artificial neural networks to quantify the nonlinear input-output relations within compartments. This work reduces the size of the differential equation system by 90%, while limiting the additional error in product purities to below 1 ppm compared to a full-order stage-by-stage model. Kumari et al. 126 use data-based reduced-order methods for computational fluid dynamics modeling, applied to a case study of a supercritical carbon dioxide rare event. They propose a k-nearest neighbor (kNN)-based parametric reduced-order model (PROM) for consequence estimation of rare events, to enhance numerical robustness with respect to parameter changes.
Recently, many operator-theoretic model identification and model reduction approaches, such as Koopman operators, have been applied to integrate first-principles knowledge into finding relationships among multiple process variables in chemical processes. The Koopman operator offers great utility in the data-driven analysis and control of nonlinear and high-dimensional systems.
Narasingam and Kwon 127 develop a new local Dynamic Mode Decomposition (DMD) method to better capture local dynamics, which performs temporal clustering of snapshot data using mixed-integer nonlinear programming. The developed models are subsequently used to compute approximate solutions to the original high-dimensional system and to design a feedback control system for hydraulic fracturing processes, for the computation of optimal pumping schedules.
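For readers unfamiliar with DMD, the following is a minimal sketch of the standard (global) exact-DMD algorithm on toy snapshot data; it is not the local, temporally clustered variant of the cited work, and the data and rank choice are illustrative:

import numpy as np

def dmd(X, r):
    # X: states x time snapshot matrix; r: truncation rank
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)   # reduced linear operator
    eigvals, W = np.linalg.eig(Atilde)                          # discrete-time eigenvalues
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W             # DMD modes
    return eigvals, modes

t = np.linspace(0, 10, 201)                   # toy data: decaying oscillations
X = np.vstack([np.exp(-0.1 * t) * np.cos(2 * t),
               np.exp(-0.2 * t) * np.sin(t),
               np.exp(-0.1 * t) * np.cos(2 * t + 0.5)])
eigvals, modes = dmd(X, r=3)
print("DMD eigenvalues:", eigvals)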
Our focus on ROMs is more towards using the science-based model to simulate process data that ML models can use to derive empirical correlations for process optimization. ROMs are particularly useful in chemical processes for the dynamic optimization of complex, large-scale processes.
| An Application of Reduced-Order Modeling to Polymer Manufacturing
We illustrate the ROM methodology for the HYPOL polypropylene production process. The details of the steady-state simulation model are available in Supplement 1a of reference 78.
The HYPOL process is complex, with a series of reactors, separators and recycle loops. The process has many operating variables, such as the feed flow rates of propylene and hydrogen to each reactor, and the temperature and pressure in each reactor. It is critical to quantify the effects of the operating variables on the polymer quality targets, particularly the melt index, in order to design or optimize the process. To achieve this, we need multivariate process data, which are not usually available from a steadily running plant. Hence, we use the ROM methodology.
We model the HYPOL polypropylene production process following the methodology of Sharma and Liu 78 and then run multiple steady-state simulations to generate multivariate data with varying operating variables and the corresponding melt index predictions. We use a random forest ML model 89 to correlate the operating variables with the melt index. The ML model also ranks the relative importance of the different operating variables through the mean decrease in "node impurity", which is a measure of how much each operating variable reduces the variance in the model. Figure 10b illustrates that the ROM identifies the hydrogen flow rate (H24) and the temperature of the fourth reactor (R4T) as the most important variables affecting the melt index, which can then be used to find the optimum conditions to produce polymer of a specified melt index value and to improve the process design for a new process.
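The feature-ranking step can be sketched as follows. The labels H24 and R4T come from the text above; the remaining labels, the data, and the toy response are hypothetical:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
names = ["H21", "H24", "R4T", "C3_feed"]      # H21 and C3_feed are made-up labels
X = rng.uniform(size=(300, 4))                # simulated operating variables
mi = 5 * X[:, 1] + 3 * X[:, 2] + 0.2 * X[:, 0] + rng.normal(0, 0.1, 300)

rf = RandomForestRegressor(n_estimators=300, random_state=2).fit(X, mi)
# feature_importances_ gives the impurity-based (mean decrease in impurity) ranking.
for n, imp in sorted(zip(names, rf.feature_importances_), key=lambda p: -p[1]):
    print(f"{n}: {imp:.3f}")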
| Hybrid SGML Modeling for Uncertainty Quantification
A science-based model produces results with some uncertainties, which can be quantified by ML-based techniques. The uncertainties in science-based models arise from uncertainty in model parameters and in boundary and initial conditions. In some cases, the model bias and assumptions can be a source of uncertainty as well. We can use the predictions from a calibrated model to quantify uncertainties. Data-based ML models, such as Gaussian processes and neural networks, are used to build a surrogate model that defines a relation between model inputs and outputs, which can then be used to quantify the uncertainty.
Because of uncertainty in the process inputs and process states of a chemical process model, the uncertainty propagates to the process outputs as well. The uncertainty in a science-based model due to any of the parameters or any of the prior knowledge can be used by a ML model to quantify uncertainty in a chemical process, as shown in Figure 11. This surrogate data-based ML modeling reduces the computational expense of Monte Carlo methods, which are traditionally used for uncertainty quantification (UQ) 96 .
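A minimal sketch of this surrogate-accelerated Monte Carlo idea follows, assuming a one-parameter model and a Gaussian parameter uncertainty; expensive_model is a hypothetical stand-in for a slow first-principles run:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_model(k):                        # placeholder first-principles model
    return np.exp(-k) + 0.5 * k ** 2

k_train = np.linspace(0.1, 2.0, 15).reshape(-1, 1)   # a handful of simulator runs
y_train = expensive_model(k_train).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(k_train, y_train)                       # cheap Gaussian-process surrogate

# Monte Carlo on the surrogate: propagate parameter uncertainty k ~ N(1, 0.2^2).
rng = np.random.default_rng(3)
k_samples = rng.normal(1.0, 0.2, size=(5000, 1))
y_samples = gp.predict(k_samples)
print("output mean:", y_samples.mean(), "output std:", y_samples.std())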
Duong et al. 97 use UQ for process design and sensitivity analysis of complex chemical processes using polynomial chaos theory. Fenila et al. 98 utilize UQ for electrochemical synthesis, where they calculate simulation uncertainties and global parameter sensitivities for the hybrid model. UQ has also been applied to understand complex reaction mechanisms. Proppe et al. 99 showcase kinetic simulations in discrete-time space that consider the uncertainty in free energy and detect regions of uncertainty in reaction networks. UQ techniques are popular in the fields of catalysis and materials science, where they are used to quantify the uncertainty of models based on density functional theory 100,101 . In another study, Boukouvala and Ierapetritou 102 demonstrate the feasibility analysis of a science-based process model over a multivariate factor space. They use a stochastic data-based model for feasibility evaluation, referred to as Kriging, and develop an adaptive sampling strategy to minimize the sampling cost while maintaining feasibility.
| An Application of Uncertainty Quantification to Polymer Manufacturing
We quantify the uncertainty of the chemical process model in predicting the melt index for the industrial HDPE process described in Section 2.1.4. This uncertainty in prediction may result from the estimated kinetic parameters of the process, and it propagates to the quality output as well.
We simulate the data using the chemical process model and calculate the prediction intervals using a gradient boosting ML model 89 . In this case, we use the concept of prediction intervals to determine the range of the model prediction. We use the quantile regression loss with a gradient boosting model to predict the prediction intervals 103 . We define a lower and an upper quantile according to the desired prediction interval. Figure 12 illustrates the uncertainty in the prediction of melt index given by the range of the 90% prediction interval, which implies that there is a 90% likelihood that the observed value will lie in the given range. The resulting RMSE value lies within 1.2 to 1.5, with the standard deviation of the melt index data equal to 5.1. In the figure, we see that the prediction interval is the area between the two black lines, represented by the upper quantile (95th percentile) and the lower quantile (5th percentile). From the figure, we see a larger prediction interval, meaning a higher uncertainty in prediction, for times less than 100 hours compared to the later stage, because of a more appreciable change in MI in that interval. Thus, uncertainty quantification (UQ) helps in making better process decisions by providing an error estimate of the model. Figure 12. Uncertainty quantification of melt index prediction of a slurry HDPE process
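The quantile-loss construction described above can be sketched in a few lines with scikit-learn; the data here are synthetic, and the 5th/95th quantiles give a 90% interval:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)
X = rng.uniform(size=(600, 3))
y = 3 * X[:, 0] + np.sin(4 * X[:, 1]) + rng.normal(0, 0.2, 600)

models = {}
for name, alpha in [("lower", 0.05), ("median", 0.50), ("upper", 0.95)]:
    models[name] = GradientBoostingRegressor(loss="quantile", alpha=alpha,
                                             random_state=4).fit(X, y)

x_new = X[:5]
lo = models["lower"].predict(x_new)            # 5th-percentile prediction
hi = models["upper"].predict(x_new)            # 95th-percentile prediction
print("90% prediction intervals:", list(zip(lo.round(2), hi.round(2))))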
| Hybrid SGML Modeling to Aid in Discovering Scientific Laws Using ML
One way in which ML can help science-based modeling is by discovering new scientific laws that govern the system. There is a growing application of ML in physics to rediscover or discover physical laws, mainly through data-driven discovery of partial differential equations. ML can be used to develop an empirical correlation that serves as a scientific law in a science-based model, or ML can be used to solve the partial differential equations defining scientific laws, as illustrated in Figure 13. Another important application of ML is to discover some of the thermodynamic laws that define phase equilibrium and are critical for an accurate science-based process model. Nentwich et al. 106 use a data-based mixed adaptive sampling strategy to calculate the phase composition, instead of using complex equation-of-state models. Thus, ML has promising uses in discovering more accurate physical and chemical laws that govern a chemical process. This methodology can be used to obtain the functional form of scientific laws as well as to estimate the parameters of existing laws. Brunton et al. 128 demonstrate a novel framework for discovering the governing equations underlying a dynamical system simply from data measurements, leveraging advances in sparsity techniques and machine learning. These scientific laws discovered by ML-based models can then be utilized in first-principles models to improve accuracy as well as to reduce model complexity.
| SCIENCE COMPLEMENTS ML
Referring to Figure 1, we can also improve ML models using scientific knowledge. We can improve the generalization or extrapolation capability of ML models and reduce their scientific inconsistency by using scientific knowledge in designing them. Scientific knowledge can also help in improving the architecture of the data-based ML model, the learning process of the ML model, and even the final post-processing of the ML model results.
| Science-Guided Design
In science-guided design, we choose the model architecture based on scientific knowledge. For a neural network, we can decide the intermediate variables expressed as hidden layers based on scientific knowledge of the system. This helps in improving the interpretability of the models. Figure 14 illustrates a neural network model whose architecture (the number of neurons, hidden layers, activation layers, etc.) can be decided by prior scientific knowledge. One study designs theory-infused neural networks based on adsorption energy principles for interpretable reactivity prediction. The use of the neural differential equation 108 to solve a first-principles dynamic system represents a hybrid SGML approach, where the architecture of the ML model is influenced by the system; it finds applications in continuous time-series models and scalable normalizing flows. The derivative of the hidden state is parameterized using a neural network, and the output of the network is computed using a differential equation solver. In a recent study, Jaegher et al. 109 use the neural differential equation to predict the dynamic behavior of electro-dialysis fouling under varying process conditions. In a recent application of this theme in chemical processes for model predictive control, Wu et al. 110 use prior process knowledge to design the recurrent neural network (RNN) structure 9 . They showcase a methodology to design the RNN structure using prior scientific knowledge of the system and also employ weight constraints in the optimization problem of the RNN training process. Reis et al. 111 discuss the concept of incorporating process-specific structure to improve process fault detection and diagnosis.
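The neural differential equation structure mentioned above can be illustrated in a bare-bones way: a small untrained network parameterizes dx/dt, and an explicit-Euler loop plays the role of the differential equation solver. Real implementations train the weights through the solver (e.g., with adjoint sensitivities); this sketch only shows the architecture, and all weights and states are toy values:

import numpy as np

rng = np.random.default_rng(5)
W1, b1 = rng.normal(0, 0.5, (8, 2)), np.zeros(8)   # untrained toy weights
W2, b2 = rng.normal(0, 0.5, (2, 8)), np.zeros(2)

def f(x):                                     # neural network parameterizing dx/dt
    return W2 @ np.tanh(W1 @ x + b1) + b2

def integrate(x0, dt=0.01, steps=500):        # explicit-Euler ODE solver
    x, traj = x0.copy(), [x0.copy()]
    for _ in range(steps):
        x = x + dt * f(x)
        traj.append(x.copy())
    return np.array(traj)

traj = integrate(np.array([1.0, 0.0]))
print("final state:", traj[-1])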
Fuzzy artificial neural networks (ANNs) are a class of neural networks that utilize prior scientific knowledge of the system to formulate rules mapped onto the structure of the ANN 9,112 . The weights of the ANN connecting the process inputs to the outputs can be linked to physical process variables 64 . Apart from making the models more scientifically consistent with prior knowledge, they also reduce computational complexity and provide interpretable results. The use of prior knowledge also makes them suitable for extrapolation. Fuzzy ANNs have been particularly useful for applications in process control 113 . Simutis et al. use a fuzzy ANN system for industrial bioprocess monitoring and control [114][115] . They also illustrate the application of a fuzzy ANN process-control expert to perform appropriate control actions based on process trends for bioprocess optimization and control 116 .
Sparse Identification of Nonlinear Dynamics (SINDy) is another data-based modeling method that utilizes scientific knowledge to improve model performance 128 .
Bhadriraju et al. 129 have used the SINDy algorithm to identify the nonlinear dynamics of a chemical process system (a CSTR). They use sparse regression in combination with feature selection to identify accurate models in an adaptive model identification methodology that requires much less data than current methods. In a similar study, Bhadriraju et al. 130 propose a modified adaptive SINDy approach that is helpful in cases of plant-model mismatch, does not require retraining, and is hence computationally less expensive.
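The core of SINDy is a sparse regression over a library of candidate terms. The following sketch uses sequentially thresholded least squares to recover a known toy system; in practice, the state derivatives would be estimated numerically from (noisy) measurements rather than computed exactly as here:

import numpy as np

rng = np.random.default_rng(6)
# Toy system: dx/dt = -0.1x + 2y, dy/dt = -2x - 0.1y, sampled at random states.
S = rng.uniform(-2, 2, size=(400, 2))
dS = np.column_stack([-0.1 * S[:, 0] + 2 * S[:, 1],
                      -2 * S[:, 0] - 0.1 * S[:, 1]])

x, y = S[:, 0], S[:, 1]
Theta = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
labels = ["1", "x", "y", "x^2", "xy", "y^2"]   # candidate function library

def stlsq(Theta, dx, lam=0.05, iters=10):      # sequentially thresholded least squares
    xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < lam
        xi[small] = 0.0
        if (~small).any():
            xi[~small] = np.linalg.lstsq(Theta[:, ~small], dx, rcond=None)[0]
    return xi

for i, lhs in enumerate(["dx/dt", "dy/dt"]):
    xi = stlsq(Theta, dS[:, i])
    print(lhs, "=", " ".join(f"{c:+.2f}*{l}" for c, l in zip(xi, labels) if c != 0))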
| Science-Guided Learning
Here, we make use of scientific principles to improve the scientific consistency of data-based models by modifying the machine learning process. We do this by modifying the loss function, the constraints, and even the initialization of ML models based on scientific laws. Specifically, in order to make the ML models physically consistent, we make the loss function of the neural network model incorporate physical constraints 2 . A loss function in ML measures how far an estimated value is from its true value; it maps decisions to their associated costs. Loss functions are not fixed; they change depending on the task at hand and the goal to be met. We can define a loss function (based on the mean squared error, MSE) of the ML model (Loss_M) for regression to calculate the difference between the true value (Y_true) and the model predicted value (Y_pred).
Likewise, we can define a loss function for the science-based model (Loss_SC), which is a function of the model predicted value (Y_pred) and measures its consistency with the scientific knowledge. We include a weighting factor λ to express the relative importance of both loss terms, and write the overall loss function as Loss = Loss_M + λ · Loss_SC (Eq. 1). Figure 15 illustrates the concept of the science-guided loss function.
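A minimal PyTorch sketch of Eq. (1) follows. The physics_residual function is a hypothetical placeholder for whatever science-based constraint the application imposes (here, a toy non-negativity requirement), and lam plays the role of λ:

import torch

lam = 0.1                                      # weighting factor λ

def physics_residual(y_pred):
    # placeholder science-based constraint: predictions should be non-negative
    return torch.relu(-y_pred)

def sgml_loss(y_pred, y_true):
    loss_m = torch.mean((y_pred - y_true) ** 2)            # data-fit MSE term
    loss_sc = torch.mean(physics_residual(y_pred) ** 2)    # science-based penalty
    return loss_m + lam * loss_sc

model = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.Tanh(),
                            torch.nn.Linear(16, 1))
x, y = torch.randn(64, 3), torch.randn(64, 1)
loss = sgml_loss(model(x), y)
loss.backward()                                # gradients flow through both terms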
A science-guided initialization helps in deriving an initial choice of parameters before a model is trained, which improves model training and also prevents the model from getting stuck in a poor local minimum; this is the concept of transfer learning. Thus, we can use the data from a science-based model to pre-train a ML model based on this concept of initialization 1,2,7 . This concept has been utilized in chemical process modeling in the form of process similarity and the development of new process models through migration 42,117 .
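A simple two-stage sketch of this initialization idea, assuming a PyTorch regression network; the tensors are random placeholders for the simulated and plant data:

import torch

net = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 1))
mse = torch.nn.MSELoss()

# Stage 1: pre-train on abundant data generated by the science-based model.
X_sim, y_sim = torch.randn(2000, 3), torch.randn(2000, 1)    # placeholder data
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    mse(net(X_sim), y_sim).backward()
    opt.step()

# Stage 2: fine-tune the pre-trained weights on the scarce plant data.
X_plant, y_plant = torch.randn(100, 3), torch.randn(100, 1)  # placeholder data
opt = torch.optim.Adam(net.parameters(), lr=1e-4)            # smaller step size
for _ in range(100):
    opt.zero_grad()
    mse(net(X_plant), y_plant).backward()
    opt.step()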
| An Illustrative Example of Science-Guided Learning
We showcase the application of the science-guided loss function for the industrial slurry HDPE process described in Section 2.1.4. The goal is to predict the melt index of the polymer. The plant only measures the polymer melt index as the quality output, but we also want the data-based ML model to predict scientifically consistent polymer density values.
We express the polymer density as a function of the melt index using empirical correlations and modify the loss function (based on the mean squared error, MSE) to consider the density as well.
See Eq. (2) below, which takes the form of Eq. (1), with a data-fit term on the melt index and a science-based penalty on the density: Loss = Loss_MI + λ · Loss_density (2). We then train a deep learning neural network model to predict the melt index of the polymer. Figure 16 illustrates that the SGML hybrid model calculates the melt index with an RMSE (0.8) that is slightly higher than that of a standalone ML model (data standard deviation = 5). In addition to predicting the melt index values, the hybrid SGML model simultaneously predicts the polymer density correctly, within the physically consistent range of 0.94-0.97 g/cm3. By contrast, the density estimates by the ML model alone result in density values greater than 1, which is physically inconsistent.
| Science-Guided Refinement
By science-guided refinement, we mean the post-processing of ML model results based on scientific principles. This post-processing of the ML model results using science-based models can be useful for the design and prediction of material structure 113 . The discovery of materials forms the basis of chemical process development, from which the manufacturing process of any compound can be designed. This is different from the serial direct hybrid model discussed in Section 3.1.2; here, we use the science-based model merely to test the scientific consistency of the ML model results. Hautier et al. 114 use first-principles models based on density functional theory to refine the results of probabilistic ML models to discover ternary oxides. Figure 17 illustrates the science-guided refinement framework. Figure 17. Science-guided refinement framework. Another application of science-guided refinement is data generation. ML techniques like generative adversarial networks (GANs) are useful for generating data in unsupervised learning. GANs have a problem of high sample complexity 2 , which can be reduced by incorporating science-based constraints and prior knowledge. Cang et al. 115 apply ML models to predict the structure and properties of materials and use the results of ab initio calculations to refine the ML model results. They generate more imaging data for property prediction using a convolutional neural network and introduce a morphology constraint from scientific principles while training the generative models, which improves the prediction of the structure-property model.
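In its simplest form, science-guided refinement is a consistency filter over ML proposals. The sketch below screens hypothetical ML-predicted HDPE densities against the physically consistent range quoted earlier; the bounds and candidates are illustrative, not an actual screening workflow:

import numpy as np

rng = np.random.default_rng(7)
candidates = rng.normal(0.96, 0.03, size=20)   # hypothetical ML-proposed densities (g/cm3)

def physically_consistent(rho, lo=0.94, hi=0.97):
    # stand-in for a science-based check (here: HDPE density bounds)
    return lo <= rho <= hi

refined = [c for c in candidates if physically_consistent(c)]
print(f"kept {len(refined)} of {len(candidates)} ML proposals")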
Thus, some of these methodologies in which science complements ML have much potential for future applications in bioprocessing and chemical engineering.
| CHALLENGES AND OPPORTUNITIES OF HYBRID SGML MODELING FOR CHEMICAL PROCESSES
Along with all the merits of the SGML methodology, there are challenges as well. First, incorrect fundamental knowledge and assumptions in the science-based first-principles model will lead to an inaccurate hybrid model, so it is important for the scientific model to be very accurate. Second, there is a lack of engineers and scientists having expertise in both domain knowledge and machine learning. Third, some modeling approaches, such as inverse modeling, can be computationally infeasible. Fourth, data cleaning, preprocessing and feature engineering may be difficult in certain cases but may be imperative for science-based model parameter estimation; in these cases, the hybrid models may increase the complexity compared to standalone ML models, such as neural networks, which may not require feature engineering. Finally, model predictions must not only be accurate but also have low uncertainty, which may be difficult to achieve for certain hybrid model methods.
There is much scope for using hybrid SGML methodologies in chemical process modeling; we summarize here some of the opportunities and areas where they can be beneficial. As we have seen, hybrid SGML models are useful for extrapolation and for predicting beyond the operating range; hence, they will be particularly useful for process development. Process fault diagnosis and anomaly detection is one area where data-based methods have been used extensively, so there is an opportunity to incorporate scientific knowledge to make the anomaly detection process more scientifically consistent.
| CONCLUSION
We present a broad perspective of hybrid modeling with a science-guided machine learning (SGML) approach and its application in bioprocessing and chemical engineering. We give a detailed review and exposition of the hybrid SGML modeling approach and its applications, and classify the approach into two categories. The first refers to the case where a data-based ML model complements the first-principles science-based model and makes it more accurate in prediction, and the second corresponds to the case where scientific knowledge helps make the ML model more scientifically consistent. We point out some of the areas of SGML that have not been explored much in chemical process modeling and have potential for further use, such as the areas where science can help improve the data-based model through better model design, learning and refinement. We also illustrate some of these applications of the hybrid SGML methodologies for industrial polymer/chemical process improvement.
Thus, based on our review, we recommend the use of hybrid models over standalone ML models for applications like process development, since they are better at extrapolation, while standalone ML models can be adequate for prediction in a steadily running plant. | 2021-12-03T02:15:44.260Z | 2021-12-02T00:00:00.000 | {
"year": 2021,
"sha1": "e980073c43fd0635b088a88474a1439c5c18f0da",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e980073c43fd0635b088a88474a1439c5c18f0da",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
53596061 | pes2o/s2orc | v3-fos-license | Using Written Tests to assess Holistic Development of Lower Primary School Learners in Kenya Violet
The present study investigated the use of written tests to assess the holistic development of lower primary school learners in Kenya. The Concurrent Triangulation Design was employed. The sample size comprised 184 respondents, who were 122 lower primary teachers, 47 ECDE teachers and 15 primary school head teachers. Both questionnaires and interviews were used to collect data. The researcher ensured validity of the questionnaires through expert judgment, that is, with the help of lecturers from Jaramogi Oginga Odinga University of Science and Technology. Moreover, the items in the questionnaire were made clearer and also arranged from simple to complex. Reliability of the instrument was tested using internal consistency, and a reliability coefficient of 0.892 was reported. The quantitative data obtained from the questionnaires were analysed using descriptive statistics with the aid of the Statistical Package for Social Sciences (SPSS) version 22. Qualitative data were analysed using the thematic framework. The study found that the intellectual development of ECDE learners was effectively assessed by the written tests method of assessment. However, on the aspect of emotional development, most participants reported that the written tests method of assessment does not effectively assess the emotional development of ECDE learners. Moreover, this study confirms that the written tests method of assessment did not effectively assess the social development of ECDE learners. Finally, most participants disagreed that the physical growth and spiritual development of ECDE learners are assessed well when they are given written tests. The study recommends that the Kenya Institute of Curriculum Development should come up with clear policies on the assessment of ECDE learners so that holistic development is guaranteed during the assessment process.
Introduction 1.
In holistic education, the teacher is not seen as a person of authority who leads and controls but is rather seen as 'a friend, a mentor, a facilitator, or an experienced traveling companion' (Forbes, 2006). Schools should be seen as places where students and adults work towards a mutual goal. Open and honest communication is expected, and differences between people are respected and appreciated; co-operation is the norm, rather than competition. Thus, many schools incorporating holistic beliefs do not give grades or rewards. The reward of helping one another and growing together is emphasized, rather than being placed above one another. The role of play in supporting children's holistic development and 'meta-cognitive' and self-regulatory abilities is an area of recent research development. Meta-cognitive abilities concern our developing awareness of our own cognitive and emotional processes and the development of strategies to control them (Gronlund, 2006). It is now clearly established that children begin to develop this awareness and control very early in life, that important individual differences are quickly established which have long-lasting consequences for attainment and well-being, that these abilities are learnt and can be taught, and that the various types of play form a powerful context for their development (Whitebread, 2010).
In the United States of America, when assessing children's learning and development, there are specific guidelines available regarding children's development. The National Association for the Education of Young Children (NAEYC) and the Division for Early Childhood (DEC) advocate the use of authentic assessment practices as the primary approach for assessing young children (DEC, 2007). Early childhood leaders have advocated the use of authentic assessment approaches for accountability purposes, indicating that these methods are more appropriate for young children (Meisels, 2003; Neisworth & Bagnato, 2004; Grisham-Brown, 2008). Emerging research shows that authentic assessment approaches, used for accountability purposes, can yield technically adequate assessment data, thereby not compromising the results of high-stakes assessment.
In Turkey, assessment is done in a structured way, at predetermined times, to learn about the development of the individual. A number of studies have been conducted in Turkey on the assessment and evaluation techniques used by teachers. The results revealed that teachers face problems in implementing new assessment and evaluation techniques in their classrooms (Gelbal & Kelecioglu, 2007). These problems might emerge from teachers' lack of knowledge about the implementation of these new constructivist assessment techniques. As a result of their lack of knowledge, they mostly prefer to use the assessment techniques most familiar to them, such as exams or face-to-face interviews. For instance, in a study conducted with elementary school students, researchers investigated the assessment strategies used by primary school teachers (Gelbal & Kelecioglu, 2007). Teachers stated that they mostly prefer to use traditional assessment techniques when assessing their students' progress. Teachers find constructivist assessment tools time-consuming and demanding of extra effort.
In an effort to respond to the need for quality early childhood development and education services, the Open Society Initiative for Southern Africa (OSISA, 2009) is focusing increasing attention on providing quality services for young children and their families. The Early Childhood Development and Education Programme is part of the broader OSISA education programme that seeks to make significant improvements in the early childhood sector in Southern Africa by engaging in multi-level interventions in selected countries (Kanje, 2009). The overarching goal of the programme, which is being run in collaboration with the Open Society Foundation's (OSF) Early Childhood Program, is to promote access to quality early childhood development and education in a manner that places a premium on eliminating inequalities in current access for the most marginalized and vulnerable children.
In Sessional Paper No. 1 of 2005 on a Policy Framework for Education, Training and Research, the government planned to integrate ECDE into basic education, but the policy was not fully implemented, and therefore the ECDE sector is largely run by private initiatives and partly by county governments. This has led to the indiscriminate establishment of ECDE institutions with little or no concern for standards in infrastructure, curricula, teaching and assessment methodologies. The task force (2011) appointed by the then minister of education noted that the current system of education, curriculum and assessment does not include Early Childhood Development and Education (ECDE). In addition, the quality of education was not clearly spelt out so that curriculum delivery could focus on the development of specific expected competences to be assessed. The task force further noted that the current summative assessment at the end of every cycle does not measure learners' holistic development.
Standards, or quality, are currently a challenge to most of the ECDE institutions mushrooming all over the urban and rural centres of Kenya, raising eyebrows about the capability of the assessment methods used in the country. One great concern is the government's inability to regulate and control the establishment and operations of ECDE in the country, whose total effect is maladjustment of the child, not only cognitively but also psychologically and in psychomotor terms. Ultimately, these maladjustments have a long-term effect on Kenya's development as a whole. In a bid to force formal learning and competition at this early age, most ECDE institutions use a punitive kind of assessment. The children are assessed through exams and assignments and are punished when they fail to meet the threshold (Shitubi and Wanyama, 2012). These punitive methods deny a child the opportunity to develop holistically. The ideal assessment of children at this early age should be formative and continuous, drawn from the experiences planned in a curriculum. This implies documenting the development of the child by interpreting the day-to-day experiences of the child, with the purpose of recognizing and encouraging strengths and addressing developmental gaps.
It is recommended that teachers use both formal and informal screening and assessment approaches to systematically evaluate children's growth across all domains of development and learning within natural contexts, including the early childhood classroom (Bordignon & Lam, 2004). In Kisumu Central Sub-county, ECDE learners, especially the ones ready for primary one, are strictly assessed formally, leaving one to wonder whether children's growth across all domains of development and learning is really captured. Every society nurtures a set of goals for its children, although the balance among those goals may be contested within societies and may vary across them. People want their children to be safe and healthy, to be happy and well adjusted, to be competent in an array of domains and accomplished in one or two of those, and to be prepared cognitively and morally to contribute to society.
Theoretical Framework and Literature Review 2.
Theoretical Framework
The study was informed by the humanistic holistic learning theory (Maslow, 1968). The theory is concerned with personal growth and the full development of each human's potential, not just on an intellectual level, but also on an emotional, psychological, creative, social, physical and even spiritual level. The goal of education, from this point of view, is not to simply put a uniform body of knowledge in students' heads or to transmit traditional nationalist values; instead, the goal is to facilitate the development of knowledgeable human beings who know and are able to nurture themselves, other humans and their environment, to instil a joy of learning, to promote the discovery of each student's passion and special talents, and to teach the knowledge and skills necessary for students to be good decision makers (Miller, 1996). The benefit of holistic development among ECDE learners is that the full spectrum of the child's or human experience is included in the educational experience. Emotions, relationships, creativity, imagination, intuition and real-life problems are all part of the human experience. Including them in the educational experience does not take away from learning; rather, it enhances it. Humanistic educators want to create the conditions where human beings can learn to use their knowledge as well as their intellect, emotions and intuition to solve problems, make decisions or come to know the world. They are not trying to produce intellectual automatons (Gardener, 2000).
Maslow also asserts that schools should produce students who want to learn and know how to learn. Human beings are programmed to want to find out about their world. Learning is a natural process; teachers kill off this natural instinct when they always ask students to learn about things that have no relevance to their lives, or when they ask them to learn in ways that are not natural for them. Part of teachers' job is to teach students how to learn, that is, how to get the necessary information they need, how to critically analyse and evaluate that information, and how to use and apply it (Miller, 1997). The theory also suggests that students learn best in a non-threatening environment. Threats come not only in the form of physical threats, but also social threats, emotional threats and things that endanger one's self-esteem. When schooling becomes too much about competition and measuring up, schools invariably have a population who experience failure. This population will find something they can be successful at sooner or later, which may not be very pleasant (Goswami, 1993). Developmentally appropriate practices are a set of standards for providing high-quality early care and education experiences to children from birth to 8 years, which are based on knowledge about 'how children develop and learn'.
The theory was relevant since it addresses ways of assessing holistic development. Assessment establishes the child's level of attainment in a learning experience by checking whether learning objectives have been achieved and whether progress is being made (Laren, 2008). It enables a teacher to monitor and promote each child's holistic development, plan adequately and understand the learners' abilities, evaluate the teaching methods and learning resources in order to adopt relevant teaching and assessing strategies for particular skills, identify children who need remedial assistance so as to cater for individual differences, appraise the behaviour, skills, knowledge, attitudes and achievement of learners, and classify learners for further development of skills (Ransuran, 2006).
Literature Review
Numerous studies have investigated the assessment of learners in different domains. A study conducted in England by Moss (2012) on classroom summative assessment involved students between the ages of 4 and 18 years and used a descriptive survey design to answer the research problem. The data collection instrument was an interview guide, and the participants of the study were both teachers and students. The findings of the study revealed that when teachers use summative assessments for external purposes, like certification for vocational qualifications, selection for employment or further education, or monitoring accountability or gauging the school's performance, students benefit from receiving better descriptions and examples that help them understand the assessment criteria and what is expected of them. The study indicated that only older students respond positively or negatively to written tests and hence their emotions could be assessed, but younger learners take every situation as it comes and therefore their emotions will not be assessed by written tests. The study also revealed that when teachers use summative assessment for internal purposes, like regular grading, record keeping, informing decisions about choices within the school and reporting to parents and students, non-judgmental feedback motivates students to further effort. In Pakistan, a study conducted by Hayat (2011) on the credibility of written tests or examinations conducted by the Boards of Intermediate and Secondary Education and the Educational Testing and Evaluation Agency used a sample of 541 students. The study found that examinations conducted by the Educational Testing and Evaluation Agency are more credible than the examinations conducted by the boards. The study also indicated that written tests are only credible when assessing the intellectual development of learners but cannot in any way assess the social and moral development of the student. A study conducted in Uganda by Dozva (2009) on the accessibility of early childhood education used 280 parents as the key respondents. Using an interview guide as the data collection instrument, data were collected from parents in the sampled area. A survey design was used in the study, and descriptive analysis was used to analyse the collected data. The study revealed that communities still needed a lot of sensitization regarding ECDE because many still believed that school before primary is a total waste of time. The recommendation of the study was that early childhood education be a pre-requisite to joining primary school. The study also revealed that there was a positive link between early childhood learning and the future holistic development of a child, which however has not been clearly understood, as revealed by Uganda's policy on ECDE. In Kenya, a study conducted by Jagero (2013) on how performance in the Kenya Certificate of Primary Education (KCPE) can predict performance in the Kenya Certificate of Secondary Education (KCSE) used ex post facto and correlation research designs. The major finding was that there was a correlation between performance in KCPE and KCSE, and the correlation was significant. The study also indicated that written tests assess the major domains of development in a learner (language, cognitive, social, physical, moral and spiritual development).
From the reviewed literature, some studies were carried out in secondary schools, and they missed information from primary school learners. Therefore, the present study was carried out in primary schools, but among ECDE learners, thereby filling a gap in the literature. Other studies focused on the accessibility of early childhood education but not on the assessment methods used in ECDEs, which was the focus of the present study. Other reviewed studies compared two examinations, one done at the end of eight years of primary education and the other at the end of four years of secondary education, without focusing on lower primary classes. Other studies focused on summative assessment done at the end of a course or academic year but not on formative assessment, while the present study focused on both summative and formative assessment, thereby filling a gap in the literature. Therefore, the present study established the use of written tests to assess the holistic development of lower primary school learners in Kenya.
Goal of the Study
The present study investigated the use of written tests to assess holistic development of Lower Primary school Learners in Kenya.
Research Design
The study adopted the Convergent Parallel Design. According to Tashakkori and Teddlie (2003), the convergent parallel design (also referred to as the convergent design) occurs when the researcher uses concurrent timing to implement the quantitative and qualitative strands during the same phase of the research process, prioritizes the methods equally, and keeps the strands independent during analysis, then mixes the results during the overall interpretation. For example, an investigator might collect both quantitative correlational data as well as qualitative individual or group interview data and combine the two to best understand participants' experiences. The data analysis consists of merging the data and comparing the two sets of data and results (Creswell & Plano Clark, 2011; Morse & Niehaus, 2009).
Population and Sample
The target population for the study was 327 respondents, that is, 234 lower primary school teachers, 90 ECDE teachers and 3 DICECE officers in Kisumu Central Sub-county, Kisumu County, Kenya. The study used stratified random sampling to select the lower primary teachers and ECDE teachers. Stratified random sampling identifies sub-groups in the population and their proportions and selects from each sub-group to form a sample (Cooper and Schindler, 2009). Stratified random sampling was found appropriate for this study as it ensures that each sub-group is proportionately represented. Moreover, a purposive sampling technique was used to sample the DICECE officers.
Research Instruments
The instruments used in the study were questionnaires administered to teachers and an interview schedule administered to the DICECE officers. The questionnaires were administered to both lower primary and ECDE teachers since they are directly involved in the assessment of ECDE learners. A Likert scale was used, where the respondents were asked to make a choice based on their opinion, that is, whether they Strongly Agree, Agree, Disagree or Strongly Disagree with the question asked. Interviewing as a research technique involves the researcher asking questions and hopefully receiving answers from the people being interviewed (Kombo and Delno, 2009). The interview schedule was appropriate for the study as it provided in-depth information and a detailed understanding of the issue under research. Validity of the questionnaires was ensured through expert judgment, that is, with the help of lecturers, while reliability was tested using internal consistency, and a reliability coefficient of 0.892 was reported.
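For reference, the internal-consistency coefficient (Cronbach's alpha) reported above can be computed from the respondents-by-items score matrix. The following Python sketch uses synthetic 4-point Likert data, not the study's actual responses:

import numpy as np

def cronbach_alpha(items):
    # items: respondents x questionnaire-items matrix of scores
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(8)
latent = rng.normal(size=(100, 1))             # shared trait driving all items
scores = np.clip(np.rint(2.5 + latent + rng.normal(0, 0.5, (100, 12))), 1, 4)
print("alpha:", round(cronbach_alpha(scores), 3))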
Data Collection Procedures
Permission to conduct the study was first sought from the Board of Postgraduate Studies of Jaramogi Oginga Odinga University of Science and Technology, after which the researcher obtained a research permit from the National Commission for Science, Technology and Innovation (NACOSTI). Permission from the Kisumu Sub-County Education Office was also sought. Thereafter, the researcher obtained permission from head teachers to conduct the study within their schools. Data collection was through questionnaires, which were administered to lower primary teachers and preschool teachers, and interview schedules, which were administered to the head teachers and DICECE officers. The researcher booked appointments with the DICECE officers in advance to facilitate the interview process. Questionnaires were issued to teachers, and it took an average of 25 minutes to complete them. Interviews were carried out with the three DICECE officers, and the responses were tape-recorded.
In order to gain the consent of the respondents regarding the study, the researcher showed a written letter of authority and explained the details of the research, its objectives, purpose and procedure before engaging in the actual interview or administration of the questionnaires. The privacy of the respondents as well as the confidentiality of their responses was prioritized as well. The researcher also assured the respondents that the data collected were only to be used for the purpose of the study and would be protected from unauthorized access.
Data Analysis
Data were analysed both quantitatively and qualitatively. Quantitative data were analysed using descriptive statistics, which quantitatively describe the main features of a collection of information. Descriptive statistics aim to summarize a sample, rather than use the data to learn about the population that the sample is thought to represent. On the other hand, qualitative data from the interviews were analysed using the thematic framework.
Findings
The study sought to find out teachers' perceptions of written tests as a method of assessing the holistic development of ECDE learners in public primary schools in Kisumu Central Sub-county. An exploration of the ECDE teachers' perceptions was done. The researcher developed a questionnaire designed to evaluate the teachers' views on written tests as a method of assessing the holistic development of ECDE learners. In exploring teachers' perceptions, items were drawn relating to written tests as a method of assessing the holistic development of the learners. There were twelve Likert-scale statements, for which respondents chose from a 4-point scale: Strongly Agree (SA), Agree (A), Disagree (D) and Strongly Disagree (SD). The respondents were asked to use the scale to respond to the statements in relation to their views on the written tests method of assessing ECDE learners.
The percentage frequencies of the responses from the ECDE teachers were computed and tabulated as shown in Table 1. From the findings in Table 1, generally, teachers trust the written method of assessment over other methods, as they argued that a test should be thought of as an attempt by a student or a learner to demonstrate mastery of objectives in a specified area of study, and hence written work should do this very well. However, the findings of this study showed that only a few teachers believed in assessment in the form of written work for ECDE learners. Only about 37% of the teacher respondents had the perception that assessment by the written tests method is best when teachers want to assess the intellectual development of ECDE learners, while nearly two-thirds (62.84%) of the ECDE teachers were not keen on using written tests for assessing the intellectual growth of their learners. These teachers felt that only older students respond positively to written assessment of their course work. Nevertheless, about 63% of those teachers who believed in the written form of assessment insisted that the intellectual development of ECDE learners is effectively assessed by the written tests method of assessment. However, more than a third (34.17%) of the ECDE teachers remained firm that written examinations are not the most appropriate form of assessing ECDE learners' intellectual development. Qualitative findings also revealed that most of the respondents reiterated the effectiveness of the written tests method of assessment in assessing the cognitive development of ECDE learners. For example, two respondents reported that, "Written test method of assessment enhances memory of the child and it also reflects the mental and academic development and reading and writing readiness of the child" (DICECE Officer, A). "Written test method of assessment helps the teacher to capture all that a child has in mind" (DICECE Officer, B).
This means that most respondents agreed that the written tests method of assessment effectively assesses the cognitive development of ECDE learners.
On the aspect of emotional development, those who believed that the written tests method of assessment does not effectively assess the emotional development of ECDE learners carried the day at 66.21%, while the ECDE teachers who felt that written tests could still be used to assess emotional development lagged behind at 33.78%. In fact, just less than a fifth (18.24%) of the respondents agreed that the emotional development of ECDE learners is effectively assessed by the written tests method of assessment, while a whole 81.76% of the ECDE teachers who participated in this study held a divergent view. Qualitative findings from the interviews also revealed that most of the respondents reported that the written tests method of assessment cannot assess the emotional and spiritual development of ECDE learners. Two respondents reckoned that, "Written test can never show emotional development of the learner" (DICECE Officer, C). "I wonder which method of assessment can effectively assess spiritual development of a child, but I am sure written test method of assessment can never tell how spiritual a child is" (DICECE Officer, A). This means that the respondents believed that the spiritual development and emotional development of a child can never be assessed by the written tests method of assessment.
In support of other previous findings, this study confirms that the written tests method of assessment does not effectively assess the social development of ECDE learners, as was observed by more than four-fifths (81.72%) of the respondents. In fact, nearly all (99.32%) of the ECDE teachers from Kisumu Central Sub-county who took part in this study refuted the claim that the most accurate method of assessment for the social development of ECDE learners is the written tests method. On spiritual development as an important aspect of holistic growth, 72.98% (disagree: 55.41%; strongly disagree: 17.57%) generally disagreed that the spiritual development of ECDE learners is assessed well when they are given written tests, and only 27.03% of the respondents supported the opinion. On whether the written tests method was effective or not, only 36.49% of the respondents agreed that the written tests method of assessment effectively assesses the spiritual development of ECDE learners, with 50.68% of them strongly disagreeing and another 12.84% disagreeing that written tests are suitable for assessing ECDE learners' spiritual development.
Physical growth and development, an essential aspect of the holistic development of ECDE learners, cannot be gauged by written tests. This was the point of view of the majority (89.76%) of the respondents; they negated the claim that the physical development of ECDE learners can be well assessed through the written tests method of assessment, and only 10.14% of the ECDE teachers agreed that written tests could still be used to assess the physical growth of ECDE learners. Whereas only 42.57% of the ECDE teachers who said written tests could be used to measure physical development agreed that it is an effective method, 56.76% of them said that even if the written tests method is used, it is not an effective method to assess the physical development of ECDE learners. Qualitative findings from the interviews, however, revealed that some respondents reported that the written tests method of assessment was very effective in assessing the physical development of ECDE learners. For example, two respondents reported that, "Written test method of assessment enables learners to develop their small locomotion muscles" (DICECE Officer, A). "Written test method of assessment helps learners develop their finger muscles and also develop eye finger coordination" (DICECE Officer, B). This means that these respondents felt that the physical development of ECDE learners can effectively be assessed by the written tests method of assessment.
Lastly, another component of education that forms an integral part of the holistic growth and development of the learner is the moral aspect. Here again, the majority (68.11%; disagree: 40.54%, strongly disagree: 17.57%) of the respondents held the view that the written tests method of assessment is not very effective for assessing the level of moral development; only 16.89% of the teachers who participated in this study agreed that the assessment of the moral development of ECDE learners can be done through written tests.
Discussion 5.
The findings were that most teachers who believed in the written form of assessment insisted that the intellectual development of ECDE learners is effectively assessed by the written tests method of assessment. This was affirmed by the qualitative findings, where most of the respondents reiterated the effectiveness of the written tests method of assessment in assessing the cognitive development of ECDE learners. However, more than a third of the ECDE teachers remained firm that written examinations are not the most appropriate form of assessing ECDE learners' intellectual development. This is contrary to Wangechi (2014) in Kenya, whose study revealed that written tests put more emphasis on intellectual development and academic preparation for later schooling, and that domains such as the spiritual and emotional domains have been ignored by written tests. The implication of this finding is that teachers and parents should be sensitized not to insist on the written tests method of assessment; they should be aware of the fact that they are only assessing one domain of development, the cognitive domain. This is contrary to the philosophy of ECDE, which embraces nurturing more than direct instruction.
The findings were that those who believed that the written tests method of assessment does not effectively assess the emotional development of ECDE learners amounted to three-quarters of the teachers. The qualitative findings from the interviews also revealed that most of the respondents reported that the written tests method of assessment cannot assess the emotional and spiritual development of ECDE learners. This finding is supported by Moss (2012) in England, whose study indicated that only older students respond positively or negatively to written tests and hence their emotions could be assessed, whereas younger learners take every situation as it comes and therefore their emotions will not be assessed by written tests. This study also confirms that the written tests method of assessment does not effectively assess the social development of ECDE learners, as was observed by more than four-fifths of the respondents. This finding is in agreement with Hayat (2011) in Pakistan, whose study indicated that written tests are only credible when assessing the intellectual development of learners but cannot in any way assess the social and moral development of the student. Most participants also reported that spiritual development is not well assessed by written tests.
The majority of the respondents negated the claim that the physical development of ECDE learners can be well assessed through the written tests method of assessment, although the qualitative findings suggested that some respondents felt that physical development can effectively be assessed by written tests. This is contrary to Jagero (2013) in Kenya, whose study indicated that written tests assess the major domains of development in a learner (language, cognitive, social, physical, moral and spiritual development). Finally, the majority of the respondents held the view that the written tests method of assessment is not very effective for assessing the level of moral development. This finding is in agreement with Hayat (2011) in Pakistan, whose study indicated that written tests are only credible when assessing the intellectual development of learners but cannot in any way assess the social and moral development of the student. However, Dozva's (2009) study reiterates that there is a positive link between early childhood learning and the future holistic development of a child, which however has not been clearly understood.
Concluding Remarks 6.
The study investigated the use of written tests to assess the holistic development of lower primary school learners in Kenya. Most of the respondents reiterated the effectiveness of the written tests method of assessment in assessing the cognitive development of ECDE learners. However, more than a third of the ECDE teachers remained firm that written examinations are not the most appropriate form of assessing ECDE learners' intellectual development. Those who believed that the written tests method of assessment does not effectively assess the emotional, social, moral and physical development of ECDE learners amounted to three-quarters of the teachers.
From the findings of the study, the study recommends that the Kenya Institute of Curriculum Development should come up with clear policies on the assessment of ECDE learners so that holistic development is guaranteed during the assessment process. Moreover, the Kenyan Ministry of Education should come up with specific methods of assessment that are appropriate for the holistic development of ECDE learners and hence review the curriculum on the same.
in administration of the boards. The study also indicated that written tests are only credible when assessing the intellectual development of learners but cannot in any way assess the social and moral development of the student.
Table 1:
Percentage frequency response on written tests method of assessment | 2018-11-06T17:04:55.965Z | 2015-05-03T00:00:00.000 | {
"year": 2015,
"sha1": "51134c690847606caea96e8c4432bf67da369521",
"oa_license": "CCBY",
"oa_url": "https://www.mcser.org/journal/index.php/mjss/article/download/6418/6152",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "51134c690847606caea96e8c4432bf67da369521",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
31728500 | pes2o/s2orc | v3-fos-license | The legacy of the Alaska Siberia Medical Research Program: a historical perspective.
Background. The Alaska Siberia Medical Research Program was established at the University of Alaska (UA) at a time when there was no research funded by the National Institutes of Health (NIH) that was concerned with Alaska Native health issues. The program grew out of a dire need for an understanding of the apparently rapidly growing health problems in the Native community. The initial plan included the following objectives. Objectives. The objectives are to develop a self-sustaining infrastructure for biomedical research by gaining support from Alaska Natives, UA, national political leaders, NIH and the Russian Academy of Medical Science (RAMS); to identify researchers committed to helping Alaska Natives; to develop meaningful, Native-driven participatory research; to carry out necessary research to form the foundation for future research; and to develop circumpolar collaborations. Results. The objectives were achieved because of the extraordinary and cheerful contributions by all participants in the program. The collaborative research resulted in some 70 published manuscripts identifying and characterizing research-neglected health problems. Unique risk factors for diabetes, cardiovascular disease, alcoholism and seasonal affective disorders were characterized and institutionalized prevention programs were established. The effort of the program led to U.S. Congressional action establishing the University of Alaska as a minority institution, leading to the funding of a variety of successful NIH-funded research centres and programs at the university that are concerned with Native health problems. Conclusion. A small, visionary investment by the University of Alaska for establishing the program led to a co-operative effort by the UA, RAMS, Alaska Native Health communities and the NIH that resulted in the development of self-sustaining medical research efforts in Alaska and Siberia. The program spawned pilot studies, leading to NIH-funded research that has provided fundamental insights into the etiology of health problems and their reduction by research-based intervention and prevention programs.
INTRODUCTION
This historical perspective was written to elucidate what was required to develop an urgently needed medical research program in Alaska and what was accomplished by it. The insights gained from the development of the Alaska Siberia Medical Research Program (ASMRP) illustrate how many individuals and organizations worked together to make an idea into a reality. The ASMRP was established at the University of Alaska (UA) out of a dire need for understanding the rapidly growing health problems among Alaska Natives at a time when there was no research funded by the National Institutes of Health (NIH) that was concerned with Alaska Natives.
When the President of the UA asked me to take over the Alaska Siberia Medical Research Program (ASMRP) in 1988, after Dr. Ted Mala left the program (1,2), the instructions were simple: "Determine if it is feasible to establish a meaningful research collaboration with the Russian Academy of Medical Science." Since the president promised seed money for pilot studies, I accepted the challenge because I realized that it might lead to research on Alaska Native health problems. This need led to an initial plan that included the following objectives: (1) To develop a self-sustaining infrastructure for biomedical research by gaining support from Alaska Natives, UA, national political leaders, NIH and the Russian Academy of Medical Sciences (RAMS).
(2) To identify and recruit researchers committed to helping Alaska Natives.
(3) To develop meaningful, Native-driven participatory research. (4) To carry out necessary research to establish the foundation for future research. (5) To develop circumpolar collaborations.
The collaboration began with a visit to Novosibirsk, Russia, in 1988, where I had a chance to select distinguished potential research collaborators in the Russian Academy of Medical Science with the help of my counterpart, Academician Valery Trufakin, Vice-President of the Academy. Returning to Alaska, researchers from the UA, Centers for Disease Control and the Alaska Native Medical Center were persuaded to explore the possibility of collaboration. The focus in Alaska related to helping Alaska Natives with their research-neglected health problems. In Russia, the focus was on Siberian Native people. The UA funded the exploratory project with $50,000. This was sufficient to establish the ASMRP at the University of Alaska, to allow visits back and forth to Siberia, to establish 8 research teams (1) and to start research that eventually led to some 70 scientific publications. Over the years, some collaborations led to self-sustained NIH-funded research, while others failed when they were unable to obtain research grants. In the end, the collaboration with Russia ceased because of the lack of funding for travel related to the collaboration, but the University of Alaska projects blossomed with the support of the National Institutes of Health.
The initiative also led to a search for federal funding for a medical research center at the University. This involved U.S. Senator Ted Stevens, who made it a mission to find the right mechanism. After a year, he involved Senator Daniel Inouye, and they suggested that I organize a brainstorming session in Hawaii with both of their staffs to solve the dilemma. The result of that session was an ingenious solution: designation by the U.S. Congress of the University of Alaska as a "minority institution." This gave access to many National Institutes of Health programs, which resulted in federal funding opportunities for the creation of medical research centers and programs related to minority health at the University. This included the Center for Alaska Native Health Research and the Basic Neuroscience Program at the University of Alaska Fairbanks. The apparent rapid increase in diabetes mellitus (DM) and cardiovascular disease (CVD) among Alaska Natives also led to a priority effort in the Norton Sound region.
Diabetes (DM)
The idea to study diabetes came from a casual comment by Academician Yuri Nikitin on my first visit to Russia, to the effect that diabetes was extremely rare among Inuit in Siberia. Knowing that rates of DM and CVD among Alaska Natives were increasing rapidly, a need and an opportunity for research collaboration became clear. With the help of Cynthia (Cindy) Schraer, a grant application to the National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK) was funded to determine the prevalence of DM and identify associated risk factors among Siberian Yup'ik Inuit. We focused on this ethnic group because of our plans to study related groups in Siberia with Academician Nikitin. This was followed by another NIH grant to study the prevention of DM (50-67). That study is referred to as the Alaska Siberia Project (ASP). Based on our research findings, an institutionalized diabetes prevention program was established in Nome by Michael Swenson, one of our collaborators in the ASMRP (67). Early on, we verified general perceptions that CVD and DM have increased rapidly since the 1960s, when the prevalence of DM was less than 0.2% and heart disease less than 2% (85,86). Our comparable screenings in the Norton Sound region in 1992 and 1994 showed a prevalence of 8% and 15%, respectively (56,65). In one ethnic group, 44% of women ≥55 years of age had abnormal glucose tolerance (DM, 19% + IGT, 25%). One early finding was that this high prevalence is partially related to a dietary shift from healthy traditional fats (omega-3 fatty acids (FAs) and monounsaturated FAs) to a high consumption of saturated FAs found in store-bought foods (59). High consumption of palmitate, a saturated FA that is found in high concentrations in shortening, butter, bacon and other farm animal fat, is strongly associated with insulin resistance, impaired glucose tolerance and pre-diabetes, suggesting a probable role in the development of DM (59,67) and CVD (78).
A successful 4-year diabetes prevention study, in which reduced consumption of sugar and palmitate-containing foods was stressed, appeared to confirm the original hypothesis about this fat (59). In that study, out of 44 subjects who began the study with impaired glucose tolerance, only one person developed DM in 4 years, as compared to the expected 40-50%. The prospective study by Vessby et al. (87) also showed that palmitate and myristic acids are associated with the development of diabetes.
Coronary heart disease
Recognizing the growing burden of CVD in the Inuit population, I reached out to David Robbins at the MedStar Institute in Washington, DC, and with the help of Jean MacCluer and Barbara Howard, the grant application to the National Heart Lung and Blood Institute for the Genetics of Coronary Artery Disease in Alaska Natives Study (GOCADAN; 70-84) was developed on the basis of the ASP results. We are now following some 1,900 Inuit in a very detailed, Framingham-type study with systematic and periodic screenings in villages in the Norton Sound region. In the first completed screening of 7 villages, we had a participation rate of 82.6% of those older than 17 years of age (71). The study includes detailed blood chemistry, genetic studies and interviews on nutrition, health history and family relationships, in addition to ECGs and ultrasounds of the carotid arteries (70,78,80). Together, ASP and GOCADAN are providing new insights into the sometimes unique risk factors for DM and CVD in this population.
The first systematic population-based study of CVD in Alaskan Inuit in ASP revealed a prevalence of 15% in the age group 45-74 (63,65). These results are similar to those recently obtained in the GOCADAN study (70). This high prevalence reflects the high mortality rate of Alaska Natives from coronary heart disease (CHD), which is 40% greater than that of U.S. whites aged 45-54 (4). The results are significantly different from the <2% reported in the 1960s (86) and those reported in Greenland in 1980, which showed that "coronary atherosclerosis is almost unknown among Greenlandic Eskimos when living in their own cultural environment" (88). That low prevalence was interpreted to result from the high consumption of marine ω-3 FAs (88), although no screening was done. Our studies have shown no such association between marine ω-3 FA consumption and the presence of CHD (63). On the other hand, our studies show that over-consumption of saturated FAs is associated with the presence and extent of carotid plaque (78) and with other CVD risk factors such as glucose intolerance (59,67,82), blood pressure (67,77) and elevated heart rate (83,84). These findings support the prospective studies by Vessby's group in Sweden that recently showed a direct link between serum levels of the saturated myristic and palmitic FAs and cardiovascular mortality (89).
Thus, although the value of ω-3 FA consumption is well known to reduce cardiovascular mortality, it appears not to be related to preventing atherosclerotic plaque as previously thought, but rather to reducing arrhythmia, sudden death (90), blood pressure (66,77) and heart rate (83) and improving plaque stability (91) and glucose tolerance (66,67,77,82,83).
Stroke
Cerebrovascular disease has become a major health problem among Inuit, as the incidence of stroke is now 50% higher among Alaska Natives as compared to U.S. whites (92). In our 1994 screening, we found evidence of previous stroke in 10.8% of the normoglycemic women and in 12.8% of the normoglycemic men, and in 18.8% of the women and 8.3% of the men with abnormal glucose tolerance (65). It is clear from Trimble's (93) study of Yup'ik Inuit that most strokes (79%) are ischemic and related to the high burden of carotid plaque that we discovered in Inuit (78,80). This burden of plaque is uniformly higher than in U.S. white and black populations (78,80). Our studies show that although not associated with ω-3 FAs, the presence and extent of plaque are associated with smoking and consumption of the saturated FAs palmitate and stearic acid (76,78).
Participatory research
The principles of participatory research were followed from the beginning and, I believe, resulted in the exceptionally high participation rate (83% in 7 villages) and goodwill among participants and researchers. The kindness expressed by the Natives has been exceptional; I have been fortunate in being able to make some 7,000 home visits over the last 20 years to explain the research and the results. Hundreds of individuals with abnormal screening values were referred to health care providers. Over the years, I have not had a single negative interaction with the Inuit. In fact, the unadulterated enthusiasm of the Native communities for our research has been a steady reinforcement for our approach. In the intervention study (67), ladies would run out into the street from their homes shouting: "Dr. Ebbesson, I feel so good, I have lost 15 pounds (for example) and I am doing everything you say." One claimed that she lost 72 pounds in 4 years. The "guide for prevention of DM and CVD" based on our research results, which was given to all participants, has become very popular (Appendix 1). It reflects our mission to conduct participatory research with a goal to help the communities with disease prevention.
Conclusion
A small, visionary investment by the University of Alaska for establishing the program has, in 20 years, led to meeting the original objectives. Considerable progress has been made in elucidating the rapidly growing health problems among Alaska Natives by a diverse cadre of investigators recruited for the effort. A co-operative effort by the UA, RAMS, Alaska Native Health communities, the NIH and collaborators in Canada, Denmark and Sweden has resulted in the development of self-sustaining medical research efforts in Alaska and Siberia. The objectives of the program were achieved because of the extraordinary and cheerful contributions by all participants in the program. The collaborative research resulted in some 70 published manuscripts identifying and characterizing research-neglected health problems including alcoholism, seasonal affective disorders, diabetes and cardiovascular disease. These studies form the basis for ongoing participatory research on many fronts. | 2018-04-03T02:00:07.310Z | 2011-02-18T00:00:00.000 | {
"year": 2011,
"sha1": "5456664b25d7c2619d7851bd47c9cf68f93660ad",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.3402/ijch.v70i5.17853?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "0d24a68ef1646310823d1b84c2a6cd5b2e04bf9a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259914989 | pes2o/s2orc | v3-fos-license | Chemometrics-assisted UV-spectroscopy for simultaneous determination of curcumin and piperine in solid dispersion-based microparticles containing Curcuma longa and Piper nigrum extracts
Piperine and curcumin can be combined in a mixture, as piperine has been known as a bioenhancer. The piperine-curcumin combination was formulated in a solid dispersion-based microparticle containing Piper nigrum and Curcuma longa extracts. The aim of the study was to simultaneously determine piperine and curcumin concentrations in a combined dosage form of solid dispersion-based microparticles using UV-Vis spectrophotometry. UV-Vis spectrophotometry was combined with a partial least squares (PLS) approach with a central composite design (CCD) to develop a calibration series consisting of 36 standard mixtures of piperine and curcumin at concentrations ranging from 0 to 6 µg/mL. The model of the calibration series was validated for the coefficient of determination (R2), the root mean square error of prediction/cross-validation (RMSEP/RMSECV) and the predicted residual sum of squares (PRESS). Accuracy and precision were determined as per ICH guidelines. The PLS model was successfully validated and applied for resolving overlaid spectra of piperine and curcumin at 206-408 nm. Accuracy and precision studies of prepared samples containing a mixture of piperine and curcumin at low, medium, and high concentrations, conducted on different days, met the AOAC International requirements. The limit of detection (LOD) was determined using a pseudo-univariate model, and the limit was found to be 0.25 µg/mL and 0.33 µg/mL for piperine and curcumin, respectively. The proposed method is suitable for simultaneously determining piperine and curcumin that appear in a mixture of P. nigrum and C. longa extracts in the solid dispersion-based microparticle samples.
INTRODUCTION
Curcumin is a polyphenolic compound found in the Curcuma longa and Curcuma xanthorrhiza plants. It is a fundamental component of JAMU, an Indonesian traditional medicine believed to treat and prevent various maladies such as liver disease, digestive problems, and dysmenorrhea [1]. Curcumin/curcuminoids have demonstrated antioxidant, anti-inflammatory, and anticancer properties in preclinical and clinical research. Although curcuminoids have been shown to have a wide range of therapeutic properties with various biological targets and interactions, the clinical application of curcuminoids in formal therapy is limited by their poor bioavailability after oral administration. The bioavailability problem is caused by low water solubility, poor dissolution, absorption, and extensive metabolism once absorbed [2].
Piperine is a high lipophilic, weakly basic alkaloid component found in black pepper (Piper nigrum) extract that has been acknowledged as a bio-enhancer. It improves drug absorption, bioavailability, and bioefficacy by stimulating gastrointestinal amino acid transporters and blocking drug-metabolizing enzymes [3]. Given the bioavailability-enhancing mechanism, combining piperine with curcumin in a formulation can combat curcumin's low bioavailability [4]. Because both compounds have poor water solubility, the solid dispersions approach is the preferred strategy for increasing solubility and dissolution. Solid dispersion-based microparticles containing C. longa and P. nigrum were developed in these studies, addressing product quality control. In the manufacturing stage, determining content in the final product is a part of quality assurance. Therefore, the amount of piperine and curcumin in the solid dispersion-based microparticle must be determined accurately. This research looked into an analytical approach that could quickly estimate piperine and curcumin concentrations in their combination in the formulation during regular laboratory analysis.
There is no official method in any pharmacopeia for simultaneously estimating piperine and curcumin present as a mixture in a dosage form [5]. Literature studies reveal that several analytical methods have been developed to simultaneously quantify piperine and curcumin in dosage forms, polyherbal formulations, and plasma samples. These include reverse-phase high-performance liquid chromatography (HPLC) equipped with a UV-Vis detector [6], a liquid chromatography/mass spectrometry (LC/MS) method enabling simultaneous quantification of curcumin and piperine for pharmacokinetic evaluation [7,8], and high-performance thin layer chromatography (HPTLC) for simultaneous detection of piperine, curcumin and boswellic acid in a polyherbal transdermal patch [9]. While chromatographic methods have been shown to be selective for quantifying curcumin and piperine concentrations, the published methods require time-consuming sample extraction procedures and a substantial volume of organic solvents, making them less cost-effective and posing environmental risks due to the solvent waste.
The spectrophotometric method is one of the most preferred approaches for pharmaceutical analysis because of its simplicity and low cost compared with other analytical methods, and its convenience and usefulness in most quality control studies of drugs. A spectroscopic method combined with the application of Vierordt's equation was reported for determining piperine and curcumin concentrations simultaneously in binary mixture samples. The method was validated for simultaneous quantification of piperine and curcumin in dissolution and nanoparticle formulation samples, following spectral measurement of the samples at the maximum wavelengths of piperine and curcumin [5,10]. Despite the advantages of the spectroscopic method over the chromatographic method in terms of simplicity, and the possibility of using Vierordt's equation for simultaneous measurement in multicomponent formulations, its widespread use poses challenges in the quality control phase of the manufacturing process. Recently, the application of multivariate models in a chemometric approach combined with spectrophotometry has become the method of choice in multicomponent analysis.
Partial least squares (PLS) is one of the multivariate models acknowledged to be useful in many quantitative assays of pharmaceutical formulations, among the numerous chemometric techniques applied to multicomponent analysis [11]. PLS is generally used to set up a multivariate model based on two data sets (of the same objects): the chemical values and the spectra. PLS regression aims to establish a model that allows the analysis of an unknown sample. PLS has been examined as a chemometric technique for resolving overlapping spectra in multicomponent analysis obtained by UV-Vis spectrophotometry. Furthermore, when combined with chemometric data, the PLS methodology allows quantification in a multicomponent mixture with findings that are in agreement with HPLC results, with an accuracy of 98-103 percent [12].
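To make the PLS idea concrete, the following is a minimal sketch of a two-component PLS calibration on synthetic spectra. The Gaussian band shapes, noise level, and the curcumin band position are illustrative assumptions, not the measured spectra from this study; only the piperine maximum at 343 nm, the 206-482 nm window, and the 0-6 µg/mL range are taken from the text.

```python
# Minimal PLS calibration sketch on synthetic two-component spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
wavelengths = np.arange(206, 483)            # nm, region used in the study

def band(center, width):
    # Hypothetical Gaussian absorption band as a stand-in for a real spectrum
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

piperine_spec = band(343, 25)                # piperine lambda_max ~343 nm (from text)
curcumin_spec = band(425, 40)                # curcumin band position is illustrative

# 36 calibration mixtures, 0-6 ug/mL each, Beer-Lambert mixing plus noise
C = rng.uniform(0, 6, size=(36, 2))          # columns: [piperine, curcumin]
X = C @ np.vstack([piperine_spec, curcumin_spec])
X += rng.normal(0, 0.002, X.shape)           # instrumental noise (assumed level)

pls = PLSRegression(n_components=7)          # NComp as reported for piperine
pls.fit(X, C)
print(np.round(pls.predict(X[:3]), 2))       # compare with np.round(C[:3], 2)
```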
To the best of our knowledge, there are no publications on the simultaneous determination of piperine and curcumin in a formulation based on UV-Vis spectroscopy and multivariate calibration methods. This work is the first study of the simultaneous analysis of piperine and curcumin in combined pharmaceuticals using chemometrics-assisted spectrophotometric methods based on multivariate calibration techniques, mainly PLS. The study's objective was to develop a UV-Vis spectroscopic method with a PLS approach for simultaneously determining piperine and curcumin concentrations in a combined dosage form of solid dispersion-based microparticles.

RESULTS AND DISCUSSION

Figure 1 indicates the absorption spectra of piperine and curcumin in methanol as individual reference standard compounds of piperine (3 µg/mL) or curcumin (3 µg/mL), their mixture (3/3 µg/mL) in methanolic solution, and the synthetical solid dispersion-based microparticle containing P. nigrum and C. longa dissolved in methanol. In the measurement range of 200-600 nm, the spectra of piperine (Figure 1c) overlap with the curcumin spectra (Figure 1a) at 300-400 nm. Given that the maximum absorption of piperine in this study was found at 343 nm (Figure 1c), the determination of piperine in the co-existence of curcumin (Figure 1b, d) using the conventional spectroscopic method leads to significant analytical error. Therefore, combining spectrophotometry with chemometric techniques was necessary for such determination, due to the significant interference of the piperine and curcumin spectra. Note: a = curcumin 3 µg/mL; b = synthetical sample of SD formulation containing P. nigrum/C. longa; c = piperine 3 µg/mL; d = mixture of piperine/curcumin standard (3/3 µg/mL)
Chemometric approach: PLS-assisted spectrophotometry
PLS was used in a chemometrics-assisted spectrophotometry approach to resolve the strongly overlapping absorption spectra of piperine and curcumin for the simultaneous determination of both compounds in a mixed dosage form. Among other chemometric models, such as principal component regression (PCR) and principal component analysis (PCA), PLS has been regarded as a powerful tool for resolving the interference between multiple overlapping spectra of many compounds, which is needed for the determination of multiple components in a mixture [12]. In addition, Palur, Archakam, and Koganti (2020) found that PLS-assisted spectrophotometry had high method selectivity for the simultaneous determination of paracetamol, diphenhydramine, caffeine, and phenylephrine concentrations in a tablet dosage form, with data comparable to the HPLC approach [13].
To create a PLS model of the calibration, this study used 36 samples of piperine and curcumin at various concentrations, including the blank sample. The calibration samples' UV-Vis spectra in the 200-600 nm range were pre-treated by deleting the less informative data, and the wavelength region offering the most useful data was chosen to develop the PLS model. The selection of a spectral region has been reported to improve the prediction accuracy (Kambira et al., 2020). The wavelengths below 206 nm were excluded since their contribution to the measurement was considered minor. Furthermore, wavelengths greater than 482 nm were avoided since, while curcumin's absorption there is minimal, any absorbance beyond 482 nm would introduce noise into the calibration, thereby increasing imprecision. The number of principal components (NComp) is critical for PLS regression development because the number of components should account for as much of the experimental data as possible without overfitting [12]. In this study, the number of components was determined using a cross-validation method with a leave-one-sample-out-at-a-time technique [14]. The optimum number of components found in these studies is 7 for piperine and 6 for curcumin (Table 1). The PLS regression model of the piperine and curcumin calibrations was cross-validated using the leave-one-out technique. The actual sample concentrations (measured concentrations) were plotted against the expected concentrations of all calibration samples (Figure 2). This internal validation of the PLS model was done by determining the goodness-of-fit parameters for the simultaneous piperine and curcumin calculation, such as the coefficient of determination (R2), RMSECV, RMSEP, and PRESS. The RMSECV was used as a diagnostic test to examine the errors in the predicted concentrations; it denotes the precision and the accuracy of the predictions [12]. The number of components demonstrating the lowest RMSECV, RMSEP, and PRESS values was selected for building the PLS calibration model [15]. Furthermore, the PLS regression of the calibration model was considered to be good, with the best prediction, if the coefficient of determination R2 is high (greater than 0.91 or close to 1) [16]. Table 1 shows the final RMSECV, RMSEP, PRESS, and R2 values. As shown in Table 1, the R2 values were found to be 0.998 for both piperine and curcumin. Altogether, the validation parameters determined in this study indicate that the PLS model of the piperine and curcumin calibration demonstrates strong predictive capacity.
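As a hedged illustration of the internal-validation metrics described above, the snippet below computes RMSECV, PRESS, and R2 by leave-one-out cross-validation with scikit-learn. The variables X (spectra) and y (known concentrations of one analyte) are placeholders for the calibration data, and XLSTAT's exact implementation may differ in detail.

```python
# Leave-one-out cross-validation metrics for a PLS calibration (sketch).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

def rmsecv_press_r2(X, y, n_components):
    preds = np.empty_like(y, dtype=float)
    for train, test in LeaveOneOut().split(X):
        model = PLSRegression(n_components=n_components)
        model.fit(X[train], y[train])
        preds[test] = model.predict(X[test]).ravel()
    residuals = y - preds
    press = float(np.sum(residuals ** 2))        # predicted residual sum of squares
    rmsecv = float(np.sqrt(press / len(y)))      # root mean square error of CV
    r2 = 1.0 - press / float(np.sum((y - y.mean()) ** 2))
    return rmsecv, press, r2

# Scan NComp and keep the value with the lowest RMSECV/PRESS
# (7 and 6 were found optimal for piperine and curcumin here), e.g.:
# for nc in range(1, 11):
#     print(nc, rmsecv_press_r2(X, y_piperine, nc))
```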
Accuracy and precision
Accuracy and precision analyses were conducted to validate the selected PLS regression model for piperine and curcumin. The accuracy and precision studies were conducted on three concentrations of independent samples containing piperine and curcumin. Concentrations of 0.9, 2, and 5 µg/mL were prepared to represent low, middle, and high concentrations of piperine and curcumin. Table 2 summarizes the accuracy and precision parameters. The recovery/RSD values of piperine were 84.07%-110.90%/0.56%-10.31% for the intra-day assay and 92.99%-107.87%/2.65%-10.26% for the inter-day assay. The intra- and inter-day determination of the recovery/RSD values of curcumin resulted in values of 83.27%-110.85%/0.27%-10.31% (intra-day) and 96.31%-101.20%/1.29%-12.85% (inter-day). A higher RSD value of 12.85% was demonstrated by the sample containing curcumin at 0.9 µg/mL obtained in the inter-day studies. A calculation of the predicted relative standard deviation (PRSDR) using the Horwitz formula [17] demonstrated that the maximum RSDR value at a concentration of 0.9 µg/mL is 16%. For the inter-day analysis, the RSD should be considered against this PRSDR. From these numbers, it can be concluded that the PLS-developed method was accurate and precise and demonstrates excellent reproducibility, as shown by the data of the inter-day assay.
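The Horwitz check quoted above can be reproduced directly. This short sketch assumes the concentration is converted to a dimensionless mass fraction with a density of roughly 1 g/mL, which is an approximation for the methanolic solutions used here.

```python
# Horwitz predicted RSD: PRSD_R(%) = 2^(1 - 0.5*log10(C)),
# with C the analyte concentration as a dimensionless mass fraction.
import math

def horwitz_prsd(conc_ug_per_ml):
    c = conc_ug_per_ml * 1e-6       # ug/mL ~ g/mL -> mass fraction (density ~1 assumed)
    return 2 ** (1 - 0.5 * math.log10(c))

print(round(horwitz_prsd(0.9), 1))  # ~16.3%, matching the ~16% limit cited above
```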
Model sensitivity
The PLS regression model sensitivity was determined based on the pseudo-univariate method [18]. From the pseudo-univariate line, the limit of detection (LOD) values were found to be 0.25 µg/mL and 0.33 µg/mL for piperine and curcumin, respectively. The low LODs demonstrated in this study indicate the high sensitivity of the PLS regression method developed here, which allows the determination of piperine and curcumin in commercial samples primarily prepared from extract forms.
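As a rough sketch of a pseudo-univariate LOD estimate, one can regress the PLS-predicted concentrations on the nominal ones and apply the usual 3.3·s/slope rule; the exact formulation in ref. [18] may differ, so this is only illustrative.

```python
# Pseudo-univariate LOD estimate from predicted-vs-nominal concentrations (sketch).
import numpy as np

def pseudo_univariate_lod(nominal, predicted):
    slope, intercept = np.polyfit(nominal, predicted, 1)  # pseudo-univariate line
    resid = predicted - (slope * nominal + intercept)
    s0 = resid.std(ddof=2)          # residual standard deviation of the fit
    return 3.3 * s0 / slope         # conventional 3.3*s/slope detection limit
```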
Assay on synthetical and commercial samples
The validated PLS regression model was used to assess piperine and curcumin concentrations in commercial samples and in the solid dispersion-based microparticles containing P. nigrum and C. longa extracts. Table 3 displays the assay results on the solid dispersion-based microparticles containing P. nigrum and C. longa extracts. The high recovery values of 98.42% and 103.09% and the low RSD values of 3.76% and 4.26%, for piperine and curcumin respectively, show that the results match the intended piperine and curcumin concentrations in the synthetical samples.
A further application of the proposed PLS regression model was the measurement of piperine and curcumin concentrations in a commercial sample of a tablet dosage form containing P. nigrum and C. xanthorrhizae. The assay is presented in Table 4 and revealed concentrations of 0.07% w/w and 0.08% w/w of piperine and curcumin in the dosage form, with RSD values of 12.13% and 9.32% for piperine and curcumin, respectively. The relatively higher RSD values of piperine and curcumin found in this sample were thought to be related to the low analyte concentrations, which were around the limit of detection. Referring to the procedure for the commercial sample preparation in the methods section of this manuscript, with each commercial tablet containing 20 mg of C. xanthorrhizae rhizome and 2.5 mg of P. nigri fructus extracts, the samples subjected to analysis in this study could contain maximum concentrations of 0.72 µg/mL and 5.74 µg/mL of P. nigri fructus and C. xanthorrhizae rhizome extracts, respectively. Given that the extracts may contain components other than piperine (P. nigri fructus) and curcumin (C. xanthorrhizae), and considering the LOD values found for the proposed method, the sample at these maximum extract concentrations could contain only meager amounts of piperine or curcumin, around the detection limits.
CONCLUSION
It can be concluded that PLS is a very efficient method for the simultaneous determination of substances with overlapping spectra in mixtures, even when the contributions of the components to the composite spectra are quite disparate. The proposed PLS regression model is suitable for the simultaneous determination of piperine and curcumin in a synthetical solid dispersion-based microparticle containing P. nigrum and C. longa using a simple spectrophotometric method, without any separation step in the sample preparation. The method is considered a selective, sensitive, rapid, and accurate analytical method employing UV-Vis spectroscopy combined with a chemometric approach and is applicable for routine analyses.
Chemicals
USP-grade reference standard compounds of curcumin and piperine were purchased from Sigma-Aldrich (St. Louis, USA). C. longa extract was obtained as a gift from PT Phytochemindo Reksa, Bogor, Indonesia (purity of 97.56% w/w curcumin, analyzed by spectrophotometry). P. nigrum extract (purity of 98.97% w/w of piperine, as determined by HPLC) was isolated using the reported method [19]. PVP K30 was a gift from PT Konimex, Solo, Indonesia. Pro-analytical grades of methanol and ethanol were obtained from Merck (Darmstadt, Germany). De-ionized water was prepared using a Milli-Q IQ water purification system. Synthetical samples of solid dispersion-based microparticles containing C. longa rhizome and P. nigrum fructus extracts were prepared using a solid dispersion technique, a solvent evaporation method, in a Büchi mini spray-dryer (Büchi, Flawil, Switzerland), as described previously [6]. Commercial tablets containing curcumin and piperine from C. xanthorrhiza and P. nigrum extracts were obtained through a local pharmacy in Yogyakarta, Indonesia (batch number of 20H0152, Expiry date of . Each tablet contains 20 mg of C. xanthorrhizae rhizome and 2.5 mg of Piperis nigri fructus extracts. The weighted average of 20 tablets was determined in our laboratory and was found to be 417.75 ± 6.41 mg.
Preparation of stock solutions and calibration graph
Individual stock solutions of the curcumin and piperine reference standards were prepared in methanol at 1 mg/mL concentrations under sonication for 15 minutes in an ultrasonic bath (Fisher FS140H). The stock solutions were stored at -20°C for a maximum of 3 weeks.
Preparation of calibration series
In order to obtain a suitable calibration set, a systematic experimental design was used. For designing the multilevel concentrations in the calibration series, this study employed a central composite design (CCD) combined with some replicated samples at specific concentrations, as presented in the runs of Table 5. R Studio software (version 3.5.1) was used to generate the CCD for the runs of the calibration series. The calibration samples were prepared by spiking the blank with the stock solutions of curcumin and piperine to achieve the compositions of curcumin and piperine in the mixture solutions described in Table 5. The solid dispersion carrier's methanolic solution, a PVP K30 (70% w/w) solution, was used as the blank sample. All calibration samples, which were filtered through a 0.45 µm filter, were scanned in the wavelength range of 200-600 nm with a reading interval of 1 nm (UV-VIS 1800, Shimadzu, Japan).
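For readers without access to R, the following sketch builds a comparable two-factor central composite design with NumPy and maps the coded levels onto the 0-6 µg/mL working range. The axial distance and number of centre points are illustrative choices, not the settings used in this study.

```python
# Minimal two-factor central composite design (factorial + axial + centre points).
import numpy as np

def ccd_two_factors(alpha=np.sqrt(2), n_center=4):
    factorial = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]])
    axial = np.array([[-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha]])
    center = np.zeros((n_center, 2))
    return np.vstack([factorial, axial, center])

# Map coded levels (-alpha..+alpha) onto the 0-6 ug/mL working range
design = ccd_two_factors()
lo, hi = 0.0, 6.0
conc = lo + (design - design.min()) / (design.max() - design.min()) * (hi - lo)
print(np.round(conc, 2))   # columns: piperine, curcumin concentrations
```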
Generating Partial Least Square (PLS) model on calibration data
A calibration model based on PLS was developed for the simultaneous quantification of curcumin and piperine in solid dispersion-based microparticles and commercial samples. The calibration series, including blank samples, was used to construct the PLS model using the XLSTAT add-in for Excel (version 2020.4.1.1027). A backward elimination step was conducted to choose the latent variables of the wavelengths in the range of 200-600 nm. Replicated samples at certain concentrations, shown in Table 5, were used as an internal validation set for the PLS-generated model. Leave-one-out cross-validation (LOOCV) based on the jack-knife method was conducted to select the suitable PLS model. R2, the root mean square error of cross-validation (RMSECV), and the root mean square error of prediction (RMSEP) are the parameters used to assess the calibration and were calculated in XLSTAT. The lowest number of components and the wavelengths resulting in a coefficient of determination (R2) above 0.91, smaller RMSECV/RMSEP, and the lowest predicted residual sum of squares (PRESS) were selected [16].
The selected PLS model obtained by the internal validation method was subjected to an external validation step using accuracy and precision tests. The samples for the accuracy and precision tests were prepared independently from the calibration series. The samples were prepared as mixtures of piperine and curcumin at three concentration levels by spiking the blank sample to obtain piperine and curcumin concentrations of 0.9-0.9, 2-2, and 5-5 µg/mL. These three levels represented low, middle, and high sample concentrations. The accuracy and precision studies were conducted on three consecutive days. The recovery and relative standard deviation (RSD) values obtained from the three replications at every concentration level were judged according to the Guidelines for Standard Method Performance Requirements of AOAC International [17].
Commercial sample
Twenty commercial tablets were weighed individually and ground into a fine powder. A 100.0 mg portion of the powder was accurately weighed and dissolved in 100 mL of methanol. A volume of 0.6 mL was transferred into a 5 mL volumetric flask and diluted to volume with methanol. The samples were filtered through a 0.45 µm filter before the spectral measurement over 200-600 nm using a spectrophotometer at a 1 nm reading interval.
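The maximum extract concentrations quoted in the results (0.72 and 5.74 µg/mL) follow directly from this preparation scheme and can be back-calculated as a consistency check; the numbers below come only from values stated in the text.

```python
# Back-calculating the maximum possible extract concentrations in the
# prepared commercial sample, using the numbers stated in the text.
avg_tablet_mg = 417.75          # mean tablet weight
powder_mg = 100.0               # powder taken, dissolved in 100 mL methanol
aliquot_ml, final_ml = 0.6, 5.0 # 0.6 mL diluted to 5 mL

for name, mg_per_tablet in [("P. nigri extract", 2.5),
                            ("C. xanthorrhizae extract", 20.0)]:
    mg_in_powder = powder_mg * mg_per_tablet / avg_tablet_mg
    stock_ug_ml = mg_in_powder * 1000 / 100.0        # ug/mL in the 100 mL stock
    final_ug_ml = stock_ug_ml * aliquot_ml / final_ml
    print(f"{name}: {final_ug_ml:.2f} ug/mL")        # ~0.72 and ~5.7 ug/mL
```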
Concentrations determination
The sample spectral data were analyzed with the validated PLS calibration model to predict the unknown curcumin and piperine concentrations in the synthetical and commercial SD samples. | 2023-07-16T15:17:20.904Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "706b9b2742d2151636d8818215a33445c2a6312b",
"oa_license": null,
"oa_url": "https://jrespharm.com/pdf.php?id=1305",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "ee3cc247025f0dd6320a8bae1c69615800e97abc",
"s2fieldsofstudy": [
"Chemistry",
"Medicine"
],
"extfieldsofstudy": []
} |
55662888 | pes2o/s2orc | v3-fos-license | Parity doubling of nucleons, Delta and Omega baryons across the deconfinement phase transition
In this work we analyse positive- and negative-parity channels for the nucleon (spin $1/2$ octet), $\Delta$ and $\Omega$ baryons (spin $3/2$ decuplet) using lattice QCD. In Nature, at zero temperature, chiral symmetry is spontaneously broken, causing positive- and negative-parity ground states to have different masses. However, chiral symmetry is expected to be restored (for massless quarks) around the crossover temperature, implying that the two opposite parity channels should become degenerate. Here we study what happens in a temperature range which includes both the hadronic and the quark gluon plasma (QGP) phase. By analysing the correlation and spectral functions via exponential fits and the Maximum Entropy Method respectively, we have found parity doubling for the nucleon and $\Delta$ baryon channels in the QGP phase. For the $\Omega$ baryon we see a clear signal of parity doubling at the crossover temperature, which is however not complete, due to the nonzero strange quark mass. Moreover, in-medium effects in the hadronic phase are evident for all three baryons, in particular for the negative-parity ground states. This might have implications for the hadron resonance gas model. In this work we used the FASTSUM anisotropic $N_f = 2 + 1$ ensembles.
Introduction
In Nature, at zero temperature, a considerable mass difference between the negative-parity ground state of the baryons and the positive-parity one is understood from chiral symmetry breaking. In the case of the nucleon and ∆ baryon, this mass difference is far too big to be explained by the small explicit breaking of chiral symmetry due to the light u and d quarks. In fact it is well-known that the mass difference between the opposite-parity ground states is mainly a consequence of the spontaneous breaking of chiral symmetry. Since in the case of massless quarks chiral symmetry is expected to be restored above the deconfinement temperature, one would expect to see parity doubling in the QGP phase. On the other hand, chiral symmetry restoration is not fully realised for the Ω baryon because of the relatively large mass of the strange quark. Therefore it would be very interesting to investigate what happens to the opposite parity channels of this particle at high temperatures. While there are many works on chiral symmetry at finite temperature in the mesonic sector (see e.g. [1]), surprisingly only a few quenched analyses are available in the baryonic sector [2][3][4]. Our aim here is to analyse parity doubling in the unquenched baryonic sector, in particular for the nucleon, ∆ and Ω baryons. We study both correlators and spectral functions below and above the crossover temperature T c . Our previous analyses for the nucleon sector can be found in [5,6] and, more recently, [7] for the ∆ baryon.
Baryonic correlators and spectral functions
In general a baryonic correlator is written as (see for instance [8,9])

$C_{\alpha\alpha'}(\tau, \mathbf{x}) = \langle\, O_\alpha(\tau, \mathbf{x})\, \overline{O}_{\alpha'}(0, \mathbf{0})\, \rangle, \qquad (1)$

with an implicit sum over the spin index $\alpha$. The simplest annihilation operators for the nucleon, ∆ and Ω baryons are respectively

$O_N = \epsilon_{abc}\, u_a \left(u_b^T C\gamma_5\, d_c\right), \qquad O_\Delta^i = \epsilon_{abc}\, u_a \left(u_b^T C\gamma^i u_c\right), \qquad O_\Omega^i = \epsilon_{abc}\, s_a \left(s_b^T C\gamma^i s_c\right)$ [10,11],

where the Lorentz index $i$ is not summed and $C$ corresponds to the charge conjugation matrix. We then project to a definite parity state by taking into account the interpolator $O_{N\pm} = P_\pm O_N$ (analogously for the ∆ and Ω baryons) in (1), where

$P_\pm = \tfrac{1}{2}\left(1 \pm \gamma_4\right)$

projects to positive or negative parity. We consider solely zero three-momentum correlators

$C_\pm(\tau) = \int d^3x\; \langle\, O_\pm(\tau, \mathbf{x})\, \overline{O}_\pm(0, \mathbf{0})\, \rangle.$

Each correlator contains both parity channels since $C_-(\tau) = -C_+(1/T - \tau)$. This means that the positive-parity channel propagates forwards in time, whereas the negative-parity one propagates backwards in time. For massless quarks one can prove [9,12] that a chiral rotation on the quark fields gives $C_\pm(\tau) = -C_\mp(\tau)$, implying that the two parity channels are degenerate. Using the Maximum Entropy Method (MEM) [13], we reconstruct the baryonic spectral functions $\rho(\omega)$, which are related to the baryonic correlators through the spectral relation [12]

$C(\tau) = \int_{-\infty}^{\infty} \frac{d\omega}{2\pi}\, \frac{e^{-\omega\tau}}{1 + e^{-\omega/T}}\, \rho(\omega).$
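As a toy numerical check of the parity projection, the snippet below builds P± = (1 ± γ4)/2 with γ4 in the Dirac basis and verifies idempotency, orthogonality, and completeness; this is an illustration only, not part of the production analysis.

```python
# Numerical check of the parity projectors P_pm = (1 +/- gamma_4)/2,
# with gamma_4 = diag(1, 1, -1, -1) in the Dirac basis.
import numpy as np

gamma4 = np.diag([1, 1, -1, -1]).astype(complex)
P_plus = 0.5 * (np.eye(4) + gamma4)
P_minus = 0.5 * (np.eye(4) - gamma4)

assert np.allclose(P_plus @ P_plus, P_plus)      # idempotent
assert np.allclose(P_plus @ P_minus, 0)          # orthogonal
assert np.allclose(P_plus + P_minus, np.eye(4))  # complete
```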
Lattice setup
The configurations used here were created by the FASTSUM collaboration [14-16], with 2+1 flavours of non-perturbatively improved Wilson fermions. The configurations and the correlation functions were generated using the CHROMA software package [11], via the SSE optimizations when possible [17]. Tab. 1 shows the simulation parameters, based on the setup of the Hadron Spectrum Collaboration [18]. The masses of the u and d quarks produce an unphysical pion with a mass of 384(4) MeV [19]. The strange quark has been tuned to its physical value; therefore we expect the mass of the Ω baryon to be close to the physical one. In order to better reconstruct the spectral function from the correlator, we used an anisotropic lattice with $a_s/a_\tau = 3.5$ and $a_s = 0.1227(8)$ fm. This allows us to have a sufficiently large number of points in the Euclidean time direction even at high temperatures. From the calculation of the renormalized Polyakov loop one extracts the crossover temperature $T_c = 183$ MeV, which is higher than in Nature, due to the large pion mass. Concerning the baryonic correlators, Gaussian smearing [20] has been employed to increase the overlap with the ground state. In order to have a positive spectral weight, we apply the smearing on both source and sink, i.e.

$\eta' = A\,(1 + \kappa H)^n\, \eta,$

where $A$ is an appropriate normalization and $H$ is the spatial hopping part of the Dirac operator. We tuned the parameters to the values $n = 60$ and $\kappa = 4.2$, maximising the length of the plateau of the effective mass of the ground state on the $N_s^3 \times N_\tau = 24^3 \times 128$ lattice. The hopping term contains APE smeared links [21] using $\alpha = 1.33$ and one iteration. The smearing procedure is only used in the spatial directions and applied equally to all temperatures and ensembles.
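A one-dimensional toy version of the iterative Gaussian smearing defined above illustrates how the operator (1 + κH)^n spreads a point source; the free-field hopping matrix here is a stand-in for the gauge-covariant hopping term used on the real lattice, and the lattice extent is chosen only for illustration.

```python
# Toy 1D iterative Gaussian smearing: eta -> A (1 + kappa*H)^n eta,
# with H a nearest-neighbour hopping matrix on a periodic lattice.
# n and kappa follow the tuned values quoted above (n=60, kappa=4.2).
import numpy as np

L, n, kappa = 24, 60, 4.2
H = np.zeros((L, L))
for x in range(L):
    H[x, (x + 1) % L] = H[x, (x - 1) % L] = 1.0   # spatial hopping (free links)

eta = np.zeros(L)
eta[0] = 1.0                                      # point source
for _ in range(n):
    eta = eta + kappa * (H @ eta)
eta /= np.linalg.norm(eta)                        # normalisation A
print(np.round(eta[:6], 3))                       # smeared, bell-shaped profile
```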
Results for N, ∆ and Ω baryons
The correlator of the Ω baryon is shown on the left panel of Fig. 1, in which the positive-and negativeparity channels are plotted separately. The correlators for the nucleon and ∆ baryon are shown in [7], and their temperature behaviour is very similar to the one of the Ω baryon correlator. The correlators have been normalised to the first Euclidean time τ = a τ ( τ = N τ a τ − a τ ) for the positive-(negative-) parity partner, i.e. (we write C = C + for ease of notation) Table 2. Ground state masses obtained using exponential fits to the nucleon, ∆ and Ω baryons correlators for temperatures below T c . The masses of the positive and negative parity ground states include an estimate for statistical and systematic uncertainties.The ratios δ N , δ ∆ and δ Ω are defined as δ = (m − − m + )/(m − + m + ) . Note that δ Ω is not accessible because m − Ω is still unknown.
This normalisation allows us to better compare the data at different temperatures. The right panel of Fig. 1 shows approximately symmetric correlators for both the nucleon and ∆ baryon at the highest temperature we consider. This means that the two parity channels are degenerate. Moreover, the N and ∆ correlators are almost identical, suggesting that they represent quasi-free u and d quarks, and do not distinguish the different spin dependence in the two channels. The left panel of Fig. 2 shows the summed ratios of the three baryons, defined as where By definition the R factor lies between 0 and 1, and R = 0 corresponds to a symmetric correlator. We use the statistical uncertainties as weights in eq.(10). On the left of Fig. 2 we see a clear signal of parity doubling around the crossover temperature T c for all the three baryons, with possibly a slightly delayed effect for the Ω particle. The R factor is very close to zero at T = 1.9 T c for the nucleon and ∆ baryon, indicating that their correlation functions become almost symmetric in the QGP phase (as already shown on the right of Fig. 1). On the other hand, the R factor for the Ω baryon remains finite at our highest temperature. This is expected from what is shown on the right plot of Fig. 1, in which the Ω correlator is still asymmetric. This indicates that we do not have a complete parity doubling for the Ω baryon at these temperatures, due to the finite strange quark mass of approxiamately 100 MeV. In Tab. 2 and in the right panel of Fig. 2 we show the ground state masses in the confined phase extracted from a simple exponential fit of the correlation function, that is In order to estimate the systematic uncertainties of the four fit parameters, we have considered various Euclidean time intervals. To further suppress excited states, we have excluded very small times. The so-called Extended Frequentist Method [22,23] has been used for carrying out the statistical analysis. This method considers all possible variations and weights the final results according to the obtained p-value, which measures how extreme an outcome is. Further information on this method can be found in [22,23]. The lattice spacing was set by using the zero-temperature mass of the positive-parity ground state of the Ω baryon [19], therefore, by construction, the value we found at T = 0.24 T c has to be in agreement with the value of 1672.4(0.3) MeV found in Nature [24]. The ground state mass of the negative-parity channel is still unknown in the PDG and there are three possible candidates. The value we obtained in Tab. 2 at T = 0.24 T c seems to favour the candidate with the lowest mass. However, a systematic analysis (continuum extrapolation and physical u and d quarks) is necessary to make a prediction. One can see that in-medium effects are more important in the negative-parity channel for all three CONF12 baryons, since the mass of the negative-parity ground state decreases considerably when temperature is increased, whereas the mass of the positive-parity partner is almost unaffected by temperature. The spectral function of the Ω baryon at different temperatures is plotted in the lower panels of Fig. 3. A similar plot for the other two baryons can be found in [7]. The positive-parity channel corresponds to ω > 0 , whereas ω < 0 refers to the negative-parity channel. The spectral functions of the Ω baryon are not even functions of ω , either below or above T c , indicating that the opposite parity channels are not degenerate. 
One can still see a signal of parity doubling above $T_c$, however, since the spectral function becomes more symmetric with respect to the origin when the temperature increases. In order to have a clear plot of many spectral functions in the same figure, error bars are not displayed; however, they do not modify what has been said above about the results. At the top of Fig. 3 we show a comparison between the spectral functions of the three baryons for the lowest and highest temperatures on our lattice. Below $T_c$ all the spectral functions are very asymmetric, whereas above $T_c$ the spectral functions of the nucleon and ∆ baryon are almost symmetric. Moreover, as a consequence of the almost identical nucleon and ∆-baryon correlators at $T = 1.9\,T_c$, the corresponding spectral functions are nearly identical.
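The R factor of eqs. (10) and (11) is straightforward to evaluate on a stored correlator. The sketch below uses a synthetic two-exponential correlator with non-degenerate toy masses, so a positive R is expected, while degenerate masses ($m_+ = m_-$) would drive R towards zero; the correlator values and errors are placeholders, not FASTSUM data.

```python
# Sketch of the parity-doubling ratio R(tau) (eq. 11) and its weighted
# sum (eq. 10), for a correlator array c[n] = C(tau_n) with tau_n = n*a_tau
# and statistical errors sigma[n].
import numpy as np

def r_factor(c, sigma):
    Nt = len(c)
    n = np.arange(1, Nt // 2)                # 0 < tau < 1/(2T)
    r = (c[n] - c[Nt - n]) / (c[n] + c[Nt - n])
    w = 1.0 / sigma[n] ** 2                  # statistical weights
    return np.sum(w * r) / np.sum(w)

# Synthetic two-exponential correlator with non-degenerate parity masses
Nt = 128
m_plus, m_minus = 0.4, 0.9                   # lattice-unit masses (toy values)
tau = np.arange(Nt)
c = np.exp(-m_plus * tau) + np.exp(-m_minus * (Nt - tau))
print(r_factor(c, np.full(Nt, 1e-3)))        # clearly nonzero: no doubling
```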
Conclusions
By studying the temperature dependence of the correlators, spectral functions and R factor, we clearly observe a signal of parity doubling of the ground state across the crossover temperature for the nucleon and ∆ baryon, with parity doubling realised almost completely at the highest temperature on our lattice. In the Ω-baryon case the opposite-parity ground states still remain distinct, but we observe a tendency towards parity doubling. For the nucleon and ∆ particle, the asymmetry between the parity partners at zero temperature is mainly due to the spontaneous breaking of chiral symmetry; hence the observed parity doubling can be understood from the restoration of chiral symmetry, which is expected to occur at high temperature. We note that there is still a small explicit breaking of chiral symmetry, since we are using massive u and d quarks in the Wilson formulation. On the other hand, the explicit breaking is not negligible in the case of the Ω baryon, which contains s quarks with a physical mass of the order of $T_c$. Therefore the fact that parity doubling is not fully realised for this particle can be understood from this explicit breaking of chiral symmetry due to the massive s quark.
"year": 2016,
"sha1": "1fe374b9044dd28c0c1de10928d38fa5299ebd33",
"oa_license": "CCBY",
"oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2017/06/epjconf_conf2017_07004.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "1fe374b9044dd28c0c1de10928d38fa5299ebd33",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
264988280 | pes2o/s2orc | v3-fos-license | Susceptibility of Vibrio spp. from Viscera Organ and Flesh of Lates calcarifer Against Antibiotics
of Lates calcarifer is widely developed in Indonesia due to its high
Introduction
Cultivation of Seabass (Lates calcarifer) is widely practiced in Indonesia. Cultivation can be carried out in floating net cages or in ponds. Several advantages of Seabass serve as the basis for its high cultivation activity. These advantages include the high economic value of Seabass, with prices ranging from Rp. 75,000 to Rp. 80,000 per kilogram (Santika et al., 2021). Seabass has a wide market coverage both domestically and internationally, including countries like Germany, Spain, England, and Italy (Asdary et al., 2019). Seabass has a broad physiological tolerance range and rapid growth, making the cultivation process easier (Hasibuan et al., 2018).
The biggest challenge in aquaculture is diseases caused by pathogenic bacteria. The presence of pathogenic bacteria can disrupt cultivation activities and even lead to mass mortality in fish (Azhar et al., 2020). This can be detrimental to the aquaculture sector and pose risks to human health in the vicinity. Factors that can potentially contribute to the development of pathogenic bacteria in fish farming include poor water conditions (Lein et al., 2020). Pathogenic bacteria can also originate from contamination of the aquaculture equipment and the use of inappropriate cultivation techniques (Palawe et al., 2018).
Vibrio spp. is one type of pathogenic bacteria that can cause diseases in marine organisms (Azhar and Yudiati, 2023). Vibrio spp. bacteria are commonly found in shallow tropical waters because they thrive in waters with temperatures up to 37°C (Rahmaningsih et al., 2012). The use of antibiotics is one of the measures taken to inhibit the growth rate of Vibrio spp. bacteria (Santi et al., 2017). Antibiotics are widely used in fish farming to control diseases caused by bacteria (Nurhasnawati et al., 2016). However, this has led to cases of antibiotic-resistant bacteria, necessitating resistance testing to determine the level of bacterial resistance to the antibiotics to be used. This research aims to identify the types of antibiotics that can be used to treat diseases in L. calcarifer caused by Vibrio spp. bacteria and to determine the resistance of Vibrio spp. bacteria, isolated from the viscera organs and flesh of L. calcarifer affected by the disease, to antibiotics.
Collection of Vibrio spp. Isolates.
The Vibrio spp. isolates used in this study were collected from the Laboratory of Biology, Faculty of Fisheries and Marine Sciences, Diponegoro University. The L. calcarifer fish were sourced from aquaculture at the Marine Science Technopark, Faculty of Fisheries and Marine Science, Diponegoro University, Jepara, Indonesia.
Preparation of Agar Slant Solid Media
The preparation of slant solid media begins by assembling the necessary equipment and materials. The weighed materials, 1.3 grams of Nutrient Broth (Merck) and 1.5 grams of Agar (Merck), are placed into an Erlenmeyer flask and mixed with 100 ml of distilled water. A magnetic stirring bar is added to the Erlenmeyer flask, and the mixture is homogenized on a hot plate magnetic stirrer. Once the media starts foaming, the Erlenmeyer flask is removed from the hot plate magnetic stirrer and allowed to cool until warm. When the media is sufficiently warm, it is poured into reaction tubes, filling them to a volume of approximately 5 ml. The reaction tubes are then sealed with cotton and aluminum foil and sterilized using an autoclave. After autoclaving, the tubes containing the media are tilted on a stand on a sterile table surface. Care should be taken not to tilt them too much, to prevent spillage onto the cotton covering, and the media is allowed to solidify.
Characterization and purification of Vibrio spp. bacteria
The Vibrio spp. bacteria to be characterized originate from samples of the visceral organs and flesh of the seabass, which had previously been incubated. The bacteria are observed in petri dishes with the assistance of a flashlight, then marked and identified based on their characteristics, following the bacterial characterization method of Ejikeugwu (2017). The characteristics of the bacteria are recorded, and colonies with similar characteristics are counted. Once characterized, the bacteria are purified on the prepared solid agar media. Purification is carried out using a needle that has been sterilized by heating until red-hot with a Bunsen burner. After it cools down, the needle is streaked through the bacteria on the petri dish and then streaked onto the solid agar media. Bacteria with different characteristics are placed into separate test tubes and labeled accordingly. The test tubes are sealed with plastic wrap in an aseptic manner and incubated in a sterile container.
Preparation of Liquid Media and Purification of Vibrio spp. Bacteria from Solid Agar to Liquid Media
The preparation of liquid media begins with the preparation of the necessary equipment and materials. Nutrient Broth weighing 1.3 grams and NaCl weighing 1.5 grams are placed into an Erlenmeyer flask, followed by the addition of 100 ml of distilled water. A magnetic stirring bar is placed inside the Erlenmeyer flask, which is then homogenized using a hot plate magnetic stirrer. Once all the ingredients are homogenized, the Erlenmeyer flask is removed from the hot plate magnetic stirrer and left to cool until warm. The liquid media is then transferred into separate vials, each containing 5 ml. The filled vials are sealed with cotton and aluminum foil, followed by sterilization using an autoclave.
Vibrio bacteria grown in the test tubes are then transferred with an inoculating needle. The needle is first sterilized in a Bunsen burner flame until red hot; after it has cooled down, it is used to scrape the bacteria from the solid agar medium. The needle, now carrying a trace of bacteria, is then immersed in the liquid media inside the vial. The vial is resealed with cotton in an aseptic manner, sealed with plastic wrap, labeled according to the sample, and incubated in a sterile container.
Preparation of Solid Media
The preparation of solid media begins with the preparation of the necessary equipment and materials. The weighed ingredients, Nutrient Broth (1.3 grams) and Nutrient Agar (1.5 grams), are placed in an Erlenmeyer flask, followed by the addition of 100 ml of distilled water (aquades). A magnetic stirring bar is inserted into the flask, and the contents are homogenized on a hot plate magnetic stirrer. Once the media has foamed, the flask is removed from the stirrer and sealed with cotton and aluminum foil. The media in the flask is then sterilized in an autoclave.
After sterilization, the media is allowed to cool slightly and is poured aseptically into petri dishes. The dishes are sealed with plastic wrap and covered with plastic secured with rubber bands. The petri dishes are then stored separately in sterile containers to prevent contamination. The preparation of the McFarland 0.5 standard solution likewise begins with the preparation of the necessary equipment and materials. Sodium chloride (NaCl, 0.85 grams) is placed into an Erlenmeyer flask, and 100 ml of distilled water (aquades) is added. A magnetic stirring bar is placed inside the flask, and the mixture is homogenized using a hot plate magnetic stirrer. Once all the components are homogenized, the flask is removed from the stirrer and allowed to cool slightly. The McFarland standard solution is then transferred into individual vials, each containing 5 ml of the solution. These vials are sealed with cotton and aluminum foil and then sterilized in an autoclave.
Testing the Resistance of Vibrio spp. Bacteria to Antibiotics
2.6.1 Preparation of McFarland 0.5 Standard Solution and Standardization of Vibrio spp. Bacteria with McFarland Standard Solution
The standardization of Vibrio spp. bacteria is performed by aseptically transferring the bacteria from the liquid media purification into the vials containing the McFarland standard solution, using a micropipette, until the turbidity level matches the McFarland 0.5 standard. Vials that have reached the McFarland 0.5 standard are sealed with plastic wrap, labeled according to the sample, and stored in a sterile container.
2.6.2 Preparation of Antibiotic Stock Solutions
Before preparation of the antibiotic stocks, 20 ml vials and their caps are sterilized in an autoclave. Seven types of antibiotics, namely Ciprofloxacin 500 mg, Doxycycline 100 mg, Tetracycline 500 mg, Chloramphenicol 250 mg, Ampicillin 500 mg, Co-Amoxiclav 125 mg, and Azithromycin 500 mg, are ground into a fine powder using a mortar and pestle. They are then transferred into separate vials and labeled accordingly. Ethanol 96% (10 ml each) is added to the vials containing Ciprofloxacin, Doxycycline, Tetracycline, Chloramphenicol, Co-Amoxiclav, and Azithromycin. Sterile distilled water (aquades, 10 ml) is added to the vial containing Ampicillin. The antibiotic solutions are homogenized, capped, and sealed with plastic wrap. The antibiotic stocks are stored in the refrigerator.
The antibiotic dilution process begins with the sterilization of aquades in test tubes and 10 ml vials using an autoclave. Each vial is labeled for easy identification of the antibiotics. Using a micropipette, 10 μL of Ampicillin is added to its vial, followed by 990 μL of sterile aquades; 500 μL of Genta-100 is added to its vial, followed by 500 μL of sterile aquades; 150 μL of Doxycycline is added to its vial, followed by 850 μL of sterile aquades; 30 μL of Tetracycline is added to its vial, followed by 970 μL of sterile aquades; 60 μL of Chloramphenicol is added to its vial, followed by 940 μL of sterile aquades; 5 μL of Ciprofloxacin is added to its vial, followed by 995 μL of sterile aquades; and 80 μL of Co-Amoxiclav is added to its vial, followed by 920 μL of sterile aquades. The dilutions are carried out aseptically, and the vials are capped and sealed with plastic wrap.
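Each dilution above is a simple fixed-ratio step, so the working concentration follows directly from the stock preparation described in the previous section. The short Python sketch below illustrates the arithmetic for the ampicillin example (500 mg dissolved in 10 ml, then 10 μL transferred into 990 μL of diluent); the function names are our own, and the figures are taken from the text rather than being validated laboratory values.

```python
# Minimal sketch of the two-step dilution arithmetic described above.
# Values mirror the ampicillin example in the text; treat them as
# illustrative inputs, not validated laboratory figures.

def stock_conc_mg_per_ml(drug_mass_mg: float, solvent_ml: float) -> float:
    """Concentration of the stock after dissolving the ground antibiotic."""
    return drug_mass_mg / solvent_ml

def working_conc_mg_per_ml(stock_mg_per_ml: float,
                           transfer_ul: float,
                           diluent_ul: float) -> float:
    """Concentration after transferring `transfer_ul` of stock into diluent."""
    total_ul = transfer_ul + diluent_ul
    return stock_mg_per_ml * transfer_ul / total_ul

if __name__ == "__main__":
    stock = stock_conc_mg_per_ml(500, 10)             # 50 mg/ml
    working = working_conc_mg_per_ml(stock, 10, 990)  # 0.5 mg/ml (1:100)
    print(f"stock = {stock} mg/ml, working = {working} mg/ml")
```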
Preparation of Paper Disks and Antibiotic Injection into Paper Disks
Whatman No. 3 filter paper is cut using a hole punch to create paper disks. These disks are placed into a glass beaker, covered with aluminum foil, and sterilized in an autoclave.
The injection of antibiotics into the paper disks begins with the preparation of petri dishes sterilized in an autoclave. The sterilized paper disks are then aseptically arranged in the petri dishes using sterile forceps, with some spacing between them to avoid sticking together. Each paper disk is aseptically injected with 20 μL of the respective antibiotic. The number of paper disks used corresponds to the number of samples in the petri dish. The dishes containing the disks are sealed with plastic wrap and left to air-dry until the disks are partially dry; if the top surface is still wet, it can be dried using sterile gauze. Each dish of antibiotic disks is labeled to identify its antibiotic content.
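The antibiotic mass delivered per disk follows directly from the working concentration and the 20 μL injection volume. As a worked example (an assumption on our part, since the text does not state final disk contents): the ampicillin dilution above gives 500 mg / 10 ml = 50 mg/ml stock, diluted 10 μL into 990 μL to give 0.5 mg/ml, and 0.5 mg/ml × 0.020 ml = 0.01 mg, that is, about 10 μg of ampicillin per disk.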
The testing of antibiotic potency through paper disk diffusion
The process begins with the preparation of agar plates. Bacteria that have been standardized using the McFarland solution are inoculated onto the solid media, 100 μL each, using a micropipette. Each type of bacteria is inoculated onto two solid media plates. The inoculation is performed aseptically, and the bacteria are then spread evenly across the surface of the solid media using a spreader sterilized in a Bunsen burner flame. The petri dishes are then sealed with plastic wrap, labeled, and placed in an incubator until the inoculum has been fully absorbed into the solid media.
The next step involves placing antibiotic disks onto the solid media.Four different types of antibiotic disks are placed in a single media plate.Paper labels are used to identify the antibiotic disks placed in each dish.Antibiotic disks are positioned on the solid media using sterilized forceps.They should be spaced apart and not too close to the edge of the dish.Petri dishes containing the media and antibiotic disks are then sealed with plastic wrap and incubated in the incubator for 24 hours.
After the incubation period, zones of inhibition become visible in the media. The diameter of each inhibition zone is measured with calipers across opposite edges of the zone, perpendicular to its edge, and the results are recorded. These results are then processed in Microsoft Excel to determine the sensitivity of the bacteria to each antibiotic, classifying the bacteria as resistant, sensitive, or intermediate following the inhibition-zone diameter criteria defined by CLSI (2011), as shown in Table 1.
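The interpretation step described above amounts to comparing each measured diameter against published breakpoints. A hedged Python sketch of that logic is shown below; the breakpoint numbers are placeholders for illustration only, not the actual CLSI (2011) values, which must be looked up for each antibiotic.

```python
# Sketch of inhibition-zone interpretation. The breakpoints below are
# HYPOTHETICAL placeholders; substitute the published CLSI values.

BREAKPOINTS_MM = {
    # antibiotic: (resistant_max_mm, sensitive_min_mm)
    "gentamicin": (12, 15),
    "ampicillin": (13, 17),
}

def interpret(antibiotic: str, zone_mm: float) -> str:
    """Classify a zone diameter as resistant, intermediate, or sensitive."""
    resistant_max, sensitive_min = BREAKPOINTS_MM[antibiotic]
    if zone_mm <= resistant_max:
        return "resistant"
    if zone_mm >= sensitive_min:
        return "sensitive"
    return "intermediate"

print(interpret("gentamicin", 18))  # -> "sensitive" under these placeholders
```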
Results
The measurement results and inhibition-zone criteria for Gentamicin, Tetracycline HCl, Ciprofloxacin HCl, Ampicillin, Chloramphenicol, Azithromycin, and Doxycycline are presented in Tables 2 to 8, respectively. The inhibition zones formed on the isolates can be seen in Figure 2, and the percentage effectiveness of the antibiotics against Vibrio spp. can be seen in Table 9.
Discussion
Testing the resistance of Vibrio spp. from the visceral organs and flesh of L. calcarifer begins with the characterization of bacterial colonies. Characterization is done by observing bacterial colonies on agar plates; colonies are characterized and grouped based on their margin shape, color, and elevation. Bacterial colonies isolated from the JerTR37 plate have a round shape with a convex elevation and are yellow in color. This is consistent with the morphological characteristics of Vibrio spp. colonies reported by Arisandi et al. (2019), who describe round, yellow colonies. In contrast, bacterial colonies isolated from the OTTR36 plate have round and irregular shapes with a convex elevation. Differences in the shape of Vibrio spp. colonies on a single plate are influenced by abiotic environmental factors such as pH, temperature, oxygen levels, and nutrient availability in the growth medium (Situmeang et al., 2016). The color of Vibrio spp. colonies is influenced by their ability to utilize sucrose. This is supported by Ilmiah et al. (2012), who explain that yellow colonies of Vibrio spp. indicate the ability to utilize sucrose, while green colonies indicate the inability to utilize sucrose.
The level of antibiotic resistance is determined by measuring the inhibition zones formed by the reaction of Vibrio spp. from the visceral organs and flesh of L. calcarifer. Vibrio spp. from the visceral organs showed resistance to Azithromycin and Ampicillin, with percentages above 50%. This is consistent with data from Kusmarwati et al. (2017), who reported that Vibrio spp. found in shrimp ponds exhibited resistance to Ampicillin reaching up to 73%. Vibrio spp. from the white snapper muscle did not show high levels of resistance, with resistance percentages below 50%; the highest resistance percentage was 35.7%, observed for the Ampicillin and Co-Amoxiclav antibiotics. Based on the results obtained, it can be concluded that Vibrio spp. from the visceral organs and flesh of L. calcarifer are most sensitive to Gentamicin, with a sensitivity percentage of 100%.
Conclusions
Vibrio spp. bacteria from the visceral organs and flesh of Lates calcarifer exhibit varying levels of resistance and sensitivity to different types of antibiotics. In general, the Vibrio spp. obtained in this study are still sensitive to antibiotics.
Table 2. Measurement Results and Criteria for Gentamicin Inhibition Zones
Table 9. Percentage Effectiveness of Antibiotics against Vibrio spp. | 2023-11-04T15:11:54.987Z | 2023-11-02T00:00:00.000 | {
"year": 2023,
"sha1": "e644406be6df3d143f97de9599788c2eeb1a384a",
"oa_license": "CCBY",
"oa_url": "https://ejournal.immunolmarbiotech.com/index.php/JMBI/article/download/9/3",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "ad911c3c3b4b8ab4caf77aca0966609f7a1a92d6",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": []
} |
261634632 | pes2o/s2orc | v3-fos-license | Digital Health Programs to Reduce Readmissions in Coronary Artery Disease
Background The use of mobile health (mHealth, wireless communication devices, and/or software technologies) in health care delivery has increased rapidly in recent years. Their integration into disease management programs (DMPs) has tremendous potential to improve outcomes for patients with coronary artery disease (CAD), yet a more robust evaluation of the evidence is required. Objectives The purpose of this study was to undertake a systematic review and meta-analysis of mHealth-enabled DMPs to determine their effectiveness in reducing readmissions and mortality in patients with CAD. Methods We systematically searched English language studies from January 1, 2007, to August 3, 2021, in multiple databases. Studies comparing mHealth-enabled DMPs with standard DMPs without mHealth were included if they had a minimum 30-day follow-up for at least one of all-cause or cardiovascular-related mortality, readmissions, or major adverse cardiovascular events. Results Of the 3,411 references from our search, 155 full-text studies were assessed for eligibility, and data were extracted from 18 publications. Pooled findings for all-cause readmissions (10 studies, n = 1,514) and cardiac-related readmissions (9 studies, n = 1,009) indicated that mHealth-enabled DMPs reduced all-cause (RR: 0.68; 95% CI: 0.50-0.91) and cardiac-related hospitalizations (RR: 0.55; 95% CI: 0.44-0.68) and emergency department visits (RR: 0.37; 95% CI: 0.26-0.54) compared to DMPs without mHealth. There was no significant reduction for mortality outcomes (RR: 1.72; 95% CI: 0.64-4.64) or major adverse cardiovascular events (RR: 0.68; 95% CI: 0.40-1.15). Conclusions DMPs integrated with mHealth should be considered an effective intervention for better outcomes in patients with CAD.
A concerning proportion of patients with coronary artery disease (CAD) have major risk factors, 1 such that the residual lifetime risk for cardiovascular events and death could decrease if risk factor control and treatment improved. 2-9 However, despite unequivocal evidence for their effectiveness, cardiac rehabilitation (CR) programs are still underutilized, with <50% of eligible patients referred worldwide. 1,10 Consequently, cardiac readmission rates remain high and result in substantial costs. A major driver of these costs is hospitalization expenditure, 11,12 with the average cost of a 30-day readmission post acute myocardial infarction (AMI) approximately USD $15,000, and a cumulative cost of over USD $1 billion per year. 13 The rapid use of mobile health (mHealth) technologies has produced strategies and modalities to overcome the historical challenges associated with traditional delivery of CR and DMPs. mHealth-delivered DMP interventions are newly recommended in guidelines, 14 albeit based on lower-quality evidence derived from a limited number of studies. An in-depth synthesis of the literature is required to keep abreast of the rapid boom in mHealth-delivered secondary prevention cardiovascular disease (CVD) care. 15,21,22 Telephone delivery is resource-intensive, time-consuming, and limits scalability.
Less attention has been paid to the most up-to-date digital technologies, which enable a scalable and personalized service to numerous individuals.
Further, the few systematic reviews that have attempted to address newer technologies 23,24 included only a limited number of studies in their meta-analyses, with mixed results. Therefore, the aim of this systematic review and meta-analysis was to develop evidence for the effectiveness of mHealth-enabled DMPs, excluding telephone only, on hospital readmissions and mortality in patients diagnosed with CAD.
METHODS
We conducted this systematic review in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines 25 and registered it with the International Prospective Register of Systematic Reviews (PROSPERO; CRD42022306749). The specific keywords, Medical Subject Heading terms, and search strategy are provided in Supplemental Table 2.
STUDY SELECTION. We used Covidence software for this systematic review. 26 Two independent reviewers scanned the titles and abstracts of publications while a third reviewer adjudicated discrepancies. The full texts of selected studies were read in detail, and reasons for exclusion were recorded.
ASSESSMENT OF RISK OF BIAS AND QUALITY OF THE EVIDENCE. Risk of bias was assessed using the Cochrane Collaboration's tool 27 for randomized controlled trials and the ROBINS-I assessment tool 28 for observational studies. Risk of bias plots were generated using ROBIS. 29 GRADEpro GDT software 30 was used to assess the quality of evidence for each outcome reported.
DATA SYNTHESIS AND ANALYSIS. Analysis was performed using Review Manager (RevMan) version 5.3 software. We measured heterogeneity for each outcome across studies qualitatively by comparing study characteristics and quantitatively using the I² statistic. A meta-regression was performed to account for baseline differences between comparator groups for each outcome. Dichotomous variables were converted to log odds differences between comparator groups. Mean differences were used for continuous variables. We undertook subgroup analyses of duration of DMP, length of follow-up, year of publication, patient characteristics, and intervention components (outlined in Inclusion criteria) to assess the effect of benefit from mHealth DMPs compared to standard DMPs.
We generated estimates of treatment effect using pooled RRs with 95% CIs and random-effects models utilizing Mantel-Haenszel methods for combining results across studies. Data were pooled and displayed in forest plots. Hypothesis testing was set at the 2-tailed 0.05 level. The funnel plot and Egger test were used to examine publication bias (Supplemental Figure 1). 31 Of the screened references, we assessed 155 full-text studies and included a total of 18 publications in the systematic review.
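For readers who want to reproduce this kind of pooling outside RevMan, the sketch below implements the closely related DerSimonian-Laird inverse-variance random-effects method on log risk ratios, together with Cochran's Q and the I² statistic mentioned above. It is a simplified stand-in for RevMan's Mantel-Haenszel random-effects computation, not the authors' exact pipeline, and the event counts in the example are invented for illustration.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of risk ratios.
import math

def pooled_rr(studies):
    """studies: list of (events_tx, n_tx, events_ctl, n_ctl) tuples."""
    logs, weights = [], []
    for a, n1, c, n2 in studies:
        log_rr = math.log((a / n1) / (c / n2))
        var = 1/a - 1/n1 + 1/c - 1/n2          # variance of log RR
        logs.append(log_rr)
        weights.append(1 / var)
    # Fixed-effect estimate and Cochran's Q
    fe = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    q = sum(w * (l - fe) ** 2 for w, l in zip(weights, logs))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # DerSimonian-Laird between-study variance tau^2
    c_term = sum(weights) - sum(w * w for w in weights) / sum(weights)
    tau2 = max(0.0, (q - df) / c_term)
    re_w = [1 / (1/w + tau2) for w in weights]
    re = sum(w * l for w, l in zip(re_w, logs)) / sum(re_w)
    se = math.sqrt(1 / sum(re_w))
    lo, hi = re - 1.96 * se, re + 1.96 * se
    return math.exp(re), math.exp(lo), math.exp(hi), i2

rr, lo, hi, i2 = pooled_rr([(12, 100, 20, 100), (8, 150, 15, 140)])
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f}), I2 = {i2:.0f}%")
```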
PRIMARY OUTCOME ANALYSIS. The results for dichotomous primary outcome data are shown in separate forest plots for hospital encounters (Figure 3), MACE (Figure 4), and mortality (Figure 5).
Readmissions.
Pooled analysis 43-45,47,48 showed that risk for all-cause readmission (n = 1,514) (Figure 3A) was reduced by 32% (RR: 0.68; 95% CI: 0.50-0.91) and cardiovascular readmissions (n = 1,009) (Figure 3B) by 45% (RR: 0.55; 95% CI: 0.44-0.68) in the mHealth-enabled DMP group compared to the DMP-alone group. There was no evidence of a competing risk whereby mortality may lead to a reduction in readmissions, given there were a total of 4 deaths among the 1,514 patients included in the all-cause readmission analysis and 5 deaths among the 1,009 patients included in the cardiac-related readmission analysis.
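For clarity on how the percentage reductions quoted above follow from the risk ratios: relative risk reduction is 1 − RR, so RR = 0.68 gives 1 − 0.68 = 0.32 (a 32% reduction) and RR = 0.55 gives 1 − 0.55 = 0.45 (a 45% reduction).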
Mortality. Eight studies 34,36,37,40,42,44,46,47 (n = 2,711) assessed all-cause mortality. As shown in Figure 5, there was no risk reduction for all-cause mortality (RR: 1.72; 95% CI: 0.64-4.64) in the mHealth-enabled DMP group compared with the traditional DMP-alone group. There were no included studies reporting cardiac-related deaths.
(I² = 23%). Stratified meta-regression revealed no baseline differences between comparator groups for any primary outcome (Supplemental Table 5). Subgroup analysis using pooled data revealed no significant group differences (Supplemental Table 6).
There were no group differences after removing the 2 observational studies.
RISK OF BIAS AND GRADE ASSESSMENT. The overall risk of bias across domains for each study was judged to be low or unclear (Supplemental Figure 2). The GRADE quality of evidence for each outcome was assessed as moderate for all-cause readmissions, high for cardiac-related readmissions and ED visits, low for MACE, and very low for all-cause mortality (Table 2, Supplemental Table 7). There was no evidence of funnel plot asymmetry or significant Egger tests (Supplemental Figure 1), and thus no evidence of publication bias.
DISCUSSION
In this systematic review and meta-analysis, mHealth-enabled DMPs for patients with CAD were effective interventions for reducing hospital readmissions and visits to the ED. However, there was no greater benefit of mHealth-enabled DMPs on mortality or MACE outcomes (Central Illustration). mHealth-enabled DMPs support the scalability of existing models of care, enhance patient motivation and adherence, and achieve effective results. 50 They also create cost efficiencies for health care delivery by reducing clinician and health system burden. 51,52 Hence, rather than replacing the entire traditional model of care with a digital solution, digitally integrated models may provide disease management strategies in a more engaging, accessible, and scalable manner. 53 It appears that the beneficial effects of novel mHealth DMPs are due to the sum of their parts. The evidence suggests that there is no one specific component that is the key but rather a combination of factors working together to improve provider-patient communication and enhance patient-centered care.
These factors combined enhance engagement, adherence, and subsequent outcomes. 24,50,54 Our study provides evidence for the effectiveness of mHealth interventions (incorporating digital technologies) for reducing readmissions and ED visits in patients with CAD (Supplemental Table 1). There is heterogeneity between DMP interventions such that more tangible benefits might be realized from improved self-care/behavior change strategies and symptom awareness.
These patient-focused behaviors may result in effective risk factor reduction and minimize exacerbation of CVD (including the onset of other events) rather than reduce mortality.
While our results provide evidence for mHealth interventions in lowering readmission risk, a consistent finding is that there is no evidence for reducing mortality. 17,20,23,24 This may be due to comparator groups 20 (either standard care, traditional DMP, or cardiac rehabilitation) receiving close to optimal care (Supplemental Tables 8 and 9) or study populations being at low risk of mortality. 17 Given the large heterogeneity between DMP interventions, there is also difficulty in assessing the overall impact on survival rates and health outcomes. Importantly, many studies include relatively short follow-up periods, which may be too short to detect longer-term impacts on mortality.
The results of this systematic review support wider implementation of mHealth-enabled DMPs in secondary prevention settings, and these programs should be made accessible to all CAD patients so they can choose their preferred DMP type and setting. In doing so, one needs to consider the implications for vulnerable or disadvantaged patients. We must continue to innovate and drive rapid translational research in digital health, but at the same time, care must be taken not to exacerbate health inequalities. 58-61 This is notable because many of these populations have greater rates of CVD compounded by less access to care. 62 Additional research is needed to strengthen equitable access to digital health-based DMPs for these key populations 58 and to investigate the factors that are important for implementation of mHealth-enabled DMPs in real-world settings, particularly in low- and middle-income countries.
Table 2 notes (GRADE Working Group grades of evidence). High certainty: we are very confident that the true effect lies close to that of the estimate of the effect. Moderate certainty: we are moderately confident in the effect estimate; the true effect is likely to be close to the estimate of the effect, but there is a possibility that it is substantially different. Low certainty: our confidence in the effect estimate is limited; the true effect may be substantially different from the estimate of the effect. Very low certainty: we have very little confidence in the effect estimate; the true effect is likely to be substantially different from the estimate of effect. GRADEpro GDT software 30 was used to assess the quality of evidence for each outcome reported. The GRADE quality of evidence for each outcome was assessed as moderate for all-cause readmissions, high for cardiac-related readmissions and ED visits, low for MACE, and very low for all-cause mortality. The risk in the intervention group (and its 95% CI) is based on the assumed risk in the comparison group and the relative effect of the intervention (and its 95% CI). ED = emergency department; MACE = major adverse cardiac event; RR = risk ratio.
DATA SOURCES AND SEARCHES. MEDLINE, Embase, the Cochrane Central Register of Controlled Trials, CINAHL, the Web of Science, and Scopus electronic databases were systematically searched for English-language studies from January 1, 2007, to August 3, 2021. Grey literature was searched for additional papers. This start date was selected to coincide with the release of the Apple iPhone (the first internet-accessible smartphone with apps).
Inclusion criteria. Studies of patients who were discharged from hospital with CAD, with a minimum of 30 days' follow-up and at least 50 patients in the total sample, that evaluated a DMP using mHealth compared with a standard DMP without mHealth were included. mHealth was defined as the use of wireless communication devices (mobile phones, smartphones, electronic tablets, and laptops) and/or software technology (apps, video and teleconferencing, email, telemonitoring, social media, and SMS communication), excluding telephone-only interventions. A DMP is defined as a coordinated health care plan to help people manage their disease better. A DMP is the sum of activities that include some if not all of the following: health professional/nurse consultations, care coordination, regular follow-up, optimization of efficacious medications, education, psychological support, physical activity prescription, self-monitoring strategies (eg, blood pressure measurement), goal setting, and lifestyle/behavioral self-management strategies (eg, medication adherence and dietary intake). Studies were included if they contained at least one DMP component and reported outcomes for at least one of all-cause or cardiovascular mortality, all-cause or cardiovascular readmissions, or major adverse cardiovascular events (MACE).
ABBREVIATIONS AND ACRONYMS: AMI = acute myocardial infarction; CAD = coronary artery disease; CR = cardiac rehabilitation; CVD = cardiovascular disease; DMP = disease management program; ED = emergency department; MACE = major adverse cardiovascular events; mHealth = mobile health; ICT = information communication technology.
Exclusion criteria. Studies were excluded if participants were not diagnosed with CAD or if they had heart failure. Interventions that did not involve mHealth, used the telephone only, or focused on a single behavior (eg, smoking cessation) were excluded.
DATA EXTRACTION AND MANAGEMENT. One reviewer extracted information about the study population, intervention and control/comparison group characteristics, and outcome data from each study using a predeveloped data extraction form. Ambiguities were resolved by discussion and consensus. Multiple publications of the same study were assessed for the provision of endpoint data, and the most recent publication was chosen for inclusion.
As shown in Figure 1, our initial search yielded 3,411 references. After the removal of 1,384 duplicates, 2,016 were reviewed for title and abstract eligibility.
FIGURE 1 Study Selection
The diagnosis was reported in 3 papers 34,39,42 as ST-segment elevation myocardial infarction (34% intervention and 28% control) and non-ST-segment elevation myocardial infarction (mean 34% intervention and 46% control). Overall, 3,818 patients were included, ranging from 62 to 879 patients per study. The weighted average age of the intervention and control groups was 60.3 ± 1.3 years and 62.6 ± 1.15 years, respectively, and the majority were men (82% intervention and 80% control).
FIGURE 3 Primary Outcome Analysis
Findings did not vary across any patient, intervention, or study characteristics. Our results update the evidence for the effectiveness of mHealth-enabled secondary prevention DMPs by including more studies that assessed impact outcomes (hospitalizations, ED visits, MACE, and mortality) and using only the latest digital technologies over and above telephone communication. Our findings indicated a 32% reduction in the relative risk of rehospitalization for any cause and a 45% relative risk reduction in cardiovascular-related rehospitalizations in mHealth-enabled DMP patients compared with patients who undertook a traditional DMP. This contrasts with a prior systematic review that used text messaging or mobile phone app interventions 23 but aligns with others incorporating telephone call interventions, which showed a reduction of between 38% and 44% in all-cause rehospitalizations compared with standard postdischarge secondary prevention care. 20,22 Overall, mHealth DMPs are effective and complement existing telephone-based interventions.
FIGURE 4 Primary Outcome Analysis: MACE
FIGURE 5 Primary Outcome Analysis: All-Cause Mortality
These tech-integrated models of DMPs provide unique opportunities for providers and health systems to interact directly with patients' contemporary lifestyles, delivering more personalized, patient-centered care. Rapid technological advancement, improved user experience, and positive consumer acceptance and adoption (from patients and providers) 55,56 have enhanced engagement and adherence 16,57 to prevention programs and may explain the added benefit of mHealth-enabled DMPs over and above traditional DMPs without mHealth. Despite almost all earlier systematic reviews showing significant improvements in clinical, behavioral, and lifestyle risk factors when comparing digital technology interventions with traditional DMPs or usual care, 16-19 previous studies have not investigated the impact of mHealth interventions on readmission and mortality outcomes using emerging digital technologies and devoid of telephone-only interventions.
STRENGTHS AND LIMITATIONS. This systematic review and meta-analysis provides evidence for the effectiveness of the most contemporary mHealth-enabled DMPs on readmission outcomes. There are a few limitations to our study. Firstly, the limited availability of mortality outcomes with a relatively short follow-up period made it challenging to assess the intervention's effect on mortality. Secondly, while we extracted all available data in each publication, adjudication of cardiovascular events that constitute a cardiovascular readmission may vary between studies, and similarly, noncardiovascular-related readmissions may not have been included among all studies. Finally, most studies included were conducted in high-income countries, yet more than 75% of CVD deaths take place in low- and middle-income countries. 63 Hence, caution is required with regards to generalizability of the findings in these less represented populations.
CENTRAL ILLUSTRATION mHealth-Enabled DMPs Reduced All-Cause and Cardiac-Related Hospitalizations and Emergency Department Visits Compared to DMPs Without mHealth. Braver J, et al. JACC Adv. 2023;2(8):100591. There was no significant reduction for mortality outcomes or MACE. DMP = disease management program; MACE = major adverse cardiac event; mHealth = mobile health.
CONCLUSIONS. In this contemporary systematic review and meta-analysis, mHealth integration into DMPs was an effective intervention for reducing hospital readmissions and visits to the ED. DMPs supported by mHealth should be considered for improving outcomes in patients with CAD.
ACKNOWLEDGMENTS. The authors thank Tania Celeste and Dr Jocasta Ball for their support with the search and selection process. The authors are grateful to Dr Dulari Hakamuwa Lekamlage for undertaking the statistical analysis. The authors also thank Dr Chris Lynch for his support with the risk of bias assessment.
TABLE 1 Publication and mHealth Intervention Characteristics (continued on the next page)
TABLE 1 Continued. DMP = disease management program; RCT = randomized controlled trial.
TABLE 2 Summary Findings of GRADE Quality Assessment
60. Kotseva K, Wood D, De Bacquer D, EUROASPIRE Investigators. Determinants of participation and risk factor control according to attendance in cardiac rehabilitation programmes in coronary patients in Europe: EUROASPIRE IV survey. Eur J Prev Cardiol. 2020;25:1242-1251.
61. Chindhy S, Taub PR, Lavie CJ, Shen J. Current challenges in cardiac rehabilitation: strategies to overcome social factors and attendance barriers. Expert Rev Cardiovasc Ther. 2020;18:777-789.
62. Troy A, Xu J, Wadhera R. Abstract 10423: US counties with low broadband internet access have a high burden of cardiovascular risk factors, disease, and mortality. Circulation. 2022;146:A10423.
63. WHO. Cardiovascular diseases (CVDs) fact sheets. World Health Organization; 2021. Accessed March 10, 2022. https://www.who.int/news-room/fact-sheets/detail/cardiovascular-diseases-(cvds)
KEY WORDS cardiac rehabilitation, coronary artery disease, digital health, disease management, health technology, mHealth
APPENDIX For supplemental tables and figures, please see the online version of this paper. | 2023-09-10T15:28:24.211Z | 2023-09-07T00:00:00.000 | {
"year": 2023,
"sha1": "982d47cc8f016c230a29724367e882eefc502751",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.jacadv.2023.100591",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "9e0015e643f7f46ba023922d67a58190e07a3d69",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221178323 | pes2o/s2orc | v3-fos-license | “Their Untold Stories…”: Lived Experiences of Being a Transgender (Hijra), A Qualitative Study From India
Background: Transgender is an umbrella term used to encompass people who have a gender identity or gender expression that differs from their sex assigned at birth. Being independent of sexual orientation, they have often been classified as the "third sex." Based on various sociocultural traditions and beliefs, they are frequently "othered," discriminated against, and stigmatized. This has led to their limited social inclusion and participation. In the social diversity of a populous country like India, transgenders are termed "hijras," belonging to a separate social community. Their experiences, perceptions, and unmet needs are rarely evaluated. Methods: A qualitative approach was used to explore the "lived experience" of 4 individuals who are part of the "hijra" community in Kolkata. These individuals were born with ambiguous primary sex characteristics. In-depth interviews were conducted with these participants and subsequently transcribed. Interpretative phenomenological analysis (IPA) was used for analysis. Results: A total of 2 superordinate themes (identity issues, relationship issues) and 6 subordinate themes emerged from the analysis (identification with feminine gender, perceptions regarding caregivers, perception regarding siblings, perception regarding childhood peer groups, identification with the hijra community, societal rejection). The findings have been discussed in terms of identity processes and the social and cultural construal of hijras in this part of the world. Conclusion: In India, the transgenders (hijra community) represent a unique subculture besides the heterosexual groups. Understanding their relationships, sexuality, and societal interactions is vital for their psychosocial well-being and related interventions. This study adds to the shared understanding of their marginalization and lived experiences, in their own voices.
Introduction
In India, transgenders (termed "hijras") are those people who are born as hermaphrodites or with mixed, unformed biological sexual characteristics. The uniqueness of hijras is that they exist beyond the sanctified social or familial structure, yet, paradoxically, in Indian society the hijras find a place in history and mythology. For example, many Hindu deities manifest as both males and females or as a merging of the 2 sexes in their different incarnations, eg, Lord Vishnu as Mohini, an enchantress with the task of luring the demons away from the elixir of life (Amrita), or Ardhanarishwara (a merging of Shiva and Parvati), respectively. In South India, Aravan is worshipped as a deity of the hijras as he had communion with Lord Krishna prior to his sacrifice for his father Arjuna. In the contemporary context, the hijra community is heterogeneous and is composed not only of people who are born as hermaphrodites but also of men who voluntarily undergo emasculation and join the community. However, in contemporary India, the gender nonconformity of transgenders leaves them marginalized in terms of gender recognition, sexual expression, employment, decent housing, and subsidized health care services, as well as exposed to the risk of violence and various forms of transgression. Therefore, in India, the hijras represent a controversial and minuscule community, the term itself having a pejorative connotation. As a result, their unique psychological and social experiences often do not find a voice or space in academic forums. In order to recognize the hijras' space in the gender continuum and to address their rights as citizens of the subcontinent, it is pertinent to explore their psychological journey in their own social context.
Aims and Objectives
The aim of the present study was to explore the subjective experience of "living" as a "hijra" in Kolkata. The term "hijra" is used in the indigenous language (Bengali) to describe people who are born with incompletely matured primary sex characteristics of both the sexes and also follow gender roles that are different from socially sanctioned gender norms. The present study adopted a qualitative approach of data collection and analysis as these experiences (lived experience of being a hijra) can be thought to be relative to each individual and specific to the social context under study. Rich data exploring their lived experiences could be best collected using qualitative methods.
Operational Definition of Key Constructs
For the purpose of this study, "hijra" has been construed as individuals who have been born with incomplete sex organs of both sexes thereby making the individual incapable of sexual reproduction and presenting with abnormal or undifferentiated external genitalia. The authors provide a disclaimer that the term is just used to describe their community (as it is called in the local sociocultural context) and has not been intended to label or stigmatize them in any way.
Selection Criteria for Participants
As is already known, hermaphroditism in humans represents a disorder of sexual differentiation and is an extremely rare condition, with a prevalence ranging from 0.05% to 0.06%. 1 In India, these individuals with intersex attributes often lead their life in a closed community, which also consists of people who are not born as hermaphrodites but have chosen to be a part of this community because they have unique sexual preferences different from mainstream social norms. Their chief livelihood is begging or participating in certain social rituals, such as the birth of a child or a marriage, and asking for alms on these occasions. These activities are often enforced as a part of social obligation, due to the lack of alternative ways of earning, compounded by social prejudice.
Sampling Technique and Participants
As has already been stated, hermaphroditism is an extremely rare condition, and it is very difficult to get acquainted with a hijra in Kolkata. Moreover, the livelihood of the majority of hijras in Kolkata involves moving around in public spaces, such as buses, trains, and crossroads, in groups also comprising transgender persons, and asking for alms from the public. For this research, the researchers approached a few of them, briefly explained the nature of the study, and requested their help in identifying individuals who were born as intersex. At first, the researcher became acquainted with one of them who was willing to participate in the study. Subsequently, a sampling technique akin to snowball sampling was used, wherein one participant refers the researcher to another participant. The final sample consisted of 4 participants. All the participants were conversant in Bengali, so the interviews were conducted in their mother tongue.
Tools Used
• Information schedule: A semistructured interview designed for this study to elicit information pertaining to each individual's age, education, occupation, details of family of origin, current membership of a social group, income, etc.
• In-depth interview: Conducted to enable the participants to tell their stories and to explore their experiences of being "hijra," the purpose being "reproducing the world of the person being interviewed, by attempting to make sense of it." 2 The interviews for each participant were conducted till thematic saturation was achieved. Typically, each interview lasted for about 30 to 45 min. The semistructured interview schedule was designed keeping in mind certain psychosocial issues that might be pertinent for the "lived experience" of being a hijra, for example:
• the psychological and social appraisal of biologically determined sexual identity,
• sexual identity and its incongruence with preferred gender identity,
• reactions from family and society for being born as a hermaphrodite,
• the decision to leave home and choose a profession solely belonging to persons with ambiguous biological gender,
• trauma and tribulations faced for being a hijra.
Interpretative Phenomenological Analysis
Interpretative phenomenological analysis (IPA) is an approach to qualitative research concerned with exploring and understanding the lived experience of a specified phenomenon (in this case, "the experience of being a hijra"). 3 IPA was introduced by Smith 4 as a means of analyzing data, but it has evolved into a methodology in its own right. To be more precise, IPA involves the detailed examination of participants:
• their lifeworlds,
• their experiences of a particular phenomenon,
• how they have made sense of these experiences, and
• the meanings they attach to them.
The other distinctive feature of IPA is the concept of the "double hermeneutic." Smith and Osborn 5 used the term to emphasize that 2 layers of interpretation are imbued in IPA:
1. the participant's meaning-making (interpreting their own experience), and
2. the researcher's sense-making (interpreting the participant's account). 2
Thus, there is an inevitable circularity in the process involving questioning, uncovering meaning, and further questioning; this circular process of understanding a phenomenon is called the "hermeneutic circle." 2,6,7
Procedure
The interview schedule was prepared keeping in mind that members of the hijra community prefer not to disclose much information about themselves and require sensitive handling, both because of their unique sexual identity and because they are mostly recipients of callous and uncaring responses from the general society. The interview schedule tried to elicit information in a nonthreatening manner from the participants regarding their early life, the socioeconomic status of their family of origin, the realization related to their biologically determined ambiguous sexual identity, the age when they took the decision to relocate to a community composed of people like them, educational and occupational training, etc. Another purpose of using the interview schedule was to develop rapport with the participants before going over to the in-depth interview, where they were expected to talk about personal issues of greater emotional significance. During this process, the participants were debriefed about the research and asked to consider carefully whether they were willing to share their personal thoughts with the researcher.
Triangulation and Determination of Trustworthiness of Data
The data were transcribed by the first author and simultaneously and independently coded and interpreted by the other authors. Only those codes and/or themes that were corroborated by all 3 researchers were retained.
Ethical Issues
Formal ethical clearance was obtained from the University of Calcutta, Kolkata. The data were maintained as strictly confidential, including the identity of the participants. Since the participants would be discussing life experiences that are extremely personal and might evoke unpleasant emotions at times, they were told that they could opt out of the research at any point even if they had given prior consent. This was done to ensure that the well-being of the participants was not affected.
Results and Analysis
Data were analyzed in 3 steps. First, rudimentary themes were listed chronologically; then subordinate themes emerged by clustering these rudimentary themes; and finally, superordinate themes were extracted by clustering the subordinate themes.
Subtheme 1: Identification With Feminine Gender and Sexuality
Participant 1: The participant has a strong need to lead the life of a female by playing the social role of a wife or mother. She also reports experience of distress of not being able to do so. "What about our life? Neither we could be a mother nor will anyone marry us and let us be a part of their family… We have to live our life alone. There is nothing to do, so we spend our life with public entertaining them. This is our life…. We had only one grief in our live that we could never hear the word 'maa'…." Participant 2: The participant has a strong need for family and a need to lead a life of a mother. For this reason, she has taken care of many needy girl children of her society.
"…every human being has a wish that he/she would have a family. I used to brood about this earlier…. Now I have stopped brooding about these things…. In my earlier life, I wished that if I would be just like you people, I could have a child, would have a husband…."
Participant 3:
The participant reports that she craves a feminine identity as a wife and has entered into multiple exploratory/exploitative relationships.
"It is true that I have a wish that I will stay with my boyfriend, together. I don't need any child that is not possible also. I just want to stay together. I will wash my boyfriend's clothes, will cook for him. I have a dream that he will love me very much. But I don't know whether my dream will be fulfilled or not. I truly loved the 3 guys but everyone cheated me. They have broken my heart…." Participant 4: The participant states that she has been well accepted by girls but rejected by boys; predominantly her playmates were girls, thereby, she can relate her identity more with girls as well as with their behaviors and gestures. This implies that she desires for a feminine identity as she can relate more with them.
"Before I came into this 'line', I used to sing and dance, and I used to play with my friends. I used to spend most of the time with girls. Everybody used to humiliate and call me 'meyenyakra', 'mogra' (slang languages applied for those, who behave like a female). What else would I do? I didn't like being with boys, so I used to play more with girls."
Subtheme 2: Identification With "Hijra" Community
Participant 3: The participant says that she is more comfortable in sharing her thoughts with people who have same-sex orientation, because they also suffer from social rejection similar to that of persons belonging to the hijra community.
"there were some other people just like me, but they are not hermaphrodite (hijra), they are gay. Do you understand the meaning of gay?? They were made friends of me. They are my very good friends. I used to share all my feeling, emotion with them freely, they also have shared their feelings with me freely…." Participant 3: The participant considers her birth as hijra as God's gift.
"kinnars never harm others. If anyone tease her, then they use slang. They never harm others. They always want well for other people. We always pray to the God. Why God has sent us in this world? For the prayer ('dua' in their term)." Participant 4: The participant describes that she is more comfortable in sharing her thoughts and feelings with other hijras and relating to them helps her to deal with her loneliness and also gives her sense of identity.
"No, I have told you before, we do all the things among ourselves, I have my guruma, she treats me like her child so she can obviously scold me. If she wouldn't rebuke me, who else will? Like when I feel sad, I would tell guruma, or else who will listen to my sorrows. Because even if I tell others, they would say, 'go solve your problems on our own'." Participant 4: The participant states that people who are similar (in terms of sexual identity and experience of social alienation) belong together. So, her life begins with people who are either hijras or transgenders and also ends with them. But she fails to explain why she experiences anguish about what will happen at her old age with the need for dependency.
"I don't have friends' related trouble. My friend is my own self; my friends are those who work in this 'line'. I love them, they love me. I fight with them; they fight with me. We quarrel among ourselves. We do all the things among ourselves, because we know today we may fight, but tomorrow we can work out things between ourselves. But if we fight with others, they would not resolve with us."
Subtheme 1: Exploitative Relationship With Males
Participant 1: The participant craves for a sexual partner, who will be a male. She terms such relationships as "normal." She reports of feeling sad for not being able to play the conventional "female social roles" in a family and questions the society for alienating them (hijras) for their biological condition.
"Many people told me they would be with me, they would marry me, but I think if they would even marry me, after they got to know about my particular self, would then also they want to be with me, spend their life with me? They never would. I live silently with this pain burying in my mind."
Participant 1:
The participant describes her experiences as a child in her family of origin. She claims that her family members, including her parents, had a positive attitude toward her despite her being born as a hermaphrodite. But at the same time, the participant states that she voluntarily left her parents' home when she attained puberty, providing no explanation of why she had to make such a decision and why she was not prevented from doing so.
"Neither I'm a boy nor a girl. I was born as a kinnar. When my parents got to know about me, they didn't tell anything to the masis who used to come in my neighborhood. Because, no parents would ever leave their child no matter what the child will be; a boy, a girl, or a kinnar. No guardian would ever leave…." Participant 2: The participant describes her early life experiences in terms of positive relationship with parents, at the same time she also states that she voluntarily came out of her family of origin after attaining puberty. She also does not throw any light on the reason behind her decision of leaving home and not being forced to stay back by her family. This description does not match with the portrayal of her parents as accepting her condition.
"…there is very much suffering in parent's home. Father worked in bank. Suppose we have 7 to 8 brothers. When father died suddenly, our family started suffering. I have worked as a maid servant in other's house, picked cow-dung, wood, etc. I have saved my family members by doing very hard work. Then I met with some hijras in our locality. They took me with them." Participant 3: The participant describes that for being born as a hermaphrodite, she was not allowed to go out of her house and was neglected and abused by her father. The participant states that her mother had always been loving and accepting her and still stays with her.
"My mother did not allow me to go outside … in my childhood…. I never understood why my mother kept me inside the home. She never allowed me to mix with others. My father did not like me at all. He does not love me…." "I was not allowed to go outside, my father used to behave with me very badly. He did not talk with me. My father said that my face is very unlucky for him. But my mother always used to say, whatever she may look like, she is my child. I have kept her in my womb for 10 months 10 days. She may be blind, may be lame, or may be a kinnar whatever he/she is, he is my son, she is my daughter. She used to say in this manner…."
Subtheme 2: Perception Regarding Siblings
Participant 1: The participant states that her current relationship with her siblings is very positive. But at the same time, she experiences anguish regarding who will take care of her in her old age. The participant does not explain why her siblings would not take care of her or her mother.
"I have sisters. I need to take care of them. I have learnt to earn now. I work hard for them only and I pray to god that my luck has turned out to be this, but at least my sisters could lead a good life and my mother would be in good health. One of my sisters got married while my father was alive…. I took the responsibility of marrying of another sister…. There is one more sister who is yet to get married…. I am trying to find out a match for her…." "…My sisters might look after me. Everyone should understand that they cannot be self-reliant for the entire life…. As god has made me like this that is why I'm bearing all these sufferings in my life…." Participant 2: The participant describes her early life experiences in terms of positive relationship with siblings. At the same time, she states that she had to start earning after her father's death in order to let her sibling continue studying. However, she could not explain why she would not require studying or why her family never forces her to do so.
"I have 8 siblings. Then there was so much poverty, suffering. Elder brother did not get a job…. I fell on the feet of the barokorta (Supervisor) to let my brother work…."
Subtheme 3: Perception Regarding Childhood Peers
Participant 1: On the one hand, the participant describes her life and environment as congenial, but on a different occasion, she describes her painful experiences of not being understood by her peers.
"When my friends got to know about me, they started to avoid me as I grew up and became different from them. Their families told them not to talk with me and that had devastated me." Participant 3: Participant describes her childhood as lonely because the peers used to avoid her, and she had nobody to share her "inner" thoughts.
"When I grew up … other girls as well as boys used to stay away from me. Nobody talked with me. I spent my childhood all alone. Nobody wanted to mix with me…. Nobody played with me. After growing up, I found that I don't have any friends…."
Subtheme 4: Rejection and Exploitation by Society
Participant 1: The participant states that there is a dearth of social and legal policies in favor of hijras and that even the administration is callous about their well-being and livelihood.
"About the government? Yes, they also don't want us to work on road. When we used to work on road, 'they' (the politicians) prohibited our road work. But at the time of vote, they want us; kinnars' votes. So, are we counted on for election purposes only? Because if the government didn't get our votes, they wouldn't even probably have had won the election." Participant 2: The participant mostly experiences agony for not being treated properly by the society and even not acknowledged by the girls whom she had taken care of.
"When people have used the term 'hijra' naturally it was very hurtful. There is a sympathy, kindness, and love in every human being both in male and female. We also have the same thing. Everybody has a heart…." "…There was a child who used to beg at the railway track. One day I took her with me into my house. I have reared her. She used to do household task, used to be at home. I have reared her as my own daughter. All I have done alone. I reared her up, extravagantly married her to a good match, gifted ornaments. Everything I did by myself. The husband is very good. But now she doesn't talk to me…." Participant 3: The participant has passed her secondary examinations from Ramakrishna Mission as a transgender but had to use a burka (concealing her identity) for appearing in the examination.
"I gave exams in private from Ramakrishna Mission…. I gave exam from there. I did not go to school the whole year, used to read at home. I went outside once in a year but wearing burka…." Participant 3: In spite of being educated, the participant was rejected in many job sectors for her biological condition, whereby the employers explicitly stated that she might act as a source of maladaptive sexual provocation for male co-workers.
"I went to a company for a job once or twice, where boys and girls both work. The manager of that company said to me, I can't work there. The boys will discuss about me. So, we can't give you the job…. Then I went to the factory of inner garments, they also told me the same thing…." Participant 4: The participant claims that in our country, social and legal norms and State policies are skewed against hijras, and they usually do not have access to social and legal privileges.
"There is no government that would give us money or work. People say government has money, but they never give us a goddamn penny, I have seen that since I was born. There is no one to look after us, take care of us. There is no law for us, nothing."
Discussion
This study is unique in conceptualizing the "lived" experience of people who were born with an ambiguous biological sex and belong to the "hijra" community. These people in this part of the world are primarily engaged in badhai and badhni (terms within the hijra community, where badhai refers to dance performances during weddings and the birth of a baby, and badhni refers to begging). It is interesting that all the participants share some similarities in the manner in which they describe their lived experiences, which also reflects their psychosocial status. Wandrekar and Nigudkar, in their detailed review of mental health related to the LGBT community from 2000 to 2019, mentioned the dearth of research regarding their societal needs, lived experiences, and factors related to their well-being. 8 This study essentially attempted to explore that under-studied area. Based on the analysis of the transcripts, 4 subordinate themes and 2 superordinate themes were identified.
Identity Issues of Hijras
As is evident from the transcripts, all of the participants reported that they were born with an intersex condition. It was also evident that most of them had an intense need to be "female" and to lead the life of a female, like marrying a male, setting up a family, bearing children, etc. But, because of their biological constitution, they were incapable of reproduction and were also bereft of the secondary sex characteristics unique to a female. However, they dressed up as females and adopted feminine behavior such as keeping long hair, putting on makeup, plucking facial hair, etc. Other studies have also pointed out that most hijras have an intermediate gender identity that leans toward the feminine. 9 There is a social convention that hijras do not marry. The hijras who participated in this study also confirmed that they were not married. There has been ambiguity regarding transgender marriages in the Hindu Marriage Act, 1955. However, after the historic "Section 377" judgment of the Supreme Court, thoughts were rekindled about the rights of the LGBT community, and discrimination against its members was condemned. 10 In fact, in a recent landmark judgment, the Madurai bench of the Madras High Court upheld the right to transgender marriage, stating that a person who is born intersex but identifies herself as a "woman" should be considered a "bride" under the Hindu Marriage Act, 1955. 11 It was reaffirmed that transgender rights fall within the "right to equality" granted by Article 14 of the Indian Constitution, which is a "fundamental right." Unfortunately, however, these laws rarely translate into common understanding and community practice. This creates agony and a sense of purposelessness in hijras. The participants repeatedly expressed their craving for a so-called "normal" family, ie, one in which they play the role of a wife or a mother. It is possible that this adoption of a feminine identity may help to restructure their cognitive framework and, somehow, give them a sense of satisfaction. The participants also spoke about the unsettling emotional experience of being sexually attracted toward males and the coercion of society to suppress such feelings. The pressure of the heteronormative society makes these people (hijras) develop a self that is unsure, insecure, and overcompliant, and show a constant expression of docility and abjection in search of intimacy. Interestingly, the interviews abound with examples of their obsequiousness toward unworthy people in search of a "family" or "conjugal" life. It is possible that these hijras, in their struggle to fill the vacuum created by their inability to play the role of a "mother" or a "wife" and to search for natural intimacy, fall prey to exploitative and abusive relationships dominated by males.
A very important finding was the manifest need of the hijra participants to be "mothers" and their attempts to take care of other vulnerable children in the community as a way of satisfying unfulfilled maternal desires. A different study also found that its respondents (hijras) put special emphasis on motherhood in their daily life, eg, the guruma and chela relationship and initiation rituals, all of which supposedly attempt to construct a sense of identity characterized by filial bonds. 12
Identification With the Hijra Community
Another interesting observation was that all of the participants preferred to use a dichotomous pattern of reference in which "them" refers to people who have normative sexual and gender identities and "us" refers to people who are non-normative (mostly hijras, but also other people who deviate from this norm, eg, transgender and homosexual persons). These hijras clearly state that people belonging to the "them" category treat hijras in a demeaning manner. Hence, to maintain their dignity, they (hijras) try to maintain a distance from these people. This "we versus they" dichotomy has been the basis for their "othering" in society. According to the hijra participants, their sense of belonging to the hijra community helped them cope with feelings of deprivation and vulnerability. It is also in this group that they could openly discuss their emotions, pains, needs, etc, without the fear of being negatively evaluated or discriminated against. The participants were of the opinion that though our society has witnessed a lot of progress in various matters, it is still largely dominated by the concept of an essentialist gender binary. From a psychological point of view, the striving for identification with the hijra community is in line with the concept 13 that all humans have a basic need to be loved and accepted. Ghosh obtained similar findings in her ethnographic study of the hijras of Bankura, a district in West Bengal, India. 14 Her respondents repeatedly articulated the importance of these relationships within the community for developing a stable sense of identity. Such identification with the community also serves to strengthen the social position of being identified as a hijra. Shawkat 15 described a complex social network system within the hijra community, which acts as a buffer against the discrimination of the larger society. Bakshi 16 also described the distinctive social system of the hijras, characterized by unique forms of communication, initiation, and death rituals, which help them to deal with stigma and marginalization.
Relationship Issues
Most of the participants stated that their relationships with parents and siblings were very positive. At the same time, they also reported that they had independently taken the decision to leave their home once they attained puberty. So, apparently, their engagement in this profession is a choice freely made by them. Puberty is a universal developmental stage, accompanied by significant physical, cognitive, and emotional changes. In many parts of the world, the onset of adolescence is marked by autonomy and sexual freedom. In the Indian social context, individuals have an extended period of social and economic dependence on the parents or the family of origin. As a result, adolescents in this part of the world have a lesser share of autonomy with respect to various psychosocial issues. Hence, even in economically backward families, adolescents are not expected to leave the "cocoon" of parental protection and start earning. The self-disclosure by all the hijra participants that they left the parental home on attaining puberty therefore warranted further explanation regarding the following questions:
• Why did their parents accept their decision to leave home, move to a foster family (the hijra community), and take care of their own financial well-being, without any apparent emotional resistance?
• How could their parents not urge them to lead lives on par with their siblings?
This brings forth a very critical and sensitive question: were the decisions taken by the participants to move out of the family of origin actually voluntary, or were these decisions shaped by circumstances and societal apathy? The participants shared similarly vague thoughts regarding their relationships with siblings and peers. On one hand, these participants reported having affectionate relationships with their contemporaries; on the other hand, they also reported experiences of neglect, bullying, and shaming from their peers and siblings, along with feelings of loneliness. Late childhood is typically referred to as the "gang age" by developmental psychologists, and interaction with members of the same age group is crucial for the social as well as the identity development of the child. It is likely that these individuals (hijras) experienced alienation from their contemporaries during this age, which had a lasting impact on their psyche. But, for some reason, all of them (the 4 hijra participants) refused to acknowledge the lack of love and trust in their relationships with peers and siblings.
Such an ambivalent attitude on the part of the hijras toward their early-life relationships seems to portray their need for acceptance from the conventional family. There is no denying that the family is the most crucial source of nurturance and positive self-appraisal, and it is obviously not possible to accept extreme rebuff from the family without detrimental psychosocial consequences. It is noteworthy that more than 1 participant reported having compromised their education for the sake of their siblings. At the same time, it was obvious from their unspoken words that they did not share a very congenial relationship with these siblings. Such events are also suggestive of the widespread nature of the social discrimination directed toward intersex people. Research findings abound with regard to the fact that the marginalization of hijras usually starts at the family level and subsequently spreads to society at large. 17
Rejection From Society
The findings of the present research are consistent with earlier research findings, 17 which reflect that hijras are mostly alienated from their families and society. In spite of some lingering beliefs that hijras bring good luck at weddings or after a birth, there is widespread fear and mistrust associated with hijras. All 4 participants affirmed that they had been stigmatized and had remained underprivileged throughout their lives. One of the participants shared the story of being forced to wear a burka while appearing for the Board (Secondary) examinations. Though the Board granted her transgender status, it is obvious that she was not sure whether the attitude of the invigilators or co-examinees would be favorable. The same participant shared her experience of being unduly rejected from jobs because of her marginalized gender identity. Shawkat 12 interviewed 5 hijras from Bangladesh and reported that family members such as elder siblings or fathers would often bully, assault, or lock up the individual (hijra) as a means of displacing the frustration of being shamed by their peers for having a "deviant" offspring/sibling. In fact, one of the hijra participants of this study also reported being abused and assaulted by her father. So, it is reasonable to assume that the stigma of being a hijra is initiated within the family of origin, forcing them to seek a life outside the biological family constellation. According to Reddy, 18 in traditional Indian society, when siblings reach marriageable age and one sibling is entering the institution of marriage, the presence of another sibling who is not supposed to marry becomes conspicuous. So, these people (hijras) are often left with no choice other than to move out of the family, protecting the latter from social wrath and stigma. In that way, they seek out their own group for inclusion and connectedness. Peer validation has been shown to improve their self-identity and psychological well-being. A queer-affirmative, cognitive behavioral therapy-based group intervention has been shown to reduce distress and social isolation and to enhance knowledge and skills related to self-sustenance in the transgender community. 19
This study had a small sample size, owing to the difficulty of accessing and interviewing people from transgender communities. They are often apprehensive about sharing their inner feelings, or consider such interviews another "social shaming" attempt. However, rich data from in-depth interviews can be rigorous irrespective of sample size in qualitative studies. 20 The inherent limitations of a qualitative study, such as subjectivity bias and limited generalizability, should also be considered. However, triangulation was used to improve the rigor of the data.
Conclusion
In India, hijras represent a unique subculture existing alongside the heterosexual family. One of the difficulties in writing about transgender persons is the disjunction that exists between the cultural definition of the "hijra" role and the variety of individually experienced social roles, gender identities, sexual orientations, and life histories of people who become the "third sex." Sexual identity is a complex and heterogeneous concept; the experience and expression of each unique group shape its attitudes, beliefs, and practices. 21 This study is thus a unique attempt to understand the subjective experience of 4 such individuals born with intermediate primary sex characteristics and leading the life of a hijra. All the participants retold their experiences of living as the "third gender." The shared understanding that has emerged reflects that hijras in this part of the world have not been able to come to terms with their inimitable sexual identity because of the subversive pressures of the heteronormative society; their experience of social discrimination starts within the family, forcing them to leave the family of origin; and they try to reconcile their needs for love and acceptance by partially denying the role of their family in marginalizing them and by giving in to abusive and demanding relationships. But they also reconstruct their sense of self by developing committed relationships with their community members and by trying to take care of children who are vulnerable. They mostly have an inclination toward a feminine identity. However, the cascade of rejection and discrimination from early life has a ripple effect on the lives of these people, wherein they are less educated, less privileged, less empowered, polarized, and "othered." This study could have been enriched if the ambivalence or disjunction in the biological, social, and cultural constructs of a hijra, and the associated agonies that plague such individuals, had been explored. Qualitative studies using grounded theory and ethnographic approaches will be necessary to further explore their societal roles, relationships, and unmet needs, and hence to shape awareness, understanding, and administrative decisions regarding their care and safety.
"year": 2020,
"sha1": "30068267a39f3c65df3da2f35c3f21e10571cb34",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/2631831820936924",
"oa_status": "HYBRID",
"pdf_src": "Sage",
"pdf_hash": "485dc02c5489382e1ab572987468193b959ff39c",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
Immunometabolic signatures predict recovery from thyrotoxic myopathy in patients with Graves' disease
Abstract Background Thyroid hormone excess induces protein energy wasting, which in turn promotes muscle weakness and bone loss in patients with Graves' disease. Although most studies have confirmed a relationship between thyrotoxicosis and muscle dysfunction, few have measured changes in plasma metabolites and immune cells during the development and recovery from thyrotoxic myopathy. The aim of this study was to identify specific plasma metabolites and T‐cell subsets that predict thyrotoxic myopathy recovery in patients with Graves' disease. Methods One hundred patients (mean age, 40.0 ± 14.2 years; 67.0% female), with newly diagnosed or relapsed Graves' disease were enrolled at the start of methimazole treatment. Handgrip strength and Five Times Sit to Stand Test performance time were measured at Weeks 0, 12, and 24. In an additional 35 patients (mean age, 38.9 ± 13.5 years; 65.7% female), plasma metabolites and immunophenotypes of peripheral blood were evaluated at Weeks 0 and 12, and the results of a short physical performance battery assessment were recorded at the same time. Results In both patient groups, methimazole‐induced euthyroidism was associated with improved handgrip strength and lower limb muscle function at 12 weeks. Elevated plasma metabolites including acylcarnitines were restored to normal levels at Week 12 regardless of gender, body mass index, or age (P trend <0.01). Senescent CD8+CD28−CD57+ T‐cell levels in peripheral blood were positively correlated with acylcarnitine levels (P < 0.05) and decreased during thyrotoxicosis recovery (P < 0.05). High levels of senescent CD8+ T cells at Week 0 were significantly associated with small increases in handgrip strength after 12 weeks of methimazole treatment (P < 0.05), but not statistically associated with Five Times Sit to Stand Test performance. Conclusions Restoring euthyroidism in Graves' disease patients was associated with improved skeletal muscle function and performance, while thyroid hormone‐associated changes in plasma acylcarnitines levels correlated with muscle dysfunction recovery. T‐cell senescence‐related systemic inflammation correlated with plasma acylcarnitine levels and was also associated with small increases in handgrip strength.
Introduction
Thyroid hormones (THs) participate in contractile function, myogenesis, bioenergetic metabolism, and regeneration of skeletal muscle. [1][2][3][4] The cytoarchitecture and metabolic features of skeletal muscle are also regulated by circulating TH levels or local triiodothyronine (T3). Additionally, enhanced mitochondrial biogenesis by T3 treatment activates oxidative pathways, leading to increased maximal oxygen consumption in skeletal muscle. 5 While intracellular T3 is mainly involved in the development of skeletal muscle and myogenic differentiation, 6 a decrease in TH signalling is linked with reduced myogenesis and fewer type II fibres in skeletal muscle during ageing. 7 In line with this, an excess of circulating THs, called thyrotoxicosis, induces loss of muscle mass, strength, and balance in humans. 8 Thyrotoxic myopathy, involving mainly proximal muscles, is an important clinical feature of patients with Graves' disease. 9 Hyperthyroidism increases Ca2+-activated myosin ATPase activity in the soleus muscle and produces atrophy of muscle fibres and conversion of type I (slow-twitch) to type II (fast-twitch) fibres in rats. 10 Increased protein catabolism caused by high levels of circulating TH is a critical factor in muscle dysfunction in patients with Graves' disease, 11 but the pathogenesis of thyrotoxic myopathy remains to be elucidated.
Muscle atrophy and weakness in various disorders are attributed to the catabolic effect of pro-inflammatory cytokines during inflammatory responses. Graves' disease, as an autoimmune disorder, induces not only local (e.g. eye) but also systemic (e.g. blood and muscle) inflammation. Pro-inflammatory cytokines produced by effector T cells play a critical role in mediating tissue injury, 12 which may be associated with loss of muscle mass and strength in patients with Graves' disease. In addition, systemic inflammation results in plasma metabolite changes, which may be derived from muscle wasting. 13 However, the detailed immunophenotypic features of peripheral blood T cells and their relationship with the thyrotoxic myopathy of Graves' disease have not been determined.
In the present study, we investigated whether specific plasma metabolites and different subsets of T cells were associated with recovery of muscle strength and function in patients with Graves' disease. Furthermore, we studied whether restoration of euthyroidism by treatment with methimazole altered immunophenotypes of peripheral inflammatory cells and plasma metabolites in patients with Graves' disease.
Study population
We initially recruited Koreans with newly developed or relapsed Graves' disease who visited the Department of Internal Medicine, Chungnam National University Hospital in Daejeon between January 2019 and December 2019. To determine the timing of muscle strength recovery in patients with Graves' disease, we evaluated handgrip strength and the time taken to perform the Five Times Sit to Stand Test (5XSST) in participants treated with methimazole at Weeks 0, 12, and 24. Optimal sample size was determined for a repeated-measures analysis of variance with an effect size of 0.25, an α error probability of 0.01, and 95% power. The total sample size calculated was 80. Assuming a 20% dropout rate, 100 patients were required (calculated by G*Power 3.1.9.4).
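As a quick check of the enrollment figure, the dropout-inflation arithmetic described above can be reproduced in a few lines. This is a minimal sketch in Python; the base sample size of 80 is taken from the G*Power computation reported in the text, and the function name is ours.

```python
import math

def inflate_for_dropout(n_required: int, dropout_rate: float) -> int:
    """Inflate a computed sample size to compensate for anticipated dropout."""
    return math.ceil(n_required / (1.0 - dropout_rate))

# G*Power (repeated-measures ANOVA, effect size 0.25, alpha = 0.01,
# power = 0.95) gave a total sample size of 80; assuming 20% dropout:
print(inflate_for_dropout(80, 0.20))  # -> 100, the number of patients enrolled
```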
Next, patients referred to Chungnam National University Hospital in Daejeon between May 2019 and April 2020 for a diagnostic workup or treatment of newly developed or relapsed Graves' disease were enrolled for a more intensive study. The Consensus Report of the Korean Thyroid Association recommends methimazole as the preferred drug for patients with Graves' disease 14 ; therefore, all enrolled patients were maintained on methimazole or carbimazole. No patients were treated with radioiodine or thyroidectomy as an initial therapy. Beta-blockers were used for less than 1 week to reduce tachycardia or tremor. Patients with a thyroid storm were not included in the study. Measurement of muscle function and isolation of peripheral blood mononuclear cells (PBMCs) and plasma was conducted in all enrolled patients at an initial visit (Week 0) and a follow-up visit, which was scheduled 12 weeks later.
Inclusion criteria were as follows: (i) newly diagnosed or relapsed Graves' disease with thyrotropin (TSH) levels below the lower limit of the reference interval (0.25-4.0 μU/mL) and/or free thyroxine (free T4) above the upper limit of normal (ULN, 1.9 ng/dL), as well as plasma levels of TSH-binding inhibitor immunoglobulin (TBII) above the ULN (>15%), and (ii) age ≥18 years. Patients with any of the following conditions were excluded from the study: previous coronary heart disease, malignant hypertension, severe pulmonary disease, acute or chronic kidney disease (estimated glomerular filtration rate <45 mL/min/1.73 m 2 ), anaemia (haemoglobin <12 g/dL), history of any malignant or chronic inflammatory disease, current liver disease, drug or alcohol abuse, or pregnancy. Only patients without a previous history of musculoskeletal or joint disease were considered for inclusion in the study.
This study was reviewed and approved by the Institutional Review Board of Chungnam National University Hospital (CNUH 2019-02-012), according to the standards of the Declaration of Helsinki. Each participant gave informed consent, documented by the Department of Internal Medicine of Chungnam National University Hospital in Korea.
Handgrip strength, Five Times Sit to Stand Test, and short physical performance battery measurement
Experienced nurses were charged with collecting participant information, such as demographic characteristics and surgical or medical histories, through detailed interviews and reviews of medical records. Handgrip strength was measured using an electronic hand dynamometer (Lavisen, Namyangju, Korea). Grip strength of the dominant hand was measured once in a sitting posture with a 0° shoulder angle, 90° elbow angle, and a neutral wrist angle. In the 5XSST, the participants were placed in a chair with their arms crossed over their chest and their feet flat on the floor. The participants were asked to rise and sit five times in a row as fast as possible without using their hands. The time taken to perform the test was recorded for analysis. The short physical performance battery (SPPB) consists of measurements of gait speed, standing balance, and repeated chair stands. 15 In the standing balance test, participants were instructed to take a tandem stance, semi-tandem stance, and side-by-side stance, with each stance held for up to 10 s. Scores were recorded, ranging from 0 to 12 points, with a higher SPPB score indicating better lower extremity function.
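To make the SPPB scoring concrete, the sketch below (our illustration, not study code) sums the three component scores into the 0-12 total used in the study; the mapping from raw performance to each 0-4 component score follows the published SPPB scoring tables, which are not reproduced here.

```python
def sppb_total(balance: int, gait_speed: int, chair_stand: int) -> int:
    """Sum the three SPPB component scores (each 0-4) into the 0-12 total;
    a higher total indicates better lower-extremity function."""
    for score in (balance, gait_speed, chair_stand):
        if not 0 <= score <= 4:
            raise ValueError("each SPPB component is scored from 0 to 4")
    return balance + gait_speed + chair_stand

print(sppb_total(4, 3, 4))  # -> 11 out of a possible 12
```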
Sample preparation for plasma metabolomics
Metabolites in human plasma were prepared as described previously. 16 For whole metabolite extraction, 10 μL of plasma was added to 240 μL of water and 250 μL of ice-cold methanol, before being vortexed and centrifuged (14 000 g, 4°C, 15 min). The supernatant was collected in a 1.5 mL Eppendorf microtube, processed for the extraction of various types of compounds listed in the succeeding text, and used for liquid chromatography-mass spectrometry (LC-MS) measurement. For water-soluble metabolites, including amino acids and nucleotides, 25 μL of the supernatant was diluted three-fold with 0.1% formic acid. For acylcarnitines, 30 μL of the supernatant was added to 270 μL ice-cold methanol, vortexed, sonicated, and centrifuged (14 000 g, 4°C, 15 min), and the supernatant was collected. For free fatty acids, 25 μL of the supernatant was diluted two-fold with ice-cold methanol. For bile acids, 50 μL of the supernatant was evaporated and dissolved with 25 μL of 20% methanol. For phospholipids, 5 μL of the supernatant was diluted 200-fold with 0.1% formic acid in 20% acetonitrile.
Metabolomics data processing and analysis
Data processing was carried out using the LabSolutions LC-MS software program (Shimadzu), statistical analysis was performed using GraphPad Prism 8 software, and volcano plots were visualized using the EnhancedVolcano package (Ver. 1.6.0) in R.
Isolation of peripheral blood mononuclear cells
Peripheral blood samples were obtained from all study participants, transferred aseptically into 50 mL polystyrene centrifuge tubes containing ethylenediaminetetraacetic acid (Sigma-Aldrich) as an anticoagulant, and gently mixed. Serum samples were prepared by centrifugation at 2000 g for 10 min at 4°C, and then PBMCs were isolated by centrifugation on a Ficoll-Paque density gradient (GE Healthcare Life Sciences, Buckinghamshire, UK) at room temperature. After centrifugation, the layer of PBMCs was collected and washed in Dulbecco's phosphate-buffered saline. The isolated and washed PBMCs were resuspended in 2 mL Roswell Park Memorial Institute 1640 medium (Welgene, Daegu, Korea), and trypan blue dye exclusion testing was used to determine the number of viable cells in the suspension. Samples were stained for flow cytometry analyses using direct fluorescence-conjugated monoclonal antibodies.
Biochemical measurements
Peripheral blood was collected into heparin-coated tubes. Plasma levels of TSH, T3, free T4, and TBII were measured by standard methods on an automated analyser (Cobas 6000; Roche Diagnostics GmbH, Mannheim, Germany). Plasma lipid profiles, including low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, total cholesterol, triglycerides, and creatine kinase, were evaluated using a blood chemistry analyser (Hitachi 47; Hitachi, Tokyo, Japan). Aspartate transaminase and alanine transaminase activities were measured using the International Federation of Clinical Chemistry Ultra Violet method without pyridoxal phosphate (TBA-2000FR; Toshiba, Tokyo, Japan).
Statistical analysis
All continuous variables are reported as mean ± standard error of the mean, except where otherwise stated. Statistical analyses were performed using GraphPad Prism 8 software (GraphPad, San Diego, CA, USA). All data were analysed by a one-way analysis of variance followed by Tukey's post hoc test or a two-tailed Student's t-test. Statistical correlations were evaluated using Spearman's correlation coefficient. P values <0.05 were considered statistically significant.
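The analyses named above were run in GraphPad Prism; for readers who prefer open tooling, the following sketch shows equivalent calls in scipy/statsmodels on hypothetical data (all values are simulated and the variable names are ours).

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)

# Hypothetical handgrip values (kg) for 35 patients at Weeks 0 and 12.
grip_w0 = rng.normal(25.0, 5.0, 35)
grip_w12 = grip_w0 + rng.normal(3.0, 2.0, 35)

# Two-tailed t-test between the two visits (paired, as the same
# patients are measured at both time points).
t_stat, p_val = stats.ttest_rel(grip_w0, grip_w12)

# Spearman's correlation, e.g. a baseline metabolite level vs. strength gain.
metabolite = rng.normal(1.0, 0.3, 35)  # hypothetical acylcarnitine level
rho, p_rho = stats.spearmanr(metabolite, grip_w12 - grip_w0)

# One-way ANOVA across three visits followed by Tukey's post hoc test.
values = np.concatenate([grip_w0, grip_w12, grip_w12 + rng.normal(0, 1, 35)])
labels = ["w0"] * 35 + ["w12"] * 35 + ["w24"] * 35
f_stat, p_anova = stats.f_oneway(values[:35], values[35:70], values[70:])
print(pairwise_tukeyhsd(values, labels))
```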
Recovery of skeletal muscle strength and function in 100 patients with Graves' disease treated with methimazole
It is well known that patients with Graves' disease exhibit lower muscle strength than euthyroid controls. 8 However, it has not been established how quickly treatment with antithyroid drugs restores muscle function in patients with Graves' disease. To determine when muscle strength recovers in patients with Graves' disease, 100 patients were followed for 24 weeks. At Weeks 0, 12, and 24, handgrip strength and 5XSST performance were measured. Demographics and clinical characteristics of the enrolled participants at Weeks 0, 12, and 24 are summarized in Supporting Information, Table S1. At Week 12, TH levels were stabilized on methimazole treatment (Table S1) and most patients with Graves' disease regained handgrip power and lower extremity strength (Figure 1A and 1B). We also found that there was no further improvement in skeletal muscle function at Week 24 (Figure 1A and 1B and Table S1).
Changes in biochemical parameters in 35 patients with Graves' disease treated with methimazole
To further investigate the relationship between serum biochemical characteristics and TH excess, 35 patients (12 men, 34.3%; 23 women, 65.7%) with newly diagnosed or relapsed Graves' disease were recruited. The demographics and baseline characteristics of the participants at the initial visit are summarized in Table S2. The mean age of the study population was 38.9 ± 13.5 years, and the mean baseline body mass index of the participants was 20.5 ± 2.4 kg/m2. All enrolled patients showed high levels of serum free T4 and T3 as well as low levels of serum TSH prior to methimazole treatment (Week 0). Treatment with methimazole stabilized levels of serum free T4 and T3 in most patients (Figure S1A), but did not induce significant changes in serum TSH concentrations between Weeks 0 and 12 (Figure S1A). At the follow-up visit, markers of liver injury were significantly decreased (Figure S1B), whereas lipid profiles, including total, low-density lipoprotein, and high-density lipoprotein cholesterol and triglycerides, were remarkably increased compared with Week 0 (Figure S1C). To assess the effect of thyrotoxicosis on bone turnover markers, we measured serum levels of alkaline phosphatase and C-telopeptide at the initial and follow-up visits; serum levels of C-telopeptide were significantly lower after treatment with methimazole (Figure S1D). These findings suggest that 12 weeks of methimazole treatment results in biochemical changes in patients with Graves' disease. To exclude the possibility of hypokalemic periodic paralysis in the study participants, we measured serum levels of potassium in all participants at the initial visit as well as at 12 weeks after the initiation of methimazole treatment. Normal potassium levels were confirmed in all participants at the initial and follow-up visits, and there were no significant differences in potassium levels between the two visits (Figure S1E). Serum levels of creatine kinase, a marker of muscle damage, did not change significantly between the initial and follow-up visits, although three patients with Graves' disease showed markedly higher levels of serum creatine kinase at the follow-up visit than at the initial visit (Figure S1F). Fasting plasma glucose levels fell significantly after 12 weeks of antithyroid therapy (Figure S1G).
Recovery of skeletal muscle strength and function in 35 patients with Graves' disease treated with methimazole
Next, we investigated the recovery of muscle function in patients with Graves' disease using measurements of handgrip strength, the 5XSST, and the SPPB at the initial visit and at Week 12. As shown in Table S3, handgrip strength was remarkably increased in the patients with Graves' disease treated with methimazole for 12 weeks. Methimazole-induced euthyroidism resulted in a significant improvement in 5XSST performance and SPPB score between the initial and follow-up visits (Table S3). To determine associations between gender and recovery of physical performance in patients with Graves' disease, we divided the participants into male and female groups. We observed a significant improvement in grip strength and 5XSST performance in both men and women, although there were no significant differences in SPPB score in either gender (Figure 2A and 2B and Tables S4 and S5). Moreover, we observed improvements in grip strength and 5XSST performance regardless of body mass index (low, 18.8 ± 0.89 kg/m2 vs. high, 22.4 ± 1.98 kg/m2) or age (young, 28.5 ± 5.26 years vs. old, 51.4 ± 8.87 years) (Tables S6-S9). Taken together, these results suggest that restoring euthyroidism with methimazole treatment improves skeletal muscle function and performance in patients with Graves' disease at 12 weeks.
Plasma levels of acylcarnitines are associated with muscle dysfunction in 35 patients with Graves' disease
To identify markers of thyrotoxic myopathy, we measured plasma metabolites including amino acids and water-soluble metabolites, free fatty acids, acylcarnitines, bile acids, and phospholipids in patients with Graves' disease at the initial visit (Week 0) and at Week 12. Several metabolites changed after methimazole treatment regardless of gender (Tables S10-S12), with changes in acylcarnitine species being the most prominent (Figure 3A and 3B). To find plasma metabolites associated with muscle wasting in patients with Graves' disease, Spearman's correlation coefficients were calculated (Figure 4). Many plasma metabolites correlated with recovery of motor function: aspartic acid and some bile acids, such as hyodeoxycholic acid and chenodeoxycholic acid, correlated with the recovery of muscle strength (Figure 4A and 4B), while many acylcarnitine species were associated with recovery of muscle endurance (Figure 4C and 4D). Taken together, these results suggest that TH-associated changes in plasma metabolite levels are associated with the recovery of muscle function in patients with Graves' disease.
Peripheral blood mononuclear cell immunophenotype changes in 35 patients with Graves' disease treated with methimazole
Loss of muscle mass and function can be attributed to the catabolic effect of pro-inflammatory cytokines during inflammatory responses. In addition, increased levels of TH lead to an amplification of the pro-inflammatory response of many kinds of immune cells. 17 Therefore, to observe systemic inflammation status in patients with Graves' disease, we investigated the immunophenotype of PBMCs at Weeks 0 and 12. Levels of PBMCs and lymphocytes were not significantly different between visits (Figure S2A and S2B). As expected, high monocyte populations were normalized by treatment with methimazole at Week 12 (Figure S2C), whereas neutrophil levels were remarkably increased by recovery of euthyroidism (Figure S2D). Low levels of haemoglobin and platelets caused by TH excess were also restored at 12 weeks (Figure S2E and S2F). Overall, complete blood cell counts were improved in patients with Graves' disease at 12 weeks of methimazole treatment.
Previously, it was reported that among T-cell subsets, the CD28−CD57+ senescent population of CD4+ and CD8+ T cells is significantly larger in drug-naïve patients with Graves' disease. 6 T-cell senescence is also associated with a systemic inflammatory response, which seems to be associated with age-related sarcopenia. 18 Thus, we assessed the frequency of CD57+ and/or CD28− T cells among the CD4+ and CD8+ T cells in the PBMCs from the study participants at the initial and follow-up visits. Surface expression of CD4 and CD8 was then determined in this gated population (Figure S3). Although the population of senescent CD4+ T cells was not different between the initial and follow-up visits (Figure 5A), senescent CD8+ T cells were significantly decreased at Week 12 (Figure 5B). We also detected significant decreases in the production of IFN-γ in senescent CD8+ T cells at Week 12 (Figures 5C and S4). Furthermore, recovery of euthyroidism attenuated IFN-γ and TNF-α production in memory CD8+ T cells (Figures 5D, 5E, and S4).
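For illustration, the frequency of the senescent phenotype within gated CD8+ events can be computed as below. This is a schematic sketch in which boolean marker calls stand in for fluorescence-intensity gating; real cytometry analysis would set the gates on intensity data, and the function name and example values are ours.

```python
import numpy as np

def senescent_cd8_fraction(cd8, cd28, cd57):
    """Fraction of CD8+ T cells carrying the senescent CD28-CD57+ phenotype,
    given per-event boolean marker calls from already-gated cytometry data."""
    cd8 = np.asarray(cd8, dtype=bool)
    senescent = cd8 & ~np.asarray(cd28, dtype=bool) & np.asarray(cd57, dtype=bool)
    return senescent.sum() / cd8.sum()

# Hypothetical marker calls for five events (cells):
print(senescent_cd8_fraction(cd8=[1, 1, 1, 0, 1],
                             cd28=[0, 1, 0, 0, 1],
                             cd57=[1, 0, 1, 1, 0]))  # -> 0.5
```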
Based on the high levels of plasma acylcarnitines in drug-naïve patients with Graves' disease, we also investigated the relationship between acylcarnitines and T-cell senescence. Senescent CD8+ T cells exhibited a significant, positive correlation with plasma levels of acylcarnitines at the initial visit (Figure 5F and 5G). High frequencies of senescent CD8+ T cells at Week 0 were significantly associated with smaller increases in handgrip strength in patients with Graves' disease at 12 weeks, but were not statistically associated with 5XSST performance (Figure 5H). This finding suggests that CD8+ T-cell senescence may predict recovery of muscle strength in patients with Graves' disease.
Discussion
This study demonstrates that reduced muscle strength in Graves' disease can be restored by methimazole-induced euthyroidism after 12 weeks of treatment. Analysis of plasma metabolites in patients with Graves' disease revealed that elevated acylcarnitine levels were associated with thyrotoxic myopathy. In addition, we found that monocyte levels and senescent CD8+ T-cell levels in peripheral blood decreased upon restoration of euthyroidism. We also showed that production of pro-inflammatory cytokines in senescent CD8+ T cells was positively correlated with plasma levels of acylcarnitines. Furthermore, this study showed that changes in plasma metabolites and high levels of senescent CD8+ T cells measured at the initial visit were associated with smaller increases in handgrip strength in patients with Graves' disease at 12 weeks of methimazole treatment (Figure 5). It is well documented that muscle strength and endurance are decreased in patients with Graves' disease compared with euthyroid controls. 8 Previous investigations showed that thigh strength and cross-sectional area are reduced in patients with overt or subclinical hyperthyroidism at baseline compared with controls and are restored following treatment. 19,20 While euthyroidism mediates improvement of muscle weakness in Graves' disease, treatment with beta-blockers also contributes to attenuation of catecholamine-induced muscle wasting. 21 A recent large, population-based, age-matched and sex-matched case-control study (Graves' disease-euthyroid) suggests that postural stability and muscle strength are impaired by excess TH, which may increase the risk of falls in patients with Graves' disease. 8 Moreover, subclinical hyperthyroidism induced by treatment with levothyroxine in differentiated thyroid carcinoma deteriorates upper limb muscle function and health-related quality of life. 22 As shown in Figure 1, the handgrip power and lower extremity strength of most patients with Graves' disease recovered by Week 12, although there was no further improvement in skeletal muscle function by Week 24. This finding is consistent with a previous study of a small Swedish cohort 23; that study showed some recovery of skeletal muscle function and visceral adipose tissue during the initial 3-month period of recovery from hyperthyroidism, while near-complete recovery was observed at 12 months after achieving a euthyroid state. This suggests that early recovery of skeletal muscle function in Asian patients with Graves' disease may occur more quickly (within the first 12 weeks after starting methimazole treatment) than in European patients. However, multinational clinical trials with long-term follow-up are required to determine ethnic differences with respect to recovery of skeletal muscle function after treatment of hyperthyroidism.
Although high, 'supraphysiological' levels of TH contribute to deterioration of muscle strength and physical performance in Graves' disease patients, the association between systemic inflammation and thyrotoxicosis-mediated muscle dysfunction remains to be determined. Here, we found that pro-inflammatory cytokine-producing senescent CD8+ T cells were significantly increased in Graves' disease patients with high acylcarnitine levels. These findings suggest that systemic inflammation is a critical factor contributing to the elevation of plasma acylcarnitine levels, which may derive from muscle wasting caused by TH excess.
Pro-inflammatory cytokines induced by Graves' disease can cause the progression of intrathyroidal autoimmune processes, 24 orbital inflammation, 25 and systemic inflammatory responses by changing the immune cell subsets of PBMCs. 26 T-cell senescence, which is associated with systemic inflammation, affects the development and progression of autoimmune and metabolic diseases. [27][28][29] However, it has not been determined whether T-cell senescence is related to the development and recovery of thyrotoxic myopathy in patients with Graves' disease. In this study, we found that high senescent CD8+ T-cell levels recorded at the initial visit were associated with smaller increases in handgrip strength. This result indicates that although the effect of TH excess on immunosenescence in patients with Graves' disease has not been fully established, CD8+ T-cell senescence may contribute to the progression of systemic inflammation-associated thyrotoxic myopathy. However, given the lack of significance of the 5XSST results, this study was unable to reveal an association between T-cell senescence and muscle endurance. We hypothesized that muscle strength, as measured by grip strength, recovers before muscle endurance, as measured by the 5XSST. However, a longer period of study including muscle tissue analysis will be required to prove this hypothesis.
We found that free carnitine levels were elevated in serum from patients with Graves' disease (Figure 2). This result is consistent with that reported by Pietzner et al., who showed that administration of levothyroxine to healthy young men increases free carnitine (as well as acylcarnitines) in association with an increase in blood free T4 levels. 30 Thus, it is plausible that the increase in serum free carnitine in Graves' disease is caused by an increase in free T4. Carnitine plays an important role in fatty acid metabolism and in beta-oxidation pathways in mitochondria; it is also utilized for production of acylcarnitines. By contrast, it acts as an antagonist that inhibits the function of TH by suppressing nuclear translocation of T3. 31 Thus, elevated levels of free carnitine in Graves' disease appear to serve as a source of acylcarnitines required for endurance exercise, while reducing excess TH levels via a feedback mechanism driven by elevated free T4. This may explain why carnitine therapy is effective at reducing the muscle dysfunction associated with hyperthyroidism. [32][33][34] Previous reports of serum metabolome analysis in patients with hyperthyroidism describe changes in acylcarnitine levels. Chng et al. reported that levels of short-chain, middle-chain, and long-chain acylcarnitines in serum from Chinese women with Graves' disease were higher than those in euthyroid women, 35 whereas Al-Majdoub et al. reported that only middle-chain acylcarnitines were increased. 36 Our results are consistent with those of Chng et al. in that all subjects were Asian, and all types of acylcarnitine were elevated in the serum of Graves' disease patients (Figure 2). Interestingly, we found that a relatively large number of middle-chain acylcarnitines (C8, C10, and C12) were among the metabolites that correlated with recovery of lower extremity skeletal muscle function (Figure 3C and 3D), suggesting a substantial role for middle-chain acylcarnitines (at least a more important role than that of short-chain and long-chain acylcarnitines) in the reduced muscle endurance associated with Graves' disease. Supporting this, skeletal muscle is thought to release middle-chain acylcarnitines into the blood during endurance exercise. [37][38][39] In the current study, broad, untargeted, LC-MS-based profiling of plasma metabolites was used to elucidate the changes that occur in biochemical metabolic networks in patients with Graves' disease. The present findings suggest that plasma acylcarnitines are closely associated with biological events related to muscle dysfunction. In fact, a previous study revealed that higher plasma acylcarnitines predict lower levels of objectively measured physical performance in older adults. 40 Although fluxes in acylcarnitines in humans are tissue and context dependent, the liver is a major source of short-chain acylcarnitines, whereas medium-chain acylcarnitines are derived from skeletal muscle during exercise. 39 Moreover, TH excess stimulates heart mitochondrial carnitine translocase activity by facilitating the entry of fatty acids through mitochondrial inner membranes in rats. 41 Thus, further studies are needed to establish fatty acid flux in skeletal muscle as well as the contribution of major organs to plasma acylcarnitines in thyrotoxic myopathy, which may provide insight into the relationship between immunosenescence and lipid metabolism in Graves' disease.
The main strength of this study is that it used serial data acquisition from LC-MS-based plasma metabolomics and FACS-based peripheral blood immunophenotyping, alongside assessments of functional parameters including grip strength, gait speed, 5XSST performance, and SPPB score, to obtain data that may have clinical value for the treatment of Graves' disease patients. However, this study has several limitations. Most importantly, as an observational study, it cannot determine causal relationships between variables. Secondly, our study population was exclusively South Korean, and we cannot be certain that our results are applicable to other populations. Thirdly, various confounding factors could not be considered in multivariate analyses due to the relatively small sample size. Fourthly, the follow-up period of 12 weeks is too short to fully assess euthyroidism-induced recovery of biochemical metabolic networks and the functional properties of immune cells. Extended follow-up may provide additional information on the long-term effects of thyrotoxicosis. Finally, metabolic changes induced by an excess of TH are a major factor driving the development of hyperthyroid myopathy. Although the populations and activation states of circulating immune cells are altered in patients with Graves' disease, little is known about immune cell infiltration into muscle tissues. However, an increase in serum T3 levels, as seen in hyperthyroidism, can induce B-cell activation and plasma cell antibody secretion in the absence of antigens. 42 Interstitial myositis has been found in nine post-mortem cases of Graves' disease. Myocardial degenerative lesions have been reported in thyrotoxicosis, with foci of cell necrosis and mononuclear and polymorphonuclear infiltrates in patients dying of thyrotoxicosis. 43 Because immune cell infiltration of muscle tissue is observed only under extreme conditions, patients with various stages of Graves' disease should be investigated to validate the possibility of thyrotoxicosis-induced inflammatory myopathy. However, such studies come with ethical challenges. Thus, further studies using animal models of thyrotoxicosis are warranted to fully understand the role of immune cells in the development of hyperthyroid myopathy.
In conclusion, this study suggests that CD8+ T-cell senescence is associated with smaller increases in handgrip strength in South Koreans with Graves' disease and that immunosenescence is closely related to TH excess-mediated changes in plasma metabolites, including acylcarnitines (Figure S5). Further large-scale, prospective studies are needed to clarify the mechanism of TH excess-induced increases in plasma acylcarnitines and T-cell senescence, which may define a causal relationship between immunometabolism and muscle function in Graves' disease.
Online supplementary material
Additional supporting information may be found online in the Supporting Information section at the end of the article.
Figure S1. Data are expressed as the mean ± SEM. *P < 0.05, **P < 0.01, and ***P < 0.001, compared with the corresponding controls (Student's t-test).
Figure S2. Comparison of complete blood counts in patients with Graves' disease at Week 0 and Week 12. Measurement of blood leukocytes (A), lymphocytes (B), monocytes (C), neutrophils (D), hemoglobin (E), and platelets (F). Data are expressed as the mean ± SEM. **P < 0.01 and ***P < 0.001, compared with the corresponding controls (Student's t-test).
Figure S3. Gating strategy for analysis of senescent T cells, and naïve and memory T cells, within the peripheral blood CD4+ and CD8+ populations of patients with Graves' disease.
Figure S4. The number of IFN-γ- and TNF-α-producing cells in the population of peripheral blood senescent CD8+ T cells and memory CD8+ T cells of patients with Graves' disease at the initial visit and at the 12-week visit. Data are expressed as the mean ± SEM. ***P < 0.001, compared with the corresponding controls (Student's t-test).
Figure S5. Graphical summary of the study. Measurement of plasma metabolites, including acylcarnitines, and senescent peripheral CD8+ T cells can be used to predict recovery of muscle dysfunction in patients with thyrotoxic myopathy.
Table S1. Clinical and biochemical characteristics of the study population at Weeks 0, 12, and 24.
Table S2. Clinical and biochemical characteristics of the study population at the initial visit.
Table S3. Muscle function measured by handgrip strength, chair stand test, and short physical performance battery.
Table S4. Muscle function measured by handgrip strength, chair stand test, and short physical performance battery in male patients with Graves' disease (n = 12).
Table S5. Muscle function measured by handgrip strength, chair stand test, and short physical performance battery in female patients with Graves' disease (n = 23).
Table S6. Muscle function measured by handgrip strength, chair stand test, and short physical performance battery in patients with lower BMI (n = 18).
Table S7. Muscle function measured by handgrip strength, chair stand test, and short physical performance battery in patients with higher BMI (n = 17).
Table S8. Muscle function measured by handgrip strength, chair stand test, and short physical performance battery in young patients with Graves' disease (n = 19).
Table S9. Muscle function measured by handgrip strength, chair stand test, and short physical performance battery in old patients with Graves' disease (n = 16).
Table S10. Global plasma metabolomics profiling in the study population.
Table S11. Global plasma metabolomics profiling in male patients with Graves' disease.
Table S12. Global plasma metabolomics profiling in female patients with Graves' disease.
"year": 2021,
"sha1": "f14e56d34e2010ca426a974ff886988c2cc910b6",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/jcsm.12889",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e229f42e2e74669e4162bb00fc616dc1ad8c31d7",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The first passage sets of the 2D Gaussian free field: convergence and isomorphisms
In a previous article, we introduced the first passage set (FPS) of constant level $-a$ of the two-dimensional continuum Gaussian free field (GFF) on finitely connected domains. Informally, it is the set of points in the domain that can be connected to the boundary by a path along which the GFF is greater than or equal to $-a$. This description can be taken as a definition of the FPS for the metric graph GFF, and it justifies the analogy with the first hitting time of $-a$ by a one-dimensional Brownian motion. In the current article, we prove that the metric graph FPS converges towards the continuum FPS in the Hausdorff metric. This allows us to show that the FPS of the continuum GFF can be represented as a union of clusters of Brownian excursions and Brownian loops, and to prove that Brownian loop soup clusters admit a non-trivial Minkowski content in the gauge $r\mapsto |\log r|^{1/2}r^2$. We also show that certain natural interfaces of the metric graph GFF converge to SLE$_4$ processes.
Introduction
In this article, we continue the study of the first passage sets (FPS) of the 2D continuum Gaussian free field (GFF), initiated in [ALS17]. Here, we cover different aspects of it: the approximation by metric graphs and the construction as clusters of two-dimensional Brownian loops and excursions.
The continuum (massless) Gaussian free field, known as the bosonic massless free field in Euclidean quantum field theory [Sim74,Gaw96], is a canonical model of a Gaussian field satisfying a spatial Markov property. In dimension d ≥ 2, it is a generalized function, not defined pointwise. In dimension d = 2, it is conformally invariant in law.
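To fix notation for the reader (a standard characterization; normalization conventions vary across the literature, and the one below is only one common choice): the zero-boundary GFF $\Phi$ on a domain $D$ is the centered Gaussian field whose covariance is the Dirichlet Green's function of the Laplacian,

```latex
\mathbb{E}\big[\Phi(x)\Phi(y)\big] = G_D(x,y),
\qquad
G_D(x,y) = \frac{1}{2\pi}\log\frac{1}{|x-y|} + O(1)
\quad \text{as } y \to x ,
```

which in particular exhibits the logarithmic blow-up on the diagonal responsible for $\Phi$ not being defined pointwise in dimension 2.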
A key notion in the study of the GFF is that of local sets [SS13,Wer16,Sep17], along which the GFF admits a Markovian decomposition. For the 2D GFF, important examples are level lines [SS13,She05,Dub09,WW16], flow lines [MS16a,MS16b,MS16c,MS17], and two-valued local sets [ASW17,AS18b]. These are examples of thin local sets, that is to say, they are not "charged" by the GFF and only the induced boundary values matter for the Markovian decomposition.
In [ALS17], we introduced a family of different non-thin local sets: the first passage sets (FPS). Although the 2D continuum GFF is not defined pointwise, one can imagine an FPS of level −a as all the points in D that can be reached from ∂D by a continuous path along which the GFF has values ≥ −a. In some sense, an FPS is analogous to the first passage time of a Brownian motion, an analogy we develop in [ALS17]. Although an FPS has a.s. zero Lebesgue measure, the restriction of the GFF to it is in general non-trivial. It is actually a positive measure, a Minkowski content measure in the gauge $r \mapsto |\log r|^{1/2} r^2$. In this case, the behavior of the GFF on this local set is entirely determined by the geometry of the set itself. Observe that this differs from the one-dimensional case of Brownian first passage bridges.
In this article, we make the above heuristic description of the FPS exact by approximating the continuum GFF by metric graph GFF-s. A metric graph is obtained by taking a discrete electrical network and replacing each edge by a continuous line segment of length proportional to the resistance (inverse of the conductance) of the edge. On the metric graph, one can define a Gaussian free field by interpolating discrete GFF on vertices by conditional independent Brownian bridges inside the edges [Lup16a]. Such a field is pointwise defined, continuous, and still satisfies a domain Markov property, even when cutting the domain inside the edges. For a metric graph GFF, the first passage set of level −a is exactly defined by the heuristic description given in the previous paragraph: it is the set of points on the metric graph that are joined to the boundary by some path, on which the metric graph GFF does not go below the level −a [LW16].
The main result of this paper is Proposition 4.7. It states that when one approximates a continuum domain by a metric graph, the FPS of a metric graph GFF converges in law to the FPS of a continuum GFF for the Hausdorff distance. This result holds for finitely connected domains and for piecewise constant boundary conditions.
In fact, Proposition 4.7 shows that the coupling between the GFF and the FPS converges. The proof relies on the characterization of the FPS in the continuum as the unique local set such that the GFF restricted to it is a positive measure, and outside of it is a conditionally independent GFF with boundary values equal to −a [ALS17]. It is accompanied by a convergence result on the clusters of the metric graph loop soup that contain at least one boundary-to-boundary excursion (Proposition 4.11).
Together, these convergence results have numerous interesting implications, whose study takes up most of this paper. Let us first mention a family of convergence results: certain natural interfaces in the metric graph GFFs on a 2D lattice converge to level lines of the continuum GFF (Proposition 5.12). These results are reminiscent of Schramm-Sheffield's convergence of the zero level line of the 2D discrete GFF to SLE_4 [SS09,SS13].
Let us remark that Proposition 5.12 does not cover the results of [SS13], as the interfaces we deal with do not appear at the level of the discrete GFF. Yet the discrete interfaces we consider are just as natural, and the proofs of their convergence are considerably simpler. In particular, we show that if we consider metric graph GFFs on a lattice approximation of D, with boundary conditions −λ on the left half-circle and λ on the right half-circle, then the left boundary of the FPS of level −λ converges to the Schramm-Sheffield level line, and thus to an SLE_4 curve, with respect to the Hausdorff distance (Corollary 5.13).
Several other central consequences of the FPS convergence have to do with isomorphism theorems. In general, the isomorphism theorems relate the square of a GFF (discrete, or continuum in dimension at most 3) to occupation times of Markovian trajectories: Markov jump processes in the discrete case, Brownian motions in the continuum. Originally formulated by Dynkin [Dyn83,Dyn84a,Dyn84b], they exist in multiple versions [Eis95, EKM + 00, Szn12a, LJ07, LJ11]; see also [MR06,Szn12b] for reviews. For instance, in Le Jan's isomorphism [LJ07,LJ11], the whole square of a discrete GFF is given by the occupation field of a Markov jump process loop-soup. The introduction of metric graphs as in [Lup16a] provides "polarized" versions of the isomorphism theorems, where one has the additional property that the GFF has constant sign on each Markovian trajectory. More precisely, one considers a metric graph loop-soup and an independent Poisson point process of boundary-to-boundary metric graph excursions. Among all the clusters formed by these trajectories, one takes those that contain at least one excursion, that is to say, those connected to the boundary. Then, the closed union of such clusters is distributed as a metric graph FPS (Proposition 2.5).
As a consequence of the convergence results, this representation of the FPS transfers to the continuum. In other words, the continuum FPS can be represented as a union of clusters of two-dimensional Brownian loops (out of a critical Brownian loop-soup as in [LW04], of central charge c = 1) and Brownian boundary-to-boundary excursions (Proposition 5.3). This description can be viewed as a non-perturbative version of Symanzik's loop expansion in Euclidean QFT [Sym66,Sym69] (see also [BFS82]).
In Proposition 5.5, we combine our description of the FPS by loops and excursions with the renormalized Le Jan's isomorphism [LJ10], formulated in terms of the renormalized (Wick) square of the GFF and the renormalized centered occupation field of loops and excursions. In this way, we get the square and the interfaces of the GFF in the same picture. In the simply-connected case with zero boundary conditions, one can further require that these interfaces correspond to the Miller-Sheffield coupling of CLE_4 and the GFF [MS11], [ASW17]. This implies that, conditionally on its outer boundary, the law of a Brownian loop cluster of central charge c = 1 is that of a first passage set of level −2λ (Corollary 5.4), extending the results of [QW15].
A natural question which arose in view of this new isomorphism was how to take the "square root" of the Wick square of the continuum GFF in order to access the value of the GFF on the FPS. This was solved in [ALS17], going through Liouville quantum gravity (Gaussian multiplicative chaos). The "square root" turned out to be a Minkowski content measure of the FPS in the gauge r → |log r|^{1/2} r². Via the isomorphism, this also gives the right gauge for measuring the size of clusters in a critical Brownian loop-soup (c = 1). For subcritical Brownian loop-soups (c < 1), the gauge is still unknown.
We draw additional consequences from the isomorphism theorems in Section 5.1: we show local finiteness of the FPS, prove that its a.s. Hausdorff dimension is 2, and show that it satisfies a Harris-FKG inequality. In Section 5.2, we also study more general families of level lines, and show for example that the multiple commuting SLE_4 [Dub07] are envelopes of Brownian loops and excursions (Corollary 5.10 and Remark 5.11). Previously, similar results were known only for single SLE_κ(ρ) processes [WW13] and the conformal loop ensembles CLE_κ (loop-soup construction [SW12]). Finally, in Corollary 5.15, we show how to construct explicit couplings of Gaussian free fields with different boundary conditions such that some of their level lines coincide with positive probability.
In a follow-up paper [ALS18], we will use the techniques developed here to define an excursion decomposition of the GFF.
The rest of this paper is structured as follows.
In Section 2, we recall the construction of the GFF on the metric graph and the related isomorphisms. We also recall the definition of the first passage set on the metric graph and its construction out of metric graph loops and excursions.
Section 3 is devoted to preliminaries on the continuum GFF, its first passage sets, and Le Jan's isomorphism representing the Wick square of the GFF as the centered occupation field of a Brownian loop-soup. In particular, we extend Le Jan's isomorphism to the GFF with positive variable boundary conditions by introducing boundary-to-boundary excursions.
In Section 4, we first introduce the notions of convergence of domains, fields, compact sets, and trajectories that we use. Then, we show the convergence of the metric graph FPS to the continuum FPS, and the convergence of clusters of metric graph loops and excursions to clusters of 2D Brownian loops and excursions.
Finally, in Section 5 we derive several consequences of the convergence results, including the identification of the continuum FPS with clusters of Brownian loops and excursions in Section 5.1 and the convergence of certain natural FPS interfaces to SLE_4 curves in Section 5.2.
Preliminaries on the metric graph
In this section, we first give the definition of the metric graph and define the GFF on top of it: basically, it corresponds to taking a discrete GFF on the vertices and extending it using conditionally independent Brownian bridges, of length equal to the resistance, on all edges. Next, we review the measures on loops and excursions on the metric graph, and the isomorphism theorems. In Section 2.4, we define the first passage set (FPS) on the metric graph, introduced in [LW16], and bring out its representation using Brownian loops and excursions.
The results in this section are either already in the literature or are slight extensions of existing results. For example, we extend the isomorphism theorems on the metric graph to non-constant boundary conditions.

2.1. The Gaussian free field on metric graphs. We start from a finite connected undirected graph G = (V, E) with no multiple edges or self-loops. We interpret it as an electrical network by equipping each edge e = {x, y} ∈ E with a conductance C(e) = C(x, y) > 0. If x, y ∈ V, x ∼ y denotes that x and y are connected by an edge. A special subset of vertices ∂G ⊂ V will be considered as the boundary of the network. We assume that ∂G and V\∂G are non-empty. For x ∈ V\∂G, we denote C_tot(x) := Σ_{y∼x} C(x, y). Let ∆_G be the discrete Laplacian,
(∆_G f)(x) = Σ_{y∼x} C(x, y)(f(y) − f(x)),
and let E_G be the Dirichlet energy,
E_G(f, f) = (1/2) Σ_{x∼y} C(x, y)(f(y) − f(x))².
Let φ be the discrete Gaussian free field (GFF) on G, associated to the Dirichlet energy E_G, with boundary condition 0. That is to say, if we define the Green's function G_G as the inverse of −∆_G with 0 boundary conditions on ∂G, then φ is the unique centred Gaussian process such that E[φ(x)φ(y)] = G_G(x, y) for all x, y ∈ V. We will sometimes be interested in a GFF with non-zero boundary conditions. For that, we call u : V → R a boundary condition if it is a harmonic function in V\∂G; when the context is clear, we identify it with its restriction to ∂G. Note that φ + u is then the GFF with boundary condition u: its expectation is u and its covariance is given by the Green's function.
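Equivalently (a standard reformulation, not taken verbatim from this text), the law of φ can be written as a Gibbs measure with the Dirichlet energy as Hamiltonian:
\[
\mathbb{P}(\varphi\in df)\;=\;\frac{1}{Z}\,\exp\Big(-\tfrac12\,\mathcal{E}_G(f,f)\Big)\prod_{x\in V\setminus\partial G} df(x),
\qquad f\equiv 0 \text{ on } \partial G,
\]
where Z is a normalizing constant; since (1/2)E_G(f, f) = (1/2)⟨f, (−∆_G)f⟩ for f vanishing on ∂G, this identifies the covariance as G_G = (−∆_G)^{−1}, consistently with the definition above.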
Given an electrical network G, we can associate to it a metric graph, also called cable graph or cable system, denoted G . Topologically, it is a simplicial complex of degree 1, where each edge is replaced by a continuous line segment. We also endow each such segment with a metric such that its length is equal to the resistance C(x, y) −1 , x and y being the endpoints. One should think of it as replacing a "discrete" resistor by a "continuous" electrical wire, where the resistance is proportional to the length.
Given a discrete GFF φ with boundary condition 0, we interpolate it to a function on the metric graph by adding on each edge-line a conditionally independent standard Brownian bridge. If the line joins the vertices x and y, the end values of the bridge are φ(x) and φ(y), and its length is C(x, y)^{−1}. By doing so, we get a continuous function φ̃ on the metric graph (Figure 1). This is precisely the metric graph GFF with 0 boundary conditions. Consider the linear interpolation of u inside the edges, still denoted by u. Then φ̃ + u is the metric graph GFF with boundary conditions u. The restriction of φ̃ + u to the vertices is the discrete GFF φ + u.
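Concretely (a sketch in our own notation): on an edge e = {x, y} of length r = C(x, y)^{−1}, parametrized by arc length s ∈ [0, r], the interpolated field can be written as
\[
\tilde\varphi(s)\;=\;\Big(1-\frac{s}{r}\Big)\varphi(x)+\frac{s}{r}\,\varphi(y)+W_s,
\]
where W is a standard Brownian bridge of duration r from 0 to 0, independent across edges and independent of φ. In particular, conditionally on the values at the vertices, Var(φ̃(s)) = s(r − s)/r, which vanishes at both endpoints, so the interpolation is indeed continuous at the vertices.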
The metric graph GFF satisfies the strong Markov property on the metric graph. More precisely, assume that A is a random compact subset of the metric graph. We say that A is optional for φ̃ if, for every deterministic open subset O of the metric graph, the event {A ⊆ O} is measurable with respect to the restriction of φ̃ to O. For simplicity, we will also assume that a.s. A has finitely many connected components. Then the complement of A has finitely many connected components too, and the closure of each connected component is a metric graph, even if an edge is split among several connected components or partially covered by A.
Proposition 2.1 (Strong Markov property, [Lup16a]). Let A be a random compact subset of the metric graph, with finitely many connected components, optional for the metric graph GFF φ̃. Then we have a Markov decomposition φ̃ = φ̃^A + φ̃_A where, conditionally on A, φ̃^A is a zero boundary metric graph GFF outside of A, independent of φ̃_A (and by convention zero on A), and φ̃_A is on A the restriction of φ̃ to A, while outside of A it equals a harmonic function h̃_A whose boundary values are given by φ̃ on ∂G ∪ A.
2.2. Measures on loops and excursions. Next, we introduce the measures on loops and boundary-to-boundary excursions which appear in the isomorphism theorems, in the discrete and metric graph settings.
Consider the nearest neighbour Markov jump process on G, with jump rates given by the conductances, and let p^G_t(x, y) and P^{G,x,y}_t be the associated transition probabilities and bridge probability measures, respectively. Let T_∂G be the first time the jump process hits the boundary ∂G. The loop measure on G is defined as
μ^G_loop := Σ_{x∈V\∂G} ∫_0^{+∞} P^{G,x,x}_t( · ∩ {t < T_∂G}) p^G_t(x, x) dt/t.    (2.1)
It is a measure on nearest neighbour paths in V\∂G, parametrized by continuous time, which return at the end to their starting point. Note that it assigns an infinite mass to trivial loops, which only stay at one given vertex. This measure was introduced by Le Jan in [LJ07,LJ10,LJ11]. If one restricts the measure to non-trivial loops and forgets the time-parametrisation, one gets the measure on random walk loops which appears in [LTF07,LL10].
Γ will denote the family of all finite paths parametrized by discrete time which start and end in ∂G, only visit ∂G at the start and at the end, and also visit V\∂G. We see a path in Γ as the skeleton of an excursion from ∂G to itself. We introduce a measure ν^G_exc on Γ as follows: the mass given to an admissible path (x_0, x_1, . . . , x_n) is
ν^G_exc((x_0, . . . , x_n)) := C(x_0, x_1) ∏_{i=1}^{n−1} C(x_i, x_{i+1})/C_tot(x_i).
Note that this measure is invariant under time-reversal. For x, y ∈ ∂G, Γ_{x,y} will denote the subset of Γ made of paths that start at x and end at y. We define the kernel H_G(x, y) on ∂G × ∂G as H_G(x, y) := ν^G_exc(Γ_{x,y}). It is symmetric. H_G is often referred to as the discrete boundary Poisson kernel, and this is the terminology we will use. P^{G,x,y}_exc will denote the probability measure on excursions from x to y parametrized by continuous time. The discrete-time skeleton of the excursion is distributed according to the probability measure H_G(x, y)^{−1} 1_{Γ_{x,y}} ν^G_exc. The excursions under P^{G,x,y}_exc spend zero time at x and y, i.e. they immediately jump away from x and jump to y only at the last moment. Conditionally on the skeleton (x_0, x_1, . . . , x_n), the holding time at x_i, 1 ≤ i ≤ n − 1, is distributed as an exponential random variable with mean C_tot(x_i)^{−1}, and all the holding times are conditionally independent. To a non-negative boundary condition u on ∂G we will associate the measure
μ^{G,u}_exc := (1/2) Σ_{x,y∈∂G} u(x) u(y) H_G(x, y) P^{G,x,y}_exc.    (2.2)
Consider now the metric graph setting. We will consider on the metric graph a diffusion which we now introduce; for generalities on diffusion processes on metric graphs, see [BC84,EK01]. (X̃_t)_{t≥0} will be a Feller process on the metric graph. The domain of its infinitesimal generator will contain all continuous functions which are C² inside each edge and such that the second derivatives have limits at the vertices which are the same for every adjacent edge. On such a function f, the generator acts as ∆f = f''/4, i.e. one takes the second derivative inside each edge. X̃ behaves inside an edge like a one-dimensional Brownian motion; with our normalization of the generator, it is not a standard Brownian motion, but one with variance multiplied by 1/2. When X̃ hits a vertex of degree 1, it behaves like a reflected Brownian motion near this vertex. When it hits a vertex of degree 2, it behaves just like a Brownian motion, as we can always consider that the two lines associated to the two adjacent edges form a single line. When X̃ hits a vertex of degree at least three, it performs Brownian excursions inside each adjacent edge, until hitting a neighbouring vertex. Each adjacent edge is visited infinitely many times immediately when starting from a vertex, and there is no notion of a first visited edge. The rates of small excursions are the same for each adjacent edge. See [Lup16a,EK01] for details.
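Returning briefly to the discrete excursion measure: as a quick check of the claimed time-reversal invariance of ν^G_exc (our computation, under the formula given above),
\[
\nu^{G}_{\mathrm{exc}}\big((x_n,\dots,x_0)\big)
\;=\;\frac{\prod_{i=0}^{n-1} C(x_i,x_{i+1})}{\prod_{i=1}^{n-1} C_{\mathrm{tot}}(x_i)}
\;=\;\nu^{G}_{\mathrm{exc}}\big((x_0,\dots,x_n)\big),
\]
since both the product of the traversed conductances and the product of the C_tot over interior vertices are unchanged when the path is read backwards; the exponential holding times depend only on the visited vertices, so the continuous-time excursion measure is reversal-invariant as well.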
Just as a one-dimensional Brownian motion, (X̃_t)_{t≥0} has local times. Denote by m̃ the measure on the metric graph whose restriction to each edge-line is the Lebesgue measure. There is a family of local times (L^x_t(X̃))_{x,t≥0}, adapted to the filtration of (X̃_t)_{t≥0} and jointly continuous in (x, t), such that for any measurable bounded function f on the metric graph,
∫_0^t f(X̃_s) ds = ∫ f(x) L^x_t(X̃) m̃(dx).
One should note that, in particular, the local times are space-continuous at the vertices; see [Lup16a]. Consider the continuous additive functional (CAF) (2.3), built from the local times at the vertices (L^x_t(X̃))_{x∈V}; it is constant outside the times X̃ spends at vertices. By performing a time change by the inverse of the CAF (2.3), one gets a continuous-time path on the discrete network G which jumps to the nearest neighbours. It actually has the same law as the Markov jump process on G with the jump rates given by the conductances; see [Lup16a].

The process (X̃_t)_{t≥0} has transition densities and bridge probability measures, which in the metric graph setting we will denote p̃_t(x, y) and P̃^{x,y}_t respectively. T_∂G will denote the first time (X̃_t)_{t≥0} hits the boundary ∂G. The loop measure on the metric graph is defined as
μ̃_loop := ∫ ∫_0^{+∞} P̃^{x,x}_t( · ∩ {t < T_∂G}) p̃_t(x, x) (dt/t) m̃(dx).
It has infinite total mass. This definition is the exact analogue of the definition (2.1) of the measure on loops on the discrete network G. Under the measure μ̃_loop, the loops do not hit the boundary ∂G.

One can almost recover the discrete measure μ^G_loop from μ̃_loop. Just as the process (X̃_t)_{t≥0} itself, the loops under μ̃_loop admit a continuous family of local times. One can consider the CAF (2.3) applied to a metric graph loop γ that visits at least one vertex. By performing the time-change by the inverse of this CAF, one gets a nearest neighbour loop on the discrete network G. The image by this map of the measure μ̃_loop, restricted to the loops that visit at least one vertex, is μ^G_loop, up to a change of root (i.e. starting and endpoint) of the discrete loop. So, if one rather considers unrooted loops and the measures projected on the corresponding quotients, then one obtains μ^G_loop as the image of μ̃_loop by a change of time. Moreover, the holding times at vertices of discrete network loops are equal to the increments of the local times at vertices of metric graph loops between two consecutive edge traversals. Note that μ̃_loop also puts mass on loops that do not visit any vertex; these loops do not matter for μ^G_loop. See [FR14] for generalities on the covariance of measures on loops under time change by an inverse of a CAF.
On the metric graph, one also has the analogue of the measure μ^{G,u}_exc on excursions from boundary to boundary defined by (2.2). Let x ∈ ∂G and let k be the degree of x. Let ε > 0 be smaller than the smallest length of an edge adjacent to x, and let x_{1,ε}, . . . , x_{k,ε} denote the points inside each of the edges adjacent to x which are located at distance ε from x. The measure μ̃^x_exc on excursions from x to the boundary is then obtained as a limit, as ε → 0, of suitably rescaled expectations for the metric graph Brownian motion started from the points x_{1,ε}, . . . , x_{k,ε} and stopped on hitting ∂G, applied to any measurable bounded functional F on paths. If y ∈ ∂G is another boundary point, possibly equal to x, μ̃^{x,y}_exc will denote the restriction of μ̃^x_exc to excursions that end at y, and μ̃^{y,x}_exc is the image of μ̃^{x,y}_exc under time-reversal. If y ≠ x, μ̃^{x,y}_exc has a finite mass, which equals H_G(x, y). On the contrary, the mass of μ̃^{x,x}_exc is infinite; however, the restriction of μ̃^{x,x}_exc to excursions that visit V\∂G has a finite mass, equal to H_G(x, x).
Given u a non-negative boundary condition on ∂G, we define the measure μ̃^u_exc on excursions from boundary to boundary on the metric graph as the analogue of (2.2), with the measures μ̃^{x,y}_exc in place of their discrete counterparts. If one restricts μ̃^u_exc to excursions that visit V\∂G and performs on these excursions the time-change by the inverse of the CAF (2.3), one gets a measure on discrete-space continuous-time boundary-to-boundary excursions which is exactly μ^{G,u}_exc. Particular cases of the above metric graph excursion measures were used in [Lup15].
Next we state a Markov property for the metric graph excursion measure µ G,x exc . Let K be a compact connected subset of G. The boundary ∂K of K will be by definition the union of the topological boundary of K as a subset of G and ∂G ∩ K. K is a metric graph itself. Its set of vertices is (V ∩ K) ∪ ∂K. If an edge of G is entirely contained inside K, it will be an edge of K and it will have the same conductance. K can also contain one or two disjoint subsegments of an edge of G. Each subsegment is a (different) edge for K, and the corresponding conductances are given by the inverses of the lengths of subsegments. So K is naturally endowed with a boundary Poisson kernel (H K (x, y)) x,y∈∂K and boundary-to-boundary excursion measures (µ K,x,y exc ) x,y∈∂K . Note that these objects depend only on K and ∂K, and not on how K is embedded in G.
Proposition 2.2. Let x ∈ ∂G, and let K be a compact connected subset of the metric graph which contains x and whose complement in the metric graph is non-empty. Denote by γ₁ • γ₂ the concatenation of the paths γ₁ and γ₂, where γ₁ comes first. Then, for any bounded measurable functional F on paths, μ̃^x_exc(F) decomposes by splitting each excursion leaving K at its first exit from K: the first piece is an excursion in K from x to ∂K and, conditionally on its endpoint y, the remaining piece is the metric graph Brownian motion X̃ started from y and stopped on hitting ∂G; here E_y stands for the expectation of the metric graph Brownian motion X̃, started from y.
2.3. Isomorphism theorems. The continuous time random walk loop-soup L G α is a Poisson point process (PPP) of intensity αµ G loop , α > 0. We view it as a random countable collection of loops. We will also consider PPP-s of boundary-to-boundary excursions Ξ G u , of intensity µ G,u exc , where u : ∂G → R + is a non-negative boundary condition.
The occupation field of a path (γ(t))_{0≤t≤t_γ} in G, parametrized by continuous time, is
L^x(γ) := ∫_0^{t_γ} 1{γ(t) = x} dt,  x ∈ V.
The occupation field of a loop-soup L^G_α is
L^x(L^G_α) := Σ_{γ∈L^G_α} L^x(γ),
and the occupation field of Ξ^G_u is defined in the same way. At the intensity parameter α = 1/2, these occupation fields are related to the square of the GFF:
Proposition 2.3. Let u : ∂G → R_+ be a non-negative boundary condition. Take L^G_{1/2} and Ξ^G_u independent. Then the sum of occupation fields (L^x(L^G_{1/2}) + L^x(Ξ^G_u))_{x∈V} is distributed as ((1/2)(φ(x) + u(x))²)_{x∈V}, where φ + u is the GFF with boundary condition u.
Proof. If u ≡ 0, there are no excursions and we are in the setting of Le Jan's isomorphism for loop-soups ([LJ07,LJ11]). If u is constant and strictly positive, then the proposition follows by combining Le Jan's isomorphism and the generalized second Ray-Knight theorem ([MR06,Szn12b]). Indeed, one can then consider the whole boundary ∂G as a single vertex, and the boundary-to-boundary excursions as excursions outside this vertex.
The case of non-constant u can be reduced to the previous one. We first assume that u is strictly positive on ∂G; the general case can then be obtained by taking a limit. We define new conductances on the edges, Ĉ(x, y) := C(x, y)u(x)u(y), where x and y are neighbours in G. Let φ̂ be the 0 boundary GFF associated to the new conductances Ĉ. We claim that (u(x)φ̂(x))_{x∈V} has the same law as (φ(x))_{x∈V}. To check this identity in law, one has to check the identity of the energy functions, E_Ĉ(f, f) = E_C(uf, uf) for f vanishing on ∂G; expanding the squares, the cross terms cancel because u is harmonic (see the computation after this proof). Now, we can apply the case of constant boundary conditions to (1/2)(φ̂ + 1)². We get that it is distributed like the occupation field of a loop-soup of parameter α = 1/2 plus that of an independent Poissonian family of boundary-to-boundary excursions, both associated to the jump rates Ĉ(x, y). If on these paths we perform the time change (2.4), we get L^G_{1/2} and Ξ^G_u. The time change (2.4) multiplies the occupation field by u², which exactly transforms (φ̂ + 1)² into (uφ̂ + u)², equal in law to (φ + u)².
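The cancellation of cross terms used in the proof above, written out (our rendering, with the convention E_C(f, f) = (1/2)Σ_{x∼y} C(x, y)(f(x) − f(y))² of Section 2.1):
\[
\begin{aligned}
\mathcal{E}_{\hat C}(f,f)-\mathcal{E}_{C}(uf,uf)
&=\tfrac12\sum_{x\sim y}C(x,y)\Big[u(x)u(y)\big(f(x)-f(y)\big)^2-\big(u(x)f(x)-u(y)f(y)\big)^2\Big]\\
&=-\tfrac12\sum_{x\sim y}C(x,y)\big(u(x)-u(y)\big)\big(u(x)f(x)^2-u(y)f(y)^2\big)\\
&=\sum_{x\in V}u(x)f(x)^2\,(\Delta_G u)(x)\;=\;0,
\end{aligned}
\]
since ∆_G u = 0 on V\∂G (u is harmonic there) and f = 0 on ∂G.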
Note that the coupling (L(L^G_{1/2}), L(L^G_{1/2}) + L(Ξ^G_u)) is not the same as ((1/2)φ², (1/2)(φ + u)²). On a metric graph, the isomorphism given by Proposition 2.3 still holds, but in this setting one has a stronger version of it, which takes into account the sign of the GFF. Consider a PPP of loops (loop-soup) L̃_{1/2} on the metric graph, of intensity (1/2)μ̃_loop, and an independent PPP of metric graph excursions from boundary to boundary, Ξ̃_u, of intensity μ̃^u_exc. For x in the metric graph, L^x(L̃_{1/2}) is defined as the sum over the loops of the local time at x accumulated by the loops, and similarly for L^x(Ξ̃_u). The occupation field L^x(Ξ̃_u) is a locally finite sum, except at the boundary points ∂G, where it nevertheless converges, to (1/2)u²; indeed, for this limit only the excursions that do not visit V\∂G matter, and for those we are in the setting of excursions of a one-dimensional Brownian motion. On the contrary, L^x(L̃_{1/2}) is a.s. an infinite sum at any fixed point x off the boundary. However, x → L^x(L̃_{1/2}) admits a continuous version ([Lup16a]), and we will only consider this version. We will also consider the clusters formed by L̃_{1/2} ∪ Ξ̃_u: two trajectories (loops or excursions) belong to the same cluster if there is a finite chain of trajectories connecting the two, such that any two consecutive elements of the chain intersect each other.
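As a quick consistency check of Proposition 2.3 at the level of expectations (our computation; it uses the standard fact that ∫ L^x(γ) μ^G_loop(dγ) = G_G(x, x) for Le Jan's loop measure), taking expectations at an interior vertex x gives
\[
\mathbb{E}\Big[\tfrac12\big(\varphi(x)+u(x)\big)^2\Big]
=\tfrac12 G_G(x,x)+\tfrac12 u(x)^2
=\mathbb{E}\big[L^x(\mathcal{L}^G_{1/2})\big]+\mathbb{E}\big[L^x(\Xi^G_u)\big],
\]
so the loop-soup at intensity α = 1/2 accounts exactly for the variance term (1/2)G_G(x, x), while the excursion process must carry expected occupation (1/2)u(x)², the square of the harmonic extension of u.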
The set {x | L^x(L̃_{1/2}) + L^x(Ξ̃_u) = 0}, which is non-empty with positive probability, is exactly the set of points not visited by any loop or excursion. The connected components of the set {x | L^x(L̃_{1/2}) + L^x(Ξ̃_u) > 0} are exactly the clusters of L̃_{1/2} ∪ Ξ̃_u, i.e. all the trajectories inside such a connected component belong to the same cluster. In [Lup16a] this is proved only for clusters of loops, but one can easily generalize it to the case with excursions. Also note that, on the metric graph, with positive probability the clusters of loops and excursions are strictly larger than the ones on the discrete network, i.e. they connect more vertices. We state the next isomorphism without proof, as it can be deduced from Proposition 2.3 following the method of [Lup16a].
Proposition 2.4. Let u be a non-negative boundary condition and let L̃_{1/2} and Ξ̃_u be as previously. Let σ(x) be a random sign function with values in {−1, 1}, defined on the set {x | L^x(L̃_{1/2}) + L^x(Ξ̃_u) > 0}, constant on each cluster of L̃_{1/2} ∪ Ξ̃_u, equal to +1 on the clusters containing at least one excursion and, conditionally on the occupation fields, taking the values +1 and −1 with probability 1/2 each, independently on the remaining clusters. The definition of σ is extended to the whole metric graph by letting σ equal 0 on {x | L^x(L̃_{1/2}) + L^x(Ξ̃_u) = 0}. Then the field
σ(x) (2(L^x(L̃_{1/2}) + L^x(Ξ̃_u)))^{1/2}
is distributed like φ̃ + u, the metric graph GFF with boundary condition u.
2.4. First passage sets of the GFF on a metric graph.
There is a natural notion of first passage sets for the metric graph GFF φ̃ + u, analogous to first passage bridges of the one-dimensional Brownian motion. Let a ∈ R. Define
A^u_{−a} = A^u_{−a}(φ̃) := {x | there exists a continuous path γ from x to ∂G such that φ̃ + u ≥ −a on γ}.
We refer to Figure 4 for a picture of a first passage set on a metric graph. A^u_{−a} is a compact optional set; it is the first passage set of level −a. These first passage sets were introduced in [LW16]. From Proposition 2.4 we obtain a representation of the FPS using loops and excursions:
Proposition 2.5. If a = 0 and the boundary condition u is non-negative, then, in the coupling of Proposition 2.4, A^u_0 is the union of the topological closures of the clusters of loops and excursions that contain at least one excursion (i.e. are connected to ∂G), together with ∂G.
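To see the analogy with one-dimensional first passage times in the simplest possible case (an illustration of ours, not taken from the original text): let the metric graph be a single edge of resistance L, viewed as the segment [0, L], with boundary ∂G = {0} and u ≡ 0. With the conventions of Section 2.1, the metric graph GFF is then a standard Brownian motion (W_s)_{0≤s≤L} started from W_0 = 0 in the resistance parametrization, and
\[
A^0_{-a}\;=\;[0,\;T_{-a}\wedge L],\qquad T_{-a}:=\inf\{s\ge 0:\,W_s=-a\},
\]
i.e. the FPS is exactly the initial segment up to the first passage time of level −a, which explains the name.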
Continuum preliminaries
In this section, we discuss the continuum counterparts of the objects defined in the previous section. First, we recall the notion of the continuum two-dimensional GFF. Then, we discuss Brownian loop and excursion measures. Further, we give an isomorphism relating Brownian loops and excursions to the Wick square of the GFF. Finally, we recall some properties of the first passage sets of the continuum GFF that appear in [ALS17].
We denote by D ⊆ C an open bounded planar domain with a non-empty and non-polar boundary. By conformal invariance, we can always assume that D is a subset of the unit disk D. The most general domains we work with are those whose complement has at most finitely many connected components, none of which is a singleton. Recall that by the Riemann mapping theorem for multiply-connected domains [Koe22], such domains D are conformally equivalent to circle domains (i.e. to D\K, where K is a finite union of closed disjoint disks, disjoint also from ∂D).
3.1. The continuum GFF and its local sets. The (zero boundary) Gaussian free field (GFF) in a domain D [She07] can be viewed as a centered Gaussian process Φ (we also sometimes write Φ^D when the domain needs to be specified), indexed by the set of continuous functions with compact support in D, with covariance given by
E[(Φ, f₁)(Φ, f₂)] = ∬_{D×D} f₁(z) G_D(z, w) f₂(w) dz dw,
where G_D is the Green's function of the Laplacian in D with Dirichlet boundary conditions, normalized so that G_D(z, w) ∼ (2π)^{−1} log(1/|z − w|) as w → z. For this choice of normalization of G (and therefore of the GFF), we set
λ := (π/8)^{1/2}.
In the literature, the constant 2λ is called the height gap [SS09,SS13]. Sometimes, other normalizations are used in the literature: if G_D(z, w) ∼ c log(1/|z − w|) as w → z, then λ should be taken to be (π/2) × √c. The covariance kernel of the GFF blows up on the diagonal, which makes it impossible to view Φ as a random function defined pointwise. It can, however, be shown that the GFF has a version that lives in some space of generalized functions (the Sobolev space H^{−1}), which justifies the notation (Φ, f) for Φ acting on a function f (see for example [Dub09]).
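As a quick arithmetic check of the normalization rule just quoted (our computation): with c = 1/(2π), one gets
\[
\lambda=\frac{\pi}{2}\sqrt{\frac{1}{2\pi}}=\sqrt{\frac{\pi^2}{4}\cdot\frac{1}{2\pi}}=\sqrt{\frac{\pi}{8}},
\]
consistent with the value of λ fixed above; for the other common convention c = 1, one would instead get λ = π/2.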
In this paper, Φ always denotes the zero boundary GFF. We also consider GFF-s with non-zero Dirichlet boundary conditions; they are given by Φ + u, where u is some bounded harmonic function whose boundary values are piecewise constant.
3.2. Local sets: definitions and basic properties.
Let us now introduce more thoroughly the local sets of the GFF. We only discuss the items that are directly used in the current paper. For a more general discussion of local sets, including thin local sets (not necessarily of bounded type), we refer to [SS13,Wer16,Sep17].
Even though it is not possible to make sense of (Φ, f) when f = 1_A is the indicator function of an arbitrary random set A, local sets form a class of random sets for which this is (in a sense) possible: A is a random closed subset of D and Φ_A a random distribution that can be viewed as a harmonic function when restricted to D\A. We say that A is a local set for Φ if, conditionally on the pair (A, Φ_A), the field Φ − Φ_A is a zero boundary GFF in D\A. Throughout this paper, we use the notation h_A : D → R for the function that is equal to Φ_A on D\A and to 0 on A.
Let us list a few properties of local sets (see for instance [SS13,Aru15,AS18a] for derivations and further properties): (1) Any local set can be coupled in a unique way with a given GFF: if (Φ, A, Φ_A) and (Φ, A, Φ'_A) both satisfy the conditions of the definition, then a.s. Φ_A = Φ'_A. Thus, being a local set is a property of the coupling (Φ, A), as Φ_A is a measurable function of (Φ, A).
3.3. First passage sets of the 2D continuum GFF. The aim of this section is to recall the definition of the first passage sets of the 2D continuum GFF introduced in [ALS17], and to state the properties that will be used in this paper.
The set-up is as follows: D is a finitely-connected domain, no component of whose complement is a single point, and u is a bounded harmonic function with piecewise constant boundary conditions. Definition 3.3 (First passage set). Let a ∈ R and let Φ be a GFF in the multiply-connected domain D. We define the first passage set of Φ of level −a and boundary condition u as a local set A^u_{−a} of Φ such that ∂D ⊆ A^u_{−a}, with the following properties: (1) inside each connected component O of D\A^u_{−a}, the harmonic function h_{A^u_{−a}} + u has boundary values −a on ∂O\∂D; (2) Φ_{A^u_{−a}} + u + a, restricted to A^u_{−a}, is a non-negative measure; (3) for every connected component O of D\A^u_{−a}, every ε > 0 and z ∈ ∂O, and for all sufficiently small open balls U_z around z, we have that a.s. h_{A^u_{−a}} + u ≥ min{−a, inf_{U_z∩O} u} − ε on U_z ∩ O.
Notice that if u ≥ −a, then the conditions (1) and (2) already imply that h_{A^u_{−a}} + u equals −a everywhere in D\A^u_{−a}; moreover, in this case the technical condition (3) is not necessary. This condition roughly says that nothing odd can happen at the boundary values that we have not determined, namely those on the intersection of ∂A^u_{−a} and ∂D. It enters in the case u < −a because we want to take limits of the FPS on metric graphs, and it turns out to be easier not to prescribe the value of the harmonic function at the intersection of ∂D and ∂A^u_{−a}. Remark 3.4. One could similarly define excursion sets in the other direction, i.e. stopping the sets from above. We denote these sets by A^b. In this case the definition goes the same way, except that (2) should now be replaced by the requirement that Φ_{A^b} + u − b, restricted to A^b, is a non-positive measure. We now present the key result in the study of the FPS.
Theorem 3.5 (Theorem 4.3 and Proposition 4.5 of [ALS17]). Let D be a finitely connected domain, Φ a GFF in D, and u a bounded harmonic function with piecewise constant boundary values. Then, for all a ≥ 0, the first passage set of Φ of level −a and boundary condition u, A^u_{−a}, exists and satisfies the following property: (1) Uniqueness: if A' is another local set coupled with Φ and satisfying Definition 3.3, then a.s. A' = A^u_{−a}.
3.4. Brownian loop and excursion measures. Next, we discuss Brownian loop and excursion measures in the continuum. Consider a non-standard Brownian motion (B_t)_{t≥0} on C whose infinitesimal generator is the Laplacian ∆, so that E[|B_t|²] = 4t. The reason we use a non-standard Brownian motion is that the isomorphisms with the continuum GFF then take nicer forms. We will denote by P^{z,w}_t the bridge probability measures corresponding to (B_t)_{t≥0}.
Given D an open subset of C, we will denote by p^D_t(x, y) the transition densities of (B_t)_{t≥0} killed upon exiting D, and by P^{D,x,y}_t the corresponding bridge probability measures. The Brownian loop measure on D is defined as
μ^D_loop := ∫_D ∫_0^{+∞} P^{D,x,x}_t p^D_t(x, x) (dt/t) dx,
where dx denotes the Lebesgue measure on C. This is a measure on rooted loops, but it is natural to consider unrooted loops, where one "forgets" the position of the start and endpoint. This Brownian loop measure was introduced in [LW04]; see also [Law08], Section 5.6.² From the definition it follows that the Brownian loop measure satisfies a restriction property: if D' ⊂ D, then μ^{D'}_loop = 1_{γ⊂D'} μ^D_loop. It also satisfies a conformal invariance property: the image of μ^D_loop by a conformal transformation of D is the loop measure of the image domain, up to a change of root and a time reparametrization. In particular, the measure induced on the range of the loop is conformally invariant. For μ^C_loop, there is also invariance under polar inversions (up to a change of root and reparametrization). A Brownian loop-soup in D with intensity parameter α > 0 is a Poisson point process of intensity measure αμ^D_loop, which we will denote by L^D_α.
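The restriction property can in fact be read off directly from the definition (a one-line sketch in our notation): for x ∈ D' and t > 0, as measures on bridge paths,
\[
\mathbf 1_{\{\gamma\subset D'\}}\;p^{D}_t(x,x)\,P^{D,x,x}_t \;=\; p^{D'}_t(x,x)\,P^{D',x,x}_t,
\]
because both sides coincide with the unnormalized Brownian bridge measure restricted to paths staying in D'; integrating over x ∈ D' and t > 0 then gives μ^{D'}_loop = 1_{γ⊂D'} μ^D_loop.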
Now we get to the excursion measure. H_D(dx, dy) will denote the boundary Poisson kernel on ∂D × ∂D. In the case of domains with C¹ boundary, the boundary Poisson kernel is given by the double normal derivative of the Green's function, H_D(x, y) = ∂_{n_x}∂_{n_y} G_D(x, y). For the general case, we use the conformal invariance of the measure H_D(x, y) dx dy, where dx and dy are arc lengths, and define H_D(dx, dy) as a measure; see [ALS17] for details. Given x ≠ y ∈ ∂D, P^{D,x,y}_exc will denote the probability measure on the boundary-to-boundary Brownian excursion in D from x to y, associated to the non-standard Brownian motion of generator ∆. Let u be a non-negative bounded Borel-measurable function on ∂D. We define the boundary-to-boundary excursion measure associated to u as
μ^{D,u}_exc := (1/2) ∬_{∂D×∂D} u(x) u(y) P^{D,x,y}_exc H_D(dx, dy),
the continuum analogue of the measures on metric graphs defined in Section 2.2. In the particular case of D simply connected and u positive constant on a boundary arc and zero elsewhere, the measure μ^{D,u}_exc appears in the construction of restriction measures ([LSW03] and [Wer05], Section 4.3). Next, we state without proof some fundamental properties of these excursion measures; they follow from properties of the boundary Poisson kernel and of 2D Brownian motion.
Proposition 3.6. Let D be a domain as above and u a bounded non-negative boundary condition. The boundary-to-boundary excursion measure μ^{D,u}_exc satisfies the following properties: (1) Conformal invariance [Proposition 5.27 of [Law08]]: let D' be a domain conformally equivalent to D and f a conformal transformation from D to D'. Then μ^{D',u∘f^{−1}}_exc is the image of μ^{D,u}_exc by f, up to a change of time ds = |f'(γ(t))|^{−2} dt. (2) Markov property: let B be a compact subset of ∂D and assume that u is supported on B.
Let K be a compact subset of D, at positive distance from B, and assume that K has finitely many connected components. Then, for any bounded measurable functional F on paths, μ^{D,u}_exc(F) decomposes over the excursions hitting K by splitting each such excursion at its first hitting time of K into a first piece γ₁, going from ∂D to K, and a Brownian motion started at γ₁(t_{γ₁}) and stopped upon hitting ∂D; here γ₁(t_{γ₁}) is the endpoint of the path γ₁ and • denotes the concatenation of paths.
² In [Law08], Section 5.6, the loop measure considered is the one associated to a standard Brownian motion; this is just a matter of a change of time ds = dt/√2.
The Markov property above is analogous to the Markov property on metric graphs given by Proposition 2.2.
Given B₁ and B₂ two disjoint compact subsets of ∂D, we will denote by M(B₁, B₂) the conformal modulus, which is the inverse of the extremal length [Ahl10,ALS17]. If B₁ and B₂ form a partition of the connected components of ∂D, then M(B₁, B₂) is given, up to an explicit constant, by the total mass of the excursions having one end in B₁ and the other in B₂. In general, the mass of such excursions can still be controlled in terms of M(B₁, B₂).
3.5. Wick square of the continuum GFF and isomorphisms. The isomorphism theorems on discrete or metric graphs (Propositions 2.3, 2.4) involve the square of a GFF. However, for the continuum GFF in dimension 2, which is a generalized function, the square is not defined. Instead, one can define a renormalized square, the Wick square ([Sim74,Jan97]). Let D be, as in the previous subsection, an open connected bounded domain delimited by finitely many simple curves. First, we consider Φ, the GFF with 0 boundary condition, and let Φ_ε denote a regularization of Φ by convolution with a kernel at scale ε. The Wick square :Φ²: is the limit as ε → 0 of
:Φ_ε²:(z) := Φ_ε(z)² − E[Φ_ε(z)²];
the convergence holds in L², and in the limit :Φ²: is a random generalized function, measurable with respect to Φ, which lives in the Sobolev space H^{−1}(D), that is to say in the completion of the space of continuous compactly supported functions in D for the norm
‖f‖_{H^{−1}(D)} = (∬_{D×D} f(z) G_D(z, w) f(w) dz dw)^{1/2};
see [Dub09], Section 4.2.
In [LJ10,LJ11], Le Jan considers (following [LW04, LSW03, SW12]) Brownian loop-soups in D, L^D_α, which are Poisson point processes with intensity αμ^D_loop. One sees L^D_α as a random countable collection of Brownian loops, and considers the centred occupation field of L^D_α. The occupation field of L^D_α with ultra-violet cut-off ε > 0 is
(L^ε(L^D_α), f) := Σ_{γ∈L^D_α : T_γ≥ε} ∫_0^{T_γ} f(γ(t)) dt,
where f is a test function and T_γ is the life-time of a loop γ. The measure L^ε(L^D_α) diverges as ε → 0, i.e. in the limit one gets something which is not even locally finite. The centred occupation field is
L^{ctr}(L^D_α) := lim_{ε→0} (L^ε(L^D_α) − E[L^ε(L^D_α)]);
the convergence above, evaluated against a bounded test function, holds in L². For α = 1/2, Le Jan shows the following isomorphism: Proposition 3.7 (Renormalized Le Jan's isomorphism, [LJ10,LJ11]). The centred occupation field L^{ctr}(L^D_{1/2}) has the same law as half the Wick square, (1/2):Φ²:, where Φ is the GFF in D with zero boundary condition.
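To make the renormalization concrete (our computation, under the normalization G_D(z, w) ∼ (2π)^{−1} log(1/|z − w|) fixed in Section 3.1): for a reasonable convolution kernel, the subtracted counterterm diverges only logarithmically,
\[
\mathbb{E}\big[\Phi_\varepsilon(z)^2\big]\;=\;\frac{1}{2\pi}\log\frac{1}{\varepsilon}+O(1),
\]
uniformly on compact subsets of D. Thus :Φ_ε²: recentres Φ_ε² by a deterministic quantity of order log(1/ε), and it is only after this subtraction that the ε → 0 limit exists in L², while Φ_ε(z)² itself blows up.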
We now consider a bounded non-negative boundary condition u, and also denote by u its harmonic extension to D. Consider the GFF with boundary condition u, Φ + u. One can define its Wick square as :(Φ + u)²: = :Φ²: + 2uΦ + u².
Let Ξ^D_u be a Poisson point process of boundary-to-boundary excursions with intensity μ^{D,u}_exc. The occupation field L(Ξ^D_u) is well defined, and it is a measure. One can still introduce the centred occupation field L^{ctr}(Ξ^D_u) := L(Ξ^D_u) − E[L(Ξ^D_u)]. Below, we extend the renormalized Le Jan's isomorphism to the case of non-negative boundary conditions; for an analogous statement in dimension 3, see [Szn13]. In particular, the field L^{ctr}(L^D_{1/2}) + L^{ctr}(Ξ^D_u) has the same law as (1/2):(Φ + u)²:. Proof. We need to show that, for every non-negative continuous compactly supported function χ on D, the Laplace transforms E[exp(−((1/2):(Φ + u)²:, χ))] and E[exp(−(L^{ctr}(L^D_{1/2}) + L^{ctr}(Ξ^D_u), χ))] coincide. For the finiteness and the expression of E[exp(−((1/2):Φ²:, χ))], see Sections 10.1 and 10.2 in [LJ11]. Since Ξ^D_u is independent of L^D_{1/2}, it remains to identify the factor coming from the excursions.
We will use the following lemma (Lemma 3.9), whose proof we postpone. From this lemma it follows that the excursion factor can be expressed through G_χ, the Green's function of −∆ + χ: indeed, it is an exponential moment of a massive GFF. Thus, we have to show an identity relating the Laplace transform of the excursion occupation field to G_χ. This relation holds at the discrete level, on a lattice approximation of the domain D, as a consequence of the isomorphism of Proposition 2.3. On a continuum domain, one then gets the relation by convergence of the excursion measures (Lemma 4.6) and of the massive Green's functions (see Remark 4.4).
Proof of Lemma 3.9. It is enough to show the corresponding identity for every constant ε > 0, with χ replaced by ε + χ, where one recognizes the density of a massive GFF corresponding to the Dirichlet form with mass ε + χ; then, by letting ε tend to 0, we get our lemma. For ε > 0 fixed, we can follow step by step the proof of the very similar Lemma 3.7 in [LRV14] and use, as there, the decomposition of Φ^{(0)} according to the eigenfunctions (with 0 boundary condition) of the Laplace-Beltrami operator associated to the metric (ε + χ(z))^{1/2}|dz| (the area element being (ε + χ(x))dx). Let us note that, as for the isomorphism of Proposition 2.3, the coupling (2L^{ctr}(L^D_{1/2}), 2L^{ctr}(L^D_{1/2}) + 2L(Ξ^D_u)) is not the same as (:Φ²:, :Φ²: + 2uΦ + u²).
Convergence of FPS and clusters
In this section, we show that the metric graph FPS converges to the continuum FPS in the Hausdorff topology. We also prove that the clusters of metric graph Brownian loops and boundary-to-boundary excursions converge to their continuum counterparts. Both results are about convergence in probability, with respect to the Hausdorff topology on closed subsets. We start the section by detailing the set-up and recalling some basic convergence results. Thereafter, we prove the convergence of the metric graph FPS towards its continuum counterpart.

4.1. Set-up and basic convergence results. In this section, we set up the framework for our convergence statements. We also review some convergence results for random closed sets, random fields and path measures. Most of the content is standard, but slightly reworded and reinterpreted. For simplicity, we restrict ourselves to Z̃²_n, the metric graph induced by the vertices (2^{−n}Z)² with unit conductances on every edge. However, one should be able to extend all the convergence results to isoradial graphs without too much effort. We always consider our metric graph Z̃²_n to be naturally embedded in C, and when we mention distances and diameters for sets living on metric graphs, we always mean the Euclidean distance inherited from C.

4.1.1. Topologies and convergences on sets and functions. We mostly work with finitely-connected bounded domains D. For us, a domain is by definition open and connected. We approximate these domains by metric graph domains obtained as intersections of Z̃²_n with domains of C, i.e. by D̃_n := Z̃²_n ∩ D_n, where D_n → D in an appropriate sense detailed below. We say that such an approximation D_n satisfies the condition (⋆) if there exist C, C' > 0 such that D_n ⊆ [−C, C]², and the number of connected components of C\D_n is at most C'.
At times, we also need to work in the setting where both D and D_n are non-connected open sets (e.g. the complement of a CLE_4 carpet). The condition (⋆) makes sense in this case too.
We use the following topologies for open and closed sets: • For domains D_z with a marked point z ∈ D_z, approximated by marked open domains (D_n, z_n), we say that (D_n, z_n) converges to (D_z, z) in the sense of Carathéodory if: (1) z_n → z; (2) every compact subset of D_z is contained in D_n for n large enough; (3) for any x ∈ ∂D_z there are x_n ∈ ∂D_n with x_n → x.
Notice that in this wording we have not assumed simple connectedness, as the Carathéodory topology generalizes nicely to the multiply-connected setting (see e.g. [Com13]). Notice also that if (D_z, z) is any pointed connected component of D, then the convergence of D_n → D, in the sense that their complements converge, implies the Carathéodory convergence of (D_n, z_n) → (D_z, z) for any z_n → z; see for example Theorem 1 of [Com13].
We are also interested in the convergence of functions on D̃_n = Z̃²_n ∩ D_n towards (generalized) functions on D ⊂ [−C, C]². In fact, it is more convenient to look at functions whose domain of definition is extended to the whole of [−C, C]². Thus, we extend a function f̃ defined on D̃_n to the whole of [−C, C]² by taking the harmonic extension of f̃, with zero boundary values on ∂[−C, C]². In particular, this extended function f̄ is then well-defined inside the square faces delimited by D̃_n.
Observe that, in the case of the metric graph GFF φ̃, such an extension φ̄ is still a Gaussian process. We use these extensions everywhere when talking about the convergence of functions, and often omit the word "extension" for readability. If we want to be explicit, we use the bar decoration as above. In particular, Ḡ_{D̃_n} will denote the Green's function of the metric graph GFF defined on D̃_n and extended to [−C, C]².
Both harmonic functions and GFF-s can be considered on any open set. If Φ is a GFF in D, then we can write Φ = Σ_{D_z} Φ^{D_z}, where the sum runs over the connected components D_z of D and where Φ^{D_z} is a GFF in D_z independent of all the others. We consider the following topologies for the spaces of functions: • For the convergence of the extensions of bounded functions, we use the uniform norm on compact subsets of [−C, C]²\∂D. We avoid ∂D because we want to allow for a finite number of jumps on ∂D. • The GFF-s on metric graphs and on domains are always considered as elements of the Sobolev space H^{−1−ε}([−C, C]²). For background on Sobolev spaces, we refer the reader to [AF03].
We will shortly see that these convergences are well-behaved, in the sense that natural approximations of continuum objects converge. A key ingredient is the weak Beurling estimate (see e.g. Proposition 2.11 of [CS11] for the discrete case and Proposition 3.73 of [Law08] for the continuum case): Lemma 4.1 (Beurling estimate). There exist β > 0 and c > 0 such that for all K ⊆ Z̃²_n with at most C' connected components, all of them of diameter at least δ, for all ε ≤ δ/2 and all z ∈ Z̃²_n\K with d(z, K) ≤ ε,
P_z(X̃ exits the ball B(z, δ) before hitting K) ≤ c (ε/δ)^β,
where X̃ is a metric graph Brownian motion started at z. The same estimate holds in the continuum, i.e. if we replace Z̃²_n by C and consider a two-dimensional Brownian motion. The following lemma is basically contained in [CS11], Proposition 3.3 and Corollary 3.11. Although the statements there include more stringent conditions (in particular, the boundaries are assumed to be Jordan curves and the domains simply connected), one can verify that this is not really used in the proofs. For similar statements one can also see Proposition 3.5 and Lemma A.1 in [BL14].
Lemma 4.2. Suppose that the open sets D_n satisfy (⋆) and converge to D in the sense that their complements converge. Then: (1) the extensions of bounded metric graph harmonic functions on D̃_n, with boundary values given by a bounded function with finitely many discontinuity points on ∂D, converge, uniformly on compact subsets of [−C, C]²\∂D, to the bounded harmonic function in D with the same boundary values; (2) for any z ∈ D, Ḡ_{D̃_n}(z, ·) converges to G_D(z, ·), where Ḡ_{D̃_n} is the harmonic extension of the metric graph Green's function on D̃_n.
Similarly, for any connected component D_z of D containing z, if (D_n, z_n) converge towards (D_z, z) in the Carathéodory sense, then the statements also hold.
Remark 4.3. We include the possibility of finitely many discontinuity points on ∂D, as then the statement provides an explicit way of constructing (metric graph) harmonic functions, whose extensions converge to the original harmonic function in the topology defined above.
Proof. As mentioned just before the statements, the proofs are basically contained in [CS11]. Hence we will only sketch the steps with appropriate references.
(1) Pre-compactness in the uniform norm on compact subsets, and harmonicity of the subsequential limits outside of ∂D, both follow from the proof of Proposition 3.1 in [CS11]. In particular, we know that each subsequential limit is a bounded harmonic function. To determine the boundary values one uses the Beurling estimate, as in the proof of Proposition 3.3 in [CS11]. (2) One applies part (1) to G_D(z, ·) − G_{[−C,C]²}(z, ·). To deduce the convergence of the integral one finally uses dominated convergence, together with the fact that G_{[−C,C]²∩Z̃²_n}(z, w) is bounded above by c(|log |z − w|| + 1). For more details see e.g. Proposition 3.5 of [BL14].
Remark 4.4. Note that statement (2) can be proved similarly for a massive Green's function. One just needs to replace the harmonic extensions by the solutions of the appropriate Poisson equation, and the standard Brownian motion by a Brownian motion killed at an appropriate exponential rate.
Lemma 4.2 allows us to give a short argument for the convergence of the metric graph GFF-s:
Lemma 4.5. Suppose that the open sets D_n satisfy (⋆) and converge to D as above. Then the extensions φ̄_n of the metric graph GFF-s on D̃_n converge in law, in H^{−1−ε}([−C, C]²) for every ε > 0, to Φ^D. The analogous statement holds for pointed domains converging in the Carathéodory sense.
Proof. Lemma 4.2 (2) guarantees the convergence of finite-dimensional marginals. Thus it remains to prove tightness. Denoting Q_C = [−C, C]², the norm of the Sobolev space H^{−1}(Q_C) is given by ‖f‖²_{H^{−1}(Q_C)} = ∬ f(z) G_{Q_C}(z, w) f(w) dz dw (e.g. see [Dub09], Section 4.2). Using Lemma 4.2 (2), we can explicitly calculate E[‖φ̄_n‖²_{H^{−1}(Q_C)}] = ∬ Ḡ_{D̃_n}(z, w) G_{Q_C}(z, w) dz dw and see that it is bounded uniformly in n. Hence, by the Sobolev embedding theorem, we have that (φ̄_n)_{n∈N} is tight in H^{−1−ε}([−C, C]²) for any ε > 0, and the convergence follows. The latter part follows similarly.
4.1.2. Topologies and convergences on loops and excursions. Now, let L^D_α and L̃^{D̃_n}_α be respectively a continuum and a metric graph loop-soup, i.e. PPPs with intensity measures αμ^D_loop and αμ̃^{D̃_n}_loop respectively. Moreover, for u a positive function on ∂D and u_n a positive function on ∂D̃_n, let Ξ^D_u and Ξ̃^{D̃_n}_{u_n} be independent PPPs of boundary-to-boundary Brownian excursions of intensity μ^{D,u}_exc and of boundary-to-boundary metric graph excursions of intensity μ̃^{D̃_n,u_n}_exc, respectively. We use the following topologies when we work with paths, i.e. excursions and loops, and with sets of paths: • We consider paths as closed subsets of the closure of D and use the Hausdorff distance d_H on these subsets. • For a set of paths Γ, define Γ^ε as the subset of Γ consisting of paths of diameter larger than ε. On the sets Γ for which the cardinality of Γ^ε is finite for all ε > 0, we define the distance d(Γ₁, Γ₂) to be equal to
inf{δ > 0 : there is a bijection f : Γ₁^δ → Γ₂^δ such that d_H(γ, f(γ)) ≤ δ for all γ ∈ Γ₁^δ}.
One can verify that d(Γ_n, Γ) → 0 is equivalent to the existence of δ_k → 0 such that Γ^{δ_k}_n → Γ^{δ_k}, in the sense that there exists a sequence of bijections f_n : Γ^{δ_k}_n → Γ^{δ_k} with sup_{γ∈Γ^{δ_k}_n} d_H(γ, f_n(γ)) → 0; moreover, the collection of sets of paths Γ for which the cardinality of Γ^ε is finite for all ε > 0, endowed with this distance, is a Polish space.
The following lemma says that these convergences also behave nicely:
Lemma 4.6. In the setting above: (1) the measures μ̃^{D̃_n}_loop restricted to loops of diameter at least ε converge weakly to μ^D_loop restricted in the same way, along a suitable sequence ε = δ_k → 0, and consequently the loop-soups L̃^{D̃_n}_α converge in law, for the distance d, to L^D_α; (2) the analogous statements hold for the excursion measures and the PPPs Ξ̃^{D̃_n}_{u_n} and Ξ^D_u, under the assumptions on u_n and u of Proposition 4.11 below.
Proof. In both points (1) and (2) the second conclusion follows directly from the first. For example, in case (1), we can choose δ_k → 0 such that the PPPs of intensity measures μ̃^{D̃_n}_loop 1_{Diam(γ)≥δ_k} converge jointly in law to PPPs of intensity measure μ^D_loop 1_{Diam(γ)≥δ_k}. By the Skorokhod representation theorem, we can couple them all on the same probability space so as to have almost sure convergence of these PPPs. But then, by the equivalent description of the topology on sets of paths given above, we obtain the second conclusion. Thus, in what follows, we just prove the first statement for both (1) and (2).
(1) The statement for random walk loop-soups on Z²_n ∩ D_z for a domain D_z follows from Corollary 5.4 of [LTF07]. The proof for the metric graph loop-soups in that context is exactly the same. As remarked just after the proof of Corollary 5.4 of [LTF07], the ideas extend to our non-simply connected case with finitely many boundary components. Moreover, one can verify that one can also approximate D_z using Z̃²_n ∩ D_n, where (D_n, z_n) → (D_z, z) in the sense of Carathéodory. As the convergence of D_n → D, in the sense that the complements converge in the Hausdorff metric, implies the Carathéodory convergence for all components, and we have only countably many components, the claim follows.
(2) Essentially, the proof follows the steps of [LTF07]: we first need to show the convergence of excursions of diameter larger than ε that visit some compact set inside D, and then to show that there are no excursions of diameter at least ε that stay δ-close to the boundary.
For the first part, it suffices to show that for any closed square Q ⊆ D with rational endpoints, we have the weak convergence 1_{γ∩Q≠∅} μ̃^{D̃_n,u_n}_exc → 1_{γ∩Q≠∅} μ^{D,u}_exc. This follows from the Markov property of the metric graph excursions (Proposition 2.2) and of the Brownian excursion measure (Proposition 3.6). Indeed, we can decompose the excursions in D (or D̃_n) at their first hitting time of Q into an excursion from ∂D (or ∂D̃_n) to ∂Q and a Brownian motion (continuum 2D or metric graph) started on ∂Q and stopped at its first hitting time of ∂D (or ∂D̃_n). The convergence of the second piece just follows from the convergence of random walks to Brownian motion inside compact subsets of D, together with the Beurling estimate for the convergence of the actual hitting point. The excursion from ∂D (or ∂D̃_n) to ∂Q we can decompose further into an excursion from ∂Q' to ∂Q, where Q' is some closed square with rational endpoints containing Q in its interior, and a time-reversed Brownian motion (continuum 2D or metric graph) from ∂Q' to ∂D (or ∂D̃_n). The convergence of both pieces is now clear. For the second part, we can again use the Markov decomposition. We cover the boundary of D̃_n, for all n, with open disks (B(z_i, ε))_{i∈I}; the minimal number of disks needed depends on ε, but is uniformly bounded in n. Any excursion that has diameter at least 2ε and one endpoint in B(z_i, ε) has to hit ∂B(z_i, ε). It can then be decomposed into an excursion from ∂D̃_n to ∂B(z_i, ε) and a metric graph Brownian motion from ∂B(z_i, ε) to ∂D̃_n. The probability that the latter travels a distance ε without getting further than δ from ∂D can be bounded by the Beurling estimate (Lemma 4.1), and it goes to 0 as δ → 0, uniformly in all sufficiently large n.
4.2. Convergence of first passage sets. In this subsection we prove that the metric graph FPS converges to the continuum FPS. Recall that, by convention, the FPS always contains the boundary of the domain, that D̃_n is the intersection of D_n with Z̃²_n, and that we use φ̄_n to denote the extension of the metric graph GFF on D̃_n to the rest of [−C, C]².
Proposition 4.7. In the setting above, with boundary conditions u_n on ∂D̃_n converging to u, the sets (Ã^{u_n}_{−a} ∩ D) ∪ ∂D converge in law towards A^u_{−a}, for the Hausdorff distance. Furthermore, if we couple (φ̄_n)_{n∈N} and Φ^D such that φ̄_n → Φ^D in probability as generalized functions, then the above convergence holds in probability.
Remark 4.8. The convergence of the open sets D_n → D, in the sense that their complements converge, implies, for any z ∈ D and any z_n → z, the Carathéodory convergence of (D_n, z_n) to (D_z, z). Yet it does not imply that ∂D_n converges to ∂D_z in the Hausdorff metric, hence the need to treat the boundary separately.
The proof follows from two lemmas. The first one says that metric graph local sets converge towards continuum local sets. The second one is a general lemma, which in our case implies that, due to the uniqueness of the FPS, the convergence in law of the pair (GFF, FPS) can be promoted to a convergence in probability. We remark that similar lemmas appear in [SS13], where the authors prove the convergence of discrete GFF level lines.
Lemma 4.9. Let (φ̃_n, A_n) be such that A_n is optional for φ̃_n, and such that, for some c > 0, the sets A_n have almost surely at most c connected components, none of which reduces to a point.
Then (φ̄_n, A_n, (φ̄_n)_{A_n}) is tight, and any subsequential limit (Φ, A, Φ_A) is a local set coupling. Additionally, for any connected component D_z of D, we have that (φ̄^{D_z}_n, A_n ∩ D_z) converges to a local set coupling in D_z, and Φ_{A∩D_z} is given by the restriction of Φ_A to D_z.
Proof. Let us first argue tightness. By Lemma 4.5 we know that the GFF-s converge in law. Moreover, the space of closed subsets of the closure of a bounded domain is compact for the Hausdorff distance. Hence the sequence A_n is tight. By conditioning on A_n, we can uniformly bound the expected value of the H^{−1}([−C, C]²) norm of (φ̃_n)^{A_n} and obtain tightness of (φ̃_n)^{A_n} in H^{−1−ε}.
Finally, by the Markov decomposition φ̃_n − (φ̃_n)^{A_n} = (φ̃_n)_{A_n} and the triangle inequality, we see that (φ̃_n)_{A_n} is also a tight sequence in H^{−1−ε}([−C, C]²). Thus, we have tightness of the quadruple (φ̃_n, A_n, (φ̃_n)_{A_n}, (φ̃_n)^{A_n}), from which the tightness of (φ̄_n, A_n, (φ̄_n)_{A_n}, (φ̄_n)^{A_n}) also follows.
We pick a subsequence (that we denote the same way) such that (φ̃_n, A_n, (φ̃_n)_{A_n}, (φ̃_n)^{A_n}) converges in law to (Φ, A, Φ₁, Φ₂). From the joint convergence, we then have that, for any bounded continuous functionals f₁ and f₂, E[f₁((φ̃_n)_{A_n}) f₂((φ̃_n)^{A_n})] converges to E[f₁(Φ₁) f₂(Φ₂)]. On the other hand, conditionally on (A_n, (φ̃_n)_{A_n}), the law of (φ̃_n)^{A_n} is that of a metric graph GFF in D̃_n\A_n. By Lemma 4.5, it follows that this conditional law converges to the law of a field which, conditionally on A, is a GFF in D\A. Thus, by bounded convergence, we have that, conditionally on Φ₁ and A, the law of Φ₂ is that of a GFF on D\A.
Thus, it remains to show that Φ₁ is almost surely harmonic in D\A: indeed, it would then follow from Lemma 3.2 that A is a local set, with Φ₁ = Φ_A and Φ₂ = Φ^A.
Let ∆_n be the discrete Laplacian. From Lemma 2.2 of [CS11] it follows that, for any smooth function f, inside any compact set where the derivatives of f remain bounded, ∆_n f(u) is equal to ∆f(u) + O(2^{−n}). Moreover, from integration by parts it follows that if f is a smooth function with compact support in D\A, then ((φ̃_n)_{A_n}, ∆_n f) = 0 for sufficiently large n. Hence (Φ₁, ∆f) = 0 almost surely, and thus Φ₁ is harmonic in D\A.
The final claim just follows from Lemma 4.5 and the simple fact that if A is a local set for Φ in a non-connected domain D, then for any connected component D_z of D, A ∩ D̄_z is a local set of Φ^{D_z}. The next lemma shows how to promote convergence in law to convergence in probability. See Lemma 4.5 in [SS09] and Lemma 31 in [Sha16] for earlier appearances in the contexts of GFF level lines and of Gaussian multiplicative chaos, respectively. We give a slight rewording of the latter proof, adapted to our setting.
Lemma 4.10. Let (X_n, Y_n)_{n∈N∪{∞}} be a sequence of random variables in a metric space, all of them living on the same probability space. Suppose that (1) (X_n, Y_n) converges in law to (X_∞, F(X_∞)) for some measurable function F, and (2) X_n converges in probability to X_∞. Then Y_n converges in probability to F(X_∞).
Proof. Denote M_n := (X_n, Y_n, X_∞, F(X_∞)). Because each coordinate is tight, we have that, up to a subsequence, M_n ⇒ (X̄_∞, Ȳ_∞, X_∞, F(X_∞)), where (X̄_∞, Ȳ_∞) is distributed as (X_∞, F(X_∞)). Thus, any linear combination of the coordinates also converges in law. Note that by (2), (X_n, X_∞) → (X_∞, X_∞), so X̄_∞ = X_∞. This implies that a.s. Ȳ_∞ = F(X̄_∞) = F(X_∞); thus Y_n − F(X_∞) converges in law, and therefore in probability, to 0.
We now have all the tools to prove the convergence.
Proof of Proposition 4.7. When min_{∂D̃_n} u_n ≥ −a, we know that (φ̃_n)_{A_n} + u_n is constantly equal to −a on D̃_n\A_n, and the claim follows directly from Lemmas 4.9 and 4.10.
When min_{∂D̃_n} u_n < −a, we can again use Lemmas 4.9 and 4.10 to obtain the convergence to a local set (A, Φ_A) in probability. Moreover, it is easy to see that the conditions (1) and (2) of Definition 3.3 hold for A, as these properties hold for all approximations and pass to the limit. Thus, it just remains to argue for (3). This condition, however, follows from the Beurling estimate. Pick some component O of the complement of A and any z on its boundary. We can then choose a small enough ball U¹_z around z such that the boundary conditions change only once in this neighborhood. By the Beurling estimate (Lemma 4.1), we can further choose an even smaller ball U_z such that the Brownian motion started inside U_z ∩ O exits O through U¹_z ∩ ∂O with probability larger than 1 − ε/(4 max |u|). By the convergence of A_n → A in probability, and the Beurling estimate again, we can choose n₀ large enough so that for all n ≥ n₀ the metric graph Brownian motion started inside U_z ∩ (Z̃²_n\A_n) exits Z̃²_n\A_n through U¹_z ∩ (Z̃²_n\A_n) with probability larger than 1 − ε/(2 max |u|), and |u_n − u| ≤ ε/2 uniformly over D̃_n ∩ D. A final use of the Beurling estimate then implies that for any z_n ∈ U_z ∩ (Z̃²_n\A_n), we have h̃_{A_n}(z_n) + u_n(z_n) ≥ min{−a, inf_{w∈U_z∩O} u(w)} − ε, where h̃_{A_n} is the metric graph harmonic function outside of A_n as in Proposition 2.1. The claim follows.
Convergence of clusters of loops and excursions.
In this subsection we assume that $u$ is non-negative. Let $\mathcal L^{D}_{\alpha}$ and $\tilde{\mathcal L}^{\tilde D_n}_{\alpha}$ denote respectively a continuum and a metric graph loop-soup of intensity $\alpha\in(0,1/2]$. Similarly, let $\Xi^{D}_{u}$ and $\tilde\Xi^{\tilde D_n}_{u_n}$ denote PPPs of boundary-to-boundary excursions, in the continuum of intensity $\mu^{D,u}_{exc}$ and in the metric graph setting of intensity $\tilde\mu^{\tilde D_n,u_n}_{exc}$ respectively. We sample the loop-soups and PPPs of excursions independently, and are interested in the clusters of $\mathcal L^{D}_{\alpha}\cup\Xi^{D}_{u}$ and of $\tilde{\mathcal L}^{\tilde D_n}_{\alpha}\cup\tilde\Xi^{\tilde D_n}_{u_n}$ that contain at least one excursion. By definition, two paths belong to the same cluster if they are joined by a finite chain of paths along which two consecutive ones intersect. We denote by $\mathcal A=\mathcal A(\mathcal L^{D}_{\alpha},\Xi^{D}_{u})$ and $\tilde{\mathcal A}_n=\tilde{\mathcal A}_n(\tilde{\mathcal L}^{\tilde D_n}_{\alpha},\tilde\Xi^{\tilde D_n}_{u_n})$ the closed union of such clusters. The main content of this subsection shows that metric graph clusters converge to their continuum counterparts:

Proposition 4.11. Suppose $(\tilde D_n,z_n)$ satisfy the condition and converge to $(D,z)$ in the Carathéodory sense. Moreover, suppose that $u$ is a non-negative bounded harmonic function and $u_n\to u$ uniformly on compact subsets of $D$. We also assume that whenever $u=0$ on a part of the boundary $B$, then for any sequence of metric graph boundary points $x_n\to x\in B$ we have that $u_n(x_n)=0$ as well, for $n$ large enough. Then, the sequence of compact sets $(\tilde{\mathcal A}_n\cap D)_{n\ge0}$ converges in law for the Hausdorff metric towards $\mathcal A$.
Let us explain the additional condition on the convergence of u n . We want to avoid the following situation. Assume B is an arc of the boundary ∂D and u equals 0 on B. Then A does not intersect B. However one could approximate u by u n small but positive on B n ⊆ ∂ D n approaching B. Then almost surely B n ⊂ A n and the limit of A n would contain B.
Before proving Proposition 4.11, let us show how it allows us to improve the convergence result of Proposition 4.7 for the FPS. Indeed, from Proposition 4.7 it follows that $(\tilde{\mathcal A}^{u_n}_{-a}\cap D)\cup\partial D$ converges in law to $\mathcal A^{u}_{-a}$. However, by convention $\mathcal A^{u}_{-a}$ is defined to contain $\partial D$, and Proposition 4.7 does not guarantee that there is no part of $\tilde{\mathcal A}^{u_n}_{-a}$ that for each $n$ intersects $D$ but in the limit converges to a non-trivial arc on $\partial D$. This can be addressed using Proposition 4.11. Assume that for any sequence of metric graph boundary points $x_n\in\partial\tilde D_n$ converging to a point $x\in B$, we have that $u_n(x_n)\le-a$ for $n$ large enough. Then, the limit of $(\tilde{\mathcal A}^{u_n}_{-a}\setminus\partial\tilde D_n)\cap D$ has empty intersection with the part of the boundary where $u\le-a$.
Proof. First assume that $-a\le\inf u$. Note that $\mathcal A^{u+a}_{0}\setminus\partial D$ has the same law as $\mathcal A(\mathcal L^{D}_{1/2},\Xi^{D}_{u+a})$. Then, $(\tilde{\mathcal A}^{u_n}_{-a}\setminus\partial\tilde D_n)\cap D$ has the law of $(\tilde{\mathcal A}^{u_n+a}_{0}\setminus\partial\tilde D_n)\cap D$, which has the law of $\tilde{\mathcal A}_n(\tilde{\mathcal L}^{\tilde D_n}_{1/2},\tilde\Xi^{\tilde D_n}_{u_n+a})\cap D$. Thus, the claim follows from Proposition 4.11 and the fact that the set $\mathcal A$ does not touch the parts of the boundary where $u=-a$.
For the general case, consider the boundary conditions $u^*:=u\vee(-a)$ and $u^*_n:=u_n\vee(-a)$ on $D$ and $\tilde D_n$ respectively. Notice that then $u^*_n$, $u^*$ still satisfy the hypotheses in the statement. Furthermore, by monotonicity of the FPS on the metric graph, $\tilde{\mathcal A}^{u_n}_{-a}\subseteq\tilde{\mathcal A}^{u^*_n}_{-a}$. We conclude by applying the previous case to $\tilde{\mathcal A}^{u^*_n}_{-a}$.

Let us now come back to the proof of Proposition 4.11. The core of our proof is the following lemma, saying that there are no loop-soup clusters that at the same time stay at a positive distance from the boundary, but also come microscopically close to it.
Lemma 4.13. Let $\alpha\in(0,1/2]$. Suppose that $(\tilde\Omega_n,w_n)_{n\in\mathbb N}$ satisfy the condition and $(\tilde\Omega_n,w_n)\to(\Omega,w)$ in the Carathéodory sense. Then, for all $\delta>0$,
\[
\lim_{\varepsilon\to0}\ \limsup_{n\to\infty}\ \mathbb P\Big[\exists\ \text{a cluster}\ \mathcal C\ \text{of}\ \tilde{\mathcal L}^{\tilde\Omega_n}_{\alpha}\ \text{with}\ d(\mathcal C,\partial\tilde\Omega_n)\le\varepsilon\ \text{and}\ \textstyle\sup_{z\in\mathcal C}d(z,\partial\tilde\Omega_n)\ge\delta\Big]=0. \tag{4.2}
\]
Note that the above lemma is not implied by the convergence result proved by Lupu in [Lup15]. However, it could have been proved using the same strategy as in [Lup15]. In our article, we will have a slightly different approach, relying on the convergence of first passage sets. We will first show how the proposition follows from this lemma, and then prove the lemma.
Proof of Proposition 4.11. From Lemma 4.6 we know that $\{\gamma\in\tilde{\mathcal L}^{\tilde D_n}_{\alpha} : \gamma\cap D\neq\emptyset\}\Rightarrow\mathcal L^{D}_{\alpha}$ and $\{\gamma\in\tilde\Xi^{\tilde D_n}_{u_n} : \gamma\cap D\neq\emptyset\}\Rightarrow\Xi^{D}_{u}$ as $n\to\infty$. Also, $(\tilde{\mathcal A}_n)_{n\in\mathbb N}$ is a sequence of random closed sets and thus is tight. Thus, as each coordinate is tight, we can extract a subsequence (which we denote in the same way) along which $(\{\gamma\in\tilde{\mathcal L}^{\tilde D_n}_{\alpha} : \gamma\cap D\neq\emptyset\},\ \{\gamma\in\tilde\Xi^{\tilde D_n}_{u_n} : \gamma\cap D\neq\emptyset\},\ \tilde{\mathcal A}_n\cap D)_{n\in\mathbb N}$ converges in law to a triple $(\mathcal L^{D}_{\alpha},\Xi^{D}_{u},\bar{\mathcal A})$. By using Skorokhod's representation theorem, we may assume that this convergence is almost sure. Then, as $\mathcal A$ is a measurable function of $\mathcal L^{D}_{\alpha}$ and $\Xi^{D}_{u}$, it remains to show that $\bar{\mathcal A}=\mathcal A$ almost surely.
Let us first show that $\mathcal A\subseteq\bar{\mathcal A}$. To do this we consider loops and excursions with a cutoff on the diameter, and the clusters formed by these loops and excursions. More precisely, respectively in the continuum and on the metric graph, let $\mathcal A^{\varepsilon}$ and $\tilde{\mathcal A}^{\varepsilon}_n$ denote the union of those clusters that are formed of loops and excursions of diameter greater than or equal to $\varepsilon>0$ and that contain at least one excursion. Recall that the diameter is always measured using the Euclidean distance on $\mathbb C$, even for paths living on metric graphs.
Note that both $\mathcal A^{\varepsilon}$ and $\tilde{\mathcal A}^{\varepsilon}_n$ consist a.s. of finitely many paths, and are in particular compact, since a.s. there are finitely many loops and excursions of diameter larger than any given value. Now, in our coupling, almost surely metric graph loops converge to continuum Brownian loops, metric graph excursions to Brownian excursions, and moreover, by Lemma 2.7 in [Lup16b], their intersection relations also converge. Hence we have that $\tilde{\mathcal A}^{\varepsilon}_n\cap D\to\mathcal A^{\varepsilon}$ a.s. On the other hand, $\tilde{\mathcal A}^{\varepsilon}_n\subseteq\tilde{\mathcal A}_n$ and $\mathcal A^{\varepsilon}\to\mathcal A$ as $\varepsilon\to0$. We conclude that $\mathcal A\subseteq\bar{\mathcal A}$ almost surely.
Let us now show that $\bar{\mathcal A}\subseteq\mathcal A$. First notice that, since $\tilde{\mathcal A}^{\varepsilon}_n\cap D\to\mathcal A^{\varepsilon}$ a.s. as $n\to\infty$ and $\mathcal A^{\varepsilon}\to\mathcal A$ a.s. as $\varepsilon\to0$ in the Hausdorff distance, a diagonal argument provides a deterministic sequence $\varepsilon(n)\to0$ such that $\tilde{\mathcal A}^{\varepsilon(n)}_n\cap D\to\mathcal A$ a.s. Now, fix a dense sequence of distinct points $(w_i)_{i\ge0}$ in $D$. Let $O_n(w_i)$ and $O^{\varepsilon(n)}_n(w_i)$ denote the connected components containing $w_i$ of $\tilde D_n\setminus\tilde{\mathcal A}_n$ and $\tilde D_n\setminus\tilde{\mathcal A}^{\varepsilon(n)}_n$ respectively. By the connected component of $w_i$ on a metric graph, we mean the connected component that either contains $w_i$ or contains the dyadic square surrounding $w_i$; for any fixed $w_i$ it is well defined only with a certain probability, which converges to 1 as $n\to+\infty$. Further, define $O(w_i)$ as the connected component of $w_i$ in $D\setminus\mathcal A$ and, for any $\delta>0$, let $\Theta_{\delta}(w_i)$ be the connected component of $w_i$ in $D\setminus(\mathcal A+B(0,\delta))$. As the condition on the boundary convergence of $u_n\to u$ guarantees that $\bar{\mathcal A}\cap\partial D=\mathcal A\cap\partial D$, it remains to prove that $\bar{\mathcal A}\cap D\subseteq\mathcal A\cap D$. To do this it suffices to show that for all $w_i$ and $\delta>0$,
\[
\mathbb P\big[\tilde{\mathcal A}_n\cap\Theta_{\delta}(w_i)=\emptyset\big]\xrightarrow[n\to\infty]{}1. \tag{4.1}
\]
For any fixed $w_i$, we will apply Lemma 4.13 to $\Omega=O(w_i)$ and $\tilde\Omega_n=O^{\varepsilon(n)}_n(w_i)$. Note that $\mathbb C\setminus O(w_i)$ has at most as many connected components as $\mathbb C\setminus D$. Moreover, from Theorem 1 of [Com13] we know that the Hausdorff convergence of $\tilde{\mathcal A}^{\varepsilon(n)}_n$ to $\mathcal A$ implies the Carathéodory convergence of $(O^{\varepsilon(n)}_n(w_i),w_i)$ to $(O(w_i),w_i)$. The metric graph loops that intersect but are not contained in $\tilde{\mathcal A}^{\varepsilon(n)}_n$ are by construction all of diameter smaller than $\varepsilon(n)$. Thus, the only way for $\tilde{\mathcal A}_n$ to have points $\delta$-far from $\tilde{\mathcal A}^{\varepsilon(n)}_n$ is for the event in (4.2) to be satisfied. We conclude that, with probability converging to 1, we have $\tilde{\mathcal A}_n\cap\Theta_{\delta}(w_i)=\emptyset$. Hence we obtain (4.1) and conclude the proof of the proposition.

Now, we present a short proof of the lemma using the already proved convergence of FPS. The idea is to add Brownian excursions to the loop soup to get an FPS. Then, when the event of having a macroscopic cluster close to the boundary occurs, we use bounds on the FPS and the fact that Brownian excursions intersect any cluster that goes from microscopically close to the boundary to a macroscopic distance, to conclude.
Proof of Lemma 4.13. Notice that by monotonicity of the clusters in $\alpha$, it suffices to prove the claim for $\alpha=1/2$. By Lemma 4.2, we can couple $(\tilde{\mathcal L}^{\tilde\Omega_n}_{1/2})_{n\ge0}$ and $\mathcal L^{\Omega}_{1/2}$ in such a way that $\tilde{\mathcal L}^{\tilde\Omega_n}_{1/2}\to\mathcal L^{\Omega}_{1/2}$ a.s. We also add PPPs of excursions $\tilde\Xi^{\tilde\Omega_n}_{u}$ and $\Xi^{\Omega}_{u}$, for some constant $u>0$ to be chosen later, in such a way that $\tilde\Xi^{\tilde\Omega_n}_{u}$ is independent of $\tilde{\mathcal L}^{\tilde\Omega_n}_{1/2}$. As the excursion measure has infinite mass on the diagonal, it follows that for any fixed $x\in\partial\Omega$, there is a.s. a Brownian excursion in $\Xi^{\Omega}_{u}$ disconnecting $x$ from $\Omega\setminus B(x,\delta/2)$ in $\Omega$. Hence, any connected set joining $x$ to a point at distance $\delta$ from $\partial\Omega$ has to intersect this excursion. However, we know that $\tilde\Xi^{\tilde\Omega_n}_{u}$ is independent of $\tilde{\mathcal L}^{\tilde\Omega_n}_{1/2}$ and that $\{\gamma\in\tilde\Xi^{\tilde\Omega_n}_{u} : \gamma\cap\Omega\neq\emptyset\}$ converges in law to $\Xi^{\Omega}_{u}$. Thus, the lemma follows.
Consequences of the convergence results
In this section, we use Proposition 4.7 and Proposition 4.11 to obtain several results concerning the FPS and the Brownian loop soup. These results can be roughly partitioned into two: in Section 5.1 we discuss a representation of the FPS with Brownian loops and excursions, and the consequences of this representation: extensions of the isomorphism theorems and several basic properties of the FPS, like its local finiteness. In Section 5.2, we discuss consequences for the level lines of the GFF; in particular, we prove a convergence result of certain interfaces of the metric graph GFF towards $SLE_4(\rho)$ processes. Let us however start from an easy consequence on the probability of percolation for super-level sets of a metric graph GFF in a large box; this type of percolation question is for example studied in [DL18]. For $\theta\in(0,1)$, let $p_N(\theta)$ denote the probability that the FPS $\tilde{\mathcal A}^{a}_0$ of the metric graph GFF in the box $\tilde\Lambda_N$ intersects $[-\theta N,\theta N]^2$; from [LW16] we see that $p_N$ is bounded away from 0 and 1 uniformly in $N$.
Let us now consider the continuity in $\theta$. Since for any fixed $\theta$, a.s. either $\tilde{\mathcal A}^{a}_0\cap[-\theta N,\theta N]^2=\emptyset$ or $\tilde{\mathcal A}^{a}_0\cap(-\theta N,\theta N)^2\neq\emptyset$, the function $p_N$ is continuous in $\theta$ for any fixed $N$. To obtain the uniformity in $N$ we argue as follows. Let $p(\theta)$ be the probability that the continuum FPS $\mathcal A^{a}_0$ intersects $[-\theta,\theta]^2$. Again, $p(\theta)$ is non-decreasing and continuous, because for fixed $\theta$, a.s. either $\mathcal A^{a}_0\cap[-\theta,\theta]^2=\emptyset$ or $\mathcal A^{a}_0\cap(-\theta,\theta)^2\neq\emptyset$. But now Proposition 4.7 tells us that $\tilde{\mathcal A}^{a}_0$, rescaled by $N^{-1}$, converges in law to $\mathcal A^{a}_0$ in $[-1,1]^2$. Thus, by convergence in law, the sequence $(p_N(\theta))_N$ converges pointwise to $p(\theta)$, and since the functions are non-decreasing, the convergence is uniform in $\theta$. Hence the continuity of $p(\theta)$ gives the uniformity in $N$.
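The last step is the classical argument (sometimes attributed to Pólya) that pointwise convergence of non-decreasing functions to a continuous limit is automatically uniform; we sketch it, assuming $\theta$ ranges over a compact interval, say $[0,1]$. Given $\varepsilon>0$, choose $0=\theta_0<\dots<\theta_m=1$ with $p(\theta_i)-p(\theta_{i-1})\le\varepsilon$ for all $i$, which is possible as $p$ is continuous and non-decreasing. For $\theta\in[\theta_{i-1},\theta_i]$, monotonicity of $p_N$ and $p$ gives
\[
p_N(\theta)-p(\theta)\;\le\;p_N(\theta_i)-p(\theta_{i-1})\;\le\;\big(p_N(\theta_i)-p(\theta_i)\big)+\varepsilon,
\]
together with the symmetric bound from below, so that
\[
\sup_{\theta\in[0,1]}\big|p_N(\theta)-p(\theta)\big|\;\le\;\max_{0\le i\le m}\big|p_N(\theta_i)-p(\theta_i)\big|+\varepsilon\;\xrightarrow[N\to\infty]{}\;\varepsilon.
\]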
Remark 5.2. One can similarly get the continuity of percolation probabilities in annuli at macroscopic distance from the boundary of the domain $\partial\tilde\Lambda_N$. For this, the convergence of first passage sets is however not enough: one needs the convergence of all excursion sets, i.e. of the sign components of $\tilde\varphi_N+b$. This will be done in [ALS18].
5.1. Representation of the continuum FPS with Brownian loops and excursions, and consequences on basic properties of FPS.

From Proposition 2.5, we know that a FPS on a metric graph is represented as the closure of clusters of metric graph loops and excursions. By using the convergence of the metric graph FPS to the continuum FPS (Proposition 4.7) and the convergence of clusters of metric graph loops and excursions to their continuum counterparts (Proposition 4.11), we obtain a similar representation in the continuum (Proposition 5.3): up to the boundary, $\mathcal A^{u}_{-a}$ has the same law as the closed union $\mathcal A(\mathcal L^{D}_{1/2},\Xi^{D}_{u+a})$ of the clusters of loops and excursions containing at least one excursion. In [QW15], the authors describe, conditionally on the outer boundary $\Upsilon$ of a Brownian loop-soup cluster, the law of the loops inside $\mathrm{Int}(\Upsilon)$. Moreover, in the same article the authors prove that conditioned on $\Upsilon$, the loops that intersect $\Upsilon$ are independent from those that do not intersect it, and they have the law of a PPP of Brownian excursions from $\Upsilon$ to $\Upsilon$ inside $\mathrm{Int}(\Upsilon)$ with intensity $\mu^{\mathrm{Int}(\Upsilon),2\lambda}_{exc}$. Combining this with Proposition 5.3, we can give a geometric description of the whole outermost cluster:

Corollary 5.4. Let the domain $D$ be simply connected. Conditioned on the outer boundary $\Upsilon$ of a Brownian loop-soup cluster in $\mathcal L^{D}_{1/2}$, the topological closure of the cluster itself is distributed like a first passage set $\mathcal A^{2\lambda}_0=\mathcal A^{0}_{-2\lambda}$ inside $\mathrm{Int}(\Upsilon)$, the interior surrounded by $\Upsilon$.

One can also combine the isomorphism for the Wick square of the GFF (Proposition 3.8) and the construction of the FPS from clusters of loops and excursions:

Proposition 5.5 (FPS + Wick square). Let $u$ be a non-negative harmonic function with piecewise constant boundary values. One can couple on the same probability space a GFF $\Phi$ and two point processes $\mathcal L^{D}_{1/2}$ and $\Xi^{D}_{u}$ of loops, resp. excursions, with $\Xi^{D}_{u}$ independent from $\mathcal L^{D}_{1/2}$, such that the two following conditions hold simultaneously:
(1) $\frac12{:}\Phi^2{:}+u\Phi+\frac12 u^2=\mathcal L^{ctr}(\mathcal L^{D}_{1/2})+\mathcal L(\Xi^{D}_{u})$;
(2) $\mathcal A^{u}_{0}$ is the closed union of the clusters of $\mathcal L^{D}_{1/2}\cup\Xi^{D}_{u}$ containing at least one excursion.

Proof. We follow the method of [QW15] and use subsequential limits of couplings on metric graphs to create a coupling in the continuum. As in Propositions 2.5 and 4.11, we consider metric graph domains $\tilde D_n$ converging to $D$ and non-negative bounded metric graph harmonic functions $u_n$ converging to $u$. By Propositions 2.4 and 2.5 on $\tilde D_n$, one can couple a GFF $\tilde\varphi_n$ and loops and excursions $(\tilde{\mathcal L}^{\tilde D_n}_{1/2},\tilde\Xi^{\tilde D_n}_{u_n})$ such that the analogues of (1) and (2) hold:
(1) $\frac12\tilde\varphi_n^2-\mathbb E[\tilde\varphi_n^2]+u_n\tilde\varphi_n+\frac12 u_n^2=\mathcal L(\tilde{\mathcal L}^{\tilde D_n}_{1/2})-\mathbb E[\mathcal L(\tilde{\mathcal L}^{\tilde D_n}_{1/2})]+\mathcal L(\tilde\Xi^{\tilde D_n}_{u_n})$;
(2) $\tilde{\mathcal A}^{u_n}_{0}$ is the closed union of the clusters of $\tilde{\mathcal L}^{\tilde D_n}_{1/2}\cup\tilde\Xi^{\tilde D_n}_{u_n}$ containing at least one excursion.
In [QW15], it was shown that $\mathbf 1_D\big(\mathcal L(\tilde{\mathcal L}^{\tilde D_n}_{1/2})-\mathbb E[\mathcal L(\tilde{\mathcal L}^{\tilde D_n}_{1/2})]\big)$ converges in law to $\mathcal L^{ctr}(\mathcal L^{D}_{1/2})$, in the sense that, tested against any finite family $f_1,\dots,f_k$ of smooth functions compactly supported in $D$, the finite-dimensional vectors converge. Also there one can find the convergence in law of $\mathbf 1_D(\tilde\varphi_n^2-\mathbb E[\tilde\varphi_n^2])$ to ${:}\Phi^2{:}$. The family of random variables $\big(\tilde\varphi_n,\ \mathbf 1_D(\tilde\varphi_n^2-\mathbb E[\tilde\varphi_n^2]),\ \tilde{\mathcal L}^{\tilde D_n}_{1/2},\ \tilde\Xi^{\tilde D_n}_{u_n},\ \mathbf 1_D\big(\mathcal L(\tilde{\mathcal L}^{\tilde D_n}_{1/2})-\mathbb E[\mathcal L(\tilde{\mathcal L}^{\tilde D_n}_{1/2})]\big),\ \mathcal L(\tilde\Xi^{\tilde D_n}_{u_n}),\ \tilde{\mathcal A}_n\cap D\big)_{n\in\mathbb N}$ is tight because each component converges in law. Thus, the whole coupling has subsequential limits in law, and the identities (1) and (2) pass to the limit.

5.1.1. Basic properties of the FPS.

In this section, we prove several basic but fundamental properties of the continuum FPS: we show that its Hausdorff dimension is a.s. 2 (Corollary 5.6), that it is a.s. locally finite, and finally that it satisfies the FKG inequality. The dimension statement follows from the representation above: a single Brownian excursion has Hausdorff dimension 2, and since such an excursion is contained in $\mathcal A^{u}_{-a}$, so is $\mathcal A^{u}_{-a}$ of Hausdorff dimension 2. Next, we show that $\mathcal A^{u}_{-a}$ is locally finite.

Proposition 5.7 (Local finiteness of $\mathcal A^{u}_{-a}$). Let $D$ be a bounded, finitely connected domain of $\mathbb C$, $u$ a harmonic function with piecewise constant boundary values, and $a\in\mathbb R$. Then $\mathcal A^{u}_{-a}$ is locally finite, that is to say, for any $\varepsilon>0$ there are finitely many connected components of $D\setminus\mathcal A^{u}_{-a}$ of diameter larger than $\varepsilon$.
Proof. First, one can assume that $u\ge-a$. If this is not the case, one can first sample $\mathcal A^{u}_{-a}$, note that $D\setminus\mathcal A^{u}_{-a}$ has only finitely many components where $h_{\mathcal A^{u}_{-a}}>-a$, and proceed as in the proof of Corollary 5.6. For simplicity, we can take $a=0$ and $u\ge0$.
Let $U$ be an annulus of the form $\{z\in\mathbb C : r<|z-z_0|<4r\}$ such that $U\cap D\neq\emptyset$. If $\mathcal A^{u}_{-a}$ is not locally finite with positive probability, then for some rational $\delta>0$, at least one annulus with rational midpoint and with $r=\delta/4$ is crossed by infinitely many components of $D\setminus\mathcal A^{u}_{-a}$ with positive probability. Thus, it is enough to show that any fixed annulus $U$ is a.s. not crossed by infinitely many connected components of $D\setminus\mathcal A^{u}_{-a}$.
So consider a fixed annulus $U$ and divide it into three concentric sub-annuli of equal width, $U_{int}=\{r<|z-z_0|<2r\}$, $U_{mid}=\{2r<|z-z_0|<3r\}$ and $U_{ext}=\{3r<|z-z_0|<4r\}$. We will use the representation of $\mathcal A^{u}_{-a}$ by clusters of Brownian loops and excursions as in Proposition 5.3. Our aim is to bound the probability of $E(k)$, the event that there are at least $k$ connected components of $D\setminus\mathcal A^{u}_{-a}$ crossing $U$. To do this, let us first consider the following five events:
• $E_0(k_0)$: there are at least $k_0$ chains of Brownian loops and excursions crossing $U_{mid}$, such that no two different chains contain a common loop or excursion of $\mathcal L^{D}_{1/2}\cup\Xi^{D}_{u}$;
• $E_1(k_1)$: there are at least $k_1$ different Brownian paths (loops or excursions) in $\mathcal L^{D}_{1/2}\cup\Xi^{D}_{u}$ that cross $U_{int}$;
• $E_2(k_2)$: there are at least $k_2$ different Brownian paths (loops or excursions) in $\mathcal L^{D}_{1/2}\cup\Xi^{D}_{u}$ that cross $U_{ext}$;
• $E_3(k_3)$: there is at least one loop or excursion in $\mathcal L^{D}_{1/2}\cup\Xi^{D}_{u}$ crossing $U_{int}$ at least $k_3$ times;
• $E_4(k_4)$: there is at least one loop or excursion in $\mathcal L^{D}_{1/2}\cup\Xi^{D}_{u}$ crossing $U_{ext}$ at least $k_4$ times.
We claim that, provided
\[
k\;\ge\;k_0\big((k_1+k_2-1)(k_3+k_4-2)+1\big), \tag{5.1}
\]
we have $E(k)\subseteq E_0(k_0)\cup E_1(k_1)\cup E_2(k_2)\cup E_3(k_3)\cup E_4(k_4)$. Indeed, the $k$ connected components of $D\setminus\mathcal A^{u}_{-a}$ crossing $U$ are separated by $k$ chains of Brownian loops and excursions that cross $U$. Any two such chains have subchains crossing $U_{mid}$. These subchains may be composed of disjoint paths, or have some Brownian paths in common. In the latter case, the shared paths have to cross either $U_{int}$ or $U_{ext}$ (or both), as two chains cannot be connected inside $U$. Now, suppose that the event $E_1(k_1)\cup E_2(k_2)\cup E_3(k_3)\cup E_4(k_4)$ does not hold. Then, any subchain crossing $U_{mid}$ can be connected to at most $(k_1+k_2-1)(k_3+k_4-2)$ others, so if additionally $E_0(k_0)$ does not hold, then under (5.1) the event $E(k)$ cannot hold either.
To finish the proof, we just need to argue that $\mathbb P(E(k))\to0$ as $k\to+\infty$. Let us first note that $\mathbb P(E_1(k_1))$, resp. $\mathbb P(E_2(k_2))$, are tail probabilities of Poisson random variables, and hence decrease faster than $Ce^{-k_i\log(k_i)}$ for some constant $C>0$. Further, by elementary properties of Brownian paths, $\mathbb P(E_3(k_3))$, resp. $\mathbb P(E_4(k_4))$, decrease at least exponentially fast in $k_3$, resp. $k_4$. Finally, in order to control $\mathbb P(E_0(k_0))$ one can apply the van den Berg-Kesten (BK) inequality for Poisson point processes [vdBK96]. More precisely, the event $E_0(k_0)$ corresponds to $k_0$ disjoint occurrences of the event $E_0(1)$, and by the BK inequality we have $\mathbb P(E_0(k_0))\le\mathbb P(E_0(1))^{k_0}$. Now, taking $k_i=k^{1/3}/2$ for all $i\ge0$, so that (5.1) is satisfied, we obtain $\mathbb P(E(k))\le C'e^{-C''k^{1/3}}$ for some constants $C',C''>0$. In particular, this probability tends to 0 as $k\to+\infty$.
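The Poisson tail bound used for $E_1$ and $E_2$ is a standard Chernoff computation: for $N\sim\mathrm{Poisson}(\mu)$ and $k>\mu$,
\[
\mathbb P(N\ge k)\;\le\;\inf_{t>0}\,e^{\mu(e^{t}-1)-tk}\;=\;e^{-\mu}\Big(\frac{e\mu}{k}\Big)^{k},
\]
the infimum being attained at $t=\log(k/\mu)$; for fixed $\mu$, this decays like $e^{-k\log k}$ up to constants, which is the rate quoted above.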
Next, we see that $\mathcal A^{u}_{0}$ satisfies a Harris-FKG inequality. This follows from the general Harris-FKG inequality for Poisson point processes (Lemma 2.1 in [Jan84]).

Corollary 5.8 (Harris-FKG). Consider a non-negative boundary condition $u$. Let $F_1$ and $F_2$ be two bounded measurable functionals on compact sets. We assume that $F_1$ and $F_2$ are increasing, that is to say, if $K\subseteq K'$ then $F_i(K)\le F_i(K')$. Then
\[
\mathbb E\big[F_1(\mathcal A^{u}_{0})F_2(\mathcal A^{u}_{0})\big]\;\ge\;\mathbb E\big[F_1(\mathcal A^{u}_{0})\big]\,\mathbb E\big[F_2(\mathcal A^{u}_{0})\big].
\]

Remark 5.9. One could also obtain a Harris-FKG inequality for $\mathcal A^{u}_{-a}$ from a Harris-FKG inequality for the GFF $\Phi$; then one does not need the constraint $u\ge-a$. First, note that $\mathcal A^{u}_{-a}$ is a.s. a non-decreasing function of $\Phi$; this can be proven similarly to the monotonicity part in Proposition 4.5 in [ALS17]. Further, $\Phi$ itself satisfies a Harris-FKG inequality for increasing functionals; see [Pit82] for the Harris-FKG property for finite-dimensional Gaussian vectors with covariance matrix having non-negative entries.
5.2. Convergence to the level lines and an explicit coupling of level lines.

In [WW13] the authors show that in simply connected domains, $SLE_{\kappa}(\rho)$ curves with $\kappa\in(8/3,4]$ can be obtained as "envelopes" of clusters of Brownian excursions from boundary to boundary and Brownian loops inside the domain. We will first show how to extend this result to the generalized level lines in multiply connected domains, defined in [ALS17] Section 3, and then use this to prove the main result of this subsection: we show that certain interfaces of metric graph GFFs converge to generalized level lines. In particular, we show that for specific boundary conditions and for simply connected domains, some basic metric graph GFF interfaces converge to $SLE_4(\rho)$ processes.
We work in the following set-up: $D$ is a finitely connected domain and $\partial_{ext}D$ the outermost connected component of $\partial D$; in other words, $\partial_{ext}D$ separates $D$ from infinity. We consider two boundary points $x_0\neq y_0\in\partial_{ext}D$ that split $\partial_{ext}D$ into two boundary arcs, $B_1$ and $B_2$ (see Figure 2). Assume that $u$ is a harmonic function that is piecewise constant on the boundary, equal to $-\lambda$ on $B_2$, with $\inf_{B_1}u>-\lambda$ and $\inf_{\partial D\setminus\partial_{ext}D}u\ge\lambda$. By Corollary 4.12 of [ALS17], $\mathcal A^{u}_{-\lambda}$ does not intersect $B_2$ and thus we can take $O$ to be the unique connected component of $D\setminus\mathcal A^{u}_{-\lambda}$ such that $B_2\subseteq\partial O$. Let $\eta$ denote the curve defined by $\mathcal A^{u}_{-\lambda}\cap\partial O$. It is proven in [ALS17], Corollary 4.12, that $\eta$ is a.s. equal to the generalized level line of $\Phi+u$ going from $y_0$ to $x_0$. However, for the rest of this section, one can also treat the generalized level lines as the FPS boundaries just described.
Consider also independent PPPs of loops $\mathcal L^{D}_{1/2}$ and boundary-to-boundary excursions $\Xi^{D}_{u+\lambda}$. By definition there are no excursions hitting $B_2\setminus\{x_0,y_0\}$ in $\Xi^{D}_{u+\lambda}$. Let $D_2$ be the unique connected component of $D\setminus\mathcal A(\mathcal L^{D}_{1/2},\Xi^{D}_{u+\lambda})$ such that $B_2\subset\partial D_2$, and let $\partial_2\mathcal A(\mathcal L^{D}_{1/2},\Xi^{D}_{u+\lambda})=\partial D_2\cap\mathcal A(\mathcal L^{D}_{1/2},\Xi^{D}_{u+\lambda})$. It is also a path in $D$ joining $y_0$ and $x_0$, like the generalized level line $\eta$. The following corollary says that these two paths agree (see Figure 2 for an illustration):

Corollary 5.10 (Level line = envelope of Brownian excursions and loops). Let $D$ be finitely connected and $u$, $\eta$ and $\partial_2\mathcal A(\mathcal L^{D}_{1/2},\Xi^{D}_{u+\lambda})$ as above. Then the generalized level line $\eta$ has the same law as $\partial_2\mathcal A(\mathcal L^{D}_{1/2},\Xi^{D}_{u+\lambda})$.

Remark 5.11. More generally, other level lines, or families of multiple level lines, can be obtained as boundaries of clusters of Brownian loops and excursions, as long as these level lines are boundaries of a same first passage set. For instance, in a simply connected domain, one can get in this way multiple commuting $SLE_4$ curves, which correspond to alternating boundary conditions $0$, $2\lambda$ (Figure 3). In [PW17], the authors give an expression for the probabilities of these different pairings.
Next, we show that certain interfaces of the metric graph GFF converge in law to level lines of the continuum GFF. Let $D$, $x_0$, $y_0$, $B_1$, $B_2$, $u$, $\eta$ be as previously. Consider $\tilde D_n$ an open subset of $\widetilde{\mathbb Z}^2_n$ such that we have Hausdorff convergence of $\tilde D_n\cup\partial\tilde D_n$ to $\overline D$ and of $\widetilde{\mathbb Z}^2_n\setminus\tilde D_n$ to $\mathbb C\setminus D$. Let $\partial_{ext}\tilde D_n$ be the boundary of the only unbounded connected component of $\widetilde{\mathbb Z}^2_n\setminus\tilde D_n$. We assume that $B_{1,n}\cup B_{2,n}$ is a partition of $\partial_{ext}\tilde D_n$ such that $B_{i,n}$ converges to $B_i$, and moreover $B_{1,n}$ and $B_{2,n}$ are separated by exactly two $2^{-n}\times2^{-n}$ dyadic squares, of which one contains $x_0$ and the other $y_0$ (see Figure 4). Let $u_n$ be harmonic on $\tilde D_n$ such that $u_n$ is constant equal to $-\lambda$ on $B_{2,n}$, $\inf_{B_{1,n}}u_n>-\lambda$, $\inf_{\partial\tilde D_n\setminus\partial_{ext}\tilde D_n}u_n\ge\lambda$, and $u_n$ converges to $u$ uniformly on compact subsets of $D$. We have seen that with these boundary conditions, the metric graph first passage set $\tilde{\mathcal A}^{u_n}_{-\lambda}$ contains the boundary $B_{2,n}$ only by convention, i.e., the closure of $\tilde{\mathcal A}^{u_n}_{-\lambda}\setminus\partial\tilde D_n$ meets $\partial\tilde D_n$ only in $\partial\tilde D_n\setminus B_{2,n}$.
Let $\partial_2\tilde{\mathcal A}^{u_n}_{-\lambda}$ be the set of points in $\partial\tilde{\mathcal A}^{u_n}_{-\lambda}$ that are connected in $\tilde{\mathcal A}^{u_n}_{-\lambda}$ to $B_{1,n}$ and in $\tilde D_n\setminus\tilde{\mathcal A}^{u_n}_{-\lambda}$ to $B_{2,n}$. A.s. $\partial_2\tilde{\mathcal A}^{u_n}_{-\lambda}$ contains no vertices, and the edges it intersects define a path from $x_0$ to $y_0$ in the dual lattice of $(2^{-n}\mathbb Z)^2$ (in red on Figure 4).

Proposition 5.12. The interface $\partial_2\tilde{\mathcal A}^{u_n}_{-\lambda}$ converges in law to the generalized level line $\eta$. In particular, if the domain $D$ is simply connected and the boundary condition $u$ is constant equal to $b>-\lambda$ on $B_1$, then $\partial_2\tilde{\mathcal A}^{u_n}_{-\lambda}$ converges in law to the trace of an $SLE_4(\rho)$ process, with $\rho=b/\lambda-1$.

Let us stress a corollary that gives a convergence result of very simple metric graph GFF interfaces towards $SLE_4$:

Corollary 5.13. Consider the unit disk $\mathbb D$ and the metric graphs $\tilde D_n=\widetilde{\mathbb Z}^2_n\cap\mathbb D$. Let the boundary conditions $u$, $u_n$ be given by $-\lambda$ for all $z$ with $\mathrm{Re}(z)<0$ and by $\lambda$ for all $z$ with $\mathrm{Re}(z)\ge0$. Let $(\tilde\varphi_n)_{n\ge0}$ be metric graph GFFs on $\tilde D_n$, and suppose that their extensions converge to a GFF $\Phi$ in probability. Then, the left boundary of the FPS $\tilde{\mathcal A}^{u_n}_{-\lambda}$ and the right boundary of the FPS $\tilde{\mathcal A}^{u_n}_{\lambda}$ both converge in probability w.r.t. the Hausdorff distance to the $-\lambda,\lambda$ level line of [SS09], which has the law of an $SLE_4$ from $-i$ to $i$ in $\mathbb D$.
Proof of Proposition 5.12. ∂ 2 A un −λ is a boundary "component" of A un −λ and η that of A u −λ . The convergence of A un −λ to A u −λ in the Hausdorff topology implies that the limit of ∂ 2 A un −λ contains η and does not intersect D 2 , i.e. the B 2 side of η (right on Figures 2 and 4). Yet this convergence does not exclude that in the limit there are bubbles attached to η on its B 1 side (left on Figures 2 and 4). To address this issue, we are going to use the representation of the level line η as the boundary of clusters of loops and excursions, and some results from [vdBCL16] that state that the clusters of a Brownian loop-soup are "well connected", that is to say that, if we remove the microscopic Brownian loops up to some scale, the outer boundaries of clusters do not change too much.
From Corollary 5.10, we have the representation η = ∂ 2 A(L D 1/2 , Ξ D u+λ ). Consider further metric graph loop-soup L Dn 1/2 , metric graph PPP of excursions Ξ Dn un+λ and the union of clusters containing at least one excursion A n = A n (L Dn 1/2 , Ξ Dn un+λ ). Using Lemma 4.2 we can couple everything on the same probability space so that the metric graph PPP and unions of clusters converge to their continuum counterparts. Now, define ∂ 2 A n to be the set of points on ∂ A n that are connected in A n to B 1,n and in D n \ A n to B 2,n . As before, it has the same law as ∂ 2 A un −λ .
As in the proof of Proposition 4.11, we also consider clusters of loops and excursions that have diameter larger than ε, denoted by A ε = A ε (L D 1/2 , Ξ D u+λ ) and A ε n = A ε n (L Dn 1/2 , Ξ Dn un+λ ) in the continuum and on the metric graph respectively. Define ∂ 2 A ε and ∂ 2 A ε n as above. From Corollary 5.3 in [vdBCL16], it follows that for fixed ε > 0, ∂ 2 A ε n converges as n → +∞ in Hausdorff topology to ∂ 2 A ε . Thus, as ∂ 2 A ε is on the B 1 side of η, we obtain that ∂ 2 A n is asymptotically "squeezed" between ∂ 2 A ε and η.
The result about its law just follows from the fact that level lines in simply-connected domains for piece-wise constant boundary conditions have the law of SLE 4 (ρ) processes [WW16].
Remark 5.14. Using absolute continuity of level lines, one can extend the convergence result above to the case where the boundary condition is not constantly equal to $-\lambda$ on $B_2$, but is less than or equal to $-\lambda$ on $B_2'$ and equal to $-\lambda$ on $B_2\setminus B_2'$, where $B_2'\subset B_2$ and $d(B_2',\{x_0,y_0\})>0$.
5.2.1. A coupling with different boundary conditions and coinciding level lines.

Finally, we will discuss how the representation of level lines as boundaries provides an explicit coupling of level lines for GFFs with different boundary conditions. Moreover, we also give an exact formula for the conditional probability that the two level lines agree in this coupling, conditioned on one of the level lines. In fact, in the non-boundary-touching case, the existence of a coupling where level lines of two GFFs with different boundary conditions agree with positive probability follows already from Proposition 13 in [ASW17]. Here, we provide an explicit such coupling with exact formulas.
Let $D$, $x_0$, $y_0$, $B_1$, $B_2$, $u$, $\eta$ be as previously. Moreover, let $u^*$ be another harmonic function that is piecewise constant on the boundary. Suppose $u^*\ge u$ and let $B_3$ denote the part of the boundary where $u^*>u$. Let $\Phi^*$ be a GFF. Then we can define $\eta^*$, a generalized level line of $\Phi^*+u^*$ from $y_0$ to $x_0$.
Corollary 5.15 (Coupling of level lines with different boundary conditions). Assume $d(B_3,B_2)>0$. Then, there is a coupling of the random curves $\eta$ and $\eta^*$ such that the event $\eta=\eta^*$ has positive probability. The conditional probability of this event given $\eta$ is
\[
\mathbb P(\eta^*=\eta\mid\eta)=\mathbf 1_{\eta\cap B_3=\emptyset}\exp\big(-M(u,u^*,\eta)\big),
\]
where $M(u,u^*,\eta)$ is expressed in terms of the boundary Poisson kernel $H_{D_i}(dx_1,dx_2)$ on $\partial D_i\times\partial D_i$ and the harmonic measure $\mu^{D}_{harm}(x_2,dx_3)$ on $\partial D$ seen from $x_2$. | 2018-05-23T14:49:32.000Z | 2018-05-23T00:00:00.000 | { "year": 2018, "sha1": "c8a35e8d84a6f26c28643cccf67aa2b83217caea", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1805.09204", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "c8a35e8d84a6f26c28643cccf67aa2b83217caea", "s2fieldsofstudy": ["Mathematics"], "extfieldsofstudy": ["Mathematics", "Physics"] } |
266763734 | pes2o/s2orc | v3-fos-license | Proteomic analysis shows decreased type I fibers and ectopic fat accumulation in skeletal muscle from women with PCOS
Background: The main feature of polycystic ovary syndrome (PCOS) is hyperandrogenism, which is linked to a higher risk of metabolic disorders. Gene expression analyses in adipose tissue and skeletal muscle reveal dysregulated metabolic pathways in women with PCOS, but these differences do not necessarily lead to changes in protein levels and biological function. Methods: To advance our understanding of the molecular alterations in PCOS, we performed global proteomic and phosphorylation site analysis using tandem mass spectrometry, and analyzed gene expression and methylation. Adipose tissue and skeletal muscle were collected at baseline from 10 women with and without PCOS, and in women with PCOS after 5 weeks of treatment with electrical stimulation. Results: Perilipin-1, a protein that typically coats the surface of lipid droplets in adipocytes, was increased, whereas proteins involved in muscle contraction and type I muscle fiber function were downregulated in PCOS muscle. Proteins in the thick and thin filaments had many altered phosphorylation sites, indicating differences in protein activity and function. A mouse model was used to corroborate that androgen exposure leads to a shift in muscle fiber type in controls but not in skeletal muscle-specific androgen receptor knockout mice. The upregulated proteins in muscle post treatment were enriched in pathways involved in extracellular matrix organization and wound healing, which may reflect a protective adaptation to repeated contractions and tissue damage due to needling. A similar, albeit less pronounced, upregulation in extracellular matrix organization pathways was also seen in adipose tissue. Conclusions: Our results suggest that hyperandrogenic women with PCOS have higher levels of extra-myocellular lipids and fewer oxidative insulin-sensitive type I muscle fibers. These could be key factors leading to insulin resistance in PCOS muscle, while electrical stimulation-induced tissue remodeling may be protective. Funding: Swedish Research Council (2020-02485, 2022-00550, 2020-01463), Novo Nordisk Foundation (NNF22OC0072904), and IngaBritt and Arne Lundberg Foundation. Clinical trial number NCT01457209.
Introduction
Polycystic ovary syndrome (PCOS) is a metabolic and endocrine disorder characterized by clinical signs of hyperandrogenism and reproductive dysfunction (Joham et al., 2022). Although not part of the diagnosis, insulin resistance and abdominal obesity are common and lead to an increased risk of type 2 diabetes (Kakoly et al., 2019). PCOS affects 8-17% of women worldwide. More than 50% of women with PCOS are overweight or obese, and obesity worsens all symptoms, including insulin resistance (Kakoly et al., 2019; Barber and Franks, 2021). In those with PCOS, obesity is associated with altered adipose tissue function, increased visceral fat, adipocyte hypertrophy, and lower adiponectin levels compared with BMI-matched controls (Mannerås-Holm et al., 2011; Villa and Pratley, 2011; Mannerås-Holm et al., 2014). In recent independent bidirectional Mendelian randomization studies, obesity is even thought to contribute to or cause the development of PCOS (Brower et al., 2019; Zhao et al., 2020; Liu et al., 2022). Adipose tissue dysfunction also leads to consequences in other metabolic tissues. Overweight/obese women with PCOS are at increased risk of developing lipotoxicity due to increased triglyceride and free fatty acid levels and increased fatty acid uptake into nonadipose cells, including skeletal muscle, which is exacerbated by increased intra-abdominal fat with high lipolytic activity (Dumesic et al., 2022). The lipotoxic state in skeletal muscle includes increased expression of genes involved in lipid storage and epigenetic changes, as demonstrated in a PCOS-like sheep model (Guo et al., 2020). Consequently, excessive uptake and extracellular storage of free fatty acids in skeletal muscle promote insulin resistance. Moreover, skeletal muscle from women with PCOS exhibits transcriptional, epigenetic, and protein changes associated with insulin resistance, accompanied by an inflammatory, oxidative, and lipotoxic state (Corbould et al., 2005; Skov et al., 2007; Nilsson et al., 2018; Stepto et al., 2019; Manti et al., 2020; Stepto et al., 2020; Moreno-Asso et al., 2021).
Several studies show that defects in the mRNA expression of mitochondrial function, fat oxidation, and immunometabolic pathways in adipose tissue and skeletal muscle contribute to insulin resistance in women with PCOS (Skov et al., 2007; Nilsson et al., 2018; Stepto et al., 2019; Manti et al., 2020; Stepto et al., 2020). Interestingly, the mechanism underlying the reduced insulin-stimulated glucose uptake in PCOS skeletal muscle likely differs from insulin resistance in BMI-matched controls (Corbould et al., 2005; Nilsson et al., 2018). This finding is consistent with the theory that PCOS is a disorder with genetically distinct subtypes (Dapas et al., 2020). However, mRNA expression does not always translate to alterations in protein levels and changes in biological function. The first proteomics data on visceral fat and skeletal muscle from severely obese women with PCOS used mass spectrometry of selected protein spots on 2D gel electrophoresis and were published a decade ago (Cortón et al., 2008; Montes-Nieto et al., 2013; Insenser et al., 2016). This method is limited, as only a few proteins could be identified. Today, the combination of liquid chromatography with tandem mass spectrometry provides us with thousands of proteins. Recent publications map the proteome in serum, follicular fluid, ovary, and endometrium from women with PCOS, and the proteome in serum may provide new biomarkers for PCOS (Li et al., 2020; Abdulkhalikova et al., 2022; Wang et al., 2022). We here use a nontargeted quantitative proteomics approach to advance our understanding of the pathophysiology in adipose tissue and skeletal muscle from women with PCOS.
We have previously shown that electrically stimulated muscle contractions, known as electroacupuncture, improve glucose regulation and decrease androgen levels in overweight/obese women with PCOS (Stener-Victorin et al., 2016). When acupuncture needles are stimulated by low-frequency electrical stimulation, they cause muscle contractions similar to those that occur during exercise. Muscle contractions activate specific physiological signaling pathways, and electrical muscle stimulation and exercise act through partially similar signaling pathways to induce glucose uptake in the acute response to muscle contractions (Benrick et al., 2020). In addition, long-term intervention with both exercise and electrical stimulation has been shown to improve glucose regulation, promote ovulation, decrease muscle sympathetic nerve activity, and reduce circulating androgens in women with PCOS (Stener-Victorin et al., 2009; Hutchison et al., 2011; Jedel et al., 2011; Johansson et al., 2013; Stener-Victorin et al., 2016; Tiwari et al., 2019). The molecular mechanisms mediating the effect on adipose tissue and skeletal muscle in response to long-term electrical stimulation remain unclear. Therefore, the second aim is to use proteomics to provide mechanistic explanations for the improved glucose homeostasis in response to long-term electrical stimulation treatment.
Ethical approval
The study was conducted at the Sahlgrenska University Hospital and the Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden, in accordance with the standards set by the Declaration of Helsinki. Procedures have been approved by the Regional Ethical Review Board of the University of Gothenburg (approval number 520-11), and the study was registered at https://clinicaltrials.gov/ (NCT01457209). All women provided oral and written informed consent before participation in the study.
Participants and study protocol
Women with PCOS were diagnosed according to the Rotterdam criteria (Rotterdam ESHRE/ASRM-Sponsored PCOS Consensus Workshop Group, 2004). This cohort has previously been described in detail and included 21 overweight and obese cases and 21 overweight and obese controls matched for age, weight, and BMI (Kokosar et al., 2016; Stener-Victorin et al., 2016). Of these, subcutaneous adipose tissue and skeletal muscle biopsies from 10 women with PCOS and 10 controls were included in the proteomics analysis. Participants reported to the laboratory in the morning after an overnight fast on menstrual cycle days 1-10, or irrespective of cycle day in ameno/oligomenorrheic women. Anthropometrics were measured and BMI was calculated as previously described (Kokosar et al., 2016; Stener-Victorin et al., 2016). Anthropometrics and reproductive and endocrine variables for those included in the proteomics analysis are given in Table 1. Needle biopsies from subcutaneous adipose tissue from the umbilical area and skeletal muscle tissue from vastus lateralis were obtained under local anesthesia (Xylocaine, Astra-Zeneca AB, Södertälje, Sweden) from cases and controls (Figure 1A). The fat biopsies were rinsed with saline before both tissues were snap-frozen in liquid nitrogen and stored at −80°C until further analysis. Thereafter, women with PCOS received low-frequency electrical stimulations causing muscle contractions, so-called electroacupuncture. Acupuncture needles were placed in somatic segments corresponding to the innervation of the ovaries and pancreas: bilaterally in abdominal muscle, in quadriceps muscles, and in the muscles below the medial side of the knee. Needles were inserted to a depth of 15-40 mm with the aim of reaching the muscles. Needles were connected to an electrical stimulator (CEFAR ACUS 4; Cefar-Compex Scandinavia, Landsbro, Sweden) and stimulated with a low-frequency (2 Hz) electrical signal for 30 min. The intensity was adjusted every 10th minute, due to receptor adaptation, with the intention to produce local muscle contractions without pain or discomfort. Treatment was given three times per week over 5 weeks, and the number of treatments varied from 11 to 19. Two sets of needle placements were alternated to avoid soreness (Figure 1B; Stener-Victorin et al., 2016). Baseline measurements were repeated after 5 weeks of treatment, within 48 hr after the last treatment, and new fat and muscle biopsies were collected. After all relevant clinical information was obtained, samples were coded and anonymized.
Sample preparation for global proteomic analysis
Aliquots containing 25 μg of each individual sample were digested with trypsin using the filter-aided sample preparation method. Briefly, protein samples were reduced with 100 mM dithiothreitol at 60°C for 30 min, transferred onto 30 kDa MWCO Nanosep centrifugal filters (Pall Life Sciences, Ann Arbor, MI, USA), washed with 8 M urea solution, and alkylated with 10 mM methyl methanethiosulfonate in 50 mM TEAB and 1% sodium deoxycholate. Digestion was performed in 50 mM TEAB, 1% sodium deoxycholate at 37°C in two stages: the samples were incubated with 250 ng of Pierce MS-grade trypsin (Thermo Fisher Scientific, Rockford, IL, USA) for 3 hr, then 250 ng more of trypsin was added and the digestion was performed overnight. The peptides were collected by centrifugation and labeled using TMT 10-plex isobaric mass tagging reagents (Thermo Scientific). Sodium deoxycholate was then removed by acidification with 10% trifluoroacetic acid. The mixed labeled samples were fractionated on the AKTA chromatography system (GE Healthcare Life Sciences, Sweden) using the XBridge C18 3.5 μm, 3.0×150 mm column (Waters Corporation, Milford, CT, USA) and a 25 min gradient from 7% to 40% solvent B at a flow rate of 0.4 ml/min; solvent A was 10 mM ammonium formate in water at pH 10.00, and solvent B was 90% acetonitrile, 10% 10 mM ammonium formate in water at pH 10.00. The initial 40 fractions were combined into 20 pooled fractions in the order 1+21, 2+22, 3+23, etc. The pooled fractions were dried on Speedvac and reconstituted in 20 μl of 3% acetonitrile, 0.1% formic acid for analysis.
Sample preparation for phosphoproteomic analysis
Aliquots containing 450 μg of each individual sample were digested with trypsin using the filter-aided sample preparation method. The phosphopeptides were enriched using the Pierce TiO2 Phosphopeptide Enrichment and Clean Up Kit (Thermo Fisher Scientific). The purified phosphopeptide samples were evaporated to dryness, reconstituted in 50 mM TEAB, and labeled using TMT 10-plex isobaric mass tagging reagents (Thermo Fisher Scientific). The TMT-labeled phosphopeptide samples were mixed into corresponding sets and purified using Pierce C-18 Spin Columns (Thermo Fisher Scientific). Purified samples were dried on Speedvac and reconstituted in 15 μl of 3% acetonitrile, 0.1% formic acid for analysis.
LC-MS/MS analysis
All samples were analyzed on an Orbitrap Fusion Tribrid (Thermo Fisher Scientific) interfaced with a Thermo Easy-nLC 1000 nanoflow liquid chromatography system (Thermo Fisher Scientific). Peptides were trapped on the C18 trap column (100 μm × 3 cm, particle size 3 μm) and separated on the C18 analytical column (75 μm × 30 cm), home-packed with 3 μm Reprosil-Pur C18-AQ particles (Dr. Maisch, Germany), using a gradient from 5% to 25% B in 45 min and from 25% to 80% B in 5 min; solvent A was 0.2% formic acid and solvent B was 98% acetonitrile, 0.2% formic acid. Precursor ion mass spectra were recorded at 120,000 resolution. The most intense precursor ions were selected ('top speed' setting with a duty cycle of 3 s) and fragmented using CID at a collision energy setting of 30, and the MS2 spectra were recorded in the ion trap. Dynamic exclusion was set to 30 s with 10 ppm tolerance. MS3 spectra were recorded at 60,000 resolution with HCD fragmentation at a collision energy of 55, using synchronous precursor selection of the five most abundant MS/MS fragments. The phosphopeptides were trapped on the NanoViper C18 trap column (100 μm × 2 cm, particle size 2 μm, Thermo Scientific) and separated on the home-packed C18 analytical column (75 μm × 30 cm) using a gradient from 7% to 32% B in 100 min and from 32% to 100% B in 5 min; solvent A was 0.2% formic acid and solvent B was 80% acetonitrile, 0.2% formic acid. The mass spectrometry settings were the same as described above for the global proteomic analysis, but HCD fragmentation at a collision energy of 33 was used in MS2.
Proteomic data analysis
Identification was performed using Proteome Discoverer version 2.4 (Thermo Fisher Scientific). The database search was performed using the Mascot search engine v. 2.5.1 (Matrix Science, London, UK) against the Swiss-Prot Homo sapiens database. For phosphopeptide samples, phosphorylation on serine, threonine, and tyrosine was added as a variable modification. Quantification was performed in Proteome Discoverer 2.4. TMT reporter ions were identified with 3 mmu mass tolerance in the MS3 HCD spectra for the total proteome experiment and with 20 ppm mass tolerance in the MS2 HCD spectra for the phosphopeptide experiment, and the TMT reporter S/N values for each sample were normalized within Proteome Discoverer 2.4 on the total peptide amount. Only uniquely identified peptides were considered for protein quantification. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the PRoteomics IDEntifications Database (PRIDE) (RRID:SCR_003411) (Deutsch et al., 2020) partner repository with the dataset identifier PXD025358.
The normalized abundance counts were used for the downstream analysis using the Differential Enrichment analysis of Proteomics data (DEP) package (Zhang et al., 2018). The counts were log2 transformed, and proteins that were quantified in less than 2/3 of the samples were removed. Missing values were imputed using random draws from a Gaussian distribution around the minimal value, as the missing values were not random but concentrated among proteins with low intensities. The data were batch-effect adjusted using ComBat (Johnson et al., 2007) with the LC-MS/MS run assigned as the batch covariate. Differential expression analysis was performed on the dataset with DEP's test_diff function, which uses protein-wise linear models and empirical Bayes statistics via limma (Smyth, 2004). The differential expression analysis calculated the log2 fold changes (FC), p-values, and q-values between the three groups (controls and PCOS at baseline (W0), and PCOS after 5 weeks of treatment (W5)), generating two different result datasets. Proteins and phosphorylation enrichments were determined to be significantly differentially expressed between the groups if the p-value was <0.05 and the log2 FC was ≥0.5 or ≤-0.5 (these rules are illustrated in the sketch below).

mRNA expression and DNA methylation arrays

mRNA was extracted from adipose tissue (n=17) and skeletal muscle (n=8) biopsies collected at steady state during the hyperinsulinemic-euglycemic clamp, before and after 5 weeks of electrical stimulation in those with PCOS, using the RNeasy Lipid Tissue Mini Kit for adipose tissue and the RNeasy Fibrous Tissue Mini Kit for skeletal muscle (QIAGEN). Nucleic acid concentration was measured with a spectrophotometer (NanoDrop, Thermo Scientific), and RNA quality was determined with an automated electrophoresis station (Experion, Bio-Rad). A HumanHT-12 v4 Expression BeadChip array (Illumina) was used to analyze global mRNA expression. cRNA synthesis, including biotin labeling, was carried out using an Illumina TotalPrep RNA Amplification Kit (Life Technologies and Invitrogen). The biotin-cRNA complex was then fragmented and hybridized to the probes on the Illumina BeadChip array before being hybridized and stained with streptavidin-Cy3 according to the manufacturer's instructions. Probes were visualized with an Illumina HiScan fluorescence camera. The Oligo package from Bioconductor was used to compute robust multichip average expression measures (Bolstad et al., 2003).
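To make the proteomic filtering and significance rules above concrete, the following is a minimal Python sketch on synthetic data. It is illustrative only: the actual analysis used the R/Bioconductor packages DEP, limma, and ComBat, and the synthetic inputs, the 10% missingness, and the 0.3 factor on the imputation spread are assumptions, not the authors' parameters.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# synthetic normalized abundances (100 proteins x 20 samples) with missing values
abundance = pd.DataFrame(rng.lognormal(10, 1, (100, 20)))
abundance = abundance.mask(rng.random(abundance.shape) < 0.1)

# log2-transform and keep proteins quantified in at least 2/3 of the samples
x = np.log2(abundance)
x = x[x.notna().mean(axis=1) >= 2 / 3]

# impute missing values with draws from a Gaussian centred near the minimal
# observed intensity, since missingness is concentrated at low intensities
lo, spread = np.nanmin(x.values), 0.3 * np.nanstd(x.values)

def impute(row: pd.Series) -> pd.Series:
    row.loc[row.isna()] = rng.normal(lo, spread, int(row.isna().sum()))
    return row

x = x.apply(impute, axis=1)

# significance rule used in the study: p < 0.05 and log2 FC >= 0.5 or <= -0.5
p_values = pd.Series(rng.uniform(0, 1, len(x)), index=x.index)
log2_fc = pd.Series(rng.normal(0, 0.6, len(x)), index=x.index)
significant = (p_values < 0.05) & (log2_fc.abs() >= 0.5)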
For methylation array studies, DNA was isolated from adipose tissue (n=17) and skeletal muscle (n=9) biopsies taken at steady state during the hyperinsulinemic-euglycemic clamp, before and after 5 weeks of electrical stimulation of women with PCOS, using the QIAamp DNA Mini Kit (QIAGEN). Nucleic acid concentrations and purity were estimated with a NanoDrop spectrophotometer (Thermo Scientific, Wilmington, DE, USA), and DNA integrity was checked by gel electrophoresis. Genome-wide DNA methylation was analyzed with the Infinium HumanMethylation450k BeadChip array (Illumina). The array contains 485,577 cytosine probes covering 21,231 (99%) RefSeq genes (Bibikova et al., 2011). A DNA Methylation Kit (D5001-D5002, Zymo Research) was used to convert genomic DNA to bisulfite-modified DNA. Briefly, high-quality gDNA (500 ng) was fragmented and hybridized on the BeadChip, and the intensities of the signals were measured with a HiScanQ scanner (Illumina).
Array data analysis
The bioinformatics analyses of DNA methylation array data were performed as described previously (Rönn et al., 2015). In brief, Y chromosome probes, rs-probes, and probes with an average detection p-value >0.01 were removed. After quality control and filtering, methylation data were obtained for 298,289 CpG sites in adipose tissue and 298,332 CpG sites in skeletal muscle. Beta-values were converted to M-values, M = log2(β/(1−β)), which were used for all data analyses. Data were then quantile-normalized and batch-corrected with ComBat (Johnson et al., 2007). Differentially methylated sites were identified using a paired t-test (limma package, Bioconductor). To improve interpretation, after all the preprocessing steps, the data were reconverted to beta-values ranging from 0% (unmethylated) to 100% (completely methylated).
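The β-to-M conversion and its inverse are simple deterministic maps; a minimal sketch (NumPy, with arbitrary example values):

import numpy as np

def beta_to_m(beta):
    # M = log2(beta / (1 - beta)); M-values were used for the statistical analyses
    return np.log2(beta / (1.0 - beta))

def m_to_beta(m):
    # inverse map, used to report results back on the 0-100% methylation scale
    return 2.0 ** m / (2.0 ** m + 1.0)

beta = np.array([0.10, 0.50, 0.90])
m = beta_to_m(beta)                      # approx. [-3.17, 0.00, 3.17]
assert np.allclose(m_to_beta(m), beta)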
Pathway enrichment analysis
The tool UniProt, the Universal Protein Resource (RRID:SCR_002380), was used to retrieve protein names. We applied enrichment analysis to all differentially expressed proteins and phosphorylation sites using Enrichr (RRID:SCR_001575) and STRING (RRID:SCR_005223). Ontology terms with a q-value <0.05 and including at least 3 proteins/phosphosites, or with an odds ratio >100, were considered enriched.
Histological analyses and immunofluorescence
Skeletal muscle and adipose tissue biopsies were fixed in Histofix (Histolab, Sweden) for >72 hr and then stored in 70% ethanol. Tissues were dehydrated and fixated in paraffin blocks. Paraffin-embedded adipose tissue and muscle tissue were cut into 7 μm sections using a rotary microtome (Leica Microtome) and mounted on Superfrost Plus Adhesion microscope glass slides (Epredia J1800AMNZ, #10149870, Thermo Fisher Scientific). Picrosirius red staining (cat#24901-250, Polysciences, Inc) was used to identify and quantify fibrillar collagen in adipose and muscle tissue. Adipose tissue quantification of picrosirius red staining before and after electrical stimulation treatment was performed using a semi-automatic macro in ImageJ software. This macro allows for calculation of the total area (μm²) and the % of collagen staining from each area, adjusting the minimum and maximum thresholds. Three different random pictures per section (4-5 sections/subject) were taken at ×10 or ×20 magnification using a regular bright-field microscope (Olympus BX60 and PlanApo ×20/0.7, Olympus, Japan). All images were analyzed in ImageJ software v1.47 (National Institutes of Health, Bethesda, MD, USA) using this protocol with the following modification: threshold min 0, max 2. Skeletal muscle quantification of picrosirius red staining was performed using the same protocol described above. The % of collagen staining was calculated on 8-10 images of different microscopic fields from each muscle sample.
For immunofluorescence, the muscle sections were deparaffinized twice in xylene (#534056, Sigma-Aldrich) for 5 min. Sections were rehydrated stepwise, twice in 100% ethanol and once each in 95%, 70%, and 50% ethanol and in deionized water, for 5 min each, before a final rinse in PBS (#18912-014, Gibco, pH 7.4) for 5 min. The slides were subjected to heat-induced antigen retrieval by heating in antigen retrieval buffer (10 mM citric acid monohydrate, 0.05% [vol/vol] Tween-20, pH 6.0) until it reached the boiling point and then cooling to room temperature. The tissue sections were incubated in blocking buffer (3% normal donkey serum [NDS] [vol/vol] in PBS) at room temperature for 1 hr, followed by overnight incubation at 4°C with primary antibodies (rabbit anti-perilipin-1, dilution 1:150 [Abcam Cat# ab3526], and mouse anti-myosin [skeletal, slow], dilution 1:300 [MYH7 antibody, Sigma-Aldrich Cat# M8421]) diluted in incubation buffer (PBS containing 0.3% Triton X-100, 1% BSA, 1% NDS, and 0.01% sodium azide, pH 7.2). After rinsing the slides three times for 10 min each in PBS, the sections were incubated with fluorochrome-conjugated secondary antibodies (Donkey anti-Rabbit IgG [H+L] Highly Cross-Adsorbed Secondary Antibody, Alexa Fluor 555, diluted 1:250 [Thermo Fisher Scientific Cat# A-31572], and Donkey anti-Mouse IgG [H+L] Highly Cross-Adsorbed Secondary Antibody, Alexa Fluor Plus 488, diluted 1:250 [Thermo Fisher Scientific Cat# A32766]) diluted in incubation buffer. The muscle sections were rinsed three times for 10 min each in PBS. The slides were mounted with coverslips (#ECN 631-1574, VWR) using Vectashield antifade mounting medium with DAPI (#H-1200, Vectashield). Images were obtained using a Zeiss LSM 700 AxioObserver microscope with a Plan-Apochromat ×10/0.45 M27 objective lens. Argon lasers of 488 nm and 555 nm wavelengths were used to excite Alexa Fluor 488 (green) and Alexa Fluor 555 (red), respectively, and a 405 nm laser diode to excite DAPI (blue). Quantification of perilipin-1 expression in skeletal muscle cells from the control and PCOS groups was performed using ImageJ software (National Institutes of Health, Bethesda, MD, USA). The channels of the images were split and converted into 8-bit. The minimum and maximum thresholds were adjusted and kept constant for all the images. Regions of interest were drawn around the cells and around empty space for background intensity measurement. The mean perilipin-1 intensity was measured and corrected by deducting the background. A total of 28 PCOS and 33 control cells were quantified.
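The quantification itself was done in ImageJ, but the underlying arithmetic reduces to a single background subtraction per region of interest; a hedged sketch in Python (the image, masks, and values are invented for illustration):

import numpy as np

def corrected_mean_intensity(channel: np.ndarray, cell_mask: np.ndarray,
                             background_mask: np.ndarray) -> float:
    # mean intensity within the cell ROI minus the mean of an empty background region
    return float(channel[cell_mask].mean() - channel[background_mask].mean())

# toy 8-bit image: a bright "cell" on a dim background
img = np.full((100, 100), 10, dtype=np.uint8)
img[30:60, 30:60] = 120
cell = np.zeros(img.shape, dtype=bool); cell[30:60, 30:60] = True
empty = np.zeros(img.shape, dtype=bool); empty[0:20, 0:20] = True
print(corrected_mean_intensity(img, cell, empty))  # 110.0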
Skeletal muscle fiber size and type were quantified in muscle biopsies frozen in Tissue-Tek O.C.T. Compound (Sakura Finetek, Gothenburg, Sweden). Cross sections (10 µm) were cut using an NX70 Epredia cryostat, moved onto glass slides (Epredia, J1800AMNZ), and stored at -20°C. The sections were subsequently immunohistochemically stained for type I fibers and fiber boundaries. In brief, the sections were dried at room temperature for 60 min, fixed in 4% formaldehyde (Merck, 100496) for 30 min, permeabilized with 0.5% PBS-Triton X-100 (Sigma-Aldrich, 9036-19-5) for 20 min, and thereafter incubated with 0.25% PBS-Triton X-100 with 10% goat serum for 30 min. The sections were then incubated with a primary MYH7 antibody (1:25; DSHB, BA-F8) against type I fibers overnight at 4°C, and subsequently with the secondary antibody Alexa Fluor 568 (1:500; Thermo Fisher, A-11031) for 60 min at room temperature, both in 0.1% PBS-Triton X-100 with 1% BSA. Finally, the sections were incubated with WGA Oregon Green 488 (Invitrogen, W7024) for fiber boundaries for 3 hr, whereafter Fluoromount-G mounting medium (Thermo Fisher, 00-4959-52) and coverslips were applied. The slides were visualized using a Zeiss AxioScan.Z1 slide scanner. Fiber cross-sectional area was automatically determined using MyoVision v1.0, and the proportion of type I fibers was manually counted in ImageJ. A total of 579 fibers from seven controls (60-150 fibers per muscle section) and 177 fibers (15-80 fibers per muscle section) from women with PCOS were quantified. Data are graphically depicted with each individual fiber quantified.
Mouse study protocol and western blot analysis
All animal experiments were carried out in compliance with the ARRIVE guidelines. Three-week-old wild-type (wt) female mice on a C57Bl/6J background were purchased from Janvier Labs (C57BL/6NRj, Le Genest-Saint-Isle, France). Female skeletal muscle androgen receptor knockout (SkMARKO) mice were generated by crossing ARflox mice with B6;C3-Tg(ACTA1-rtTA,tetO-cre)102Monk/J (HSA-rtTA/TRE-Cre) mice (Xiong et al., 2022). To induce Cre recombinase expression, SkMARKO mice were given, from 3 weeks of age, a diet containing 200 mg/kg doxycycline (Specialty Feeds SF11-059) for the entire duration of the experiment. Mice were maintained under standard housing conditions, with ad libitum access to food and water in a temperature- and humidity-controlled, 12 hr light/dark environment. Procedures were approved by the Sydney Local Health District Animal Welfare Committee within National Health and Medical Research Council guidelines for animal experimentation, or by the Stockholm Ethical Committee for Animal Research (approval number 20485-2020). At 4 weeks of age, wt and SkMARKO female mice received a subcutaneous silastic implant containing 5-10 mg DHT (5α-Androstan-17β-ol-3-one, A8380, Sigma-Aldrich, St. Louis, MO, USA) or an empty implant (n=5-8/group). A subset of DHT-exposed wt mice received a slow-release flutamide pellet (n=5; 25 mg flutamide/pellet, 90-day release, Innovative Research of America, Sarasota, FL, USA) (Figure 1C; Ascani et al., 2023). At 15-17 weeks of age, the mice were euthanized and gastrocnemius muscle tissue was dissected and snap-frozen.
15-20 mg of gastrocnemius muscle was homogenized in RIPA buffer along with protease inhibitors. Protein was quantified using the Pierce BCA assay (Thermo Fisher Scientific, Cat# 23227). Diluted protein lysates were mixed with loading buffer containing β-mercaptoethanol and heated at 65°C before being loaded onto polyacrylamide gels (handcast 10% or Any kD PROTEAN TGX Precast Protein Stain-Free Gel [Bio-Rad, CA, USA]) and electro-transferred to PVDF membranes using a PVDF Transfer Pack. The membranes were blocked with blocking solution (5% BSA or 5% skim milk in TBS containing 0.1% Tween 20) for 1 hr and incubated overnight with anti-myosin primary antibody, dilution 1:1000 (MYH7 antibody, Sigma-Aldrich Cat# M8421). After washing and incubation with secondary antibody, dilution 1:10,000 (rabbit anti-mouse IgG HRP, Abcam Cat# 97046), immunoreactive protein bands were visualized through enhanced chemiluminescence using ECL substrate. Bands were visualized with the ChemiDoc XRS system (Bio-Rad, CA, USA) and analyzed with the image analysis program Image Lab (Bio-Rad, CA, USA). After initial imaging, membranes were stripped in mild stripping buffer, blocked, and re-probed with GAPDH (Abcam, Cat# ab8245) or beta-actin antibody (Santa Cruz, Cat# 47778).
Statistics
Differences in clinical characteristics and histological quantification between women with PCOS and controls were assessed using the Mann-Whitney U test, and data are presented as mean ± SD. The Wilcoxon signed-rank test was used to analyze changes between measurements at baseline and after 5 weeks of treatment. Differences in protein expression were calculated on log2 fold changes. Proteins and phosphorylation enrichments were determined to be significantly differentially expressed between cases and controls, and after 5 weeks of treatment in women with PCOS, if the p-value was <0.05 and the log2 FC was ≥0.5 or ≤-0.5. Proteomic data are presented as log2 fold change. Differences between wt controls and treated mice were assessed using one-way ANOVA with Dunnett's multiple comparisons test. A two-way ANOVA was used to analyze the effect of treatment and mouse genotype, and data are presented as mean ± SEM. No statistical methods were used to predetermine sample size; it was based on previous experience. Animals were allocated to experimental groups arbitrarily, without formal randomization.
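As an illustration of the two nonparametric test choices above, the following Python sketch (SciPy) runs them on synthetic data; the group sizes match the cohort, but the values and effect sizes are made up:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
controls = rng.normal(1.0, 0.3, 10)            # e.g. a clinical variable, n=10
pcos_w0 = rng.normal(1.4, 0.3, 10)             # PCOS at baseline
pcos_w5 = pcos_w0 - rng.normal(0.2, 0.1, 10)   # same women after 5 weeks

# unpaired group comparison (PCOS vs controls): Mann-Whitney U test
u_stat, p_group = stats.mannwhitneyu(pcos_w0, controls, alternative="two-sided")
# paired comparison within PCOS (baseline vs week 5): Wilcoxon signed-rank test
w_stat, p_paired = stats.wilcoxon(pcos_w0, pcos_w5)
print(f"PCOS vs controls: p={p_group:.3f}; baseline vs week 5: p={p_paired:.3f}")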
Clinical characteristics
Women with PCOS had more antral follicles <9 mm (22.7±7.9 vs 9.4±4.1, p=0.001), larger ovary volume (8.0±2.9 vs 5.0±2.7 ml, p=0.028), a higher Ferriman-Gallwey score, and higher circulating testosterone than controls (Table 1). Six of the 10 women with PCOS met all three PCOS criteria; two had hyperandrogenemia and PCO morphology, one had hyperandrogenism and irregular cycles, and one had irregular cycles and PCO morphology. Five weeks of treatment lowered testosterone, HOMA-IR, and HbA1c levels and tended to decrease triglyceride levels, but did not improve the Ferriman-Gallwey score (Table 1).
Data are presented as mean ± SD. Differences between PCOS and controls were analyzed by Mann-Whitney U test. The Wilcoxon signed-rank test was used to analyze changes between measurements at baseline and after 5 weeks of treatment.
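For reference, the HOMA-IR values in Table 1 follow the formula given in the table footnote; a one-line helper (the function name and example values are illustrative):

```python
def homa_ir(fasting_insulin: float, fasting_glucose_mM: float) -> float:
    """HOMA-IR = fasting insulin x fasting glucose (mM) / 22.5, per the Table 1 footnote."""
    return fasting_insulin * fasting_glucose_mM / 22.5

# Example: insulin 12, glucose 5.0 mM -> HOMA-IR of about 2.7 (values illustrative).
print(round(homa_ir(12, 5.0), 2))
```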
Total protein expression and phosphorylation in skeletal muscle
In total, we identified 3480 proteins in skeletal muscle. 58 unique proteins were differentially expressed in skeletal muscle from women with PCOS versus controls (p<0.05 and log2 FC ≥0.5 or ≤-0.5, Figure 2A). 25 proteins were upregulated and 33 were downregulated in women with PCOS, and the log2 fold change in expression ranged from -3.06 to 1.21 (Supplementary file 1a). We searched for enriched signaling pathways among the differentially expressed proteins using STRING analysis. Our network had significantly more interactions than expected (enrichment p-value <1e-16), meaning that the differentially expressed proteins interact with each other more than would be expected from a random set of proteins of the same size; such enrichment indicates that the proteins are biologically linked. Upregulated proteins were enriched in lipid metabolic pathways including negative regulation of cholesterol transport, regulation of lipoprotein lipase activity, and negative regulation of metabolic processes (Supplementary file 1b). This enrichment was driven by increased expression of apolipoproteins C-I, C-II, and C-III (Figure 2B), which are also enriched in the negative regulation of lipoprotein lipase activity (GO:0051005). Aldo-keto reductase family 1 members C1 and C3 (AKR1C1 and AKR1C3, Figure 2B), which have androsterone dehydrogenase activity (GO:0047023), were also upregulated, and AKR1C1 was strongly correlated with higher circulating testosterone levels (Spearman's rho = 0.65, p=0.002), suggesting that muscle may produce testosterone via the backdoor pathway. Moreover, perilipin-1, which typically coats the surface of lipid droplets in adipocytes (Gandolfi et al., 2011; Zhao et al., 2021), the so-called extra-myocellular adipocytes, was increased in PCOS muscle. The increased expression of perilipin-1 was confirmed by immunofluorescence staining and quantification of muscle biopsies (Figure 2C and D).
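The correlation reported above between AKR1C1 expression and circulating testosterone is a standard Spearman rank correlation. A minimal sketch with hypothetical per-subject values (the real inputs come from the proteomics and GC-MS/MS data):

```python
from scipy.stats import spearmanr

# Hypothetical per-subject values, for illustration only (not the study data).
akr1c1_expression = [5.1, 6.3, 4.8, 7.0, 6.1, 5.9, 6.8, 4.5, 7.4, 6.0]
serum_testosterone = [0.9, 1.4, 0.8, 1.6, 1.2, 1.1, 1.5, 0.7, 1.8, 1.3]

rho, p = spearmanr(akr1c1_expression, serum_testosterone)
print(f"Spearman's rho = {rho:.2f}, p = {p:.3f}")  # the study reports rho = 0.65, p = 0.002
```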
The downregulated proteins in PCOS were enriched in pathways involved in muscle contraction, actin filament organization, and the transition between fast and slow fibers (Figure 3A). All significantly enriched pathways are listed in Supplementary file 1b. Expression of myosin heavy chain beta, which is specific for type I muscle fibers, was decreased in PCOS (Figure 3B). Several proteins that are more highly expressed in type I muscle fibers consistently had a lower expression in women with PCOS, e.g., myosin heavy chain 7, myosin regulatory light chains 2 and 3, slow skeletal muscle troponin I and troponin T, and the Ca2+ pump sarcoendoplasmic reticulum calcium ATPase 2 (SERCA2a). These proteins are located in both the thick filaments (myosin regulatory light chains) and the thin filaments (troponins) of slow-twitch fibers (Figure 3C). A decrease in type I slow-twitch muscle fibers was also supported by staining with human myosin heavy chain 7 (MYH7) as a marker (Figure 3D). To further assess whether this reflected a reduced fiber size or a decreased number of type I fibers, the cross-sectional area of the fibers and the percentage of type I fibers were analyzed. The quality of muscle biopsies was impaired in the PCOS group; therefore, 60% fewer fibers from each individual were analyzed in the PCOS group compared with controls (p=0.02). There was no significant difference in the mean cross-sectional area of the fibers (4530±720 μm² in controls versus 4281±902 μm² in PCOS, p=0.64, n=5-7/group) or the percentage of type I fibers (48±12 vs 45±17%, p=0.69, n=4/group) in the relatively few individuals analyzed (Figure 3E). There was, however, a decrease in the individual fiber cross-sectional area in PCOS muscle versus controls (Figure 3F).
Then, an androgen-exposed PCOS-like mouse model was used to corroborate that androgen exposure leads to a shift in muscle fiber type. These PCOS-like mice have a longer anogenital distance, are in a chronic diestrus phase, and have glucose intolerance (Xiong et al., 2022; Ascani et al., 2023). These effects are not present in DHT-exposed mice receiving the androgen receptor antagonist flutamide (Ascani et al., 2023). DHT-exposed PCOS mice had fewer type I muscle fibers compared to controls (Figure 4A-C). This effect was partly prevented in DHT-exposed mice receiving flutamide, supporting an effect of androgen receptor activation on muscle fiber type (Figure 4A and C). However, although flutamide treatment improves glucose sensitivity in PCOS-like mice, insulin resistance likely also contributes to the loss of type I fibers. Moreover, DHT-exposed SkMARKO mice were used to further investigate the contribution of androgen receptor-mediated actions in skeletal muscle. While unchallenged SkMARKO mice had fewer type I muscle fibers compared to wt mice (p=0.033), they were protected against the androgen-induced type I muscle fiber loss (Figure 4B and C). These data suggest that androgens directly shift muscle fibers toward fewer type I fibers in adult females, an effect that can be prevented by precluding signaling through androgen receptors.
We searched for overlap between the differentially expressed proteins in skeletal muscle in this study and the differentially expressed genes in our previous meta-analysis of gene expression array data (Manti et al., 2020). As suspected, the overlap between gene expression and protein levels was small. We found that 1 upregulated and 12 downregulated genes in muscle biopsies from women with PCOS were also differentially expressed at the protein level in this study (Supplementary file 1c). Several proteins involved in skeletal muscle contraction were consistently downregulated at the mRNA expression level in muscle tissue from women with PCOS, including MYL3, MYOZ2, TNNT1, LMOD2, NRAP, and XIRP1 (Supplementary file 1c).
We identified 5512 phosphosites in muscle, and 61 sites in 40 unique proteins were differentially phosphorylated in PCOS versus controls (Supplementary file 1d), suggesting altered protein activity. Eleven of the differentially expressed proteins had one or more differentially phosphorylated sites, including increased phosphorylation of Ser 130, 382, and 497 in perilipin-1. Many of the proteins in the thick and thin filaments had one or more altered phosphorylation sites (Figure 3G). There were no significantly enriched pathways among the differentially phosphorylated sites.
Total protein expression and phosphorylation in adipose tissue
In total, we identified 5000 proteins in adipose tissue, but the difference between groups was modest. 21 unique proteins were differentially expressed in adipose tissue from women with PCOS versus controls (p<0.05 and log2 FC ≥0.5 or ≤-0.5, Figure 5A). Six proteins were upregulated and 15 were downregulated in women with PCOS, and the log2 fold change in expression ranged from 2.1 to -1.6 (Figure 5B, Supplementary file 1e). Several of the upregulated proteins play a role in immune system processes, including immunoglobulins, human leukocyte antigen (HLA) class I histocompatibility antigen, and sequestosome 1. Sequestosome 1 may also regulate mitochondrial organization (Poon et al., 2021). Three mitochondrial matrix proteins (tRNA pseudouridine synthase A, enoyl-CoA hydratase domain-containing protein 2, and NAD kinase 2) had altered expression (Figure 5B), possibly indicating mitochondrial dysfunction. There were three significantly enriched signaling pathways in adipose tissue based on a lower expression of leiomodin-1 and adseverin: actin nucleation (GO:0045010), positive regulation of cytoskeleton organization (GO:0051495), and positive regulation of supramolecular fiber organization (GO:1902905).
Both low-grade inflammation and transforming growth factor beta (TGFβ)-induced fibrosis have been suggested to play a role in the pathophysiology of PCOS (Mancini et al., 2021; McIlvenna et al., 2021). Dysregulated TGFβ signaling has been linked to the development of ovarian fibrosis and reproductive dysfunction (Stepto et al., 2019; McIlvenna et al., 2021). Women with PCOS have elevated levels of circulating TGFβ1 (Raja-Khan et al., 2010), which is thought to trigger increased fibrotic mechanisms in other peripheral tissues. However, we did not detect an increased abundance of fibrous collagens in adipose tissue, and the fibrillar collagen levels as judged by picrosirius red staining were similar between groups, although the variability was higher in the PCOS group (Figure 5C).
We identified 5734 phosphosites in adipose tissue, of which 39 sites in 34 unique proteins were differentially phosphorylated. Ten of these sites had lower phosphorylation (Supplementary file 1f). There was no overlap between differentially phosphorylated proteins and differentially expressed proteins. Perilipin-1 had two phosphorylation sites with higher phosphorylation, Ser 497 and 516, in PCOS adipose tissue compared to controls (Supplementary file 1f). Ser 497 of perilipin-1 showed increased phosphorylation in both muscle and adipose tissue. Under adrenergic stimulation, perilipin-1 is phosphorylated at Ser 497 by protein kinase A, which in turn triggers lipolysis by hormone-sensitive lipase in adipocytes (Marcinkiewicz et al., 2006).
Protein expression and phosphorylation changes in skeletal muscle after treatment with electrical stimulation
Since long-term electrically stimulated muscle contractions improve glucose regulation and lower androgen levels in women with PCOS (Stener-Victorin et al., 2016), we analyzed genome-wide mRNA expression from women with PCOS after treatment to identify changes in skeletal muscle gene expression in response to stimulation. None of the transcripts exhibited changes in expression after FDR correction (q<0.05) after 5 weeks of treatment, but 12 transcripts had an FC >1.2 (p<0.05), of which 5 were different collagens. We also analyzed whether the response to electrical stimulation was associated with DNA methylation changes in skeletal muscle. We found that 41,186 (13.8%) of 298,332 analyzed CpG sites had differential methylation in skeletal muscle after treatment (p<0.05), which is almost three times more than expected by chance (p<0.0001, χ2 test). Of these, 43 CpG sites remained significant after FDR correction (q<0.05). The majority of the sites (74%) showed decreased methylation in response to treatment. The absolute change in DNA methylation in response to treatment ranged from 3 to 14 percentage points.
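The "almost three times more than expected by chance" claim can be checked directly: at a nominal p<0.05, 5% of the 298,332 CpG sites would be expected to reach significance by chance. A short worked check using scipy's chi-squared test against that expectation:

```python
from scipy.stats import chisquare

total_sites = 298_332
observed_sig = 41_186                      # CpG sites with p < 0.05 after treatment
expected_sig = 0.05 * total_sites          # ~14,917 expected by chance

print(observed_sig / expected_sig)         # ~2.8, i.e. "almost three times" the expectation

stat, p = chisquare(f_obs=[observed_sig, total_sites - observed_sig],
                    f_exp=[expected_sig, total_sites - expected_sig])
print(stat, p)                             # p << 0.0001, consistent with the chi-squared test above
```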
Since mRNA expression was not significantly regulated in response to repeated electrical stimulations, we investigated whether the effects were regulated at the protein level. We found that 376 unique proteins were changed in skeletal muscle after treatment with electrical stimulation (p<0.05, Figure 6A, Supplementary file 1h). Most proteins were upregulated in women with PCOS after treatment (98%), and the log2 fold change in expression ranged from -1.37 to 2.15. The upregulated proteins were enriched in signaling pathways involved in extracellular matrix (ECM) organization, regulation of TGFβ production, neutrophil-mediated immunity, wound healing, and blood coagulation (Figure 6B, Supplementary file 1i). Proteins involved in ECM organization included eight different collagens, integrins, and TGFβ1 (Supplementary file 1i). Collagen 1A1, 1A2, and VCAN were increased after treatment at both the gene and protein levels, implying that acupuncture needling elicits a wound healing response. There was a trend toward increased staining of fibrous collagen after treatment, but this was not significant, potentially due to the low number of good quality sections analyzed (Figure 6C). Other upregulated signaling pathways included exocytosis and vesicle transport along the actin filament, muscle contraction, actin filament organization, and negative regulation of adenylate cyclase-activating adrenergic signaling. Several effects previously shown to be upregulated after one bout of electroacupuncture were regulated in the opposite direction, as pathways involved in negative regulation of angiogenesis, negative regulation of blood vessel morphogenesis, negative regulation of nitric oxide metabolic processes, and regulation of vasoconstriction were enriched. None of the 58 proteins that had a different expression in the PCOS group at baseline were reversed after 5 weeks of treatment. 198 phosphosites in 152 unique proteins showed a changed phosphorylation in response to electrical stimulation; 178 sites had higher phosphorylation, and 46 sites were less phosphorylated (Supplementary file 1j). There were no significantly enriched pathways among the proteins with differentially regulated phosphorylation sites. 38 of the differentially expressed proteins had one or more differentially phosphorylated sites. These proteins, with changes in both total protein and phosphorylation levels, were enriched in actin filament organization (GO:0007015).
Total protein expression and phosphorylation in adipose tissue after treatment
Similar to skeletal muscle, long-term electrical stimulation had minimal effects on gene expression in adipose tissue. None of the transcripts exhibited changes in expression after FDR correction (q<0.05) after 5 weeks of treatment, or had an FC >1.2 (data not shown). We found that 23,517 (7.9%) of 298,289 analyzed CpG sites had differential methylation in adipose tissue after 5 weeks of treatment (p<0.05), which is more than expected by chance (p<0.0001, χ2 test). The majority (63.5%) of these sites showed reduced methylation in response to treatment. One CpG site remained significant after FDR correction (q<0.05, -2.2 percentage points reduced methylation of cg13383058 in the transcription start site of CD248). Therefore, we investigated whether the long-term effects were regulated at the protein level. 61 unique proteins were changed in adipose tissue after electrical stimulation treatment (Figure 7A, Supplementary file 1k). Most of the proteins were upregulated (85%) and nine were downregulated in women with PCOS after treatment, and the log2 fold change in expression ranged from -0.89 to 1.41. The upregulated enriched signaling pathways included ECM organization and Fc-gamma receptor signaling (Supplementary file 1l). In accordance with these findings, 5 weeks of treatment increased the fibrillar collagen content in adipose tissue (Figure 7B). The expression of DNA topoisomerase 1 and leiomodin-1 was lower in women with PCOS than in controls but increased after treatment (Supplementary file 1k). Leiomodin-1 is required for proper contractility of smooth muscle cells by mediating nucleation of actin filaments, and myosin regulatory light polypeptide 9 plays an important role in regulating smooth muscle contractile activity. The enzyme prostacyclin synthase, a potent mediator of vasodilation, was also increased and could act on the smooth muscle in the vessel wall. 49 phosphosites in 46 unique proteins showed altered phosphorylation in response to electrical stimulation; all except four sites had higher phosphorylation (Supplementary file 1m). There were no significantly enriched pathways among the proteins with differentially regulated phosphosites. 11 proteins showed higher expression in both skeletal muscle and adipose tissue after treatment: FCGR3A, FAP, PTGIS, COL1A2, COL14A1, COL1A1, MYL9, LMNB1, SIRPA, ARPC2, and RAP2A. These proteins are enriched in ECM organization (GO:0030198), negative regulation of nitric oxide biosynthetic/metabolic processes (GO:0045019, GO:1904406), and Fc-gamma receptor signaling pathways (GO:0038096, GO:0038094, GO:0002431).
Discussion
Proteome signature in PCOS

We have profiled the proteome of skeletal muscle and adipose tissue to advance our understanding of the pathophysiology of PCOS. The changes in protein expression in adipose tissue were small, whereas in skeletal muscle of women with PCOS there was a clear downregulation of proteins involved in muscle contraction. Skeletal muscle contains a mixture of slow-twitch oxidative and fast-twitch glycolytic myofibers, which exhibit different physiological properties. Type I, or red, fibers are slow-twitch fatigue-resistant muscle fibers that have higher mitochondrial and myoglobin content and are thus more aerobic than type II fast-twitch fibers. Several proteins specific to, or known to be highly expressed in, type I muscle fibers were consistently downregulated in women with PCOS. These proteins are located in both the thick and the thin filaments of slow-twitch fibers. These data suggest that type I fibers in PCOS muscle are fewer in number and/or smaller. Unfortunately, we were unable to quantify the number of type I fibers in the entire cohort because of the poor quality of the PCOS muscle biopsies and the unavailability of muscle tissue. We also identified several differentially phosphorylated sites in proteins located in the thick and thin filaments, indicating differential protein activity, since phosphorylation can change the activity of a protein.
Here, we show that the signaling pathway important for the transition between fast and slow fibers was downregulated and that individuals with PCOS had lower expression of myosin heavy chain beta (encoded by MYH7), which is specific for slow-twitch oxidative type I fibers (Schiaffino and Reggiani, 2011). A decrease in type I fibers has been shown in three different androgen-excess rodent models, including this study (Holmäng et al., 1990; Shen et al., 2019). This effect on type I fibers was partly prevented by the coadministration of the androgen receptor antagonist flutamide. Moreover, mice that lack the AR specifically in skeletal muscle were completely protected against this effect. These findings suggest that exaggerated androgen signaling in skeletal muscle directly affects muscle fiber-type composition. While we found a lower abundance of type I muscle fibers in muscle-specific AR knockout females, a recent study shows that depleting AR signaling in skeletal muscle does not lead to necrotic fibers, aberrant histology, or changes in key metabolic functions in females (Ghaibour et al., 2023). Thus, androgen signaling is likely to be important for normal muscle development but may play a different and dose-dependent role in adulthood. Impaired insulin sensitivity in hyperandrogenic animals is associated with fewer type I fibers and increased type II fibers in skeletal muscles (Holmäng et al., 1990; Shen et al., 2019), features expected to result in reduced insulin sensitivity in this tissue, as a higher proportion of oxidative type I fibers leads to better insulin responsiveness (Stuart et al., 2013). In line with the higher mitochondrial content of type I fibers, there was a lower expression of several mitochondrial matrix proteins in women with PCOS: enoyl acyl carrier protein reductase, 3-oxoacid CoA-transferase 1, enoyl-CoA delta isomerase 1, hydroxyacid-oxoacid transhydrogenase, and acetyl-CoA synthetase 2 (GO:0005759, q=0.004). Moreover, a lower expression of the mitochondrial acetyl-CoA synthetase 2 correlated with a higher HOMA-IR (Spearman's rho = -0.46, p=0.04), suggesting that impaired mitochondrial function contributes to insulin resistance. A lower proportion of type I fibers has also been correlated with the severity of insulin resistance in subjects with the metabolic syndrome (Stuart et al., 2013). Androgens appear to decrease highly oxidative and insulin-sensitive type I muscle fibers and increase glycolytic and less insulin-sensitive type II fibers in non-athletes, further promoting the development of insulin resistance. However, whether these changes in muscle morphology precede or follow the development of insulin resistance in women with PCOS is not known. Moreover, SERCA2a, which is expressed primarily in slow-twitch skeletal muscle, was less abundant in PCOS muscle. The SERCA2a pump transports calcium ions from the cytosol back to the sarcoplasmic reticulum after muscle contraction to keep the cytosolic Ca2+ concentration at a low level. Its function is closely related to muscle health and function (Xu and Van Remmen, 2021). There were three sites with lower phosphorylation in SERCA2a, suggesting that lower SERCA2a expression reflects not only a decrease in type I fibers but also altered function. Impaired SERCA2 function may lead to increased cytosolic Ca2+ concentration, which in turn can impair force production and mitochondrial function in type I fibers (Xu and Van Remmen, 2021). Although serum androgen levels are positively correlated with athletic performance in female athletes (Bermon et al., 2018), skeletal muscle contraction and filament sliding pathways were downregulated in muscle biopsies from hyperandrogenic women with overweight/obesity in this study. Thus, androgens may have differential actions on female skeletal muscle function in moderately physically active subjects and female athletes.
Androgens are mainly produced in the ovaries and adrenal glands, but there is also local production in adipose tissue. In overweight/obese women with PCOS, increased AKR1C3 levels mediated increased testosterone generation from androstenedione in subcutaneous adipose tissue, which enhanced lipid storage (O'Reilly et al., 2017). The higher expression of AKR1C1 and AKR1C3 in PCOS skeletal muscle in this study could increase local synthesis of androgens via the backdoor pathway and increase androgenic signaling in skeletal muscle.
The expression of proteins involved in lipid transport and negative regulation of lipid metabolism was increased in the muscle of women with PCOS. Lipid transport was clustered around apolipoproteins C1, C2, and C3. Interestingly, a recent proteomics study shows that serum apolipoprotein C3 levels are higher in insulin-resistant women with PCOS compared to insulin-sensitive women (Li et al., 2020). Could apolipoproteins aid in the deposition of excess fat in skeletal muscle and contribute to lipotoxicity? Fatty acids in skeletal muscle that do not undergo beta-oxidation in the mitochondria inevitably contribute to lipid synthesis. There was indeed a lower expression of various mitochondrial proteins involved in mitochondrial fatty acid beta-oxidation in PCOS muscle: enoyl acyl carrier protein reductase, enoyl-CoA delta isomerase 1, and acyl-CoA thioesterase 11 (R-HSA-77289, q=0.0008). The most important single fate for these fatty acids is triacylglycerol esterification and storage in lipid droplets. Perilipins act largely as scaffold proteins on lipid droplets. Perilipin-1 is localized in the periphery of intramuscular lipids, known as extra-myocellular lipids (Gandolfi et al., 2011; Zhao et al., 2021). In pigs, muscle perilipin-1 is localized around lipid droplets in mature and developing adipocytes, corresponding with extra-myocellular lipids (Gandolfi et al., 2011; Zhao et al., 2021). Therefore, the high expression of perilipin-1 in muscle from women with PCOS is likely a sign of lipotoxicity. Taken together, intramuscular lipid accumulation and the decline in type I muscle fibers likely contribute to insulin resistance in PCOS muscle.
PCOS is associated with chronically elevated levels of inflammatory markers in the circulation (Orio et al., 2005), and low-grade inflammation has been suggested to play a key role in the pathophysiology (Mancini et al., 2021). We and others have previously shown that many of the significantly enriched gene expression pathways in PCOS muscle are involved in immune responses or are related to immune diseases in muscle (Skov et al., 2007; Nilsson et al., 2018). A distinct pattern was the downregulation of the family of genes named the HLA complex and the downregulation of gene sets associated with inflammatory responses in PCOS muscle (Manti et al., 2020). In this study, HLA-B was the most downregulated protein, in line with the lower gene expression of HLA-B in skeletal muscle (Nilsson et al., 2018). The HLA complex helps the immune system distinguish endogenous proteins from proteins made by foreign invaders by making peptide-presenting proteins that are present on the cell surface. When the immune system recognizes the peptides as foreign, this triggers self-destruction of the infected cell. Numerous HLA alleles have been identified as genetic risk factors for autoimmune thyroid disease, which is increased nearly threefold in women with PCOS (Zeber-Lubecka and Hennig, 2021). The co-occurrence of PCOS and autoimmune thyroid disease has led to the suggestion that PCOS itself may be an autoimmune disorder. At the same time, five different immunoglobulins were upregulated, which contributed to the enrichment of immune system pathways.
In adipose tissue, the difference between groups was small, with relatively few differentially expressed proteins, but several of the proteins with higher expression play a role in immune system processes, including immunoglobulins and HLA-B. Sequestosome-1 is an immunometabolic protein that is both involved in the activation of nuclear factor kappa-B and regulates mitochondrial functionality by modulating the expression of genes underlying mitochondrial respiration (Poon et al., 2021). Alterations in the immune response and immunometabolic pathways are seen in both muscle and adipose tissue, but whether and how these alterations affect and potentially contribute to the pathophysiology of PCOS remains to be investigated.
Electrical stimulation-related changes
Regular physical exercise improves insulin sensitivity and is the first-line approach to manage both reproductive and metabolic disturbances in those with PCOS and overweight or obesity, but adherence is low. Electrical stimulations inducing contraction could therefore be an alternative way to ameliorate symptoms in women with PCOS, alongside exercise. Transcriptomic changes in response to a single bout of electrical stimulation mimic the response to one bout of exercise (Benrick et al., 2020). Therefore, we hypothesize that long-term treatment with electrical stimulation or exercise also triggers overlapping signaling pathways. Five weeks of repeated treatment with electrical stimulation did not alter mRNA expression but increased the expression of several hundred proteins in skeletal muscle and about 50 proteins in adipose tissue. The most pronounced changes in both tissues were in mechanisms involved in wound healing, such as the increase in ECM formation and enriched pathways for nitric oxide metabolic processes and Fc-gamma receptor signaling. Moreover, wound healing and blood coagulation pathways were upregulated in skeletal muscle, suggesting that repeated needling induces minor tissue damage that triggers a wound healing response. However, increased expression of ECM-related structural components such as collagens and integrins is not only involved in wound healing, but also increases in response to muscle contractions, as previously shown in response to eccentric exercise and repeated electrical stimulation with electrodes (Mackey et al., 2011; Hyldahl et al., 2015). We propose that these changes are induced by contraction, but we cannot rule out that the ECM-related changes occurred as a direct result of repeated needle insertion. The delayed synthesis of collagens and subsequent strengthening of the ECM structural matrix in response to both exercise and electrical stimulation without needle insertion (Mackey et al., 2011; Hyldahl et al., 2015) supports the idea that contractions induce remodeling of the ECM that may provide protective adaptation to repeated contractions.
We hypothesized that electrical stimulation mimics the response to repeated exercise. Therefore, we searched for overlap between changes in the proteome following electrical stimulation treatment in this study and a meta-analysis of the exercise-induced proteome in skeletal muscle (Padrão et al., 2016). Seven proteins were changed in both conditions: fibrinogen beta-chain, actin, vimentin, annexin A2, moesin, gelsolin, and hypoxanthine-guanine phosphoribosyltransferase. Actin, vimentin, and moesin make up structural parts of the cytoskeleton, and all seven proteins are enriched in the GO terms immune system processes (GO:0002376) and positive regulation of metabolic processes (GO:0009893). Current evidence suggests that there are immunometabolic alterations in skeletal muscle from women with PCOS (Skov et al., 2007; Nilsson et al., 2018; Manti et al., 2020; Stepto et al., 2020), and increased expression of the abovementioned proteins could cause contraction-induced changes in the immunometabolic response.
Repeated treatment with electrical stimulation improves glucose homeostasis and lowers HbA1c in women with PCOS (Stener-Victorin et al., 2016). A less well-characterized upregulated pathway involves exocytosis and protein transport, which is of interest with regard to glucose transporter 4 (GLUT4) translocation stimulated by AMP kinase (AMPK). The activity of AMPK in muscle increases substantially during contraction and increases glucose transport. AMPK has been linked to at least two mechanisms for the control of vesicle trafficking, namely the regulation of Rab proteins and the generation of phosphatidylinositol 3,5-bisphosphate in the control of GLUT4 translocation (Sylow et al., 2017). The Rab GTPase-activating proteins TBC1D1 and TBC1D4 are thought to mediate the effects of AMPK on GLUT4 translocation and glucose transport. At present, it is unclear which specific Rab proteins might be regulated by AMPK downstream of TBC1D1 and TBC1D4, but Rab-13 appears to act downstream of TBC1D4 (Sylow et al., 2017). Six Rab proteins showed higher expression after electrical stimulation, including Rab-13, making these proteins potential candidates for regulating glucose uptake by electrically stimulated contractions. The proteome in adipose tissue and skeletal muscle did not show differential expression of proteins with well-known metabolic effects that could easily explain the improvement in HbA1c. This could be due to the timepoint when the biopsies were collected, which was within 48 hr of the last treatment, when the acute effects of electrical stimulation are no longer present and the long-term changes may be more subtle.
Limitations
The interpretation of the protein expression in this study is limited by the relatively small number of women (n=10/group), the inclusion of only Caucasian women, and a mix of PCOS phenotypes since we used the diagnostic criteria. Moreover, we cannot distinguish whether the identified dysregulated pathways are responses to PCOS or causal effectors. Some may argue that another limitation is that we cannot identify the changes in protein expression in specific cell types, i.e., adipocytes and myocytes, as the biopsies consist of many different cell types and structures, e.g., nerves, immune cells, vessels, and connective tissue. However, this can also be seen as a strength, as no cell acts independently of the cells surrounding it.
Conclusions
Our findings suggest that highly oxidative and insulin-sensitive type I muscle fibers are decreased in PCOS, which, in combination with more extra-myocellular lipids, may be a key factor for insulin resistance in PCOS muscle. In adipose tissue, the difference between groups was small. A 5-week treatment with electrical stimulation triggered a wound healing response in both adipose tissue and skeletal muscle. In addition, remodeling of the ECM can provide protective adaptation to repeated skeletal muscle contractions.
• Supplementary file 1. Differentially expressed proteins, phosphorylation sites, and transcripts, and their respective enriched pathways, in skeletal muscle and adipose tissue. (a) Differentially expressed proteins in skeletal muscle from women with polycystic ovary syndrome (PCOS) compared with controls (n=10/group), p<0.05, log2 fold change ± 0.5. (b) Gene ontology pathway analysis of differentially expressed proteins in skeletal muscle from women with PCOS compared with controls. (c) Overlap between differentially expressed proteins in skeletal muscle from women with PCOS compared with controls and differentially expressed genes. (d) Phosphorylation sites with a change in phosphorylation in skeletal muscle from women with PCOS compared with controls. (e) Differentially expressed proteins in adipose tissue from women with PCOS compared with controls. (f) Phosphorylation sites with a change in phosphorylation in adipose tissue from women with PCOS compared with controls. (g) Gene expression and methylation changes in skeletal muscle after treatment with electrical stimulation. (h) Changed proteins in skeletal muscle from women with PCOS after treatment with electrical stimulation. (i) Gene ontology pathway analysis of changed proteins in skeletal muscle from women with PCOS after treatment with electrical stimulation. (j) Phosphorylation sites with a change in phosphorylation in skeletal muscle from women with PCOS after treatment with electrical stimulation. (k) Changed proteins in adipose tissue from women with PCOS after treatment with electrical stimulation. (l) Gene ontology pathway analysis of changed proteins in adipose tissue from women with PCOS after treatment with electrical stimulation. (m) Phosphorylation sites with a change in phosphorylation in adipose tissue from women with PCOS after treatment with electrical stimulation.
Data availability
The study was registered at ClinicalTrials.gov (NCT01457209). The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium via the Proteomics Identifications Database (PRIDE) (RRID:SCR_003411) partner repository with the dataset identifier PXD025358. The protein expression analysis is published at https://github.com/GustawEriksson/FAT-MUS-Proteomics (copy archived at Eriksson, 2023). Individual-level methylation and mRNA expression data are not publicly available due to ethical and legal restrictions related to the Swedish Biobanks in Medical Care Act, the Personal Data Act, and the European Union's General Data Protection Regulation and Data Protection Act. All other data generated or analyzed during this study are included in the manuscript and supporting files; a Source Data file provided for Figure 4-source data 1 contains the raw unedited uncropped blots used to generate the figure, and raw data can be found at Dryad https://doi.org/10.5061/dryad.wwpzgmsr7.
The following datasets were generated:
Figure 1. Study design. (A) Muscle and fat biopsies collected from 10 controls and 10 women with polycystic ovary syndrome (PCOS) at baseline and after treatment with electrical stimulations. Electrical stimulations were given 3 times/week for 5 weeks. (B) The electrical stimulation protocol alternating between protocol 1 in red dots and protocol 2 in blue dots. Acupuncture points not connected to the stimulator were stimulated manually. (C) A PCOS-like mouse model treated with the androgen receptor blocker flutamide or lacking androgen receptors in skeletal muscle (SkMARKO). Created with https://www.biorender.com/.
Figure 2. Protein expression and upregulated proteins in skeletal muscle. (A) Volcano plot showing the mean protein log2 fold change in skeletal muscle (polycystic ovary syndrome [PCOS] vs controls) using the limma method, plotted against the -log10 p-value, highlighting significantly regulated proteins in black (p<0.05, log2 fold change ± 0.5), n=10/group. (B) Increased protein expression of apolipoproteins C1 and C2, aldo-keto reductase (AKR) family 1 C1 and C3, and perilipin-1 in those with PCOS. (C) Staining of perilipin-1 and DAPI in skeletal muscle. (D) Quantification of perilipin-1 staining in skeletal muscle cells from controls (n=33) and PCOS (n=28). The difference is based on the Mann-Whitney U test, and data are presented as mean ± SD.
Figure 3. Enriched downregulated pathways involved in muscle contraction and the transition between fast and slow fibers in PCOS. (A) Protein network of proteins with lower expression in polycystic ovary syndrome (PCOS) skeletal muscle vs. controls. Lines indicate protein-protein associations. (B) Decreased expression of the slow type I skeletal muscle fiber myosin heavy chain beta (MYH7) in those with PCOS (n=10/group); differences are based on the limma method and presented as mean ± SD. (C) Lower expression of proteins in slow-twitch type I muscle fibers in PCOS vs controls (p<0.05, log2 fold change <-0.5). (D, E) Immunofluorescent staining of type I muscle fibers with myosin heavy chain beta, and (E) the cell membrane with WGA. (F) Quantification of fiber cross-sectional area (CSA) in (E); the difference is based on the Mann-Whitney U test, and data are presented as mean ± SD. (G) Differentially phosphorylated sites in proteins expressed in muscle filaments (p<0.05, log2 fold change ± 0.5).
Figure 4. Androgen exposure leads to a shift in muscle fiber type in mice. (A) Decreased expression of the slow type I skeletal muscle fiber myosin heavy chain beta (MYH7) in dihydrotestosterone (DHT)-exposed polycystic ovary syndrome (PCOS)-like mice. This effect was partly blocked by the androgen receptor antagonist flutamide (n=5-6/group). (B) Decreased expression of slow type I skeletal muscle fibers (MYH7) in skeletal muscle-specific androgen receptor knockout mice (SkMARKO) compared to wild type (wt) (p=0.033). DHT exposure did not alter the number of type I fibers in SkMARKO (n=6-8/group). (C) Representative expression of myosin heavy chain beta. Differences in (A) are based on one-way ANOVA with Dunnett's multiple comparisons test and in (B) on two-way ANOVA, and data are presented as mean ± SEM. The full raw unedited uncropped blots with the relevant bands clearly labeled are provided as Figure 4-source data 1. The online version of this article includes the following source data for figure 4: Source data 1. Contains the raw unedited uncropped blots used to generate the figure.
Figure 5. Protein expression and differentially expressed proteins in adipose tissue. (A) Volcano plot showing the mean protein log2 fold change in adipose tissue (polycystic ovary syndrome [PCOS] vs controls) using the limma method, plotted against the -log10 p-value, highlighting significantly regulated proteins (black; p<0.05, log2 fold change ± 0.5), n=10/group. (B) All differentially expressed proteins in adipose tissue from women with PCOS. (C) Picrosirius red staining of s.c. adipose tissue. The difference between women with PCOS (n=7) and controls (n=4) was based on the Mann-Whitney U test and is presented as mean ± SD.
Figure 6. Protein expression and enriched signaling pathways in skeletal muscle after treatment with electrical stimulation. (A) Volcano plot showing the mean protein log2 fold change in skeletal muscle (treatment vs baseline in polycystic ovary syndrome [PCOS]) using the limma method, plotted against the -log10 p-value, highlighting significantly regulated proteins (black; p<0.05, log2 fold change ± 0.5), n=10/group. (B) GO terms for biological function of the changed proteins. (C) Representative pictures and quantification of picrosirius red staining of skeletal muscle before and after treatment with electrical stimulation in the same individual (n=4). The change between baseline and after treatment was based on the Wilcoxon signed-rank test.
Figure 7. Protein expression and collagen quantification in adipose tissue after treatment with electrical stimulation. (A) Volcano plot showing the mean protein log2 fold change in adipose tissue (treatment vs baseline in polycystic ovary syndrome [PCOS]) using the limma method, plotted against the -log10 p-value, highlighting significantly regulated proteins (black; p<0.05, log2 fold change ± 0.5), n=10/group. (B) Representative pictures and quantification of picrosirius red staining of adipose tissue before and after treatment with electrical stimulation (n=6). Changes between baseline and after treatment were based on the Wilcoxon signed-rank test.
Table 1. Anthropometric and biochemical analyses in study participants.
Data are presented as mean ± SD. Differences between PCOS and controls were analyzed by Mann-Whitney U test. The Wilcoxon signed-rank test was used to analyze changes between measurements at baseline and after 5 weeks of treatment. A p-value < 0.05 was considered significant. * Fasting blood samples were taken and HOMA-IR (fasting insulin [mU/ml] × fasting glucose [mM]/22.5) was calculated. Circulating testosterone was measured by GC-MS/MS | 2024-01-06T06:17:37.670Z | 2024-01-05T00:00:00.000 | {
"year": 2024,
"sha1": "f68f39a2599fc3f491f6318a19a0e8f701beae84",
"oa_license": null,
"oa_url": "https://doi.org/10.7554/elife.87592",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "838a1d8044cf537b05582e41667412db89fa9f00",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
252755593 | pes2o/s2orc | v3-fos-license | Host transcriptional responses in nasal swabs identify potential SARS-CoV-2 infection in PCR negative patients
Summary

We analyzed RNA sequencing data from nasal swabs used for SARS-CoV-2 testing. 13% of 317 PCR-negative samples contained over 100 reads aligned to multiple regions of the SARS-CoV-2 genome. Differential gene expression analysis compared the host gene expression in potential false-negative (FN: PCR negative, sequencing positive) samples to subjects with a range of SARS-CoV-2 viral loads. The host transcriptional response in FN samples was distinct from true negative samples (PCR and sequencing negative) and similar to low viral load samples. Gene Ontology analysis shows that viral load-dependent changes in gene expression are functionally distinct; 23 common pathways include responses to viral infections and associated immune responses. GO analysis reveals that FN samples had a high overlap with high viral load samples. Deconvolution of RNA-seq data shows similar cell content across viral loads. Hence, transcriptome analysis of nasal swabs provides an additional level of identifying SARS-CoV-2 infection.
INTRODUCTION
The SARS-CoV-2 pandemic continues to disrupt everyday life, with over 296,496,809 confirmed cases and over five million deaths (WHO 07-JAN-2022) (World Health Organization, 2020). One of the initial responses in the biomedical field was the development of diagnostic tests for the rapid and sensitive detection of SARS-CoV-2 infection, resulting in various molecular PCR and antigen-based tests (Islam and Iqbal, 2020; Liu et al., 2020; Okamaoto et al., 2020). Many of these tests continue to operate under Emergency Use Authorization regulations from the FDA in the US. However, anecdotal reports of people with COVID-19 symptoms but negative tests are common. Here, we investigated the host transcriptional response of potential PCR false-negative subjects using RNAseq data from a shotgun transcriptome sequencing study. Indeed, multiple studies from biospecimens show activation of immune responses (alpha and gamma interferon responses) and bioenergetic responses to SARS-CoV-2 infection (Okamaoto et al., 2020), as well as viral load-dependent changes in the host transcriptome (Xiong et al., 2020; Sajuthi et al., 2020). Studies of SARS-CoV-2 and other viruses suggest that the host immune response to a virus may be sufficient to assist in the diagnosis of an infection (Zhang et al., 2021). We used published sequencing data from clinical specimens obtained from 670 patients tested for SARS-CoV-2 in the New York area in early 2020 (first wave), resulting in 192 positive and 389 negative test results via quantitative PCR assay (phs002258.v1.p1). Here, we show evidence for false-negative SARS-CoV-2 detection in 42 patients based on RNA-sequencing coverage of the SARS-CoV-2 genome and the host transcriptome.
RESULTS
Identifying potential false-negative SARS-CoV-2 infections using RNA-seq

Subjects at New York Presbyterian Hospital-Weill Cornell Medical Center were clinically assessed for SARS-CoV-2 infection using a quantitative PCR (qPCR) test. The qPCR test included an amplicon for the E (envelope) gene to detect B lineage beta-coronaviruses and a second amplicon for the S (spike) gene that uniquely detects SARS-CoV-2. The qPCR cycle threshold (Ct) value of the S amplicon is inversely related to the amount of starting viral material in the sample, with a limit of detection cutoff at Ct ≥ 40. Using RNA-seq data generated from the same nasal swabs, we measured the association between RNA-seq reads aligned to the SARS-CoV-2 genome and Ct values (Figure 1A). We found an inverse linear relationship, r² = 0.69, between Ct values and SARS-CoV-2 reads in subjects deemed positive by PCR (166). In contrast, among the 317 PCR-negative subjects (Ct ≥ 40), we found 42 subjects with over 100 SARS-CoV-2-aligned reads, representing potential false negatives.
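The inverse linear relationship between Ct and viral read count can be estimated with an ordinary least-squares fit on log-transformed counts. A minimal sketch with made-up values (the study's own fit over the 166 PCR-positive subjects gave r² = 0.69):

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical per-sample values for PCR-positive subjects (illustration only).
ct = np.array([18, 22, 25, 28, 31, 34, 37])
sars_reads = np.array([2e6, 4e5, 9e4, 1.2e4, 2.5e3, 4e2, 1.1e2])

fit = linregress(ct, np.log10(sars_reads + 1))   # log-transform the read counts
print(f"slope = {fit.slope:.2f}, r^2 = {fit.rvalue ** 2:.2f}")
```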
We checked the mapping of our data by comparing the relative STAR-aligned counts to the human and SARS-CoV-2 genomes with a parallel analysis using Kraken2. We observed a similar correlation of reads mapping to both the human (Figure 1B) and SARS-CoV-2 (Figure 1C) genomes using both the STAR alignment and Kraken2 tools (Pearson correlation of raw data r² = 0.49 and 0.99, respectively). There was no correlation between the number of SARS-CoV-2-aligned reads and human-aligned reads (Figure 1D) or the total number of sample reads (Figures 1E and 1F). The number of gene counts aligned to the SARS-CoV-2 genome was proportional to the gene counts as a fraction of the total generated RNA-seq reads (data not shown).
We then classified the subjects into viral load groups based on the qPCR Ct and the number of SARS-CoV-2 reads via RNA-seq (Table 1). An additional six samples were found to be false positives, and 72 samples tested positive for other respiratory viruses based on Kraken2 analysis; we removed these samples from subsequent analyses (Figure 1G). A classification rule consistent with this grouping is sketched below.
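Only the Ct ≥ 40 (PCR-negative) and >100-read (sequencing-positive) thresholds are stated explicitly in the text, so the high/medium/low read boundaries in this sketch are placeholders, not values from the paper:

```python
def classify_sample(ct: float, sars_reads: int) -> str:
    """Assign a viral-load group from the qPCR Ct and SARS-CoV-2-aligned read count."""
    if ct >= 40:                             # PCR-negative (limit of detection)
        return "false_negative" if sars_reads > 100 else "negative"
    if sars_reads <= 100:                    # PCR-positive with essentially no viral reads
        return "false_positive"
    if sars_reads > 100_000:                 # placeholder boundary, not from the paper
        return "high"
    if sars_reads > 1_000:                   # placeholder boundary, not from the paper
        return "medium"
    return "low"
```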
Next, we used Bedtools genomecov to investigate the relative coverage of the SARS-CoV-2 genome (Figure 2A) in each group. In high and medium viral load samples, we observed consistent 5' to 3' coverage of the SARS-CoV-2 genome, whereas in low viral load samples, we observed more coverage variation (Figure 2B). For the false-negative samples, we observed variable coverage of the genome (only 7 samples had high enough coverage for strain analysis). We also investigated the relative coverage of the SARS-CoV-2 transcripts in each group using RSeQC geneBody_coverage. In high and medium viral load samples, we observed consistent 5' to 3' coverage of SARS-CoV-2 transcripts, whereas in low viral load samples, we observed more coverage variation (Figure 2C). Finally, for the false-negative samples, we observed variable coverage of the transcripts. The low, fragmented coverage of false-negative samples is suggestive of low viral load levels or potentially of viral fragments indicative of post-infection shedding.
Next, we focused on the 42 false-negative samples and analyzed the number of SARS-CoV-2-aligned transcripts for each annotated SARS-CoV-2 gene. We detected only a few reads aligning to Orf1ab.1, Orf6, and Orf7b; in contrast, we observed a range of expression values for the E, M, N, Orf10, Orf1ab, Orf3a, and S genes. In the majority of the false-negative samples, the relative abundance of the highest expressed viral genes varied within each sample (Figure 2D). Together, our analyses clearly identified viral RNA from multiple genomic regions of SARS-CoV-2 in 42 subjects who were determined uninfected by PCR.
Viral clade analysis
We identified 42 potential false-negative patient samples. From these, we were able to assemble genomes for strain analysis from seven samples, which were submitted to GISAID. Nextclade analysis identified 19B (1), 20B (1), and 20C (5) strains of the SARS-CoV-2 virus. Compared to the PCR-positive samples (148), we observed relatively higher numbers of 19B and 20B and a similar proportion of 20C strains (71% vs 76%); however, the proportions are likely influenced by the low number of samples passing QC (Table 2). The other false-negative samples did not pass QC for genome analysis (data not shown). Due to the low numbers, we were unable to determine whether the strain of virus had an impact on the ability of PCR to detect its presence.
Directional response of host gene expression identified viral load-dependent signatures and similarities between low viral load and false-negative subjects

We hypothesized that false-negative samples likely reflect a low viral load condition. To test our hypothesis, we used the nasal swab RNAseq data to both profile the host gene expression response and compare the response across our subject groups (Table 1). We defined differential gene expression as an absolute fold change > 1.2 with a false discovery rate < 0.05, using negative samples (both by qPCR and RNAseq) as the comparator group (Law et al., 2014). In the host RNA from high viral load subjects, we observed a skewed signature of differential gene expression with over 10-fold more upregulated than downregulated genes (518 up / 47 down, Figures 3A and 3B). This pattern was reversed in both medium and low viral load subjects (Figures 3C-3H). Next, we compared the commonality of the differentially expressed genes and found that false-negative samples had the most overlap with low viral load subjects (28 genes), followed by medium viral load (27 genes), and only one gene in common with high viral load subjects (Figure 4). Among the PCR-positive groups, 59 genes were commonly regulated across viral loads, 197 genes were shared between high and medium viral load, whereas 275 were shared between medium and low viral load. In contrast, only two genes were uniquely shared between high and low viral load (Figure 4). These data suggest that subjects with a high viral load, as determined by nasal swab qPCR, have a distinct directional host transcriptional response compared to other SARS-CoV-2-positive subjects.
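Differential expression itself was computed with limma/voom; the downstream call (|FC| > 1.2 at FDR < 0.05) reduces to a Benjamini-Hochberg correction plus a fold-change filter, sketched here in Python as an illustration rather than the authors' R pipeline:

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

def call_degs(pvals, log2fc, fdr: float = 0.05, min_fc: float = 1.2):
    """Return a boolean mask of DE genes: BH-FDR < fdr and |fold change| > min_fc."""
    reject, qvals, _, _ = multipletests(np.asarray(pvals, float), alpha=fdr, method="fdr_bh")
    de = reject & (np.abs(np.asarray(log2fc, float)) > np.log2(min_fc))
    return de, qvals
```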
Functional analysis of host gene expression identified host immune and inflammatory responses in false-negative samples
Using our viral load classification, we found directionality and uniqueness in the host expression changes at the individual gene level (Figures 3 and 4). Functional analysis of RNAseq data leverages collections of genes that are related to common pathways, functions, and cellular localizations to identify specific gene sets that are enriched across the transcriptome. We hypothesized that functional analysis of differential gene expression would identify host responses common to SARS-CoV-2 infection as well as host pathways sensitive to viral load. We used fGSEA and topGO to compare the differential gene expression patterns at the gene ontology (GO) level.
Our fGSEA analysis (Figure 5A) found that the high viral titer group had the largest enrichment of GO terms, 2,606. The medium titer group had 766 enriched GO terms, 66% of which overlapped with the high titer group. The low titer and false-negative groups had the fewest enriched GO terms, 66 and 390, respectively. We identified 23 common GO pathways enriched across all subjects with detectable SARS-CoV-2 via RNAseq, compared to negative subjects (Figure 5A). Remarkably, the 23 common pathways were all upregulated compared to the negative controls and shared the inflammatory and immune response pathways observed in viral infections, including in patients with COVID-19 (Figure 5B) (Xiong et al., 2020; Sajuthi et al., 2020). In addition to the 23 common pathways, there were 188 common pathways between high load, medium load, and false-negative subjects. Surprisingly, there were only six common pathways enriched in both the false-negative and low viral load groups.
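fGSEA and topGO are R/Bioconductor packages. A comparable preranked enrichment can be run in Python with gseapy, sketched here under the assumption that genes are ranked by a per-contrast test statistic; the file name and gene-set library are illustrative choices, not the authors':

```python
import gseapy as gp

# Two-column ranking file: gene symbol and ranking metric (e.g. a limma/voom t-statistic).
res = gp.prerank(rnk="high_vs_negative.rnk",            # illustrative file name
                 gene_sets="GO_Biological_Process_2021",
                 permutation_num=1000, seed=42, outdir=None)
print(res.res2d.head())                                 # top enriched GO terms
```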
Viral load linked to unique functional host responses at the transcriptional level
Differences in viral load may reflect the infection time course, the effectiveness of the host response, or the effectiveness of treatments. Regardless, it is clear from both linear (PCA) and non-linear (tSNE) dimensionality reduction of the host RNAseq data (Figures S1A and S2) that the high load samples are the most homogeneous compared to the other groups. As expected, the fGSEA analysis of high viral load samples identified pathways associated with defense responses to viral infections, but also protein targeting to membranes and mRNA catabolic processes. We also used topGO, a similar enrichment analysis of GO terms that considers the hierarchical structure of ontologies to increase accuracy. For high viral load samples, topGO analysis showed enrichment of pathways associated with sensory perception signaling pathways, followed by RNA and DNA processing (supplementary data). Our topGO analysis of medium and low viral load responses revealed similar sensory perception and RNA silencing pathways.
Next, we investigated the chromosomal enrichment of genes in the false-negative (PCR-negative, sequencing-positive) samples and observed two significant enrichment sites on chr1p21 and chr8q21, which were also observed in the medium dataset but not the high dataset (supplementary data). Gene set enrichment analysis with topGO again revealed enrichment in sensory perception pathways and RNA/DNA regulatory mechanisms in PCR-negative, sequencing-positive samples (supplementary data).
Cell population mixtures were consistent across nasal swab samples
Differences in cell populations can influence differential gene expression profiles in biosamples (Bruning et al., 2016). Given the differential gene expression profiles and the results of our functional analyses, we used the RNA-seq deconvolution tool MuSiC (Wang et al., 2019) and a single-cell airway dataset (Lukassen et al., 2020) to deconvolve our data and predict cell type proportions. We identified ciliated1, ciliated2, goblet, FoxN4, and basal3 cell types as the largest contributors (Figure 6A). We analyzed the cell populations across all samples using principal component analysis (Figure 6B), hierarchical clustering (Figure 6C), and tSNE (Figure S2). Neither method revealed patterns of cell proportions that correspond to viral load status. Finally, we used a 2-way ANOVA with cell type and viral load as main factors and did not observe any interaction between cell type and viral load (interaction between PCR status and cell type p = 0.06; no group showed a significant interaction, p = 1.0, post-hoc Tukey test; Figures 6D and S1). These data suggest that differences in cell type populations do not explain the differential gene expression we observed.
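MuSiC is an R package; its underlying idea of regressing each bulk expression profile on cell-type signature profiles under a non-negativity constraint can be sketched with non-negative least squares. This is a simplification of MuSiC's weighted, multi-subject scheme, and the matrices are illustrative:

```python
import numpy as np
from scipy.optimize import nnls

def deconvolve(bulk: np.ndarray, signatures: np.ndarray) -> np.ndarray:
    """Estimate cell-type proportions for one bulk nasal-swab sample.

    bulk:       (n_genes,) expression vector of the sample
    signatures: (n_genes, n_cell_types) mean expression per cell type,
                e.g. ciliated, goblet, and basal profiles from scRNA-seq
    """
    weights, _ = nnls(signatures, bulk)   # non-negative mixing weights
    return weights / weights.sum()        # normalize to proportions summing to 1
```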
DISCUSSION
Analysis of RNA sequencing data obtained from nasal-pharyngeal samples collected during SARS-CoV-2 testing in the New York region revealed that 42 of 317 (13%) PCR-negative samples had detectable SARS-CoV-2 genomic material, suggesting they were false negatives (F-N) (Figure 1). RNA sequencing data from these potential F-N samples aligned to multiple SARS-CoV-2 genes across the SARS-CoV-2 genome (Figure 2B), suggesting this was not just a single region being detected (erroneously). Gene expression analysis of F-N samples showed a downregulation of the gene expression response that was similar to the response of patients with low and medium viral loads (Figure 3), although the genes differed between groups (Figure 4). Gene Ontology analysis showed that similar biological pathways are regulated in F-N samples and all SARS-CoV-2-positive patients (Figure 5). Finally, the cellular content of the swabbed samples (as determined by deconvolution) was not different between false-negative samples and other SARS-CoV-2-positive samples (Figure 6). Together, these data support our observation that 13% of PCR-negative samples were false negatives.
Patients showed different host transcriptome responses depending on viral load (Figure 3). Overall, responses changed from increased gene expression to decreased gene expression as the viral load decreased (Figure 3). The host response in the F-N samples was most similar to the Med and Low viral response groups (downregulation). However, there were few commonly regulated genes overlapping between all three groups (Figure 4); of over 1,000 genes regulated in Med and Low samples, only 22 were common to Med and F-N samples and only 21 were common to Low and F-N samples. Despite few genes overlapping between samples of varying viral load, the biology of the regulated genes appears similar among samples (Figure 5). Using both the fGSEA and topGO tools (supplementary data), we see a concordance of biological pathways regulated under all viral conditions. Interestingly, we see the highest overlap (2.4% of all terms from all samples) with the high viral load group (High, 70), followed by the medium and low viral load groups (Figure 5A). This may suggest a biphasic gene response to viral load, or that different genes converge to regulate the same biological pathways.
Different cell proportions may influence gene expression profiles from bulk tissues (Bruning et al., 2016). Deconvolution of RNA-seq data using MuSiC did not identify any effect of viral status on the cell proportions of the samples (Figure 6). Attempts to predict the viral status of F-N samples using Random Forest and Neural Network models built on top expressed genes were unsuccessful (best models called 5/42; data not shown, script provided). This is perhaps unsurprising given the variability of gene expression in the samples: even after filtering for the top 5, 10, and 100 most significant DEGs between groups (based on LIMMA adjusted p values), PCA and tSNE do not cleanly resolve the viral status of the samples (High, Med, Low, and F-N among sequencing-negative samples; Figure S2). Together, these data suggest that SARS-CoV-2 infection can be identified by host tissue responses to infection, and that gene expression patterns regulate common biological response pathways to viral infection, but that these responses are not due to the cell composition of the samples. A sketch of this style of classifier is given below.
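The sketch below shows the kind of classifier the authors report trying (a Random Forest on top differentially expressed genes). The data, sample counts, and parameters are synthetic placeholders, not the study's actual pipeline; it only illustrates the approach and why high sample variability keeps such models near chance.

```python
# Hedged sketch of predicting viral-load class from top DEGs with a
# Random Forest. All inputs here are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples, n_top_degs = 120, 100                # e.g. top 100 DEGs by LIMMA adj. p
X = rng.normal(size=(n_samples, n_top_degs))    # expression matrix (samples x genes)
y = rng.integers(0, 4, size=n_samples)          # 0=Neg, 1=Low, 2=Med, 3=High viral load

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
# With expression as noisy as the PCA/tSNE plots suggest, accuracy stays
# close to chance, consistent with the authors' report.
```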
The rapid development of PCR-based testing for SARS-CoV-2 contributed to helping monitor and control the disease (Islam and Iqbal, 2020; Liu et al., 2020; Okamoto et al., 2020). PCR is commonly referred to as the gold standard for SARS-CoV-2 testing (Brooks and Das, 2020; Drame et al., 2020). Reported shortcomings of PCR testing are usually limited to failure to detect past infection (Yong et al., 2020), although concerns about the accuracy of early PCR tests were raised (Fang et al., 2020; Drame et al., 2020). This study suggests that some patients who were tested for SARS-CoV-2 but received a negative PCR result may nonetheless have been infected. While we do not know how many of the F-N patients were symptomatic or asymptomatic, our data show that, prior to filtering, 11% of PCR-negative samples showed evidence of SARS-CoV-2 infection by RNA sequencing (Figure S1A). Earlier tests relied on a single SARS-CoV-2 gene for identification, whereas later tests used multiple genes. However, even multi-gene PCR tests are vulnerable to mutations resulting in suboptimal amplification of the target amplicon (such as the S-gene dropout observed in some tests; Volz et al., 2021). Based on calculations of true positive rates taking sequencing as the true value, we calculate the accuracy of PCR to be 90% (specificity 97% and sensitivity 79%) (Figure S1C) (Trevethan, 2017); this suggests that PCR is better at detecting whether a person does not have SARS-CoV-2 infection (a worked sketch of these calculations is given below). This compares closely with a recent review of nasal-pharyngeal RT-PCR tests (Zitek, 2020), which reported a specificity of 98.8% and a sensitivity of 78.2%. Conversely, if we consider PCR to be the true standard, then sequencing in this study had a sensitivity of 97% and a specificity of 87%. Sequencing could therefore be deemed a better test for detecting infected people, but it is more costly and too slow for widespread use. It is not clear whether this analysis extends to other PCR assays, but this warrants further investigation. Collectively, these data suggest the number of reported COVID-19 cases is likely lower than the actual number of individuals who have been infected with the virus.
Limitations of the study
Our analysis suggests that over 10% of patients with negative PCR results may have been infected with SARS-CoV-2. Clearly, the impact of this estimate on positivity rates would depend on whether the testing was performed as a clinical assay or for surveillance. Furthermore, it is not clear whether the people tested were symptomatic, at what stage of infection they were, how severe their disease was, or what their outcome was. All of these confounding factors may affect gene expression and may account for the high variability in the expression profiles we observe (Figure 3) and for the challenge of predicting COVID status based on gene expression alone (Figure S2). Even when we compare control (PCR-negative) samples, we observe high variability in the data (see Figure S2). Nasal-pharyngeal transcriptomes are less studied than other tissues, such as blood, hence the natural variation of this sample type is less well understood. Regardless of absolute gene expression levels, it was clear that biologically related genes were regulated in the disease (Figure 5), and sequencing enabled us to identify those signatures, as well as reads from across the SARS-CoV-2 genome.
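The worked sketch below reproduces the style of diagnostic-accuracy calculation referenced above (Trevethan, 2017), treating sequencing as ground truth. The 42/317 false-negative counts come from the text; the true-positive and false-positive counts are hypothetical placeholders chosen only to land near the reported magnitudes, since the paper's full confusion matrix is not given in this excerpt.

```python
# Sensitivity, specificity, and accuracy from a 2x2 confusion matrix,
# with sequencing taken as the reference standard.
def diagnostics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# 42 of 317 PCR-negative samples carried SARS-CoV-2 reads (false negatives);
# tp and fp below are illustrative assumptions, not reported values.
sens, spec, acc = diagnostics(tp=158, fp=8, tn=275, fn=42)
print(f"sensitivity={sens:.0%} specificity={spec:.0%} accuracy={acc:.0%}")
# -> roughly sensitivity=79% specificity=97% accuracy=90%
```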
Taken together, this study shows that over 10% of PCR-negative samples were likely positive for SARS-CoV-2, suggesting that the prevalence of COVID-19 may be underestimated worldwide. Larger-scale clinical data to validate these approaches are recommended for future work.
STAR+METHODS
Detailed methods are provided in the online version of this paper and include the following:
INCLUSION AND DIVERSITY
One or more of the authors of this paper self-identifies as an underrepresented ethnic minority in their field of research or within their geographical location. One or more of the authors of this paper self-identifies as a gender minority in their field of research. One or more of the authors of this paper self-identifies as a member of the LGBTQIA+ community. | 2022-10-08T13:02:31.164Z | 2022-10-01T00:00:00.000 | {
"year": 2022,
"sha1": "80beb79ff56d1f6ff8015c5e82dfda464c69efbb",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.isci.2022.105310",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9e090283cc161293376e5b0ce5c41bf6155713e6",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
271378060 | pes2o/s2orc | v3-fos-license | Allergenic response induced by Pterobothrium crassicolle (Cestoda: Trypanorhyncha) extracts in murine model
Abstract The aim of this study was to determine the allergenic activity of components present in crude extracts of Pterobothrium crassicolle plerocerci (CPE) and blastocysts (CBE) obtained from Micropogonias furnieri in a murine model. Two groups of seven animals each received 50 µg of CPE or CBE on days 1, 35 and 120. Serum samples were tested by ELISA and immunoblotting. Specific IgG and IgE levels were detected by ELISA, showing specific humoral responses after the primary immunization for both immunoglobulins and continuously growing titers for IgE. Positive Passive Cutaneous Anaphylaxis tests in rats sensitized with anti-CBE sera and challenged with CBE demonstrated, biologically, the allergenic activity of the extracts. The CPE and CBE showed some different recognition regions, but both experimental groups recognized all regions of the extracts when tested for cross-reactions, showing that CPE and CBE could share antigenic recognition sites.
Due to the increasing worldwide consumption of raw, undercooked or poorly processed fish, accidental human infections with fish parasites and some related allergic reactions have come to represent a serious public health hazard, with increasing medical concern in several countries (Chai et al., 2005; Audicana & Kennedy, 2008; Dorny et al., 2009; Broglia & Kapel, 2011). Human parasitism by trypanorhynch cestodes is extremely rare (Kikuchi et al., 1981; Fripp & Mason, 1983); however, Pelayo et al. (2009) showed the seroprevalence of an immune response against the trypanorhynch Gymnorhynchus gigas in a Spanish population. According to Deardorff et al. (1984), metacestode toxins are gradually released into the fish tissues, mostly the flesh, which could represent a hazard for human health, and experimental studies have highlighted the risk of allergic reactions caused by trypanorhynchs (Rodero & Cuéllar, 1999; Vázquez-López et al., 2001, 2002; Gòmez-Morales et al., 2008; Mattos et al., 2015).
Considering the lack of data about the allergenic potential of Pterobothriidae trypanorhynchs, the aim of the present study was to determine whether crude extracts of Pterobothrium crassicolle (Diesing, 1850) plerocercoids and blastocysts contain antigenic compounds able to induce specific allergic responses in an experimental murine model.
Material and Methods
A total of 107 specimens of M. furnieri (24.0-65.0 cm) were obtained from fish markets and fishermen in the municipalities of Niterói and Cabo Frio, Rio de Janeiro State, Brazil, between March 2009 and March 2012. They were collected and transported on ice in isothermic bags for examination at the Laboratório de Inspeção e Tecnologia de Pescado, Faculdade de Veterinária (Fish Inspection and Technology Laboratory, Faculty of Veterinary Medicine), Universidade Federal Fluminense (UFF). The fish specimens were identified according to Menezes & Figueiredo (1980) and submitted to necropsy at the laboratory. Parasite recovery was carried out according to the methodology proposed by Eiras et al. (2006). The taxonomic identification of trypanorhynch cestodes was based on Campbell & Beveridge (1996), and the specimens were identified as P. crassicolle metacestodes. The plerocerci of P. crassicolle and their blastocysts were manually collected from the fish with the aid of scissors and forceps.
The metacestodes were transported on ice inside isothermic bags to the Laboratório de Imunobiologia das Doenças Infecciosas e Granulomatosas, Departamento de Imunologia, Instituto de Biologia (Department of Immunobiology, Institute of Biology), UFF, where the immunological analyses were carried out. The crude plerocerci extract (CPE) and the crude blastocyst extract (CBE) were obtained after separation of the metacestode parts into different containers, followed by extensive washing with sterile 0.1 M phosphate-buffered saline (PBS), pH 7.3, supplemented with 5% penicillin and 5% streptomycin. The metacestode parts were homogenized singly in a Potter-Elvehjem homogenizer (Thomas Scientific, PA, USA) after a final wash with non-supplemented, sterile PBS. The homogenate was then submitted to six 30-s cycles in a TissueRuptor (Qiagen Instruments AG, Zurich, Switzerland), the resulting suspension was centrifuged at 60,000 g at 4 °C for 30 minutes, and the supernatant was filtered through a 0.22 µm Millex-GV filter (Millipore, France).
The same protocol was used to prepare the crude fish protein extract (CFE) of M. furnieri, which was used as the control antigen for the serological assays. The protein contents of the CPE, CBE and CFE were estimated according to Lowry et al. (1951).
To determine the molecular weight range of the CPE, 0.03 mg of the extract was submitted to SDS-PAGE (sodium dodecyl sulphate-polyacrylamide gel electrophoresis) on a 12%, 100 × 100 mm gel (Vertical System, Bio-Rad, Hercules, California, USA) for 2 h at 140 V (Laemmli, 1970).
Ten-week-old female BALB/c mice were maintained in separate cages according to their experimental group (two experimental groups [n = 7] and a control group [n = 5]), receiving distilled water and food (Nuvilab CR-1, Nuvital Nutrientes S/A, Brazil) ad libitum. All animals were injected intramuscularly with xylazine (200 μg/kg body weight) associated with ketamine (10 mg/kg body weight) before invasive procedures. Euthanasia was performed using an overdose of anesthetic drugs. The study was approved by the Animal Research Ethics Committee of the UFF Centre for Laboratory Animals (038/2009).
Each experimental group was immunized intraperitoneally (i.p.) on days 1, 35 and 120 with a suspension containing 50 µg of CPE or CBE and 2.0 mg of commercial aluminum hydroxide solution, Al(OH)3, in a final volume of 200 μl. At the same time points, the control group was injected with a suspension containing only sterile saline and aluminum hydroxide.
Six female Lou-M adult rats, each weighing 150 g, were reared in the animal house of the UFF and tested using the Passive Cutaneous Anaphylaxis (PCA) assay. This technique, as described by Braga & Mota (1976), uses a 72 h sensitization period for the IgE antibody. Briefly, a shaved dorsal area was injected intradermally with 30 μL of mouse sera from the CPE, CBE or control groups (days 56, 120, 127 and 135) diluted 1:40. After the sensitization period, PCA reactions attributable to the IgE class were elicited in the rats by the intravenous administration of 500 μg of CPE, CBE or CFE in 0.5 mL of saline mixed with 1% Evans blue dye. Saline (0.5 mL) was used as the negative control. Thirty minutes later, the rats were euthanized by an overdose of anesthetic drugs. The dorsal skin was removed and inverted to observe and measure any pigmented area, and reactions were considered positive for spots larger than 5 mm in diameter.
The recognition of immunogenic proteins by immunoblotting (Western blot) was used to determine the reactivity profile of specific IgG and IgE. For the Western blot, 0.3 mg of CPE and CBE were submitted to the same SDS-PAGE conditions, followed by transfer of the protein bands from the separating gel to a nitrocellulose membrane using a semi-dry blotter (Bio-Rad, CA, USA). Subsequently, the membranes were blocked overnight with 5% fat-free milk (Nestlé) in PBS solution, washed with 0.05% PBS-Tween, dried at room temperature (RT) and cut into strips. Two strips were incubated overnight at RT with each serum sample diluted 1:100 v/v in blocking buffer, with constant rocking. After washing four times with TBS (Tris-buffered saline)-Tween, one membrane strip of each serum was incubated with peroxidase-labelled goat anti-mouse IgG (Bio-Rad) for 2 h and the other exposed to rat anti-mouse IgE (Invitrogen) for 3 h, followed by HRP-goat anti-rat IgG (H + L, Bio-Rad, CA, USA) for 2 h at RT with constant rocking. After a final wash, the peroxidase substrate (3,3′-diaminobenzidine, Sigma-Aldrich, USA) was added to develop the Ag/IgG or Ag/IgE interaction. All antibodies were used according to the manufacturers' recommendations.
The Shapiro-Wilk test was used to assess normality. Data were evaluated using a General Linear Model with repeated-measures ANOVA and Bonferroni post hoc tests. The software used was SPSS (IBM, version 24). In the statistical analysis of the experimental data, values were considered significant at p < 0.05. A sketch of this workflow is given below.
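For illustration, the sketch below runs the described workflow (Shapiro-Wilk normality check followed by a repeated-measures ANOVA) in Python rather than SPSS. The optical-density readings, day points, and group sizes are made-up placeholders; only the procedure mirrors the text.

```python
# Normality check + repeated-measures ANOVA, analogous to the SPSS analysis.
import numpy as np
import pandas as pd
from scipy.stats import shapiro
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
days = [14, 21, 42]  # hypothetical sampling days
records = [
    {"mouse": m, "day": d, "od": rng.normal(0.5 + 0.1 * i, 0.05)}
    for m in range(7)               # 7 mice per group, as in the study
    for i, d in enumerate(days)
]
df = pd.DataFrame(records)

w, p = shapiro(df["od"])            # Shapiro-Wilk normality test
print("Shapiro-Wilk p =", p)

res = AnovaRM(df, depvar="od", subject="mouse", within=["day"]).fit()
print(res)                          # repeated-measures ANOVA table
```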
Results
After the primary immunization, specific IgG and IgE were detected in the serum samples of the experimental groups from day 14 onwards, with statistically significant increasing levels (p < 0.001 for all, except for IgE of the CBE group, which was p < 0.01 at day 14) when compared with the control group sera. The highest IgG level was observed in samples collected on day 42 from animals immunized with 50 µg of CBE. The titers of specific IgE increased continuously in both the CPE and CBE groups during the experimental period (Figure 1). Cross-reactions between the immunized groups (CPE and CBE) and the CFE antigens were not observed in the ELISA assay, and no specific humoral response was detectable in the serum of the animals before the prime immunization or in the control group. However, the serum samples of both experimental groups showed statistically equivalent recognition of both parasite extracts.
The evaluation of the allergenic properties by the Passive Cutaneous Anaphylaxis (PCA) assay allowed the visualization of profound localized allergic reactions triggered by allergen-induced cross-linking of FcεRI through the binding of allergen-specific IgE located just beneath the skin. The extravasation of Evans blue dye reflected the increase in local vascular permeability, a process that depends on the release of histamine and serotonin during mast cell degranulation. PCA tests challenged with CBE in rats sensitized with anti-CBE serum indicated the allergenic property of this parasite extract (Figure 2). No reactions were observed in the control serum area or in the CFE-tested rat. In the recognition of immunogenic proteins by immunoblotting, most bands were observed between 80 and 15 kDa (CPE) or 70 and 10 kDa (CBE) in the SDS-PAGE. The sharpest CPE band was near 80 kDa. Specific IgG recognized CPE proteins of 120 kDa or more, near 80 kDa (the sharpest band), 60 kDa, 50 kDa, near 32 kDa, 30 kDa and 25 kDa (Figure 3). In contrast, specific IgG recognized different CBE bands from 120 kDa to 24 kDa, with the sharpest bands at 85 kDa, near 57 kDa, 35 kDa and 24 kDa. No reactivity was observed with the control serum.
Discussion
The immunogenic capacities of the CPE and CBE after the first, second and third i.p. inoculations were shown by ELISA, with detectable high levels of specific IgG and continuously increasing IgE up to the end of the experiment. Our results corroborate previous data indicating that the murine model with BALB/c mice, tested by i.p. antigenic administration, is appropriate for identifying and characterizing allergens of a protein nature (Rodero & Cuéllar, 1999; Dearman & Kimber, 2001; Vázquez-López et al., 2001; Martínez de Velasco et al., 2002; Gòmez-Morales et al., 2008; van der Ventel et al., 2011).
The cross-reactions observed by ELISA for the CPE and CBE antigens suggest that the two extracts share antigenic recognition sites. Previous studies discarded the blastocysts and used only the plerocerci, but natural exposure may involve both portions of the metacestodes. The difference between the CPE and CBE antigens with respect to the responses induced was statistically significant only on days 14 and 21 (IgE), 42 and 49 (IgG) and 120 (IgG and IgE) after the first immunization. In general, CBE induced higher titers of IgG and IgE, but all immunized groups differed highly significantly from the control group from 14 days after the first immunization onwards (p < 0.001) for both immunoglobulins.
The ELISA and PCA results indicated the allergenic nature of CPE and CBE, since high IgE and IgG (mainly IgG1) levels are known to be related to the regulation of hypersensitivity reactions (Rodero & Cuéllar, 1999; Vázquez-López et al., 2001; Martínez de Velasco et al., 2002).
The SDS-PAGE and Western blot profiles of P. crassicolle showed similar aspects when compared with other Trypanorhyncha cestodes such as G. gigas and M. horridus, which also presented IgG-binding proteins of similar weight. Vázquez-López et al. (2002) observed a 24 kDa collagenase of G. gigas which was recognized by the humoral response of the experimental animals, and Gòmez-Morales et al. (2008) reported IgG-binding proteins from M. horridus of 26 and 75 kDa. These proteins could be closely related to the IgG-binding proteins of P. crassicolle. Since our results indicated the allergenic activity of P. crassicolle antigens in murine models, complementary clinical trials are required to elucidate their implications for human health.
Figure 1. Dynamics of the specific IgG (A) and IgE (B) serum levels. Two groups, each with 7 mice, received intraperitoneally 50 µg of crude extract of Pterobothrium crassicolle plerocerci (CPE, square) or blastocysts (CBE, triangle) associated with 2 mg Al(OH)3 on days 0, 35 and 120 (arrow). A control group (circle) of 5 animals received saline solution with 2 mg Al(OH)3 on the same days by the same route. The values indicate the means of the sums of the optical densities (OD) ± standard error of the mean of each group. From day 14 onwards, the IgG and IgE levels of both experimental groups were p < 0.001 (exception *p < 0.01) when compared to the control. a p < 0.05, b p < 0.01, c p < 0.001 between groups.
Figure 2. The passive cutaneous anaphylaxis assay (PCA). PCA reaction using a Lou-M rat as the receptor of anti-CBE sera from BALB/c mice. Positive PCA reactions for mouse sera after 127 (a) and 135 (b) days induced by the CBE. Increased blood influx (arrows); sera without reaction (asterisks); crude extract of Pterobothrium crassicolle blastocysts (CBE). Bar = 10 mm. | 2024-07-24T15:11:39.344Z | 2024-07-22T00:00:00.000 | {
"year": 2024,
"sha1": "492c13c2a2c4fbe33cb4c95b60b5b321fafd0e74",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "7f29523399657f8e0a786b4b91fc7666726c0c8c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
27770240 | pes2o/s2orc | v3-fos-license | α1 Soluble Guanylyl Cyclase (sGC) Splice Forms as Potential Regulators of Human sGC Activity*
Soluble guanylyl cyclase (sGC), a key protein in the NO/cGMP signaling pathway, is an obligatory heterodimeric protein composed of one α- and one β-subunit. The α1/β1 sGC heterodimer is the predominant form expressed in various tissues and is regarded as the major isoform mediating NO-dependent effects such as vasodilation. We have identified three new α1 sGC protein variants generated by alternative splicing. The 363-residue N1-α1 sGC splice variant contains the regulatory domain but lacks the catalytic domain. The shorter N2-α1 sGC maintains 126 N-terminal residues and gains an additional 17 unique residues. The C-α1 sGC variant lacks 240 N-terminal amino acids but maintains part of the regulatory domain and the entire catalytic domain. Q-PCR of N1-α1 and N2-α1 sGC mRNA levels, together with RT-PCR analysis for C-α1 sGC, demonstrated that the expression of the α1 sGC splice forms varies across human tissues, indicative of tissue-specific regulation. Functional analysis of N1-α1 sGC demonstrated that this protein has a dominant-negative effect on the activity of sGC when coexpressed with the α1/β1 heterodimer. The C-α1 sGC variant heterodimerizes with the β1 subunit and produces a fully functional NO- and BAY41-2272-sensitive enzyme. We also found that, despite identical susceptibility to inhibition by ODQ, intracellular levels of the 54-kDa C-α1 band did not change in response to ODQ treatment, while the level of the 83-kDa α1 band was significantly affected by ODQ. These studies suggest that modulation of the level and diversity of splice forms may represent a novel mechanism modulating the function of sGC in different human tissues.
Since the studies of the late 1970s and early 1980s, which underlined the obligatory role of the endothelium in mediating acetylcholine-induced vasodilatation, nitric oxide (NO) has been recognized as an endogenous nitrovasodilator that mediates the local regulation of basal arterial tone (1-4). Many of the physiological functions of NO in the cardiovascular, neuronal, gastrointestinal, and other systems are mediated through its primary receptor, soluble guanylyl cyclase (sGC). The heme-containing sGC heterodimer converts guanosine triphosphate into the secondary messenger guanosine 3′:5′-cyclic monophosphate (cGMP). sGC activity increases more than 200-fold in response to NO (5, 6). High concentrations of cGMP produced by activated sGC modulate the functions of numerous enzymes, such as cyclic nucleotide phosphodiesterases, cGMP-gated ion channels, and cGMP-dependent protein kinases (PKGs). Recently, the vital importance of sGC for mammalian physiology was directly confirmed by the generation of sGC knock-out mice (7-9). The absence of sGC protein resulted in a significant increase in blood pressure, complete loss of NO-dependent aortic relaxation, and inhibition of platelet aggregation in knock-out animals, which died prematurely at the age of 4 weeks because of severe gastrointestinal disorders (7).
Four sGC isoforms, products of four genes, have been identified so far: α1, α2, β1, and β2. Only α1/β1 and α2/β1 heterodimers are activated by NO (10). The α1/β1 sGC is the most abundant isoform and is distributed ubiquitously in mammalian tissues, with the highest levels of mRNA in brain, lung, heart, kidney, spleen, and muscle (11). Vascular smooth muscle and endothelial cells express predominantly α1- and β1-subunits (12). The functional importance of α1/β1 sGC was demonstrated by the significantly decreased relaxing effects of major vasodilators (acetylcholine, NO, YC-1, and BAY41-2272) in α1 sGC knock-out mice of both genders (9). sGC function is affected not only by NO, but also by regulation of the expression of sGC subunits at transcriptional and post-transcriptional levels. The steady-state mRNA levels of α1- and β1-subunits decrease with hypertension and aging and vary during embryonic development (13). The expression of sGC subunits is regulated by estrogen (14), cAMP-elevating compounds (15, 16), cytokines (NGF, LPS, IL-1) (17) and NO donors (18). Subcellular localization of sGC and its activity can also be affected in proliferating tissue (19) by protein interactions and phosphorylation (13). In mammals, alternative splicing of the α2-subunit generates a dominant-negative variant (20). Splice forms of the β1- and β2-subunits have also been demonstrated (21-23). Recently, a shortened α1 sGC transcript, which lacks the predicted translation site in exon 4, was found, and its expression correlated with lower sGC activity in several cell lines (24). However, splice variants of α1 sGC have not been described previously.
Here we report the isolation and characterization of three new α1 sGC splice forms encoding N- and C-terminally truncated proteins. We demonstrate that the N-terminally truncated C-α1 splice form heterodimerizes with the β1 sGC subunit to create an active NO-sensitive enzyme in both Sf9 and human neuroblastoma BE2 cells. Moreover, this splice variant is more resistant to ODQ-induced protein degradation than the wild-type sGC. The N1-α1 sGC splice form, which lacks the C-terminal catalytic domain, has a dominant-negative effect when co-expressed with α1/β1 sGC in Sf9 or BE2 cells. The functional role of the N2-α1 sGC splice variant is yet to be determined. Q-PCR and semi-quantitative RT-PCR analyses of different human tissues demonstrate tissue-specific expression of the identified splice forms. Together, our data suggest that alternative splicing of the α1 sGC subunit may be a novel mechanism that regulates sGC function and activation in some human tissues.
Primers, RNA, and RT-PCR—All primers were custom synthesized by Integrated DNA Technologies (Coralville, IA). To subclone N1-α1 sGC we used the upstream primer PR1, 5′-(318)CAACACCATGTTCTGCACGAAGC-3′, and the downstream primer PR2, 5′-(1411)GCTTTCATATTCAAGATAGTATTATG-3′ (numbering according to the sequence with GenBank™ no. CR618242). To subclone N2-α1 sGC, we used the upstream primer PR3, 5′-(192)CAACACCATGTTCTGCACGAAGC-3′, and the downstream primer PR4, 5′-(611)GTATCACTCTCTTTGTGTAATCC-3′ (GenBank™ no. BC012627). To detect the deletion of exon 4 in C-α1 sGC we used the upstream primer PR5, 5′-(120)GCTAGAGATCCGGAAGCACA-3′, and the downstream primer PR6, 5′-(317)TTGCAAATACTCTCTGCCAAA-3′ (GenBank™ no. AK226125). To detect the deletion in exon 7 of C*-α1 sGC we used the upstream primer PR7, 5′-(561)GAACGGCTGAATGTTGCACTTGAG-3′, and the downstream primer PR8, 5′-(922)GTAGGGCTGATTCACAAACTCG-3′ (GenBank™ no. BX649180). Total RNA from BE2 cells was isolated using the RiboPure kit (Ambion, TX). The panel of total RNA from human tissues was purchased from Ambion (FirstChoice Human Total RNA Survey Panel, Lot 08608142). 5 µg of total RNA was used for the RT reactions, which were performed with a mixture of oligo(dT) and random hexamer primers using SuperScript III RT (Invitrogen) according to the manufacturer's protocol. PCR reactions with PfuUltra DNA polymerase (Stratagene) were performed for 35 cycles at a Ta of 55 °C. PCR products were separated on an agarose gel, purified using the QIAEX II gel extraction kit (Qiagen), and sequenced using the PCR primers. All sequencing was performed by the Nucleic Acid Core Facility at the Medical School of the University of Texas in Houston.
Real-Time Quantitative RT-PCR—Real-time quantitative RT-PCR (RT-qPCR) was performed utilizing the 7700 or 7900 Sequence Detector instrument (Applied Biosystems, Foster City, CA) (25, 26). Specific quantitative assays for α1 sGC, N1-α1 sGC, and N2-α1 sGC were developed using the Primer Express software version 1.0 for Macintosh (Applied Biosystems), Beacon Designer (Premier Biosoft), or RealTimeDesign (Biosearch Technologies) based on sequences from GenBank™. The assays are listed in supplemental Table S3. cDNA was synthesized in a 10 µl (96-well plate) or 5 µl (384-well plate) total volume by the addition of 6 µl or 3 µl/well of RT master mix consisting of 400 nM assay-specific reverse primer, 500 µM deoxynucleotides, Superscript II buffer, and 10 units of Superscript II reverse transcriptase (Invitrogen) to a 96-well (ISC Bioexpress, Kaysville, UT) or 384-well plate (Applied Biosystems), followed by a 4-µl or 2-µl volume of sample (25 ng/µl), respectively. Each sample was determined in triplicate, plus a control without reverse transcriptase to assess DNA contamination levels. Each plate also contained an assay-specific sDNA (synthetic amplicon oligo) standard spanning a 5-log template concentration range and a no-template control. Each plate was covered with Biofilm A (Bio-Rad) and incubated in a PTC-100 (96-well) or DYAD (384-well) thermocycler (Bio-Rad) for 30 min at 50 °C, followed by 72 °C for 10 min. Subsequently, 40 µl or 20 µl of a PCR master mix (400 nM forward and reverse primers (IDT, Coralville, IA), 100 nM fluorogenic probe (Biosearch Technologies, Novato, CA), 5 mM MgCl2, 200 µM deoxynucleotides, PCR buffer, 150 nM SuperROX dye (Biosearch Technologies, Novato, CA), and 1.25 units of Taq polymerase (Invitrogen)) were added directly to each well of the cDNA plate. RT master mixes and all RNA samples were pipetted by a Tecan Genesis RSP 100 robotic workstation (Tecan US, Research Triangle Park, NC); PCR master mixes were pipetted utilizing a Biomek 2000 robotic workstation (Beckman, Fullerton, CA). Each assembled plate was then covered with optically clear film (Applied Biosystems) and run in the 7700 or 7900 real-time instrument using the following cycling conditions: 95 °C for 1 min, followed by 40 cycles of 95 °C for 12 s and 60 °C for 30 s. The resulting data were analyzed using SDS 1.9.1 (7700) or SDS 2.3 (7900) software (Applied Biosystems) with ROX as the reference dye.
Synthetic DNA oligos used as standards (sDNA) encompassed the entire 5′-3′ amplicon for the assay (Invitrogen). Each oligo standard was diluted in 100 ng/µl yeast or Escherichia coli tRNA-H2O (Invitrogen or Roche Applied Sciences) and spanned a 5-log range in 10-fold decrements starting at 0.8 pg/reaction. It has been shown for several assays that in vitro transcribed RNA amplicon standards (sRNA) and sDNA standards have the same PCR efficiency when the reactions are performed as described above (G. L. Shipley, personal communication).
Because of the inherent inaccuracies in quantifying total RNA by absorbance, the amount of RNA added to an RT-PCR from each sample was more accurately determined by measuring a housekeeping transcript level in each sample. The final data were normalized to 36B4 (500-fold dilution of sample in tRNA-H2O). A sketch of this style of standard-curve quantification is given below.
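To illustrate the quantification step, the sketch below fits Ct against log10(template amount) for a dilution series of standards, derives the PCR efficiency, and normalizes an unknown to the housekeeping signal. The Ct values are invented; the actual analysis was done in the SDS software, so this is only a conceptual stand-in.

```python
# Absolute quantification from a qPCR standard curve, then normalization
# to a housekeeping assay (36B4 in the study). All numbers are hypothetical.
import numpy as np

log10_std = np.log10([0.8e-12 / 10**i for i in range(5)])  # 5-log sDNA dilution series
ct_std = np.array([18.1, 21.4, 24.8, 28.2, 31.5])          # made-up Ct values

slope, intercept = np.polyfit(log10_std, ct_std, 1)        # Ct = slope*log10(q) + b
efficiency = 10 ** (-1 / slope) - 1
print(f"PCR efficiency ~ {efficiency:.0%}")

def quantity(ct):
    """Invert the standard curve to estimate template amount."""
    return 10 ** ((ct - intercept) / slope)

target_q = quantity(26.0)   # unknown sample, target assay (hypothetical Ct)
ref_q = quantity(22.0)      # same sample, housekeeping assay (hypothetical Ct)
print("normalized level:", target_q / ref_q)
```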
Cell Culture—The BE2 human neuroblastoma cell line (American Type Culture Collection) was cultured in a 1:1 mixture of Dulbecco's modified Eagle's medium/F12K media supplemented with 10% fetal bovine serum, 0.1 mM MEM nonessential amino acids, a penicillin-streptomycin mixture (50 units/ml and 50 µg/ml), 10 mM Hepes (pH 7.4), 1 mM sodium pyruvate, and 2 mM L-glutamine (all from Invitrogen), and maintained at 37 °C and 5% CO2. For in vivo ODQ treatments, 80% confluent neuroblastoma cell cultures were treated with 20 µM ODQ for up to 24 h. To prepare lysates, the cells were collected by trypsinolysis, washed twice with phosphate-buffered saline, resuspended in 40 mM TEA (pH 7.4) containing protease inhibitor mixture (Roche Applied Science), and disrupted by sonication. The lysates were centrifuged at 15,000 × g for 30 min to prepare the cleared supernatant fractions, which were used for Western blotting, immunoprecipitation, or activity measurements.
Generation of BE2 Stable Transfectant Lines—Coding sequences of the N1-, N2-, and C-type α1 sGC variants obtained from BE2 total RNA as described above were first cloned into the pCR-Blunt vector (Invitrogen). For the N1- and N2-type α1 forms, the coding sequence of the FLAG peptide was inserted by PCR in front of the stop codon and recloned into pCR-Blunt. The coding regions of the N1-, N2-, and C-type α1 sGC variants were then isolated by restriction with NsiI/XbaI enzymes and subcloned into the PstI/XbaI sites under the control of the CMV promoter of the mammalian expression vector pMGH2 (Invitrogen). These pMG-CαF, pMG-N1αF, and pMG-N2αF plasmids were transfected into BE2 cells with the Lipofectamine reagent (Invitrogen) according to the manufacturer's protocol. 48 h post-transfection, the cells were plated on 96-well plates at a density of 2500 cells/ml and selected with 380 µg/ml hygromycin (Sigma). Two weeks later, individual hygromycin-resistant colonies were collected and expanded on 100-mm tissue culture dishes. The clones stably transfected with N1- or N2-type α1 sGC were identified by anti-FLAG Western blotting of lysates prepared from the hygromycin-resistant cultures. The clones stably expressing C-type α1 sGC were identified using anti-α1 sGC Western blotting.
Co-immunoprecipitation—BE2, BE-CαF, and BE-N1αF cells collected from confluent 10-cm culture dishes were washed twice with phosphate-buffered saline, resuspended in 500 µl of phosphate-buffered saline containing protease inhibitor mixture, disrupted by sonication, and spun down at 15,000 rpm for 30 min at 4 °C, and the supernatants were collected. Then polyclonal anti-β1-sGC antibodies or prewashed anti-FLAG M2 affinity resin (Sigma) were added and tumbled for 1.5 h or overnight, respectively, at 4 °C. The lysate mixture with anti-β1-sGC antibodies was then combined with 100 µl of prewashed protein A-agarose beads (Upstate) and further incubated for 1.5 h. The protein A-agarose beads were washed three times with 40 mM TEA, 200 mM NaCl, 1% Nonidet P-40, pH 7.4, and bound proteins were eluted by boiling in 100 µl of Laemmli buffer. The anti-FLAG M2 affinity resin was washed three times with ice-cold TBS buffer, and bound proteins were eluted with high-salt buffer or with buffer containing the FLAG peptide, according to the manufacturer's instructions. The Western blot was probed for the α1- and β1-subunits of sGC using polyclonal anti-α1-sGC and monoclonal anti-β1-sGC antibodies.
Expression in Sf9 Cells—To express the α1 splice forms in Sf9 cells, the coding regions of N1-, N2-, or C-α1 sGC were subcloned into the pVL1392 transfer vector. The baculoviruses producing the α1 splice forms were generated by recombination with BaculoGold DNA using the manufacturer's protocol (BD Biosciences). To obtain sGC enzyme containing the splice-form α1-subunits, Sf9 cells at 1.8 × 10^6 cells/ml were infected with the baculoviruses expressing the full-length β1 sGC subunit and the corresponding splice form at a multiplicity of infection of 2.
Assay of sGC Activity—Soluble guanylyl cyclase activity in lysates of BE2 or Sf9 cells was assayed by formation of [32P]cGMP from [α-32P]GTP at 37 °C as described previously (27). The concentration of DMSO used as a vehicle for BAY41-2272 did not exceed 0.1% and alone had no effect on sGC activity.
Statistical Analysis—All data are presented as means ± S.E. or S.D. Statistical comparisons between groups were performed by Student's t test. Nonlinear regression and calculations of EC50 and IC50 values were performed using GraphPad Prism 3.0 software (GraphPad Software); a sketch of this kind of fit is shown below.
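For illustration, the sketch below fits a four-parameter logistic (Hill-type) curve to a concentration-response series to extract an EC50, the same kind of nonlinear regression GraphPad Prism performs. The concentrations and activity values are invented placeholders, not data from this study.

```python
# Four-parameter logistic fit for EC50 estimation with scipy.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, n):
    """Sigmoidal concentration-response curve."""
    return bottom + (top - bottom) / (1 + (ec50 / conc) ** n)

conc = np.array([0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30])    # µM DEA-NO, hypothetical
activity = np.array([5, 8, 20, 55, 120, 180, 205, 210])  # cGMP synthesis, made up

popt, _ = curve_fit(hill, conc, activity, p0=[5, 210, 0.5, 1])
print(f"EC50 ~ {popt[2]:.2f} µM")
```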
RESULTS
Identification of α1 sGC Alternative Splice Variants in the NCBI Data Base—A previous report demonstrated the existence of alternative splicing for the α1 sGC in human tissues (24). We used the α1 sGC cDNA (GenBank™ accession number Y15723) to screen a human RNA data base available on the Human Genome web page (NCBI) for additional splice variants of sGC. This in silico analysis identified twelve unique α1 sGC cDNA sequences cloned from different human tissues (supplemental Table S1). Their comparison with human genome sequences revealed that they are all generated by alternative splicing from the α1 sGC gene.
Seven of these sequences encode the full-size α1 sGC protein, while five of them encode truncated α1 proteins (supplemental Table S1). Through alternative splicing, one of the identified α1 sGC mRNAs (GenBank™ accession numbers CR618242, CR614534) lost the non-coding exon 2 and the coding exons 8 through 10, but acquired an additional 131 bp at the end of exon 7. This mRNA encodes a protein that maintains the 363 N-terminal amino acids of α1 sGC, but has lost the catalytic domain because of a splice-generated frameshift (Fig. 1, A and B). Three new splice-specific amino acid residues and a premature stop codon were acquired (supplemental Table S2). We named this splice variant N1-α1 sGC (Fig. 1).
In the N2-α1 sGC mRNA (GenBank™ accession number BC012627), splicing eliminates exons 7 and 8 and introduces an additional 54 base pairs and a premature stop codon in exon 9. The N2-α1 protein retains only the first 126 residues of the α1 sequence, but acquires an additional 17 amino acids at the C terminus (Fig. 1, A and B, and supplemental Table S2).
The third identified α1 sGC splice variant, termed C-α1 sGC, lost 240 N-terminal amino acids, but maintained part of the regulatory domain and the complete catalytic domain (Fig. 1B). Interestingly, the same C-α1 protein is encoded by two differentially spliced species of mRNA (Fig. 1A). In one sequence (GenBank™ accession number AK226125, C-α1 in Fig. 1A), an alternative splice acceptor in intron 3 generates a 179-bp deletion in exon 4, eliminating the translation start site. The open reading frame (ORF) starts at an alternative methionine located in exon 7. In the other splice variant (GenBank™ accession number BX649180, C*-α1 in Fig. 1A), an alternative acceptor site results in a 140-bp deletion and a premature stop codon in exon 7. The alternative start codon in exon 7 restores the ORF. Thus, both the C-α1 and C*-α1 splice mRNA species encode the same C-α1 sGC protein.
Alternative Splicing of α1 sGC in Human BE2 Neuroblastoma—We have previously demonstrated that the human BE2 neuroblastoma cell line expresses high levels of functional sGC enzyme (28). Using the information on the structure of the α1 spliced mRNAs, we designed pairs of primers to amplify fragments specific for the N1-α1, N2-α1, C-α1, and C*-α1 splice forms (Fig. 1A). As depicted in supplemental Fig. S1, RT-PCR analysis showed that BE2 cells express all of the identified α1 splice mRNAs except C*-α1. Each of the splice-specific fragments was purified, and the identity of the splice form was confirmed by sequencing.
Expression of α1 sGC Alternative Splice Variants in Different Human Tissues—We next investigated the tissue-specific expression of the α1 sGC splice variants. We designed Q-PCR assays for the N1- and N2-α1 sGC splice forms based on their unique sequences inserted by alternative splicing (supplemental Table S3). We quantified the abundance of the full-length and spliced mRNAs in various human tissues. As expected, full-length α1 sGC mRNA was detected in the RNA of all tested tissues. N1-α1 sGC was observed at detectable levels in all human organs except bladder, testis, thyroid, placenta, and skeletal muscle (Fig. 2). N2-α1 sGC was present in all tissues, but at significantly lower levels than α1 sGC.
The absence of sequences specific only for the C-α1 or C*-α1 mRNA and the insignificant difference in size from the full-length sGC transcript (supplemental Table S1) precluded us from using Q-PCR or Northern blotting to estimate their levels and tissue distribution. Therefore, we used a semi-quantitative RT-PCR method with primers designed to detect the deletions specific for both the C- and C*-α1 sGC RNA species. The identity of the amplified fragments was confirmed by sequencing. The full-size α1 sGC transcript was detected in all tissues together with C-α1 sGC, albeit at different ratios (supplemental Fig. S2). However, C*-α1 sGC mRNA was not detected in esophagus, heart, kidney, liver, lung, bladder, brain, cervix, and colon (supplemental Fig. S3).
Effect of Expression of α1 sGC Alternative Splice Variants on sGC Activity—Next we tested whether the α1 sGC splice variants possess any catalytic activity or whether they affect the function of full-length α1/β1 sGC in BE2 cells. We selected several stable clones expressing N1-α1 or N2-α1 sGC, which were tagged with the FLAG epitope at the C-terminal end. The stable clone expressing the 41-kDa N1-α1 sGC protein was identified using anti-FLAG antibodies (BE2-N1αF in Fig. 3A). We measured the rate of cGMP production in the lysates of these cells in response to the NO donor DEA-NO, or DEA-NO in the presence of the heme-dependent allosteric regulator BAY41-2272. As demonstrated in Fig. 3B, the BE2-N1αF clone showed a significant decrease in sGC-dependent cGMP synthesis in comparison with parental BE2 cells. This lower sGC activity cannot be attributed to decreased expression of sGC, because Western blotting showed no significant decrease in the levels of the endogenous full-length α1- and β1-subunits (Fig. 3B). Lower cGMP synthesis was observed in all tested BE-N1αF clones (results not shown), suggesting that the N1-α1 sGC splice form acts as an inhibitor of sGC function. This conclusion was confirmed when the N1-α1 splice form was co-expressed together with the full-length α1- and β1-subunits in Sf9 cells. The addition of increasing amounts of virus producing the N1-α1 sGC protein correlated with a decrease of DEA-NO-, BAY41-2272-, and DEA-NO/BAY41-dependent α1/β1 sGC activity (supplemental Fig. S4). In contrast, we did not observe significant changes in NO- and/or BAY41-2272-dependent cGMP synthesis in lysates of the stable lines overexpressing the N2-α1 sGC splice form (results not shown).
We also selected BE2 stable lines expressing the C-α1 sGC splice variant tagged with the FLAG epitope. The BE2-CαF line was selected using antibodies raised against a sequence at the C-terminal end of the human α1 sGC subunit, which recognizes both the short C-type α1 sGC (54 kDa) and the full-length (83 kDa) α1-subunit (Fig. 3A). We found that the DEA-NO- or DEA-NO/BAY41-2272-stimulated cGMP synthesis in the lysates of the BE2-CαF clone was not significantly different from that of parental BE2 cells. On the other hand, the amount of α1-sGC protein in BE2-CαF was lower than in BE2 cells (Fig. 3A). It appears that the expression of the C-α1 variant can compensate for decreased levels of the full-size α1-sGC subunit through the formation of an active heterodimer.
Interaction of the C-α1 Splice Variant with the β1-Subunit—To confirm the interaction of C-α1 sGC with the β1 sGC subunit, we tested for co-immunoprecipitation of C-α1 sGC with the β1 sGC subunit from the BE2-CαF clone. As shown in Fig. 5A, polyclonal antibodies raised against the C terminus of the β1-subunit precipitated both the α1 and C-α1 variants. This immunoprecipitation correlates with the depletion of the C-α1 and α1 signals from the post-immunoprecipitation lysates (Fig. 5A). Moreover, the similar intensity of the co-precipitated signals suggests that both α1 and C-α1 heterodimerize with β1 equally well. Alternatively, immunoprecipitation of C-α1 sGC with anti-FLAG M2 affinity gel precipitated the β1 sGC subunit from the lysate of BE2-CαF stable clones, but not from BE2 lysates (Fig. 5A). The amount of β1 sGC was higher in the eluates treated with the FLAG peptide, consistent with a FLAG-specific elution.
FIGURE 2. Expression of α1 sGC, N1-, and N2-α1 sGC mRNA in human tissues. Expression was quantified by qPCR analysis and normalized to the levels of the ribosomal housekeeping gene 36b4. A Human Total RNA Survey Panel (Ambion) containing mRNA from 20 different tissues pooled from three healthy donors was used in this comparison.
Despite the clear inhibitory effect of the N1-α1 splice form (Fig. 3B and supplemental Fig. S4), we were not able to observe co-precipitation of the N1-α1 sGC with the β1-subunit (Fig. S5).
Level of the C-α1 Splice Form Is Not Affected by ODQ Treatment—A recent report indicated that heme-deficient sGC, or sGC with an oxidized heme prosthetic group, is more prone to degradation, which may contribute to endothelial dysfunction (29). In those studies, treatment of intact cells with the sGC inhibitor ODQ decreased the level of sGC protein through ubiquitin-dependent degradation. Thus, we tested whether the active C-α1/β1 heterodimer shows a similar response to ODQ. Indeed, as demonstrated in Fig. 6A, the amount of the full-length α1 band decreased after BE2-CαF cells were treated for 24 h with 20 µM ODQ. The C-α1 splice band, however, was not affected by the same treatment. A time course study showed that the intensity of the α1 band decreased shortly after ODQ administration in all tested cells, while the level of C-α1 even slightly increased with time (Fig. 6B). These data suggest that the C-α1 subunit protein is more stable against the intracellular processes that follow oxidation of the sGC heme.
DISCUSSION
Alternative splicing frequently occurs in eukaryotic genes and provides an important mechanism for tissue-specific and developmental regulation of gene expression. About 15% of mammalian gene mutations associated with pathological conditions affect RNA splicing signals (30).
A number of previous reports demonstrate that alternative splicing is an essential mechanism regulating the function of several members of the cGMP signaling pathway. For example, a splice-dependent deletion of 90-100 N-terminal residues of cGMP-dependent protein kinase I (PKGI) produces two splice-variant isoforms, Iα and Iβ, which have different tissue distributions and target specificities (31). Recent studies also demonstrated that PKGI splice isoforms respond differently to activation by hydrogen peroxide (32). Guanylyl cyclase B (GC-B) is another example, with two truncated splice forms. It was proposed that these splice forms regulate the function of the full-length subunit, and they show different tissue distributions (33).
Several reports suggested that splicing may also be a method of sGC regulation. For example, the α2i sGC splice variant, carrying an insertion in the catalytic domain, was detected in several human tissues (20). The α2i sGC splice variant has a dominant-negative function. In addition, the expression of two mRNA species of human α1 sGC with deletions in exon 4 correlates with decreased sGC activity in immortalized B-lymphocyte cell lines (24). These α1 sGC splice forms, however, were never isolated or characterized.
[FIGURE 3 legend, partial] Arrows indicate the protein band of appropriate size in Sf9 cells and the BE2-N1αF clone. B and C, identification of stable lines expressing the C-α1 variant by Western blotting with anti-α1 sGC antibody, and protein levels of endogenous α1 and β1 sGC in stable lines and parental BE2 cells. Arrows indicate the protein bands corresponding to α1 and C-α1 sGC in BE2-CαF. B, representative Western blot; C, densitometry analysis of α1 and β1 levels normalized to β-actin and expressed as percent of the expression in control BE2 cells taken as 100%. The values for the BE-N1αF and BE-CαF clones are presented as mean ± S.D. calculated from three independent lysates. D, rate of cGMP production in lysates of BE2 cells, BE2-N1αF, and BE2-CαF in response to DEA-NO (200 µM) alone or in combination with BAY41-2272 (2 µM). Data from six (for BE2 cells) or three (for BE2-N1αF and BE2-CαF clones) independent lysates measured in triplicate are presented as mean ± S.E. #, difference from BE2 cells is statistically significant (p < 0.05); *, difference is not significant (p > 0.05).
In this report, we characterized several newly identified splice forms of the α1-subunit of human soluble guanylyl cyclase. Analyzing the abundant data available in the NCBI data base, we identified a large number of individual α1 sGC transcripts. It is notable that the diversity of the uncovered α1 sGC transcripts (12 cDNAs, see supplemental Table S1) is much larger than for β1 sGC (5 cDNAs). This diversity does not appear to be unique to humans. Analysis of the mouse genome data base also identified the N1-α1 splice variant (GenBank™ number AK031305). Conservation of this particular splice form points to a functional importance of the N1-α1 variant in mammals. Most of these transcripts are produced by sequence rearrangements in the 5′- and 3′-untranslated regions, which are usually associated with altered post-transcriptional regulation of gene expression. For example, binding of the RNA-stabilizing protein HuR to the 3′-UTR of α1 sGC plays an important role in the post-transcriptional regulation of sGC expression in response to cyclic nucleotides and in aging in rats (15, 34, 35). Thus, the diversity of the 5′- and 3′-untranslated region sequences most likely reflects multiple selective mechanisms regulating the stability of α1 sGC transcripts in response to various extracellular and intracellular stimuli in different cell types or at different developmental stages.
We concentrated our studies on four RNA splice variants which encode truncated α1 sGC proteins, with deletions at the N or C terminus (supplemental Table S1 and Fig. 1). We found that human neuroblastoma BE2 cells express three (N1-α1, N2-α1, C-α1) out of the four alternatively spliced RNAs (supplemental Fig. S1). Quantitative PCR analysis of the N1-α1 and N2-α1 splice mRNAs demonstrates that the majority of human tissues express more than one splice variant, but the levels and the relative ratio of the α1, N1-α1, and N2-α1 splice forms are tissue-specific (Fig. 2 and supplemental Table S4). For example, N2-α1 is found in all tested tissues, while N1-α1 sGC is not detected in esophagus, bladder, testes, thyroid, placenta, and skeletal muscle (Fig. 2). Similarly, C-α1 sGC is detectable in all tested human tissues, although at a tissue-specific relative ratio (supplemental Fig. S2), while C*-α1 sGC was detected only in some tissues (supplemental Fig. S3). Taken together, these data indicate that the expression of the α1 sGC splice forms is independently regulated. It should be noted, though, that the levels of the N1-, N2-, C-, and C*-α1 splice mRNAs are on average lower than that of the full-size α1 sGC. Interestingly, the semi-quantitative PCR analysis suggests that in human adipose tissue the levels of C-α1 and C*-α1 mRNA may be collectively comparable with the level of α1 sGC mRNA. Because the comparative analysis reported here is based on RNAs extracted from entire organs, it might not represent the true abundance of individual spliced sGC transcripts at the cellular level. The cellular and subcellular localizations of individual α1 sGC splice variants in human tissues remain to be determined. Localization may significantly alter the functions of these splice forms beyond the properties reported here for BE2 neuroblastoma cells.
[FIGURE 4 legend, partial] The rates of cGMP production in Sf9 cell lysates expressing C-α1/β1 or α1/β1 sGC in response to various concentrations of DEA-NO were determined as described under "Materials and Methods." B, C-α1/β1 sGC and α1/β1 proteins show similar responses to BAY41-2272. The rates of cGMP production in Sf9 cell lysates expressing C-α1/β1 or α1/β1 sGC in response to various concentrations of BAY41-2272 were determined as described under "Materials and Methods." C, C-α1/β1 sGC and α1/β1 proteins have the same sensitivity to ODQ. Sf9 lysates with α1/β1 or C-α1/β1 heterodimers were mixed with different amounts of ODQ, and the rate of cGMP production induced by 50 µM DEA-NO was measured. Data from four independent experiments are presented as mean ± S.D.
In this report, we identified and characterized the N1-α1 and N2-α1 sGC splice proteins lacking the catalytic domain (Fig. 1). Although by themselves or in combination with the β1-subunit these truncated splice α1 proteins do not have any catalytic activity (data not shown), one of them displays dominant-negative properties. BE2 cells overexpressing N1-α1 sGC have significantly decreased NO- and NO/BAY41-2272-induced cGMP production despite an unchanged level of endogenous α1/β1 sGC (Fig. 3). Moreover, when co-expressed in Sf9 cells with the α1/β1 heterodimer, N1-α1 reduced NO- and BAY41-2272-dependent sGC activity in direct correlation with the expression level of the N1-α1 protein (supplemental Fig. S4). We were not able to detect any direct heterodimerization of the N1-α1 protein with the β1-subunit, even when both proteins were co-expressed in large quantities in Sf9 cells (supplemental Fig. S5). This suggests that the dominant-negative effect of N1-α1 is not due to direct competition with the full-length α1 for binding to the β1-subunit, but is rather indirect. Recent deletion mapping showed that the segment spanning residues 363-372 of the α1-subunit is important for the formation of the α1/β1 heterodimer (36). Because the N1-α1 splice protein constitutes the first 363 residues of the α1-subunit, it perhaps interferes with the heterodimerization function of the adjacent 363-372 segment of the full-length α1-subunit.
Direct regulation of guanylyl cyclase by dominant-negative splice variants has been proposed before. For example, the α2i sGC splice variant blocks the formation of a functional α1/β1 sGC heterodimer (20), while the truncated GC-B splice variants hinder the formation of active full-size GC-B homodimers (33). Considering the wide distribution of the N1-α1 splice form in different human tissues (Fig. 2 and supplemental Table S1), modulation of N1-α1 sGC protein expression may also be a regulatory mechanism controlling the amount of active sGC heterodimer.
In contrast to N1-α1, N2-α1 sGC had no effect on sGC activity in cell lysates when overexpressed in BE2 cells or co-expressed with α1/β1 in Sf9 cells. However, it cannot be excluded that this splice variant may serve another function besides directly affecting sGC activity. Our studies also demonstrate that the C-α1 sGC isoform clearly forms a fully functional sGC heterodimer with the β1 sGC subunit. The activity of the recombinant C-α1/β1 enzyme expressed in Sf9 cells was indistinguishable from that of α1/β1 sGC, at least with regard to the degree of activation by the NO donor and the allosteric activator BAY41-2272 and inhibition by ODQ (Fig. 4). The preservation of sGC activity in the BE-CαF stable line (Fig. 3) and the co-immunoprecipitation of the C-α1 sGC and β1 sGC subunits (Fig. 5) all support the conclusion that the C-α1 and α1 subunits are interchangeable. Also, these results suggest that the lack of 240 N-terminal amino acid residues in the α1 sGC subunit does not affect heterodimerization or enzyme activity. This observation does not support previous findings that the region between residues 61 and 128 of α1 is mandatory for heterodimerization (37). The properties of this naturally occurring splice variant, however, are in agreement with earlier reports showing that a significant portion of the N-terminal region of the α1 sGC subunit can be deleted without affecting NO sensitivity and heterodimerization (36, 38, 39).
The existence of α1-positive bands other than the 83-kDa full-length α1 sGC subunit has been mentioned in several previous reports. While screening selected human tissues with anti-α1 antibodies, Zabel et al. (40) observed in cortex, cerebellum, and lungs a band similar in size to the ~54 kDa C-α1 described in this report. Interestingly, the antibodies used were raised against the same epitope as in the current studies. Moreover, additional bands were also detected in crude lysates from human amygdala (41). The presence of several bands with electrophoretic mobility similar to C-α1 indicates either that the C-α1 form may undergo additional tissue-specific processing and/or modifications, or that additional, yet unidentified, α1 sGC splice forms exist.
Recent studies suggested that some conditions of endothelial dysfunction may be associated with the accumulation of oxidized and heme-free sGC, leading to a poor response to NO (29). The same report showed that in ODQ-treated cells sGC is subjected to ubiquitination and subsequent degradation. Although our current studies showed that the C-α1/β1 and α1/β1 heterodimers have identical sensitivity to ODQ, with similar IC50 values (Fig. 4), the C-α1 and α1 sGC subunits show opposite responses to ODQ-induced degradation (Fig. 5). Western blotting confirms that the 83-kDa α1 sGC band disappears after exposure to ODQ, while the level of the C-α1 protein does not decrease. These data suggest that C-α1 lacks the structural cues contributing to the decreased levels of α1 after ODQ treatment. The functional studies presented here prove that the C-α1/β1 heterodimer can fully compensate for sGC activity. In light of previously reported decreased levels of the α1-subunit in aged or diseased vessels (41-44), the expression of a more stable C-α1 may be a specific protective adaptation to these conditions. Thus, it is possible that the positive vasodilatory effects of BAY58-2667 in diseased blood vessels observed previously (29) are mediated by the more stable C-α1/β1 heterodimer. Future studies will show whether the C-α1 splice form may be, under certain circumstances, the main or the sole type of α1-subunit.
Although the functional studies suggest that the N-terminal fragment missing in the C-α1-subunit is not important for the activity of sGC, this region is preserved in evolution in the α1-subunits of all vertebrates. It is possible that this region, often referred to as the regulatory region (37,38), is responsible for the integration of NO/cGMP signaling with other regulatory pathways. In this case, despite similar catalytic properties, α1/β1 and C-α1/β1 heterodimers may be modulated differently by NO-independent mechanisms, e.g., protein modifications or protein-protein interactions.
cGMP-dependent kinase is one of the major effectors of cGMP generated by sGC in response to NO. Increased cGMP levels in neuronal cells lead to a PKGI-dependent phosphorylation of Splicing Factor 1 (SF1), which functions at early stages of pre-mRNA splicing and splice site recognition (45). Thus, it is appealing to speculate that sGC-dependent cGMP production may regulate sGC activity by affecting spliceosome assembly and/or switching between different splice sites during pre-mRNA processing. This, in turn, may affect the diversity of expressed α1 sGC splice proteins modulating the function of the sGC heterodimer.
In summary, our present study identifies and characterizes several new splice variants of the human ␣ 1 -subunit of sGC. The splicing of ␣ 1 mRNA seems to be ubiquitous and follows the domain organization of the ␣ 1 sGC subunit. Our findings point to new mechanisms of modulation of sGC function and activity that may have significant effects on nitric oxide and cyclic GMP signaling and perhaps pathophysiology. | 2018-04-03T01:01:46.532Z | 2008-05-30T00:00:00.000 | {
"year": 2008,
"sha1": "e14f91f6519c0706c1948e51c5555e04123d7fb1",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/283/22/15104.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "040841c0040049d146c84430f3c6c946439aae0e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
231847090 | pes2o/s2orc | v3-fos-license | Identifying Influential Nodes in Weighted Networks using k-shell based HookeRank Algorithm
Finding influential spreaders is a crucial task in the field of network analysis because of its numerous theoretical and practical applications. These nodes play vital roles in information diffusion processes such as viral marketing. Many real-life networks are weighted networks, but relatively little work has been done on finding influential nodes in weighted networks compared to unweighted networks. In this paper, we propose a k-shell-based HookeRank (KSHR) algorithm to identify spreaders in weighted networks. First, we propose a weighted k-shell centrality of a node u based on the k-shell value of u, the k-shell values of its neighbors v, and the edge weights w_uv between them. We model the edges present in the network as springs and the edge weights as spring constants. Based on Hooke's law of elasticity, we assume that a force equal to the weighted k-shell value acts on each node. In this arrangement, we formulate the KSHR centrality of each node using the associated weighted k-shell value and the equivalent edge weight, taking care of series and parallel combinations of edges up to 3-hop neighbors from the source node. The proposed algorithm finds influential nodes that can spread information to the maximum number of nodes in the network. We compare our proposed algorithm with popular existing algorithms and observe that it outperforms them on many real-life and synthetic networks using the Susceptible-Infected-Recovered (SIR) information diffusion model.
Introduction
In our day-to-day life, we come across numerous complex networks such as communication networks, social networks, biological networks, and the world wide web [1]. Such networks consist of a large number of nodes with non-trivial properties, leading to a variety of research problems [2]. Complex networks can be modeled as graphs G = (V, E), where V is the set of nodes and E the set of edges between nodes. Many complex networks are weighted networks, where each edge is associated with a weight referring to the duration or frequency of communication, collaboration, friendship, or trade between two entities [3]. For example, in social networks, the weight of an edge can be a function of time, affection, and exchange of services between two persons [4]. In other weighted networks, the weight often expresses the potential or capacity of the tie [5]. In airport networks, the edge weight between two cities can refer to the number of direct flight services between them. Usually, information or diseases are more likely to be transferred between nodes having high or frequent interactions.
Finding influential spreaders is one of the prominent research problems in the field of complex network analysis [7]. Influential nodes play crucial roles in many spreading phenomena such as viral marketing, influence maximization [6], rumor control [9], and epidemic spreading [10]. Selecting influential spreaders can maximize information coverage in product advertisement and viral marketing, or minimize damage in the case of rumor or epidemic spreading in the network. Influence maximization [14] aims to choose a constant number (k) of influential nodes as seed nodes such that information originating from them reaches the maximum number of nodes through the cascade triggered by a word-of-mouth strategy. The influence maximization activity comprises two phases: identifying the seed spreaders and the information diffusion phase. Researchers have effectively applied various epidemic spreading methods to model information diffusion in networks and to assess the influence spread triggered by the selected seed nodes [11]. The problem of identifying influential nodes in unweighted networks has been extensively studied in recent years; however, it is relatively less explored for weighted networks, and the majority of methods for weighted networks are simply extensions of existing unweighted methods that bring edge weight into the picture. In this paper, we propose a k-shell decomposition and Hooke's law of elasticity based algorithm, named KSHR, for locating influential spreaders in weighted networks. Our algorithm measures the influence of nodes in a setting where edges are modeled as springs and edge weights as spring (elasticity) constants. The springs, connected in series and parallel, elongate by a distance under an assumed constant force according to Hooke's law of elasticity, and this elongation serves as the equivalent spread distance between nodes in the network.
The contributions of our work are as follows: 1. We introduce a new algorithm for weighted k-shell decomposition and use the weighted k-shell value of a node as the force that stretches the edges connected to that node in a setting where the weighted graph is modeled as springs. 2. Based on the notions of weighted k-shell centrality and Hooke's law of elasticity for complex weighted networks, we propose the KSHR algorithm to identify influential nodes. 3. Experimental results on real-life and synthetic network datasets, based on various performance parameters, reveal that the proposed algorithm performs well compared to other existing popular methods.
The rest of the paper is organized as follows: Section 2 presents related work on weighted centrality measures, applications of modeling graph edges as springs, and the information diffusion model. Section 3 discusses the performance metrics and data-sets used in this paper. In Section 4, we describe the proposed algorithm, its time complexity, and its simulation on a toy network. Section 5 discusses the experimental results on each performance parameter for various datasets. Finally, in Section 6, we conclude our study.
Node Centrality
The early models in the field of influence maximization were mainly developed for unweighted networks, where all edges are equally important. In real networks, edges are associated with weights that should be taken into account while assessing the quality of nodes during information diffusion. When these aspects of topology are considered, we gain insight into what is most useful for increasing information propagation. The early advances in weighted networks extended centralities used for unweighted networks, such as DegreeRank, by additionally weighing the edges to obtain a weighted DegreeRank [16]. In a similar way, other centralities from unweighted networks were adapted to weighted graphs through numerical changes to the algorithms. Betweenness centrality considers the shortest paths through a node in an unweighted graph, and it was extended to a weighted version, giving the weighted-betweenness centrality [17,18]. Based on the idea of voting, researchers have proposed influence computation in unweighted as well as weighted networks, where the nodes receiving the maximum votes in each round are selected as spreader nodes [19,20,21]. The h-index is a measure of the impact of researchers based on the number of citations received, and by incorporating edge weights, Yu et al. proposed a weighted h-index centrality [22]. Weighted-eigenvector centrality applies to weighted networks and is based on the idea that a node is important if its neighbors are also important; it computes the centrality of a node as a function of the centrality of its neighbors [23].
Spring-based edges
Eades [24] proposed to model the edges of a network as springs in order to draw graphs by minimizing potential energy. This technique was later refined by Fruchterman et al. [25], who model nodes as electrical charges and edges as connecting springs; the electrical charges cause the nodes to repel one another. One of the most popular algorithms for drawing graphs is Kamada and Kawai's method, which models the edges of the graph as springs obeying Hooke's law [26,27]. The method determines the length of the spring between any two nodes by minimizing a global cost function. We argue for the applicability of the spring-based model to quantify node centrality and to discover influential spreaders.
Weighted K shell
The original k-shell decomposition was applicable only to unweighted graphs [28]. It is a classical algorithm that has been used to find influential nodes. An extension of the algorithm to weighted networks was performed by incorporating weights on biological networks by Eidsaa [29]. The algorithm was further extended based on interactions, and a hybrid k-core methodology was developed [30]. These centralities were improved by taking neighboring k-shells into account and computing a combined result by Maji and Wei [31,32]. We propose a weighted k-shell method that performs better in our context.
Information Diffusion Model
SIR Model: In this paper, we employ the popular stochastic susceptible-infected-recovered (SIR) model as the information propagation model [33]. The SIR model partitions the nodes of the network into three classes: susceptible (S), infected (I), and recovered (R). Susceptible nodes are those that may receive the information from an infected neighbor. In this model, the selected seed nodes are initially in the infected state, and all other nodes are in the susceptible state. In each subsequent iteration, the infected nodes infect their susceptible neighbors with probability β. Infected nodes then enter the recovered state at the following timestamp with probability γ. Once a node is in the recovered state, it cannot infect its neighbors further. Hence, according to this model, the influence spread caused by the selected seed nodes can be estimated by counting the number of nodes that became infected and later recovered during the SIR simulation.
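To make the diffusion model concrete, the following sketch simulates SIR spreading on an adjacency-list graph and estimates the influence spread of a seed set. It is our own minimal illustration rather than the authors' simulation code; the function name, the adjacency-list representation, and the default γ = 1.0 (recovery after one step) are assumptions.

```python
import random

def sir_spread(graph, seeds, beta=0.01, gamma=1.0, runs=100):
    """Average number of nodes ever infected, over `runs` SIR simulations.

    graph: dict node -> list of neighbors; seeds: initially infected nodes.
    """
    total = 0
    for _ in range(runs):
        infected, recovered = set(seeds), set()
        while infected:
            new_infected = set()
            for u in infected:
                for v in graph[u]:
                    # susceptible neighbors get infected with probability beta
                    if v not in infected and v not in recovered and random.random() < beta:
                        new_infected.add(v)
            # each infected node recovers with probability gamma
            still = {u for u in infected if random.random() >= gamma}
            recovered |= infected - still
            infected = still | new_infected
        total += len(recovered)
    return total / runs
```

With β = 0.01 and averaging over 100 runs, this mirrors the experimental setup described later in the paper.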
Performance Metrics
In this paper, we adopt a variety of evaluation metrics to fairly analyze the performance of the proposed algorithm and other existing popular algorithms. The following evaluation metrics are included in our study: (i) Influence spread or number of active nodes: It counts the total number of nodes that became active or infected at the end of the simulation of the information diffusion model due to the cascade effect triggered by the initial spreaders. In the SIR model of information dissemination, the selected seed nodes are the initially infected nodes. They try to infect their susceptible neighbors with rate β. This process continues as long as the states of nodes keep changing, and at the end, the total number of nodes that converted from the susceptible state to the infected state and then to the recovered state during the diffusion process is counted to measure the influence spread. It is one of the most important parameters to judge the effectiveness of an influence maximization algorithm, as it directly measures the overall spread of the information in the network originating from the selected spreaders. (ii) Kendall tau correlation (τ): Kendall tau correlation [?] measures the ordinal association between two measured quantities in statistics. It is utilized to determine the precise spreading influence of the selected spreader nodes. In our case, the Kendall tau correlation is used to find the correlation between the rank list (R_1) generated by the various influence maximization algorithms and the ground-truth ranking list (R_2) generated by the SIR model. A pair of seed nodes is called concordant if it is ordered the same way in list R_1 as in the actual rank list R_2; otherwise it is called a discordant pair. The value of the Kendall tau correlation lies in the range [-1, 1]. The Kendall tau correlation (τ) between two ranked lists R_1 and R_2 is computed as τ = (n_con - n_dis) / (n(n-1)/2), where n_con is the number of concordant pairs, n_dis is the number of discordant pairs, and n is the total number of nodes in each of the lists R_1 and R_2. (iii) Average distance between spreaders (L_s): This metric computes the average shortest path distance between the selected spreaders. A high value of the average distance between spreaders implies that the spreaders are chosen from diverse locations in the network and, therefore, can propagate information to a significant portion of the network. The average distance between spreaders is computed as L_s = (2 / (|S|(|S|-1))) Σ_{u,v ∈ S, u < v} d_uv, where S represents the set of seed nodes and d_uv represents the shortest path distance between a pair of seed nodes u and v in S.
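Both metrics follow directly from their definitions; the naive O(n²) sketch below treats rankings as dicts mapping nodes to rank positions and ignores ties (counted as neither concordant nor discordant) — representational simplifications of ours, not choices made in the paper.

```python
from itertools import combinations

def kendall_tau(rank1, rank2):
    """Kendall tau between two rankings (dicts node -> rank position)."""
    n_con = n_dis = 0
    for u, v in combinations(rank1, 2):
        s = (rank1[u] - rank1[v]) * (rank2[u] - rank2[v])
        if s > 0:
            n_con += 1  # pair ordered the same way in both lists
        elif s < 0:
            n_dis += 1  # pair ordered oppositely
    n = len(rank1)
    return (n_con - n_dis) / (n * (n - 1) / 2)

def avg_spreader_distance(dist, seeds):
    """L_s: mean shortest-path distance over all pairs of seed nodes.

    dist: dict-of-dicts with dist[u][v] = shortest-path length.
    """
    pairs = list(combinations(seeds, 2))
    return sum(dist[u][v] for u, v in pairs) / len(pairs)
```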
Methodologies
Hooke's law states that the force F needed to stretch a spring by a distance x is proportional to that distance, F = kx, where k is the spring constant (Eq. 3). In this section, we present the depiction of the edges in the network as springs connected in series and parallel, and we present the proposed KSHR algorithm. In a weighted network, weights commonly imply that the higher the weight, the stronger the connection. The same is valid for our KSHR centrality, since we know from Hooke's law that the larger the spring constant k and the smaller the force F, the less the spring is stretched (smaller x), as in Eq. 3. When we visualize the edges of the graph as springs, with the edge weight taking the role of the spring constant k, the stretch x is the quantity to be minimized between nodes for influence maximization. By calculating the stretch x, we can accommodate the various paths that exist between any pair of nodes. We describe the spring terminology below.
Parallel: When springs are placed in parallel, they combine into a single equivalent spring whose spring constant is the sum of the individual constants; under the same force, the combination stretches less, i.e., the equivalent spring is stiffer.
Series: When springs are placed in series, their compliances add up; the combination forms a softer equivalent spring that tends to stretch more than the individual springs that are connected.
Distance Calculation
When two springs of different spring constants, k_1 and k_2 respectively, are placed in series with each other, the equivalent spring constant k_eq is given by 1/k_eq = 1/k_1 + 1/k_2 (Eq. 4). When two springs of different spring constants, k_1 and k_2 respectively, are placed in parallel with each other, k_eq = k_1 + k_2 (Eq. 5). What this means in an actual network is that connections in series are stiff only if the strength of the ties in the individual connections is strong. It also implies that multiple connections from one node to another add up into a single connection, as seen in the case of parallel combinations. The equivalent distance between any two of these nodes, under a constant force, is then given as x = f / k_eq (Eq. 6), where f is the weighted k-shell value and k_eq is the equivalent spring constant of the edges between nodes i and j. We thus model the edges present in the graph as springs connected in series and parallel. As shown in Fig. 1, a force acts on each node and causes the springs to be stretched. For our method, we consider a constant force acting on each node, given by its weighted k-shell value. Thus, the force on every node is F_0 = wkshell[node], which stretches the edges connected to it. We propose a novel weighted k-shell value in this paper; the series/parallel bookkeeping is sketched in code below.
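The three relations above translate directly into code; the helper names below are ours.

```python
def series(k1, k2):
    # Eq. 4: 1/k_eq = 1/k1 + 1/k2
    return (k1 * k2) / (k1 + k2)

def parallel(k1, k2):
    # Eq. 5: k_eq = k1 + k2
    return k1 + k2

def stretch(f, k_eq):
    # Eq. 6: x = f / k_eq, with f the weighted k-shell value (the force)
    return f / k_eq
```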
Our model assumes that a node i is connected to a direct neighbor j by a spring with constant k equal to the edge weight w_ij; this gives k = w_ij as the spring constant for direct neighbors. By exploiting series and parallel combinations of springs, for a given source node we can also take its indirect neighbors into account for the information spread. The value of the equivalent spring constant k_eq for indirect neighbors can be computed using the formulas for series and parallel combinations (Eqs. 4 and 5). Using the value of k_eq, we compute the equivalent spring stretch distance (x) for direct and indirect neighbors. In our algorithm, we consider indirect neighbors up to 3 hops by performing a breadth-first search (BFS) from a given source node i. Let j be one of the (indirect) neighbors reached from i; we calculate the equivalent spring constant k_eq between nodes i and j, and the stretch distance x between them is given by Eq. 6. Our overall objective is to minimize the stretch distance x to each neighbor, because a smaller stretch distance means an increased spreading potential of seed node i, implying that information spread by i can easily reach j. Restating the same, we maximize 1/x = k_eq/f (Eq. 7). Assuming f = wkshell[node], we can easily see that the full measure of distance in this network is relative. We want to maximize 1/x so that spreading can happen, and we define the KSHR value of the given node i as the average of this value over all nodes j such that j ∈ N_i^3, where N_i^3 denotes the neighbors up to 3 hops (Eq. 8). [Algorithm 1: computation of the weighted k-shell values]
Proposed algorithm
The explanation of the algorithm is as follows: 1. We create a KSHR-value dictionary and initialize it for all nodes, as in Step 1.
2. Since we wish to minimize x, the stretch distance of the spring, we instead maximize 1/x = k/f, as in Eq. 7. Hence, we create a dictionary of spring constants in Steps 4 and 5 to calculate k. 3. The BFS proceeds for 3 hops from the start node in the loop spanning Steps 7 to 13; here N_i^3 denotes all neighbors up to 3 hops. 4. The BFS takes the nodes that have not been visited, initializes them with the value of the parent, and combines them in series in Step 9 using Eq. 4. 5. All nodes that occur again in the BFS traversal are assumed to be in parallel connections, and their contributions are added to the spring constant in Step 11, as in Eq. 5. 6. We calculate the inverse distance for a node by finding the equivalent spring constant k_eq between the neighbor and the current node and dividing it by the node's weighted k-shell value, as in Step 16, given by Eq. 7. 7. The KSHR value is calculated from these individual inverse distances, given by Eq. 8. 8. As the objective of influence maximization is to select the top c nodes, where c is a constant, the top c nodes having the maximum KSHR scores in the ranking are chosen as the influential spreaders. 9. We sort the KSHR-value dictionary and return it so that the top nodes can be chosen as influential spreaders. A minimal sketch of these steps is given below.
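The sketch below ties these steps together: a 3-hop BFS from each source, series combination along the discovery path (Eq. 4), parallel accumulation when a node is reached again (Eq. 5), and averaging of k_eq/F (Eqs. 7 and 8). It simplifies the paper's bookkeeping — paths are combined in BFS discovery order, and the force F is taken to be the source node's weighted k-shell value, both assumptions on our part — so treat it as an illustration of the idea rather than a faithful transcription of the pseudocode.

```python
from collections import deque

def kshr_scores(graph, weights, wkshell):
    """KSHR scores via 3-hop BFS with series/parallel spring combination.

    graph:   dict node -> list of neighbor nodes
    weights: dict (u, v) -> edge weight, stored for both orderings
    wkshell: dict node -> weighted k-shell value (assumed force F)
    """
    scores = {}
    for src in graph:
        k_eq = {}                      # equivalent spring constant src -> node
        visited = {src}
        queue = deque([(src, 0)])
        while queue:
            u, hops = queue.popleft()
            if hops == 3:              # stop expanding beyond 3 hops
                continue
            for v in graph[u]:
                if v == src:
                    continue
                w = weights[(u, v)]
                # series along the path from src through u to v (Eq. 4)
                k_path = w if u == src else (k_eq[u] * w) / (k_eq[u] + w)
                if v in k_eq:
                    k_eq[v] += k_path  # node reached again: parallel (Eq. 5)
                else:
                    k_eq[v] = k_path
                if v not in visited:
                    visited.add(v)
                    queue.append((v, hops + 1))
        # maximize 1/x = k/F (Eq. 7); KSHR value = average over reached nodes (Eq. 8)
        inv_stretch = [k / wkshell[src] for k in k_eq.values()]
        scores[src] = sum(inv_stretch) / len(inv_stretch) if inv_stretch else 0.0
    return scores
```

The top c spreaders are then obtained as sorted(scores, key=scores.get, reverse=True)[:c].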
Time Complexity Analysis
The time complexity of this algorithm is calculated as follows. We traverse each node of the graph and explore up to 3 hops of its neighbors. The steps in lines 15-34 give a time complexity of O(k^3 · n), where k is the average degree. If we assume that the diameter d of the network is fairly large, the average degree is k = n^(1/d), giving a loose upper bound of O(n^((d+3)/d)). In sparse graphs, k is a constant of much lower magnitude and the average complexity is of the order of O(n). [Algorithm 2: computation of the KSHR centrality]
Results and Analysis
We perform experiments with the proposed KSHR method along with contemporary centrality measures such as weighted degree, weighted betweenness centrality, weighted eigenvector centrality, and weighted VoteRank. The investigation has been performed on a toy network and three real-world networks of different nature, application, and size, listed in Tab. 1. We use the SIR model to compute the final infected scale, f(t_c), as a function of the spreader fraction, and the final infected scale in terms of increasing timestamps. The results were averaged over 100 SIR simulations. For simplicity, and to maintain consistency in the analysis across all data-sets, we chose the infection rate (β) as 0.01, meaning that an infected node can infect 1% of its neighbors randomly.
Simulation of the proposed algorithm on a toy network
In this section, we simulate the working of the proposed KSHR method using a toy network, as depicted in Fig. 2. The toy network that we use is randomly weighted and has 11 nodes with different edge weights. The force acting on each node is assumed to be equal to the weighted k-shell value of the node. The traditional k-shell values of the nodes are indicated using colors in Fig. 2: the red nodes have k-shell value 3, the blue nodes have value 2, and the green nodes have k-shell value 1. The traditional k-shell value helps us calculate the weighted k-shell value as in Eq. 9. Nodes are connected using weighted edges represented as springs, following our algorithm. We aim to find the equivalent spring between all direct and indirect neighbors of a given node using series and parallel spring combinations. This relation between series and parallel is illustrated through the example of nodes B and F. As shown in Fig. 3, the spring between B and E is in series with the spring from E to F; the value of the resulting spring B-E-F is found using Eq. 4. This resulting spring B-E-F is in parallel with the spring between B and F, and the net spring between B and F is given by Eq. 5. Let us consider the steps of the algorithm for a sample node B to find the KSHR value of B and understand the working of KSHR centrality. We begin processing the algorithm by taking node B and computing the spring constants for the 1-hop neighbors of B, namely A, C, E, D, and F, using the series connection of springs. After the 1-hop neighbors are covered, we obtain the 2-hop neighbors by combining series connections in parallel, as in the example of B, E, and F above. A similar calculation is performed taking each node in the network as the start node and using series and parallel combinations of the direct and indirect weighted edges to calculate the KSHR value given by Eq. 8. Table 2 summarises the computed KSHR values, and F is determined to be the top node for the toy network. The final infection scale f(t_c) is plotted with respect to the percentage of spreaders for three real-life data-sets with infection rate β = 0.01. We consider the percentage of influential spreaders used as seed nodes in the range of 2%, 4%, 6%, 8%, and 10% to plot the final infection scale. In Fig. ??, note that the number of nodes affected by the infection is maximum for HookeRank on the US-Airports network for most percentages of spreaders. In Fig. 4, HookeRank greatly exceeds the performance of the other algorithms as the spreader fraction increases. On the weighted PowerGrid data, shown in Fig. 5, HookeRank performs better than most other algorithms from an early stage. On the weighted PowerGrid data, shown in Fig. ??, the increase in the number of spreaders results in WVoteRank becoming marginally close to HookeRank, but our algorithm still performs better than all other algorithms in the simulation. Fig. ?? shows the final infection scale f(t_c) with respect to increasing timestamps with infection rate β = 0.01 and the top 7% influential nodes as seeds on the US PowerGrid network. Fig. ?? shows the final infection scale f(t_c) with respect to increasing timestamps with infection rate β = 0.01 and the top 5% influential nodes as seeds on the US PowerGrid network. Fig. ?? displays the final infection scale f(t_c) with respect to increasing timestamps with infection rate β = 0.01 and the top 5% influential nodes as seeds on the Facebook-like weighted network.
From the above results on three real-life networks, it is evident that HookeRank performs better than state-of-the-art methods such as weighted-degree centrality, weighted-betweenness centrality, weighted-eigenvector centrality, and weighted VoteRank, and also consistently outperforms recent methods like WVoteRank in terms of the final infected scale with respect to time t and spreader fraction p on real-world networks, as depicted in Tab. ??.
Conclusion
In this paper, we proposed the KSHR method for finding influential nodes in weighted networks by modeling the edges of the network as springs and the edge weights as spring constants. First, we derived a measure of the distance between indirect neighbors through series and parallel combinations of edges modeled as springs. We then proposed a new method of calculating the k-shell score on a weighted network. By computing the KSHR values of the nodes, our method locates the top spreaders in a given real-world network so that information can reach a large number of people and the spread of the information is maximized. We performed simulations of the proposed method along with contemporary methods on six real-life data-sets, taking as the basis of evaluation the average distance between spreaders, the Kendall tau correlation, and the final infected scale, and concluded that the proposed KSHR influence maximization centrality performs considerably well and is effective in real-life scenarios. | 2021-02-09T02:15:57.299Z | 2021-01-23T00:00:00.000 | {
"year": 2021,
"sha1": "a3998e03f35b9efe55d3447c8140c6daafb344be",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a3998e03f35b9efe55d3447c8140c6daafb344be",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
81532434 | pes2o/s2orc | v3-fos-license | Bilateral Luxatio Erecta : A Case Report
Luxatio erecta is a highly uncommon type of shoulder dislocation, with approximately 0.5% of all shoulder dislocations fitting into this category. In this paper, we present the case of a patient who presented post trauma with bilateral inferior glenohumeral joint dislocations (also called Luxatio Erecta). Given that Luxatio Erecta accounts for only 0.5% of shoulder dislocations, the probability of a patient presenting with two of these dislocations at the same time, assuming independence, is approximately 0.0025% (0.5% × 0.5% = 0.0025%). In addition, Luxatio Erecta frequently presents with injuries to the brachial plexus and/or a humeral fracture. Despite this, neither of our patient's dislocations was associated with any fracture or neurovascular injury, and both were successfully reduced in the Emergency Department. Both the patient's presentation and outcome are quite uncommon, which makes this case an invaluable opportunity to review the unique characteristics of Luxatio Erecta.
Introduction
Luxatio Erecta is a rare type of shoulder dislocation, accounting for approximately 0.5% of all shoulder dislocations [1]. It is associated with traumatic injuries that hyperabduct the arm, forcing the proximal humerus against the acromion and allowing the humeral head to disengage from the glenoid.
Classically, physical exam will show the affected arm hyperabducted with flexion at the elbow, and hand positioning superior or posterior to the patient's head. The examiner should be able to palpate the humeral head in the patient's axilla, along with an empty glenoid cavity [1]. Presenting vitals in the ED were a pulse of 68 bpm, respiratory rate of 18, and blood pressure of 158/74 mmHg. On physical exam, the patient had bilateral abducted and flexed arms, with his L hand superior to his head and his R hand superior and posterior to his head; an empty glenoid cavity bilaterally; and a palpable humeral head in the L axilla. No significant neurological or vascular deficits were found in either extremity. No other significant findings were present on exam.
Case Presentation
(Table 1) Bilateral shoulder and humeral head x-rays were ordered. Shoulder x-rays demonstrated bilateral inferior dislocations of the humeral head (Luxatio Erecta); no fractures were identified on the shoulder or humeral x-rays (Figure 1 and Figure 2). However, due to the patient's body habitus and his bilateral dislocations, proper positioning could not be obtained for the R humeral x-ray, and a fracture could not be entirely ruled out.
Prior to closed reduction of both glenohumeral joints, we had a conversation with the patient and his spouse concerning the risk of additional injury should there be an unseen fracture of his right humeral head. The patient and his spouse understood, and consented to closed reduction of both shoulders.
Closed reduction was carried out under conscious sedation with ketamine, using the traction-counter-traction technique. A sheet was wrapped around the L shoulder and both ends were pulled together at the R hip. The 1st physician was positioned at the head of the bed holding the proximal L arm, while the 2nd was positioned at the R waist, holding both ends of the sheet. The 1st physician pulled on the L proximal arm, creating axial traction, while the 2nd physician pulled on the sheet, creating counter-traction. During this, the 1st physician increased the intensity of abduction in the affected limb to provide additional pressure on the humeral head. The L shoulder was successfully reduced, and the R shoulder was successfully reduced with a mirrored setup. Neurovascular function remained intact bilaterally post reduction, and post-reduction x-rays demonstrated no fractures of the humeral head (Figure 1 and Figure 2). The patient was placed in bilateral slings and discharged home with 1-week follow-up with Orthopedic Surgery.
Discussion
Luxatio Erecta is a truly uncommon type of shoulder dislocation, and it is important for an Emergency Physician to be aware of its presentation when seen. A physician should know how to properly reduce Luxatio Erecta, as most instances can be successfully treated with non-operative management, but should also understand when surgical care is required. In addition, it is vital to remember the risks and complications associated with this injury.
Typical Presentation
On presentation, Luxatio Erecta will have the affected extremity held above or behind the patient's head, with the elbow flexed and the arm abducted. The glenoid cavity is empty, and the physician may palpate the humeral head in the axilla. On x-ray, the humeral head will be inferior to the rim of the glenoid, and the humeral shaft is parallel to the scapular spine [2]. With an assistant, wrap a sheet around the affected shoulder, with both ends of the sheet pulled towards the contralateral hip; this provides the required counter-traction. Following this, straighten the affected elbow while keeping the arm fully abducted. Then pull in line with the humeral shaft (if required, the assistant may apply additional force to the humeral head in a cephalad and lateral direction). When the humeral head is reduced into the glenoid fossa, slowly adduct the shoulder towards the body.
Reduction Techniques
The Two-Step Maneuver: Inferior Dislocation→Anterior Dislocation→Reduction.
On the affected side, push on the lateral aspect of the midshaft humerus, and pull on the medial epicondyle of the elbow. This converts the inferior dislocation to an anterior dislocation. Once this has been accomplished, the physician may reduce the anterior dislocation with a number of maneuvers [3] (two examples being the Milch technique or scapular manipulation).
Reduction Tips
Always ensure adequate relaxation and sedation prior to a reduction attempt.
Just like other closed reductions, a thorough pre- and post-reduction neurological exam is essential.
Always obtain post-reduction x-rays.
Complications
With an inferior dislocation, there is an associated risk of rotator cuff tear, with some citing a 12% incidence with Luxatio Erecta [4] [5]. Luxatio Erecta can also present with a concomitant fracture of the greater tuberosity. An article from the Journal of Orthopedic Trauma reviewed 80 cases of Luxatio Erecta and found that 80% presented with a rotator cuff tear or a fracture of the greater tuberosity [5]. While our patient had no neurovascular abnormalities, the same article estimates that 50%-60% of Luxatio Erecta patients have an associated brachial plexus injury [5]. Considering this, a thorough and well-documented neurological exam of the affected extremity should be performed before and after reduction. The same can be said for a vascular exam, due to the risk of injury to the axillary artery [6].
Surgical Intervention
Despite the high rate of success, Luxatio Erecta cannot be universally treated with closed reduction. Should the dislocation be irreducible, open, or associated with vascular injury, the patient will require full surgical intervention. In addition, a fracture of the acromion, clavicle, inferior glenoid fossa, or greater tuberosity will require surgical management [7].
s/p traumatic fall. Patient was hanging outdoor Christmas lights when his ladder fell out from under him. Patient held onto the edge of his roof, hanging from his arms, before he fell and landed on his back. Patient denied neck/back pain, along with loss of consciousness. He was given 10 mg of morphine by EMS prior to ED presentation.
At the time of writing, there are two methods for closed reduction of Luxatio Erecta: traction-counter-traction and the Two-Step Maneuver. Traction-Counter-traction:
Table 1. Summary of the clinical characteristics of the patient's presentation.
HPI: 70-year-old male s/p traumatic fall from a ladder, landing on his back. No LOC, back or neck pain. 10 mg IV morphine from EMS. Vitals: 68 bpm, RR 18, BP 158/74 mmHg. Physical exam: bilateral abducted and flexed arms, L hand superior to head and R hand superior + posterior to head. BL empty glenoid cavity. Neurovascular function intact bilaterally. | 2019-03-18T14:04:46.232Z | 2018-11-29T00:00:00.000 | {
"year": 2018,
"sha1": "0f3652906c4a8f75067e6bf555b05ce39f5fbf53",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=89200",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "203bb3e62116c066327b0d66c21cd96db8b1bd1d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245020766 | pes2o/s2orc | v3-fos-license | Accuracy of photogrammetry, intraoral scanning, and conventional impression techniques for complete-arch implant rehabilitation: an in vitro comparative study
Background To compare the accuracy of photogrammetry, intraoral scanning and conventional impression techniques for complete-arch implant rehabilitation. Methods A master cast containing 6 implant abutment replicas was fabricated. Group PG: digital impressions were taken 10 times using a photogrammetry system; Group IOS: intraoral scanning was performed to fabricate 10 digital impressions; Group CNV: splinted open-tray impression technique was used to fabricate 10 definitive casts. The master cast and conventional definitive casts were digitized with a laboratory reference scanner. For all STL files obtained, scan bodies were converted to implant abutment replicas using a digital library. The accuracy of a digitizer was defined by 2 main parameters, trueness and precision. "Trueness" was used to describe the deviation between test files and reference file, and "precision" was used to describe the closeness between test files. Then, the trueness and precision of three impression techniques were evaluated and statistically compared (α = 0.05). Results The median trueness was 24.45, 43.45 and 28.70 μm for group PG, IOS and CNV; Group PG gave more accurate trueness than group IOS (P < 0.001) and group CNV (P = 0.033), group CNV showed more accurate trueness than group IOS (P = 0.033). The median precision was 2.00, 36.00 and 29.40 μm for group PG, IOS and CNV; Group PG gave more accurate precision than group IOS (P < 0.001) and group CNV (P < 0.001), group CNV showed more accurate precision than IOS (P = 0.002). Conclusions For complete-arch implant rehabilitation, the photogrammetry system showed the best accuracy of all the impression techniques evaluated, followed by the conventional impression technique, and the intraoral scanner provided the least accuracy.
fitting prosthesis, either by digital or conventional impression techniques.
In the workflow of conventional procedures, the splinted open-tray impression technique is mostly used to transfer the implant positions from the patient's mouth through the impression material. The splinted open-tray impression technique provides acceptable clinical results, but it requires complicated procedures that are time-consuming and uncomfortable for the patient. Moreover, the accuracy of definitive casts is influenced by multiple factors, for instance, impression materials [3], the matching tolerance of components [4], and dimensional changes in the master cast [5].
Photogrammetry technology is a method of making precise measurements by using reference points in photographs [28][29][30]. As early as 1994, photogrammetry technology was introduced to implant dentistry to detect the marginal adaptation between the prosthesis and the implants [31]. In 1999, Jemt et al. [29] reported that photogrammetry technology could successfully record the implant replica positions of an edentulous mandible cast, and that the accuracy of this technology was comparable to that of the conventional impression technique. However, due to its complicated operation, photogrammetry technology was not further applied in clinical practice at the time. With the development of digital technology, commercially available photogrammetry systems provide a new method of implant impression making for edentulous patients. Some case reports have reported that photogrammetry systems can be successfully used for complete-arch implant impressions with high framework fit [32][33][34]. However, current studies on the accuracy assessment of photogrammetry systems are very scarce, and the results are inconsistent [17,35,36]. Moreover, previous studies have evaluated not the position of implant abutment replicas but the position of scan bodies on the implants, which may not represent true clinical procedures.
The purpose of this study was to compare the accuracy of three impression techniques for complete arch implant rehabilitation: photogrammetry, intraoral scanning, and conventional impression techniques. Accuracy consists of trueness and precision (ISO 5725-1, DIN55350-13) [37]. Trueness was used to describe the deviation between test files and reference file, and precision was used to describe the closeness between test files. The null hypothesis was that no significant difference would be found in accuracy among the three different impression techniques.
Methods
A maxillary polymer resin model containing 6 implant abutment replicas (RC 4.6 mm repositionable analog for screw retained abutments; Institute Straumann AG) was prepared by using a polymer 3D printer (S300; UnionTech) and polymer resin (Model V2.0; UnionTech). Then, a stone master cast was fabricated from the polymer resin model by taking a splinted open-tray impression. The impression was poured with type IV dental stone (Marmoplast N; SILADENT Dr. Böhme & Schöps GmbH), and this stone cast served as the master cast (Fig. 1). The depth and angulation of the implant abutment replicas are described in Table 1. From the master cast, three impression techniques were performed: digital impression using a photogrammetry system (group PG), digital impression using an intraoral scanner (group IOS), and conventional impression (group CNV). The master cast was digitized using a laboratory reference scanner (E4; 3Shape; Software version 2.1.4.2) with an accuracy of 4 μm and exported as a standard tessellation language (STL) file to serve as the reference file. The laboratory reference scanner was calibrated prior to every scan.
For group PG, scan bodies (ICamBody; Imetric4D Imaging Sàrl, Software version 9.1.79) were positioned and hand tightened on each implant abutment replica on the master cast (Fig. 2A), and a photogrammetry system (ICam4D; Imetric4D Imaging Sàrl) was used to digitize the master cast according to the manufacturer's recommendations under room lighting conditions. The photogrammetry system was calibrated prior to every scan. The master cast was scanned ten times repeatedly without changing the position of the scan bodies, and a total of 10 STL files were obtained (Fig. 2B).
For group IOS, an intraoral scanner (TRIOS 3; 3Shape; Software version 19.2.2) and scan bodies (CARES Mono Scan body for screw-retained abutment; Institute Straumann AG) were used to fabricate 10 digital impressions under the same room lighting conditions. All the scan bodies were brand new, and the intraoral scanner was calibrated prior to every scan. After the scan bodies were screwed onto the implant abutment replicas on the master cast by hand tightening (Fig. 3A), the digital scan began from the occlusal surface of the scan body in the left molar area, continued to the contralateral right molar area, then went to the palatal surfaces of the scan bodies, and finally covered the buccal surfaces of the scan bodies. This scan pattern was in accordance with the manufacturer's recommendations. The scanning was repeated ten times without changing the position of the scan bodies, and a total of 10 STL files were obtained (Fig. 3B). The scanning procedures were performed by an operator with 5 years of clinical experience with intraoral scanning.
For group CNV, abutment-level impression copings (RC 4.6 mm impression coping for screw retained abutments; Institute Straumann AG) were connected to the implant abutment replicas on the master cast by hand tightening, and the impression copings were splinted using autopolymerized acrylic resin (Pattern Resin; GC).
To reduce the polymerization shrinkage of the resin splint, the resin splint was sectioned and reconnected (Fig. 4A). A custom tray was fabricated using light-cured resin (LC-tray; Müller-Omicron GmbH & Co. KG). Tray adhesive (Tray adhesive; DMG) was applied 10 min before impression making, and the definitive impression was taken using the custom tray and polyether impression material (Impregum Penta Soft; 3M ESPE). Impression procedures were performed in a room with a constant temperature of 22-25°C. Four minutes later, impressions were removed from the master cast, and the implant abutment replicas were repositioned into the copings. The definitive cast was poured with Type IV dental stone (dentoststone 220; dentona AG) according to the manufacturer's instructions (Fig. 4B). The conventional impression procedures were repeated 10 times to fabricate 10 definitive casts. Then, a dental laboratory reference scanner (E4; 3Shape; Software version 2.1.4.2) with an accuracy of 4 μm was used to digitize the 10 definitive casts, and a total of 10 STL files were obtained. All the STL files were imported into dental CAD software (exocad DentalCAD; exocad), and the scan bodies were converted to implant abutment replicas using a digital library (Fig. 5A) [15]. The updated STL files were then imported into inspection software (Geomagic Control X; 3D Systems) for trueness and precision assessment. Two STL files were superimposed using the "best fit algorithm", and the three-dimensional discrepancy between the 2 STL files was evaluated by the root mean square (RMS) error calculated by the inspection software. A colorimetric map of the results was then exported, with the surface tolerance of these deviations set to 20 μm (Fig. 5B). Trueness was evaluated by superimposition and three-dimensional comparison between the reference file and the test files, yielding a total of 10 RMS values in each group; precision was used to evaluate the three-dimensional deviation of pairwise comparisons of files within the test groups, yielding a total of 45 RMS values in each group [10,16,38,39].
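For orientation, the RMS error the inspection software reports corresponds to the following computation over aligned point sets. This is a simplified sketch of ours: it assumes point correspondences have already been established by the best-fit alignment, whereas Geomagic Control X works with closest-point distances between whole meshes.

```python
import numpy as np

def rms_deviation(reference, test):
    """RMS error between corresponding points of two aligned scans.

    reference, test: (N, 3) arrays of corresponding vertex coordinates,
    already registered with a best-fit (ICP-style) alignment.
    """
    diff = np.asarray(reference) - np.asarray(test)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))
```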
Statistical evaluation was performed using an analysis software program (IBM SPSS Statistics, v25; IBM Corp). The Shapiro-Wilk test revealed that the data were not normally distributed. Differences between groups in trueness and precision were evaluated using the Kruskal-Wallis test, and the Dunn-Bonferroni test was performed for post hoc analysis. The level of significance was set at α = 0.05.
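The same pipeline — a Kruskal-Wallis test followed by Dunn's test with Bonferroni correction — can be reproduced outside SPSS. The sketch below uses scipy together with the third-party scikit-posthocs package; the RMS values shown are placeholders for illustration, not the study's measurements.

```python
from scipy import stats
import scikit_posthocs as sp

# Placeholder RMS trueness values in micrometers (illustrative only)
pg  = [24.4, 23.9, 25.1, 24.6, 24.2, 24.9, 24.3, 24.7, 24.5, 24.0]
ios = [43.5, 42.8, 44.1, 43.2, 43.9, 42.6, 44.4, 43.0, 43.7, 43.3]
cnv = [28.7, 29.2, 28.1, 28.9, 28.4, 29.5, 28.2, 28.8, 29.0, 28.6]

h, p = stats.kruskal(pg, ios, cnv)  # omnibus test across the 3 groups
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4g}")

# Pairwise post hoc comparisons (Dunn's test, Bonferroni-adjusted p-values);
# rows/columns 1, 2, 3 correspond to PG, IOS, CNV.
print(sp.posthoc_dunn([pg, ios, cnv], p_adjust="bonferroni"))
```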
Results
The trueness and precision are reported as the median and interquartile range (IQR) of the RMS values (Tables 2, 3). The statistical power of the analysis was greater than 80%. The Kruskal-Wallis test indicated significant differences for both trueness (P < 0.001) and precision (P < 0.001). Table 4 presents the results of the post hoc analysis.
The median precision was 2.00 (IQR 1.65), 36.00 (IQR 9.95), and 29.40 (IQR 4.80) μm for groups PG, IOS, and CNV, respectively; group PG gave more accurate precision than group IOS (P < 0.001) and group CNV (P < 0.001), and group CNV showed more accurate precision than group IOS (P = 0.002). A boxplot of the precision of the three impression techniques is shown in Fig. 7.
Discussion
This study compared the accuracy of photogrammetry, intraoral scanning, and conventional impression techniques on an edentulous maxilla stone cast with 6 implant abutment replicas. The null hypothesis was rejected, as significant differences were found among the three test groups. For both trueness and precision, the photogrammetry system tested showed the best outcomes, followed by the conventional impression technique, with the intraoral scanner evaluated coming last.
At present, research on the accuracy of photogrammetry systems is still very scarce, and the results are inconsistent. Tohme et al. [35] reported that the photogrammetry system exhibited better accuracy than the intraoral scanner and the conventional impression technique, which is consistent with the results of this study. However, Revilla-León et al. [17] reached a different outcome from this study: compared with the intraoral scanner and the conventional impression technique, the photogrammetry system tested showed the least accuracy. Another study, also by Revilla-León et al. [36], suggested that the photogrammetry system was less accurate than the conventional impression technique. Comparing the 2 previous studies with opposite results, the different outcomes may be due to different study designs involving the reference files and measurement methods. In previous studies, the reference files were obtained by a coordinate measuring machine, and the linear and angular deviations were then evaluated. In this study, the reference file was obtained by a laboratory reference scanner, and the accuracy was then assessed by the root mean square error. Compared with a laboratory reference scanner, a coordinate measuring machine exhibits better accuracy and repeatability, but it is less suited to measuring free-form surfaces; in addition, given the size and shape of its spherical probe, it cannot detect complex and undercut areas, which may influence the accuracy of the reference file. This is also the reason why the laboratory reference scanner was chosen in this study. As reported in multiple studies [18,23,40], the 2 STL files were superimposed through the "best fit algorithm" in this study. The standard "best fit alignment" uses an iterative closest point (ICP) algorithm to align the STL files, which is not affected by operator factors. The alignment is performed by minimizing the error between the distances of corresponding data points [41]. The main limitation of this method is that inherent errors inevitably occur during the superimposition process, which has a certain impact on the accuracy evaluation. This inaccuracy was reduced by using the root-mean-square error to measure 3D deviations, as RMS values offset the positive and negative deviations of the "best fit" between the reference file and the test file. This kind of method has been used in many studies [10,16,38,39]. Different studies have investigated the accuracy of intraoral scanners in complete-arch implant rehabilitation, but there is no consensus. Some reports have shown that the accuracy of intraoral scanners can be comparable to that of conventional impression techniques [9,10,[12][13][14]17], whereas some studies have revealed that the conventional impression technique is still more accurate than intraoral scanners [11,15,16]. In this study, the RMS values for trueness and precision in group IOS were both significantly higher than those of the conventional impression technique, and the results indicated that intraoral scanning is still less accurate than the conventional impression technique.
The possible explanation for this result is that the 3D images obtained by intraoral scanners are generated by a series of image stitches; a longer scanning path may lead to the accumulation of error, and the lack of stable identification markers on the mucosal surface also influences the accuracy of intraoral scanning. Previous literature has shown that, compared with partial dental arch scans, intraoral scans with larger scan areas have greater deviation [25,42]. Another study [40] also suggested that the accuracy of the subsequently scanned quadrant was lower than that of the first scanned quadrant. Different techniques offering stable landmarks between implants have been described to facilitate intraoral scanning procedures. An in vivo study indicated that the use of an auxiliary geometric part significantly improved the accuracy of intraoral scanning for implant-supported complete-arch prostheses and facilitated the scanning process itself [43]. Another in vivo study also suggested that an extensional structure on the scan body could significantly improve scanning accuracy, but this in vitro study showed that the conventional impression technique is still more accurate than intraoral scanning [16]. Whether these techniques can actually improve accuracy still needs to be further explored in vitro and in vivo.
The photogrammetry system overcomes the limitations of intraoral scanners in obtaining the locations of implant abutments in complete-arch implant rehabilitation. Intraoral scanners generate 3D images by a series of image stitches, and a longer scanning path may lead to the expansion of error [25,38,42]. In contrast, the photogrammetry system takes all measured data in each picture and generates direction vectors giving the exact position of the scan bodies relative to one another with the help of reference points. This approach makes it possible to calculate the locations of the scan bodies without superimposing pictures, which potentially ensures greater accuracy. Additionally, the photogrammetry system has multiple cameras with a larger scanning range and a faster scanning speed. The scanner acquires images outside the mouth, which minimizes the influence of saliva, blood, and the humid environment on accuracy. However, the photogrammetry system has certain limitations; it only records the positional information of the implant abutments in the patient's oral cavity. Therefore, other procedures are needed to obtain soft tissue information.
This study compared the three-dimensional positions of the implant abutment replicas, under the assumption that the accuracy of the implant abutment replica positions is more important than that of the peri-implant mucosa in complete-arch implant rehabilitation cases; therefore, the locations of the scan bodies were converted to implant abutment replicas using the digital library. There is, however, an inherent connection error between different components [4], and the location of the scan bodies may not represent the true position of the implant abutment replicas. Nevertheless, connection errors are clinically inevitable: digital impression techniques require only 1 connecting procedure to obtain the location of the implant abutments, while the conventional impression technique requires 2 connecting procedures.
There are still a few limitations to the present study. This in vitro study could not completely simulate a patient's oral situation; it avoids the influence of oral saliva, gingival crevicular fluid, the humid environment, mucosal mobility, and limited patient mouth opening. These advantages may make the accuracy reported here higher than the accuracy achieved with an intraoral scanner or photogrammetry system in clinical applications. Further studies are needed to explore the accuracy of different photogrammetry systems, as well as the impact on it of the number of implants and of inter-implant distance, angle, and depth. The present study provides a certain degree of support for the clinical application of photogrammetry systems, but more in vivo and in vitro studies are needed to verify their effectiveness.
Conclusions
Within the limitations of this in vitro study, the following conclusions were drawn: 1. The photogrammetry system obtained the lowest 3D discrepancy in terms of trueness and precision for the implant abutment positions. 2. The intraoral scanner tested resulted in the highest 3D discrepancy for both trueness and precision, representing the least accuracy among the three impression techniques tested. 3. The trueness and precision of conventional impression technique were both less accurate than photogrammetry system, but both more accurate than intraoral scanner. | 2021-12-12T06:16:17.765Z | 2021-12-01T00:00:00.000 | {
"year": 2021,
"sha1": "d294ee8ec25f4d70ab5ec03616cc1c3e2a9b4523",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12903-021-02005-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ced9ca8140e55c8810c9d9a4d73e85dd070908db",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
56196866 | pes2o/s2orc | v3-fos-license | Government Subsidy and Crash Risk
Using a sample of companies listed on the Chinese GEM between 2009 and 2015, we examine the impact of government subsidy on companies' future stock price crash risk and explore how earnings information opacity moderates the relation between government subsidy and crash risk. We find that: 1) government subsidies to listed companies increase their crash risk; 2) firms with higher information opacity are exposed to higher stock price crash risk; 3) considering the interaction of opacity and government subsidy, the positive correlation between government subsidy and crash risk is weakened in a high-information-opacity environment. With further analysis, we find that government subsidy dominates the earnings management level estimated by the Jones model when measuring a firm's information opacity. This paper not only enriches the study of the external factors influencing crash risk, but also broadens the study of government subsidy efficiency and provides a new decision basis for investors to assess firms' earnings information quality.
Introduction
In recent years, stock markets have frequently been exposed to sharp falls. Late in June 2015, thousands of shares in the Chinese stock market fell by 10%, the maximum allowed in one day; the SH index fell nearly 1000 points and the GEM index shrank by more than 25%. At the beginning of 2016, the Chinese stock market put on "crash 3.0", with nearly 7 trillion yuan of market value evaporated. The phenomenon of collapse has brought great challenges to financial market stability, and the topic of crash risk has attracted much market attention. In the academic area, Jin & Myers (2006) took the lead in clarifying the formation mechanism of crash risk from the perspective of information theory: executives, for various motivations, try to hide bad news; once it accumulates to a certain threshold, the bad news is released to the market all at once, leading to a sharp fall in the stock price. Within this framework, subsequent studies have examined the motivations of executives for hoarding bad news, such as the pursuit of equity incentives (Kim et al., 2011) and excess compensation (Xu et al., 2014). Some also explore the factors affecting stock price crash risk from the company's internal characteristics, such as the disclosure of internal control information (Ye Kangtao et al., 2015), the shareholding ratio of institutional investors (Cao Feng et al., 2015), the ownership of large shareholders (Wang Huacheng et al., 2015) and overinvestment (Jiang Xuanyu & Xu Nianxing, 2015). Moreover, some investigate it from the external environment, such as political factors (Piotroski et al., 2015), analysts' optimistic bias (Xu Nianxing, Jiang Xuanyu et al., 2012), religion (Callen & Fang, 2012) and other aspects.
On the other hand, government subsidy is a way for the government to allocate resources. The characteristics of high technology content and emerging-industry clustering make GEM listed companies highly favored by the government. As of December 31, 2014, a total of 419 companies had gained government subsidies, with the cumulative amount reaching more than 34 billion yuan. Such a huge amount of government subsidies, delivered through financial returns, tax incentives, special subsidies, innovation awards and the like, has become a major source of corporate profits, providing convenience for enterprises' earnings management behavior. That is why government subsidy is repeatedly criticized as "the GEM's last straw". However, on the question of the efficiency of government subsidy, there is no consistent conclusion. Most scholars believe that government subsidies bring free cash flow to enterprises, improving their short-term solvency (Tzelepis & Skuras, 2004; Zou et al., 2006) and helping improve short-term performance (Chen Xiaohe & Lee Jing, 2001). But in the long term, subsidies do not promote the profitability of enterprises (Tzelepis & Skuras, 2004) and may even have adverse effects (Leng Jianfei & Wang Kai, 2007). Tang & Luo (2007) investigated the function of government subsidy for listed companies in terms of social and economic benefits and found that it does not have a significant impact on economic benefits. Yu Minggui and his colleagues (2012) suggest that the consequences of government subsidy depend on the degree of political connection.
Taken altogether, we have not found any literature that discusses the relationship between companies' stock price crash risk and government subsidy. Therefore, this paper approaches the efficiency of government subsidy from the perspective of crash risk and then discusses its functioning mechanism.
Using a sample of GEM listed companies that received government grants between 2009 and 2014, we discuss the effects of government subsidy on GEM stock price crash risk. We find that: 1) government subsidy to listed companies significantly increases their crash risk; 2) the higher the information opacity, the higher the GEM stock price crash risk; 3) considering the cross effect of information opacity and government subsidy, the positive correlation between government subsidy and crash risk is weakened when opacity is high. Further analysis shows that, as a proxy variable for information opacity, government subsidy performs better than the earnings management level measured by the Jones model.
The contribution of this paper is reflected in the following aspects. First, this paper studies the impact of government subsidy on a company's stock price crash risk. It not only enriches the emerging literature on crash risk, but also provides a new research path for the efficiency of government subsidy. Second, information opacity is an important factor in crash risk, and the earnings management level from the Jones model is often taken as its proxy. Our study shows that government subsidy dominates this common proxy, which provides a new way to measure information opacity.
The paper proceeds as follows. Section 2 reviews prior literature and develops our hypotheses. Section 3 describes the sample, variable measurement and research design. Section 4 presents the empirical results. Section 5 shows the further analysis and Section 6 the robustness tests. Section 7 concludes.
Theoretical Analysis and Research Assumptions
Stock price crash risk refers to the phenomenon of a company's share price falling sharply. Romer (1992) was the first to reveal the cause of this phenomenon from the perspective of information disclosure. Jin & Myers (2006) confirmed this formation mechanism with cross-country data and found that, under the motivations of salary contracts, career concerns and empire building, executives choose to hide negative news. When negative news accumulates to a certain threshold, it pours into the market all at once, resulting in a crash of the company's stock price. Hutton et al. (2009) took the level of earnings management as a proxy for corporate information opacity and found that the more opaque a company's information, the more its share price tended to crash. Francis et al. (2012), discussing from the perspective of the reliability of financial information, concluded that real earnings management behavior pushes up future stock price crash risk. Therefore, the root of stock price crashes is information asymmetry: as executives hold more information, they tend to adopt opportunistic behavior that adversely affects the company's share price performance. Under the imperfect institutions in our country, when firm-specific information content is lower, crash risk is higher (Jin & Myers, 2006; Piotroski & Wong, 2010). Based on this, hypothesis 1 is proposed: H1: with other conditions unchanged, the higher the company's information opacity, the higher the crash risk of the company's share price.
Government subsidy is an important way for the government to intervene in the economy. Research on the efficiency of government subsidies first focused on the employment rate and then extended to enterprise performance. Chen & Li (2001) found that local governments conducted earnings management for local listed companies through government subsidies for the sake of winning local resources, but this behavior brings about serious distortion of accounting information. Although government subsidies can improve corporate performance in the short term (Zou et al., 2006), they conceal the firm's operating problems. Companies that obtain government grants may show lower firm-specific information content and be exposed to higher crash risk. Shi et al. (2014) found that in places with a high degree of marketization, the government undertakes fewer interventions in business and earnings management is kept at a low level, so that stock price crash risk is also relatively low. Based on this, we put forward hypothesis 2: H2: with other conditions unchanged, the higher the level of government subsidy to the company, the higher the crash risk of the company's stock price.
Opacity of information is an enterprise's own characteristic. Government subsidy, as a channel for earnings management by listed companies, to a certain extent reduces firm-specific information content, so a crash of the company's share price is more likely to occur. Kim & Zhang (2012) found that the negative correlation between accounting conservatism and crash risk is more prominent in companies with higher information opacity. Based on this, we propose hypothesis 3: H3: with other conditions unchanged, high information opacity will enhance the positive correlation between government subsidy and crash risk.
Sample Selection and Data Sources
The initial sample comprises firm-year observations for which government subsidy information is available in CSMAR. In addition, we collect: 1) CSMAR daily stock files to estimate our measures of firm-specific crash risk; 2) firm-level accounting data from Gildata annual files. We restrict our CSMAR sample to common industries, excluding the finance and insurance industry. We also exclude stocks with fewer than 26 weekly return observations per year. Our final sample consists of 1350 firm-year observations for the years 2009-2015. In addition, we winsorize continuous variables, except the proxy variables for crash risk, at the 1% and 99% levels, and cluster standard errors at the industry level.
First, we estimate firm-specific weekly returns from the following expanded market index model regression for each firm and year:

$$r_{i,t} = \alpha_i + \beta_1 r_{m,t-2} + \beta_2 r_{m,t-1} + \beta_3 r_{m,t} + \beta_4 r_{m,t+1} + \beta_5 r_{m,t+2} + \varepsilon_{i,t}$$

where $r_{i,t}$ is the weekly return, with cash dividends reinvested, on stock i in week t; $r_{m,t}$ is the return on the GEM value-weighted market index, with cash dividends reinvested, in week t; and $\varepsilon_{i,t}$ is the residual of the regression. We then define the firm-specific weekly return as $W_{i,t} = \ln(1 + \varepsilon_{i,t})$. Thus, we construct the following two indicators: 1) the negative coefficient of skewness of firm-specific weekly returns (NCSKEW),

$$NCSKEW_{i,t} = -\frac{n(n-1)^{3/2} \sum W_{i,t}^{3}}{(n-1)(n-2)\left(\sum W_{i,t}^{2}\right)^{3/2}}$$

where n is the number of observations of firm-specific weekly returns during fiscal year t. A high value of NCSKEW indicates serious negative skewness and thus a high level of stock price crash risk.
2) the down-to-up volatility of firm-specific weekly returns (DUVOL),

$$DUVOL_{i,t} = \ln\left[\frac{(n_u - 1)\sum_{down} W_{i,t}^{2}}{(n_d - 1)\sum_{up} W_{i,t}^{2}}\right]$$

where $n_u$ and $n_d$ are the numbers of up and down weeks over fiscal year t, respectively. For any stock i over a one-year period, we separate the weeks with firm-specific weekly returns above (below) the mean of the period and call these the "up" ("down") sample. We then calculate the sum of the squares of $W_{i,t}$ for the "up" and "down" samples separately. Similar to NCSKEW, a large value of DUVOL indicates a high level of stock price crash risk.
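In code, both measures can be computed directly from one firm-year's vector of firm-specific weekly returns. The following Python sketch is a minimal illustration of the two formulas above; the array `W` of residual-based weekly returns is assumed to have already been obtained from the index-model regression.

```python
import numpy as np

def ncskew(W):
    """Negative coefficient of skewness of firm-specific weekly returns.

    W: 1-D array of one firm-year's firm-specific weekly returns
    (regression residuals transformed by ln(1 + e), roughly mean zero).
    Larger values indicate higher crash risk.
    """
    n = len(W)
    num = -(n * (n - 1) ** 1.5) * np.sum(W ** 3)
    den = (n - 1) * (n - 2) * np.sum(W ** 2) ** 1.5
    return num / den

def duvol(W):
    """Down-to-up volatility: log ratio of the return volatility of
    'down' weeks (below the annual mean) to that of 'up' weeks."""
    down = W[W < W.mean()]
    up = W[W >= W.mean()]
    n_d, n_u = len(down), len(up)
    return np.log(((n_u - 1) * np.sum(down ** 2))
                  / ((n_d - 1) * np.sum(up ** 2)))
```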
Earning Information Opacity
We use the company's earnings management level as a proxy variable for corporate information opacity. The company's operating earnings management level is obtained through the Jones model:

$$\frac{Accruals_{i,t}}{A_{i,t-1}} = \alpha_1 \frac{1}{A_{i,t-1}} + \alpha_2 \frac{\Delta S_{i,t}}{A_{i,t-1}} + \alpha_3 \frac{PPE_{i,t}}{A_{i,t-1}} + \varepsilon_{i,t}$$

In this model, $Accruals_t$, measured as the difference between the net cash flow from operating activities and operating profit, indicates the operating earnings management level; $A_{t-1}$ refers to total assets at the beginning of the year; $\Delta S_t$ is the change in primary business income $S_t$; and $PPE_t$ is the net worth of fixed assets.
We take the absolute value of $\varepsilon_{i,t}$ as the company's earnings management level in year t and, referring to the model used by Hutton et al. (2009), we measure firm i's information opacity in year t as the moving average of its three lagged values, that is:

$$Acc_{i,t} = \frac{1}{3}\left(|\varepsilon_{i,t-1}| + |\varepsilon_{i,t-2}| + |\varepsilon_{i,t-3}|\right)$$
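The two-step construction, year-by-year Jones regressions followed by a three-year moving average of absolute residuals, could be coded as in the following sketch. The column names (`accruals`, `assets_lag`, `d_sales`, `ppe`) are hypothetical placeholders, not the paper's actual data fields.

```python
import pandas as pd
import statsmodels.formula.api as smf

def jones_opacity(df):
    """df: firm-year panel with columns
    ['firm', 'year', 'accruals', 'assets_lag', 'd_sales', 'ppe'].
    Returns the panel with |Jones residual| (em) and its 3-year
    lagged moving average (opacity) added."""
    df = df.copy()
    # Scale every Jones-model term by lagged total assets
    df['acc_s'] = df['accruals'] / df['assets_lag']
    df['inv_a'] = 1.0 / df['assets_lag']
    df['ds_s'] = df['d_sales'] / df['assets_lag']
    df['ppe_s'] = df['ppe'] / df['assets_lag']
    parts = []
    for _, g in df.groupby('year'):  # cross-sectional estimation per year
        # '- 1' suppresses the separate intercept; a1 / A plays that role
        fit = smf.ols('acc_s ~ inv_a + ds_s + ppe_s - 1', data=g).fit()
        parts.append(g.assign(em=fit.resid.abs()))
    out = pd.concat(parts).sort_values(['firm', 'year'])
    # Opacity: moving average of the three lagged |residuals|
    out['opacity'] = (out.groupby('firm')['em']
                         .transform(lambda s: s.shift(1).rolling(3).mean()))
    return out
```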
Control Variables
Following prior literature (Hong, 2001; Xu Nianxing, 2012), we employ the following control variables: company size (Size i,t), market-to-book ratio (Pb i,t), return on total assets (Roa i,t), asset-liability ratio (Da i,t), the mean of the company's firm-specific weekly return rate (Ret i,t), its standard deviation (Sigma i,t), abnormal turnover rate (Yturn i,t), and fund shareholding ratio (Fund i,t). We also introduce industry and year dummy variables in all regressions to control for industry and year effects. The definition of each variable is shown in Table 1.
Model Designation
To test hypotheses 1 and 2, this paper builds the following models:

$$NCSKEW_{i,t+1} = \beta_0 + \beta_1 Acc_{i,t} + \beta_2 Lngg_{i,t} + \gamma' CONTROL_{i,t} + Industry + Year + \varepsilon_{i,t} \quad (5)$$

$$DUVOL_{i,t+1} = \beta_0 + \beta_1 Acc_{i,t} + \beta_2 Lngg_{i,t} + \gamma' CONTROL_{i,t} + Industry + Year + \varepsilon_{i,t} \quad (6)$$

We expect that in these models the coefficient of information opacity, $Acc_{i,t}$, is positive, as is the coefficient of government subsidy, $Lngg_{i,t}$. To examine how information opacity shapes the correlation between government subsidy and stock price crash risk, this paper introduces the cross term of information opacity and government subsidy, obtaining:

$$NCSKEW_{i,t+1} = \beta_0 + \beta_1 Acc_{i,t} + \beta_2 Lngg_{i,t} + \beta_3 Acc_{i,t} \times Lngg_{i,t} + \gamma' CONTROL_{i,t} + Industry + Year + \varepsilon_{i,t} \quad (7)$$

$$DUVOL_{i,t+1} = \beta_0 + \beta_1 Acc_{i,t} + \beta_2 Lngg_{i,t} + \beta_3 Acc_{i,t} \times Lngg_{i,t} + \gamma' CONTROL_{i,t} + Industry + Year + \varepsilon_{i,t} \quad (8)$$

If the result is consistent with our assumptions, the coefficient of the cross term $Acc_{i,t} \times Lngg_{i,t}$ should be positive.
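A possible estimation of these specifications with Python's statsmodels is sketched below. The DataFrame `panel` and its column names are placeholder assumptions; the industry-level cluster adjustment follows the description in the sample section and assumes a panel with no missing rows.

```python
import statsmodels.formula.api as smf

controls = 'Size + Pb + Roa + Da + Ret + Sigma + Yturn + Fund'

# Models (5)/(6): opacity and subsidy entered as separate regressors
base = smf.ols(f'NCSKEW_lead ~ Acc + Lngg + {controls}'
               ' + C(Industry) + C(Year)', data=panel).fit(
    cov_type='cluster', cov_kwds={'groups': panel['Industry']})

# Models (7)/(8): add the Acc x Lngg cross term to test hypothesis 3
cross = smf.ols(f'NCSKEW_lead ~ Acc * Lngg + {controls}'
                ' + C(Industry) + C(Year)', data=panel).fit(
    cov_type='cluster', cov_kwds={'groups': panel['Industry']})
print(cross.params['Acc:Lngg'])  # expected positive under H3
```

Note that `Acc * Lngg` in a statsmodels formula expands to the two main effects plus their interaction, so the cross-term coefficient is reported under the name `Acc:Lngg`.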
Descriptive Statistics
As Table 2 states: 1) The average values of NCSKEW and DUVOL are −0.440 and −0.150, respectively, while the standard deviations are 0.950 and 0.330. This indicates that the variation of NCSKEW among sample firms is relatively large, while DUVOL is relatively stable. 2) The median value of Lngg is 15.590, slightly lower than its mean of 15.640; among the GEM sample firms, firms with high government subsidy account for a higher proportion than those with low government subsidy. 3) The average value of Acc is 0.080, indicating that GEM listed companies generally conduct earnings management.
Correlation Analysis
As Table 3 shows, the correlation coefficient between NCSKEW t+1 and DUVOL t+1 is 0.9141, significant at the 1% level, suggesting that these two indicators are consistent measures of stock price crash risk. The correlation coefficients between government subsidy and these two indicators are 0.0871 and 0.0861, statistically significant at the 1% level. The positive relationship shows that, without considering other factors, the higher the government subsidy, the higher the one-year-forward crash risk, consistent with hypothesis H2. However, information opacity does not have a significant relationship with future crash risk. In addition, the relationship between government subsidy Lngg t and the firm's return on assets in the same period, Roa t, is positive, suggesting that government subsidy can help improve the firm's short-term operations.
Regression Analysis
Table 4 shows the regression results of models (5), (6), (7) and (8). In regression equations (a) and (b), the coefficients of information opacity (Acc) were 0.391 and 0.134, significant at the 5% level, consistent with hypothesis 1. In equations (a) and (b), the coefficients of government subsidy (Lngg) were 0.0150 and 0.009, respectively; with the latter significant at the 1% level, we deduce that the higher the level of government subsidy, the higher the one-year-ahead crash risk of the company's share price, so hypothesis 2 is validated. With regard to the control variables, Fund, Ret, Size and Roa were all significantly positive at the 1% level, suggesting that, with other conditions unchanged, companies with a higher proportion of fund shareholdings, higher firm-specific return rates, bigger firm size and greater return on total assets are more inclined to suffer stock price crashes in the capital market, consistent with previous studies on the whole.
In regression equations (e) and (f), the coefficients of government subsidy (Lngg) and company information opacity (Acc) were both significantly positive, verifying hypotheses 1 and 2 again. After adding the cross term of government subsidy (Lngg) and company information opacity, the coefficients of government subsidy (Lngg) remained positive and were both significant at the 10% level. In regression equation (g), the coefficient of the cross term was −0.648, significant at the 5% level. However, this result is incompatible with our assumption: it suggests that in companies with low information opacity the positive relationship between government subsidy and future crash risk is not suppressed, while under high opacity the relationship pulls back.
As for the other variables, the results are basically the same as those in Table 5.
Further Analysis
Previous empirical results show that government subsidy increases the future crash risk of the company's stock price, which illustrates to a certain degree that management tends to conceal negative company information through government subsidies.
As we assumed, this method should be adopted more generally in a highly opaque information environment. Nevertheless, our regression results in Table 5 show that the positive relationship between government subsidy and crash risk is not inhibited in companies with low information opacity. Does this mean that, as a measure of information opacity, government subsidy is better than the quantitative indicator from the Jones model? To verify this, we first take government subsidy as the dependent variable and company information opacity as the independent variable in the following regression:

$$Lngg_{i,t} = \delta_0 + \delta_1 Acc_{i,t} + \gamma' CONTROL_{i,t} + Industry + Year + Res_{i,t}$$

With the residual obtained (defined here as Res), we replace the government subsidy in models (5) and (6) with Res. The models are as follows:

$$NCSKEW_{i,t+1} = \beta_0 + \beta_1 Res_{i,t} + \beta_2 Acc_{i,t} + \gamma' CONTROL_{i,t} + Industry + Year + \varepsilon_{i,t}$$

$$DUVOL_{i,t+1} = \beta_0 + \beta_1 Res_{i,t} + \beta_2 Acc_{i,t} + \gamma' CONTROL_{i,t} + Industry + Year + \varepsilon_{i,t}$$

As stated in Table 6, in all regression equations the residual term Res shows a positive effect on both indicators of stock price crash risk, significant at the 10% and 1% levels respectively. This supports our conjecture that government subsidy is a better measure of information opacity.
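The residual-based test is a simple two-stage procedure; under the same placeholder names as in the earlier sketch, it could look like this.

```python
import statsmodels.formula.api as smf

controls = 'Size + Pb + Roa + Da + Ret + Sigma + Yturn + Fund'

# Stage 1: strip the opacity-related component out of subsidy
stage1 = smf.ols(f'Lngg ~ Acc + {controls} + C(Industry) + C(Year)',
                 data=panel).fit()
panel['Res'] = stage1.resid  # subsidy orthogonal to Jones-model opacity

# Stage 2: does the orthogonalized subsidy still predict crash risk?
stage2 = smf.ols(f'NCSKEW_lead ~ Res + Acc + {controls}'
                 ' + C(Industry) + C(Year)', data=panel).fit()
print(stage2.params['Res'])  # positive and significant per Table 6
```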
Conclusions
We investigate whether government subsidy is associated with future stock price crash risk. Using a large sample of companies listed on the Chinese GEM from 2009 to 2015, we find robust evidence that government subsidy is positively related to one-year-ahead stock price crash risk. These findings enhance our understanding of the role of government subsidy in predicting future stock price crash risk and corroborate our explanation of its role in the hoarding of bad news by managers.
Our empirical results also show that earnings information opacity significantly increases share price crash risk. However, the positive relation between government subsidy and future crash risk is more salient for firms with low information opacity, which is inconsistent with prior literature. Further analysis suggests that government subsidy is a better proxy variable for information opacity than the earnings management level measured by the Jones model. This provides a new way of studying information opacity.
These findings also suggest that companies with a higher proportion of fund shareholdings, higher firm-specific return rates, bigger firm size and greater return on total assets are more inclined to suffer stock price crashes in the capital market. Hence, our study may provide investors with an effective strategy to help predict and avoid future stock price crash risk in their portfolio investment decisions.
Collectively, this study demonstrates how government subsidy is related to higher moments of the stock return distribution. However, the government subsidizes companies in a variety of ways, including funding, tax returns, fiscal interest discounts, etc. This study does not explore the association between different methods of government subsidy and stock price crash risk. Would different methods of subsidy show different effects on stock price crash risk? This is a promising area for further exploration.
Table 1 .
Definition of variables.
Fund i,t: Firm i's fund shareholding ratio in year t. Yturn i,t: Abnormal turnover rate, calculated as the difference between the annual turnover rate in year t and the annual turnover rate in year t − 1.
Table 3 .
Pearson correlation coefficient of main variables.
Table 5 .
Regression results with cross term.
• As for the proxy variable of information opacity, we directly use the company's accrual earnings management level measured by the Jones model (model 5) without the moving average, and the results are similar to those above. In addition, we also use the modified Jones model to measure the accrual earnings management level; regardless of whether smoothing is applied, the regression results are basically the same. • In the screening of samples, we relaxed the requirement on the volume of weekly stock return data, excluding samples with fewer than 13 weekly return observations per year instead of the 26 we adopted initially. The results show little difference from our initial results. | 2018-12-18T21:03:38.845Z | 2016-08-31T00:00:00.000 | {
"year": 2016,
"sha1": "fe6f0cfc762ffd1e1cdbea8938bfc430bba17575",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=71028",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "fe6f0cfc762ffd1e1cdbea8938bfc430bba17575",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
213008299 | pes2o/s2orc | v3-fos-license | The health worker motivation and competence in the utilization of MCH handbook In Bireuen District
Mothers and children are the groups most vulnerable to various health problems, such as illness and nutritional disorders, that often end in disability and death; increasing the independence of families in maintaining maternal and child health is therefore a main program of the Ministry of Health (MOH and JICA, 2009). One effort made to realize family independence in maintaining maternal and child health is the use of the MCH (KIA) handbook to enhance the knowledge and skills of the family, as a means of promotion and prevention in order to reduce morbidity and mortality among mothers and children.
I. Introduction
The degree of public health is very closely related to the Maternal Mortality Rate (MMR) and Infant Mortality Rate (IMR), which are the main targets of the health sector in the National Long-Term Development Plan (RPJMN) 2015-2019, namely enhancing the health and nutritional status of mothers and children (Presidential Decree No. 2, 2015).
Mothers and children are the groups most vulnerable to various health problems, such as illness and nutritional disorders, that often end in disability and death; increasing the independence of families in maintaining maternal and child health is therefore a main program of the Ministry of Health (MOH and JICA, 2009). One effort made to realize family independence in maintaining maternal and child health is the use of the KIA book to enhance the knowledge and skills of the family, as a means of promotion and prevention in order to reduce morbidity and mortality among mothers and children.
The MCH handbook contains explanations of the standards of antenatal care, the danger signs of pregnancy, the signs of labor, the care of infants and toddlers, the early detection of infant growth, and immunization; through the KIA book, families can improve pregnant women's and families' knowledge about the health of mothers (pregnancy, childbirth, and postpartum) and babies (newborn to age 6 years), as well as about how to care for maternal and child health (MOH, 2016). Implementation of a policy or program, to achieve its objectives, needs to involve all the components that act in the implementation of activities, such as the organization, procedures, and techniques that realize the expected goals (Ayuningtyas, 2014). Supporting infrastructure and human resource capacity, both financial and human, are needed in the implementation process to guarantee that activities achieve the expected outcomes (Ayuningtyas, 2014).
One capacity that supports the objectives of an organization is its human resources. Human resources, according to Wake (2012), are one of the important resources within the organization for achieving its goals, making the human role a competitive force that can distinguish it from other organizations. These human resources include health professionals. Predisposing factors in the delivery of health services consist of the human factors of individuals' willingness and ability to carry out their duties. The effort made by an individual, in the form of persistence and consistency, is that individual's motivation, which provides strength and direction in achieving organizational goals (Robbins, 2015). Theories of motivation that serve as a basis for the conduct of leaders in the field include the Theory X and Theory Y of Douglas McGregor; this theory gives insight from two different viewpoints, a negative angle (Theory X) and a positive angle (Theory Y). Theory X holds that people basically do not like work and need direction or compulsion to do their jobs; in turn, Theory Y states that workers view work as natural, so that on average they can learn, accept work, and even take responsibility for the job.
Furthermore, the two-factor theory of Herzberg addresses satisfaction and job dissatisfaction. The theory suggests that the opposite of satisfaction is not dissatisfaction, so removing the dissatisfying characteristics of a job does not necessarily make the job satisfying. Herzberg emphasizes factors related to the work itself or to results obtained directly from the job, such as the opportunity for promotion, personal growth opportunities, recognition, responsibility, and achievement. Research conducted by Elly Nur on the use of the KIA book as counseling material in antenatal care performed by health center midwives showed a significant relationship between motivation and the use of the KIA book by midwives. Likewise, research conducted by Faridah (2015) showed that strong motivation of health workers provides a 2.5-fold boost to pregnant women's use of the KIA book.
Further research by Nawawi on the influence of health worker motivation on health center (puskesmas) performance outcomes found a significantly large effect of 0.60 (standard deviation). In harmony with this, research by Farida (2015) found that good support from health workers encourages mothers 2.5 times as much in the utilization of the KIA book. Meanwhile, research on midwives' motivational factors in compliance with completing the KIA book found this to be an obstacle behind the book's low utilization, with the KIA book properly used by only 2.2% of health professionals (midwives) (Sistiarani, 2014).
II. Research Method
The research method is qualitative with a phenomenological approach. The research was conducted in four health centers in remote and very remote areas of the working area of Bireuen district, taking 21 informants for in-depth interviews. Informants were selected with a purposive sampling technique based on particular considerations. The instrument in this study was the researchers themselves (human instrument), guided by an open questionnaire (guided questionnaire); interviews with informants were conducted face to face, using the prepared open questionnaire as well as notebooks and recorders. The data analysis technique was descriptive-analytic, starting from describing the characteristics of informants and categorizing the data to be summarized in matrix form, through to drawing conclusions from the analyzed data.
Characteristics of Needs
Many factors contribute to seeking health care, which is realized when benefits are perceived by the individual; in other words, perceived need is the basic factor that directly causes individuals to seek care and act on it. Interviews were conducted with 21 informants representing the working area of Bireuen district on the implementation of the KIA book as part of the government's efforts to improve the knowledge of mothers, families and communities and to change attitudes and behaviors about maternal and child health. Each informant said that the use of the KIA book is carried out and has been part of the KIA program, so they automatically have to implement the policy.
In the description above, optimal implementation of the use of the KIA book starts with health personnel preparing activities such as defining target pregnant women, KIA book logistics, preparation for promoting the use of the KIA book, and the schedules of the mothers' class and Posyandu, although this does not rule out other views and needs in optimizing the implementation of the KIA book. This is reflected in what one mother said: "There is nothing to do other than just weighing."
And a further statement: "Never read it (with a laugh). I come and take the KIA book, put it back into the plastic bag hanging on the bedroom door when I get home, take it again the next month, and so on; it cannot be said that I read it at home."
Characteristics of the community indirectly become factors that support or hinder the implementation of activities. The social status of the majority is low to middle, coloring the situation in this very isolated region, where most of the population are farmers with a low level of formal education. Low education and low public awareness of healthy living and of maintaining family health are phenomena that often occur among people in very remote areas.
The many health problems experienced by people in remote areas have led the government to run supporting programs to improve health, among other ways through the use of the KIA book, introduced in 1993 to support the improvement of community knowledge and to serve as a medium for health personnel to document the health of pregnant women, babies and toddlers.
Research on the utilization of the KIA book shows that its use by health workers can improve the health knowledge of pregnant women and mothers. The health worker factor is one of the supports in efforts to improve public health status; it includes the motivation and competence of health professionals, which affect the community's views of and needs for the KIA book. This is in line with research by Faridah (2015) showing that strong motivation of health workers gives pregnant women a 2.5-fold boost in the use of the KIA book.
The Influence of Health Worker Motivation on the Use of the KIA Book
There are many factors that cause health workers to increase their motivation to do the job. These factors include the individual's own characteristics and a low sense of awareness of and ownership over job responsibilities. In addition, support from the leadership and guidance and supervision by program managers should also be improved. If supervision and guidance by both program managers and district health centers are not adequate, the KIA book will only be used as recording material, without serving as a medium of promotion and prevention to improve the level of public health. Knowing one's job duties is a part that must be considered, since it allows one to find out what needs to be done in providing services.
The service standards for midwifery care and the care of infants and toddlers performed by health workers, such as the standards for antenatal care, early detection of infant growth, immunization and others, are listed in the KIA book, which is used as a medium of recording and information for both health professionals and the public (Ministry of Health, 2016).
From the interviews, the motivation and competence of health professionals in the use of the KIA book can be seen in several interview transcripts. Among them is the motivation of health personnel in performing their job duties, as in one village midwife's statement: "They thought I was just helping with childbirth; for other services it is fifty-fifty." Another midwife's statement is "to provide information only briefly." There are also factors that make routine work boring in providing services, as in the statement of one village midwife: "The problem is the boredom factor of explaining the same thing over and over." Other factors, such as responsibility for the work, also need to be noticed, as in the following statement by the KIA coordinator: "When there is a fostered village, the village midwives assume it is not their area of responsibility." Looking at the phenomenon in the informants' statements from the standpoint of Theory X and Theory Y, it can be interpreted, from the negative viewpoint, that they basically do not like the job, and that coercion or direction is therefore necessary in performing their duties.
Good motivation can produce good performance, as in the research by Elly Nur on the use of the KIA book as counseling material in antenatal care performed by health center midwives, which showed a significant relationship between motivation and the use of the KIA book by health center midwives.
Viewed in theory, motivation is a potential factor influencing the performance of organizations in providing services to the community, including health care, because motivation is not merely about working hard; it also concerns ability and confidence in achieving predetermined goals (Robbins, 2015). As the MCH coordinator stated about the motivation of midwives in managing service activities: "There is a lack of preparation by midwives in preparing activities, and a lack of motivation of midwives in delivering material to increase the knowledge of pregnant women."
The influence of motivation can have an impact on the achievement of organizational performance, as in the research conducted by Nawawi on the influence of health worker motivation on public health center performance outcomes, which found a significantly large effect of 0.60 (standard deviation). This is in harmony with the results of research conducted by Farida (2015), showing that good support from health workers encourages mothers 2.5 times as much in the use of the KIA book.
The Influence of Health Personnel Competence on the Use of the KIA Book
At the time of the interview, informants' expressions in response to questions about what is done from the preparation of activities through to completion revealed that they "do not know about the guidelines for the use of the KIA book, or what the conditions for its use are," with confused facial expressions and embarrassed laughter. Furthermore, the Head of District Nutrition and KIA stated: "Communication is less than effective between the sender of the message (the midwife) and the receiver of the message (the cadre) in carrying out their job duties, so the information does not reach the right target." Likewise, regarding the competencies that must be exercised in accordance with the technical instructions for the use of the KIA book, a village midwife stated: "There are also things that surely must be recorded."
The KIA book guidelines state that completeness of recording in the KIA book is an essential part of monitoring the health of mother and child, and this is an important part of making maximal use of the KIA book. Research conducted by Sistiarani (2014) on midwives' motivational factors in compliance with completing the KIA book found this to be an obstacle, with the KIA book properly used by only 2.2% of health professionals (midwives). Utilization of the KIA book, which has been socialized and undertaken since 1993, is grounded in the objectives of reducing maternal and infant mortality rates and increasing knowledge through empowerment of families and communities for the better. Ripley and Franklin, in Winarno (2008), argue that the implementation of a policy whose output or authority has been established must deliver the program with real inputs and the program objectives expected by the government and officials.
IV. Conclusion
The MCH handbook is one of the Government's means of increasing public knowledge so that families can maintain their health through the educational information contained in the KIA book. Furthermore, through good and proper use, the KIA book becomes documentation of the medical history of pregnant women, infants and young children, part of the monitoring of their health status, and onward information for other referral agencies such as hospitals.
Based on the information and data obtained in the field, the researchers noted several issues for improvement and further development, among others: 1. The characteristics of such a remote population deserve great attention; low education and the low social status of the majority farming population mean that needs are not perceived and are therefore not acted upon. 2. Health personnel need greater awareness in performing the duties that are part of their job responsibilities, to be done without prompting. 3. More serious supervision and oversight by program management are needed to raise awareness of the KIA book as a medium of education and information for the public. | 2020-02-06T09:14:05.398Z | 2020-01-31T00:00:00.000 | {
"year": 2020,
"sha1": "ed6067121ab506096d86024e213da939cf273c90",
"oa_license": "CCBYSA",
"oa_url": "https://biarjournal.com/index.php/bioex/article/download/148/177",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "f808ce6794e79edcbe330f475d91babe7d6d1400",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
92544552 | pes2o/s2orc | v3-fos-license | Metabolic Profile and Hormonal Status Comparison Between Primiparous and Multiparous Non-Cyclic Cows
Abstract Several reports have indicated that a large proportion of dairy cows have not resumed cyclicity by day 60 after calving. These cows are traditionally classified as non-cycling (anoestrous or anovular) cows. Static ovaries (SO; lack of luteal tissue and of follicles >8 mm, with progesterone <0.5 ng/mL) could be a possible underlying reason contributing to non-cycling status. Although SO affects both primiparous (PP) and multiparous (MP) cows, PP cows are more prone to be non-cycling than MP cows. Therefore, this study aimed to compare the metabolic profiles and hormonal status between non-cycling PP and MP cows diagnosed with SO. One hundred and twenty-one animals that did not express signs of oestrus by day 60 postpartum were grouped by parity (PP, n=58 and MP, n=63), then blood sampled and examined using transrectal ultrasonography. Blood samples were collected before the ultrasonographic examination. Of these, 42 PP (72.4%) and 28 MP (44.4%) cows were diagnosed as non-cycling (bearing SO). Serum concentrations of triglycerides, cholesterol, total protein and albumin did not differ between parity groups. The glucose concentrations in PP cows (1.43 ± 0.59 mmol/L) and MP cows (1.69 ± 0.71 mmol/L) did not differ; however, both were below the normal physiological concentration. In addition, no differences were detected between parity groups in the concentrations of NEFA, β-HBA, progesterone and estradiol. In summary, we conclude that non-cycling PP and MP cows bearing SO have similar hormonal status and metabolic profiles.
INTRODUCTION
The early resumption of ovarian cyclicity following parturition has a great impact on the reproductive efficiency of dairy cows. Achieving high reproductive efficiency generally requires early onset of ovarian activity, with insemination and conception within 90 days after calving, leading to once-a-year calving (1). Indeed, early re-establishment of ovarian activity derives maximum economic benefit for farmers. In that respect, Ambrose et al. (2) reported that cows that resumed ovarian cyclicity and had their first ovulation within 3 weeks after calving were more fertile at first service than cows that ovulated for the first time after 9 weeks postpartum (46% versus 23% conception rate, respectively). Nevertheless, there is still a large proportion of postpartum dairy cows (6-59%) that have not resumed cyclicity by day 60 after calving, traditionally classified as non-cycling (anoestrous or anovular) cows (3). Among the factors that contribute to non-cycling status, such as ovarian disease (4), persistent corpus luteum (1) and postpartum uterine disease (5), static ovaries (hereinafter: SO) stand as one of the possible underlying reasons (6). Cessation of cyclicity caused by SO can occur in cows that are either experiencing severe nutritional restriction (7) or are in a negative energy balance, NEB (8). The latter occurs during the transition period due to the difference between dietary energy intake and the requirements for milk production, resulting in mobilization of body fat reserves and increased blood serum non-esterified fatty acid (NEFA) and β-hydroxybutyric acid (β-HBA) concentrations (9). Indeed, some studies have shown that anovular (non-cycling) cows have a reduced feed intake (between 2.5 kg and 3.6 kg per day) compared to cycling cows (10), and hence a more progressive NEB status and significantly increased plasma NEFA and β-HBA concentrations in comparison to ovulatory (cycling) cows (11). It has been assumed that NEB status negatively affects the LH secretion necessary for the resumption of follicular growth, causing a delay in the re-establishment of cyclicity (12). Therefore, as soon as cows overcome the NEB status, they can achieve earlier onset of cyclical ovarian activity and the possibility of conception (13).
In addition, parity has also been shown to be an important factor contributing to non-cycling status (3,4). Several studies have shown that PP cows are more susceptible to metabolic stress, experience more severe NEB during the transition period (8,14), and have a more extended recovery from NEB and hence longer intervals to first ovulation in comparison to MP cows. The reduced feed-intake capacity of PP cows, their greater demands for nutrients for their own growth, and the increased requirements of their first lactation have been assumed as possible underlying reasons for their prolonged NEB recovery (15).
Since PP cows generally have a higher incidence of being non-cycling (bearing SO) after day 60 postpartum than MP cows, we hypothesized that the metabolic profiles and hormonal statuses between non-cycling PP and MP cows bearing SO are different. Therefore, the objective of the present study was to compare the metabolic profiles and the hormonal status between the non-cycling PP and MP cows diagnosed with SO.
Animals and experimental design
The study was conducted between January 2010 and December 2012 at two dairy farms (Farms A and B) located in the northernmost (Farm A) and southeastern (Farm B) parts of the Republic of Macedonia. On Farm A, the cows were housed in free-stall barns with cubicles, fed a standard TMR ration based on corn silage, chopped alfalfa, straw and a 16% protein concentrate-mineral mix, and milked twice daily, with an average 305-d milk production of 6100 kg. On Farm B, the cows were housed in tie-stall barns on deep straw bedding and milked thrice daily, with an average 305-d milk production of 6500 kg. The cows were fed corn silage, grass silage, alfalfa, brewers' grain and a concentrate-mineral mix (16% protein), offered twice daily according to the stage of lactation, milk production and reproductive status of the animals, in amounts between 9 and 12 kg per day.
One hundred and twenty-one animals that either did not express signs of oestrus or were not seen in oestrus by farm personnel by day 60 after parturition were included in the study. The animals were grouped by parity into PP (n=58) and MP (n=63) cows, blood sampled and examined using transrectal ultrasonography. Blood sampling was done prior to the ultrasound examination. Cows were classified as non-cycling (bearing SO) if no luteal tissue (corpus luteum, CL) and no follicles larger than 8 mm were detected, concomitantly with a serum progesterone (P4) concentration <0.5 ng/mL (16,17). If luteal tissue was observed along with serum P4 >1 ng/mL, the cows were classified as cycling; as cystic if a follicular cyst-like structure larger than 25 mm was present without luteal tissue and with P4 <0.5 ng/mL; and as in heat if a follicle of 16-18 mm was present, with no luteal tissue, P4 <0.5 ng/mL and fluid within the uterus.
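The study's classification rule maps one ultrasound examination plus a serum P4 value to a reproductive status. Purely as an illustration of the criteria in this paragraph (not a diagnostic tool), the rule could be written as follows; the function and argument names are hypothetical.

```python
def classify_ovarian_status(has_luteal_tissue: bool,
                            max_follicle_mm: float,
                            p4_ng_ml: float,
                            uterine_fluid: bool = False) -> str:
    """Apply the study's criteria for cows >= 60 days postpartum."""
    if has_luteal_tissue and p4_ng_ml > 1.0:
        return "cycling"
    if not has_luteal_tissue and p4_ng_ml < 0.5:
        if max_follicle_mm > 25.0:
            return "cystic"  # follicular cyst-like structure
        if 16.0 <= max_follicle_mm <= 18.0 and uterine_fluid:
            return "in heat"  # preovulatory follicle plus uterine fluid
        if max_follicle_mm <= 8.0:
            return "non-cycling (static ovaries)"
    return "unclassified"  # findings outside the stated criteria
```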
Ultrasonographic examination
Ultrasonographic examination of the ovaries was done with a B-mode scanner Aloka SSD 500, (Tokyo, Japan), equipped with a 7.5 MHz linear-array transducer for intra-rectal use. The examination was done as described previously (17). Briefly, before insertion of the lubricated transducer, the rectum was emptied, and the ovaries were first manually located. After insertion, the size of the ovaries and the diameters of the follicles were obtained from two linear measurements taken at perpendicular angles by means of electronic callipers located on the ultrasound device and using the images on which the diameters of the ovaries and follicles were maximal.
Hormonal and metabolic profiles analysis
Blood samples for estradiol (E2) and P4 analysis were collected from the jugular vein into glass tubes (without anticoagulant) and transported at +4 °C within 3 hours of collection. The samples were centrifuged (2500 rpm for 5 minutes) and, after serum extraction, were stored at −20 °C until assayed for E2 and P4 using an enzyme immunoassay (EIA). The assay was done at the Faculty of Veterinary Medicine, Skopje (Macedonia), using commercially available kits (HUMAN Progesterone and Estradiol ELISA Test, Germany) on an Immuno-scan BDLS reader. The intra-assay CVs averaged 7.2% and 9.3%, while the inter-assay CVs were 8.6% and 9.2%, for P4 and E2 respectively.
The metabolic profile analysis was done at the Faculty of Veterinary Medicine in Skopje. Glucose, total protein, albumin, cholesterol, triglycerides, NEFA and β-HBA concentrations were determined by enzymatic-colorimetric end-point methods using commercially available kits: for glucose, total protein, albumin and cholesterol, Human (Germany); for triglycerides, Sentinel (Italy); and for NEFA and β-HBA, Randox (UK), all in accordance with the IFCC, on a Stat-Fax 3300 semiautomatic photometer (Awareness Technology Inc., USA).
Statistical analysis
The data for the tested parameters at the individual and farm level were subjected to descriptive statistics and analysed for normality of distribution using the Shapiro-Wilk test, with the significance level set at p<0.05. Comparison of tested parameters between parity groups was done by Student's t-test or the Mann-Whitney U test, depending on the data distribution. The results are presented as mean and standard deviation (mean ± SD). The statistical analysis was carried out in STATISTICA (data analysis software system), version 8.
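The test-selection logic described here, a normality check followed by a parametric or nonparametric two-group comparison, can be expressed compactly. The following SciPy sketch is illustrative only and mirrors the p < 0.05 threshold used in the study.

```python
from scipy import stats

def compare_groups(pp, mp, alpha=0.05):
    """Compare one blood parameter between primiparous (pp) and
    multiparous (mp) cows: Student's t-test if both samples pass the
    Shapiro-Wilk normality test, otherwise the Mann-Whitney U test."""
    normal = (stats.shapiro(pp).pvalue > alpha
              and stats.shapiro(mp).pvalue > alpha)
    if normal:
        stat, p = stats.ttest_ind(pp, mp)
        return "Student's t-test", stat, p
    stat, p = stats.mannwhitneyu(pp, mp, alternative='two-sided')
    return "Mann-Whitney U test", stat, p
```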
RESULTS
In total, 70 of the 121 cows (57.8%) were diagnosed as non-cycling (bearing SO): 42 of the 58 PP cows (72.4%) and 28 of the 63 MP cows (44.4%). Among the remaining cows, ovarian cysts were detected in 6.6%, a CL in 25.2%, and 10.3% of the cows were in heat. The lengths of the ovaries in both parity groups ranged between 16.0 mm and 19.0 mm, while the widths ranged between 10.0 mm and 11.0 mm. The follicle diameters in both groups ranged between 2 mm and 6 mm.
DISCUSSION
The present study intended to compare the metabolic profiles and hormonal status between non-cycling PP and MP cows diagnosed with static ovaries. Based on the gathered results, we could not find any differences in either the metabolic profiles or the hormonal status of the two parity groups; therefore, we rejected our hypothesis. However, the present study revealed several findings.
Firstly, the clarification of the non-cycling status of the cows using a single ultrasound examination. In order to clarify the non-cycling status of the cows, we included cows that did not express signs of oestrus by day 60 post-partum. Using ultrasound accompanied by serum P4 and E2 examination, we classified 57.8% of the examined cows as non-cycling. Similarly, Silva et al. (18) and Stevenson et al. (19), using the same method (ultrasound and P4), classified cows as cycling or non-cycling. It should be noted that, in the present study, we performed a single ultrasound examination to clarify the non-cycling status of the cows, while the vast majority of studies use two sequential ultrasound examinations 7 to 14 days apart (17). Indeed, when a single ultrasound examination is performed (as opposed to serial examinations), there are limitations to drawing conclusions, since cows in pro-oestrus, oestrus and the first days of met-oestrus will have no visible CL and a P4 concentration <0.5 ng/mL. Nevertheless, cows in pro-oestrus or oestrus will have at least one dominant or preovulatory follicle, respectively, and cows in met-oestrus a growing CL (except for the first two days of met-oestrus), which distinguishes them from non-cycling cows. In fact, detection of non-cycling cows at a strategic time in the postpartum period (the day of the first GnRH injection in the breeding Ovsynch protocol), when a single ultrasonographic examination was performed and compared with P4, resulted in misdiagnosis of 21% (37/174) of cycling cows, which were incorrectly classified as non-cycling (19). Nevertheless, when implemented in detecting non-cycling cows, this method showed an accuracy, sensitivity, and specificity of 87.3%, 85.7% and 87.7%, respectively (18). Therefore, from a practical point of view and according to the results of the present study, a single ultrasound examination is a reliable method for the diagnosis of non-cycling cows (16).
Secondly, the hormonal status of PP and MP non-cycling cows. Regarding hormonal status, our results showed that non-cycling PP and MP cows have low concentrations of both P4 and E2, without any significant differences. The latter is somewhat expected, since both parity groups lacked a CL and had very small antral follicles. It is interesting to note that, in both groups, cholesterol, as a substrate for P4 and E2 production, ranged within its physiological values. It seems that the small antral follicles present on the ovaries are not capable of producing larger amounts of E2, thereby leading to a lower concentration of E2 (3). Decreased E2 concentrations are sufficient to block the pulsatility of GnRH and LH, thus impeding the growth of the follicle and leading to non-cyclicity (3,20). In contrast, CL-absent cows (low P4 concentration) have been shown to have a higher E2 concentration than CL-present cows (high P4 concentration), which in turn leads to a higher LH pulse frequency that enhances follicular growth (21). Nevertheless, it should be emphasized that, for cows diagnosed as non-cycling, the major underlying factor for compromised follicular development could be the low LH pulse frequency, although other metabolic hormones such as IGF-1, insulin and growth hormone (GH), which are involved in and crucial for normal follicular development, should not be overlooked (3).
Thirdly, the metabolic profiles of the PP and MP cows. Our results showed that PP cows have glucose concentrations similar to those of MP cows. Nevertheless, both groups had glucose concentrations below the normal physiological range (2.3-4.1 mmol/L) (22), i.e. a state of hypoglycaemia. Hypoglycaemia has been shown to have a negative impact on the resumption of follicular growth postpartum (23). In that respect, it has been reported that glucose, together with insulin, is among the molecules most likely to exert an effect on GnRH secretion in post-partum dairy cows (24). As long as glucose remains low (a state of hypoglycaemia), insulin remains low. Decreased plasma concentrations of insulin reduce androgen and E2 production and therefore compromise the ability of follicles to acquire the LH receptors (25) necessary for the resumption of follicular growth. When glucose concentrations increase, insulin starts to rise, followed later by IGF-1. The latter has been shown to represent a 'metabolic signal' of the resumption of ovarian function (10). Elevated insulin concentrations affect GnRH secretion, causing the cows to release more GnRH, which in turn stimulates LH pulsatility (23). Additionally, an increased insulin concentration recouples the GH/IGF-1 axis, causing substantial increases in plasma concentrations of IGF-1 (26). Increased insulin and IGF-1 concentrations have been shown to enhance androgen production in theca cells (as a substrate for E2), which in turn causes increased follicular E2 production (27) that stimulates the LH pulse frequency and hence supports and sustains follicular growth (28).
The remaining biochemical parameters were similar between the groups. The protein status (total proteins and albumins) did not differ significantly between the parity groups, implying that cows in both groups have a normal equilibrium between anabolic and catabolic protein metabolism at this stage of lactation. Although the energy status was affected by hypoglycaemia in both parity groups, the lipid parameters did not indicate the development of liver failure, since serum concentrations of triglycerides and cholesterol did not differ significantly between the groups. Therefore, all biochemical parameters revealed a normal alimentary supply of the metabolic requirements for this stage of the production cycle.
Finally, our results showed similar NEFA and β-HBA concentrations between the parity groups, with no significant differences. The serum NEFA concentration, together with the β-HBA and glucose concentrations, serves as an indicator of the energy status of the animals (29). In both groups, the NEFA and β-HBA concentrations ranged within normal values (0.10-0.90 mmol/L and 0.03-1.20 mmol/L, respectively) (22). Since the cows were 60 days postpartum, these results suggest that the cows had passed the period of negative energy balance, which usually diminishes around 60 days postpartum. Therefore, we assume that, in the present study, the non-cycling status of both PP and MP cows was not influenced, at the time of sampling, by a negative energy status.
CONCLUSION
In summary, our results showed similar metabolic profiles and hormonal status between non-cycling PP and MP cows diagnosed with SO. Therefore, it can be assumed that one possible underlying reason for compromised follicular development in both PP and MP non-cycling cows could be the hypoglycaemia that occurs in the postpartum period due to the increased demand for glucose for milk production. | 2018-12-31T08:10:32.991Z | 2018-10-01T00:00:00.000 | {
"year": 2018,
"sha1": "652cbbd986991f05f6f34bae840def3e2e588feb",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.2478/macvetrev-2018-0022",
"oa_status": "GOLD",
"pdf_src": "DeGruyter",
"pdf_hash": "652cbbd986991f05f6f34bae840def3e2e588feb",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
73659717 | pes2o/s2orc | v3-fos-license | STRUCTURE AND VOTING BEHAVIOR OF THE BOARD OF DIRECTORS: THEORETICAL AND EXPERIMENTAL EVIDENCES
We examine the value of outsiders through the voting behavior of boards. Our model proves that boards with a majority of trustworthy but uninformed outsiders can implement institutionally preferred policies and augment corporate performance by upgrading resource allocation. Our laboratory experiments strongly support the conclusion that a higher proportion of appointed outsiders yields more efficient boards. We also find that outsider-dominated boards, given enough time, will reduce information asymmetry among directors and thereby execute institutionally preferred policies.
Introduction
A board is the core of corporate governance, entrusted by shareholders to manage the corporation. It is designed to mitigate conflicts of interest between privately informed insiders and owners. The scandals at Enron and WorldCom raised the issues of reducing the moral hazard of insiders and increasing the effectiveness of the board of directors through various corporate governance mechanisms. These have become major concerns of investors, experts and governments. Appointing outsiders has been regarded as one of the best ways to boost the effectiveness of boards.
However, advocacy of outsiders is surprising, as research has produced weak or mixed results on the effectiveness of outsiders on boards. Some research sees no obvious evidence that employing outside directors brings significant benefits to the corporation and shareholders. 13 But due to empirical difficulty, these studies show two blatant shortcomings: [1] it is unclear whether outsiders are truly or only nominally independent, and [2] they lack deep exploration of the actual operation of outsiders on boards. Since using secondary data for empirical study is difficult, this research employs an experimental method and a theoretical view to explore the effects of appointing outsiders to boards. 13 Results fail to prove that employing outside directors affords large benefits to corporations and shareholders. See Patton and Baker, 1987; Weisbach, 1988; Hermalin and Weisbach, 1991; Mangel and Singh, 1993; Shivdasani, 1993; Mehran, 1995; Yermack, 1996; Agrawal and Knoeber, 1996; Romano, 1998; Klein, 1998; Bhagat and Black, 2002; Lin et al., 2003; Park and Shin, 2004; Howton, 2006.
We construct a theoretical model to explain the effect of different proportions of outsiders on the voting behavior of the board. Our model shows that insiders approve the project regardless of its quality when their private benefits are large, whereas outsiders, and insiders with smaller private benefits, approve only good projects. Lacking private information, outsiders can only observe the information transmitted by the behavior of insiders and form reasonable expectations. We suggest outsiders have two possible types of reasonable expectation: (1) trusting insider information transmission (footnote 14) and (2) preventing value destruction (footnote 15).
We then design an experiment on the voting behavior of directors following the model. We verify that the higher the proportion of appointed outsiders, the more efficient the board and the higher the possibility of adopting institutionally preferred policies, which are advantageous to corporate development and raise corporate value. We find that outsider-dominated boards, given the necessary time to reduce information asymmetry among directors, are more likely to trust insider information transmission and to execute institutionally preferred policies.

14. The reasonable expectation of insider information transmission means outsiders believe the information transmitted by insiders and judge the quality of the project based on that information.

15. The reasonable expectation of preventing value destruction means outsiders believe that insiders transmit dishonest information to mislead their judgment. In particular, if the project is of bad quality, insiders may transmit dishonest information for private benefits, so outsiders vote against the project regardless of its quality to prevent the destruction of corporate value.
The remainder of this paper is organized as follows. Section 2 reviews prior literature on the role played by outside directors in corporate governance. Section 3 presents a theoretical model of independent outside directors and the voting behavior of directors. Section 4 presents the experimental design for voting behavior and analyzes the experimental results. Section 5 discusses the conclusions drawn from this study.
Outsiders Have Positive Effects on Corporations and Shareholders
Due to their monitoring role, appointed outside directors are an important instrument for reducing agency costs, and they have a direct impact on corporate performance because they can be used effectively to align the interests of stockholders. Consequently, outside directors are an important governance mechanism. Many agency theorists show that outsiders are important monitors of management and providers of relevant expertise, and as such are central to the effective resolution of agency problems between managers and shareholders (Fama, 1980; Fama and Jensen, 1983; Singh and Harianto, 1989). Fama and Jensen (1983) find that outside directors compete in the outside directors' labor market. Consequently, outside directors have incentives to develop personal reputations as experts in monitoring management, because the value of their human capital depends primarily on their performance as monitors of the senior management of other enterprises. Empirical evidence on monitoring efficacy finds that independent directors protect external shareholders in specific cases where there is an agency problem between managers and shareholders (Brickley and James, 1987; Weisbach, 1988; Byrd and Hickman, 1992; Lee et al., 1992; Barnhart and Rosenstein, 1998; Davidson et al., 1998; Fields and Keys, 2003; Benkel et al., 2006). Baysinger and Butler (1985), Fiegener (2005), and Luan and Tang (2007) find that outside directors can increase firm value. A higher ratio of independent outside directors on boards, with more independence and fewer conflicts of interest, enhances firm performance (Rosenstein and Wyatt, 1990; Pearce and Zahra, 1992; Dobrzynski, 1993; Ezzamel and Watson, 1993; Alshimmiri, 2004). Superior governance results to the extent that director and shareholder interests are aligned, and board composition affects alignment. Outside directors are assumed to represent the interests of shareholders effectively because they are considered independent of management, free of self-interested behavior, and promoters of shareholder wealth (Fama, 1980; Kesner and Dalton, 1986; Rechner, 1989; Baysinger and Hoskisson, 1990). Denis and Sarin (1999), Coles and Hesterly (2000), and Peasnell et al. (2005) suggest that the monitoring value of outside directors is contingent on the extent of the firm's agency problems: greater agency problems mean more benefit from outside directors. Perry and Shivdasani (2005) document that boards with a majority of outsiders are more likely to initiate performance-increasing restructuring programs.
Outsiders Do Not Have Positive Effects on Corporations and Shareholders
Some studies do not support the view that appointing outside directors solves the agency problem between managers and shareholders or increases firm performance. Scandals involving firms such as Enron and WorldCom, as well as other widespread cases of bankruptcy, have raised important questions about the effectiveness of board monitoring and the high compensation that directors receive (footnote 16). In effect, this is a coalition for private benefit between insiders and outsiders. Jensen (1993) suggests that boards of directors often fail to monitor a firm's management effectively, in that board culture inhibits constructive criticism and places great emphasis on politeness and courtesy at the expense of truth and frankness in boardrooms. Brick et al. (2006) show that excessive compensation arises via mutual back-scratching or cronyism.
When managers are involved in the director selection process, especially if managers serve on the board's nominating committee, the director is more likely to be an affiliated rather than an independent outsider. As a result, such outsiders cannot perform their monitoring duties effectively (Mangel and Singh, 1993; Shivdasani and Yermack, 1999). Outside directors, by virtue of their business dealings, previous employment with the company, or other links with the firm, may tend to identify more closely with the interests of management than with those of shareholders. Though some directors may be classified as independent, they may rely on the goodwill of top management in subtle ways, for example by acting as paid advisors or consultants to the company. These directors are reluctant to resist top management's requests, resulting in less rigorous monitoring (Lin et al., 2003; Gillette et al., 2003). Outside directors' lack of sufficient incentives, time, and expertise to perform their monitoring duties effectively has led many commentators to express doubts about their ability to make a meaningful contribution to promoting shareholder interests.

16. The New York Times (16 December 2001) reported that the compensation for each Enron director ranked them as the seventh highest paid directors in the United States.
Several empirical studies document that shareholders do not benefit from the appointment of outside directors. Hermalin and Weisbach (1991), Mehran (1995), Klein (1998), Romano (1998), Bhagat and Black (2002), and Cho and Kim (2007) all report no systematic relation between measures of firm performance and the fraction of outside directors. Agrawal and Knoeber (1996) document that more outside directors on a board negatively affect firm performance. Yermack (1996) finds a negative relationship between the percentage of outside directors and firm performance as measured by Tobin's Q. Outside directors, as a whole, may not improve governance or increase shareholder welfare. Only directors classified as affiliated outsiders with strong monitoring incentives, such as venture capitalists, equity blockholders, and suppliers of debt finance, benefit shareholders (Shivdasani, 1993; Lin et al., 2003; Park and Shin, 2004). Howton (2006) finds that the presence of outside directors does not increase the survival chances of firms.
Outsiders may actually receive penalties if they fail to monitor firm management effectively or to promote the interests of shareholders. Studies find that outsiders associated with underperforming firms, or perceived as ineffective monitors at one firm, are less likely to hold additional outside directorships (Gilson, 1990; Kaplan and Reishus, 1990; Brickley et al., 1999). Evidence shows that the likelihood of financial reporting failure drops with more outsiders on boards (Beasley, 1996; Dechow et al., 1996; Agrawal and Chadha, 2005; Farber, 2005; Srinivasan, 2005). In sum, views concerning the value of outside directors are mixed. Our study's theoretical model and laboratory analysis enable us to address these confounding problems and to explore the effects of outsiders on boards.
Model
The model has three time points (t = 0, 1, 2) and three types of agents: managers, insiders, and outsiders; managers may also be insiders. The board consists of two groups, insiders and outsiders. Together, they must decide whether to accept a new project, and the project's fate is decided by majority vote. The i-th insider holds an equity ownership proportion α_i. The model assumes that, regardless of its quality, the project provides private benefits to managers and insiders, whose incentives are misaligned with shareholders'. By contrast, outsiders' benefits are aligned with shareholders'. Each agent decides whether to approve the project by maximizing his or her expected payoff. The sequence of the model is as follows. At t = 0, managers propose a new project; insiders have private information about the project's quality, while outsiders do not. At t = 1, the board meets and insiders and outsiders communicate. Outsiders cannot discriminate between value-increasing and value-decreasing projects; insiders have private information enabling them to distinguish between these types. Outsiders observe the signal transmitted by insiders' behavior as a basis for voting. After communication, the board votes immediately and soon learns the outcome. At t = 2, each agent's expected payoff is realized: if the project was rejected at t = 1, agents' expected values are unchanged; if it was accepted, agents acquire their ex-post payoffs.
Agent's Payoffs and Information
We present here our model of the voting behavior of the board of directors. We assume the firm holds an amount of cash I, which it invests in a project with gross rate of return R. The firm has no costs, so profits are RI. Not all profits are distributed to shareholders on a pro rata basis; the new project increases profits, managers rope in insiders, insiders learn the project's quality and the private benefits to be acquired, and agents receive their ex-post payoffs. (The explicit payoff equations and the positive constant governing private benefits were lost in extraction.) Outsiders lack private benefits, and each outsider receives an equal payoff. Moreover, we believe the rankings of outsiders' ex-post payoffs exhibit consensus regardless of the new project's quality, because outsider incentives are aligned with those of the firm's shareholders. Similarly, the rankings of insiders' ex-post payoffs exhibit consensus following a good project, because a good project increases both firm value and insider private benefits. However, the rankings of insiders' ex-post payoffs do not exhibit consensus following a bad project, because a bad project increases insider private benefits while destroying firm value. Consequently, given the above, we reasonably believe the payoff rankings follow this pattern, while the rankings of insider payoffs following a bad project remain uncertain.
Agent's Voting Behavior and Strategy
The model assumes the proposed project has one of two quality types, good or bad. Insiders have private information about the project's quality; outsiders can only form reasonable expectations about it. Whether the project is approved depends on the majority vote of the board. The model assumes there are n_i insiders and n_j outsiders on the board. The board meeting has two stages: a communication stage and a voting stage. Outsiders, lacking private information, use the communication stage to observe the information transmitted by insiders' behavior and form reasonable expectations about the project's quality. After the communication stage, the meeting proceeds to the voting stage. The project is accepted if the affirmative votes form a majority; otherwise it is rejected, with votes against and abstentions both counting as not in favor. Below we explain the communication process and the strategies employed by insiders and outsiders.
Information Transmission and Strategy of Insiders
The model assumes the probability of a good project is g, where g = 0.5, so good and bad projects are equally likely; this also serves as the outsiders' prior belief. Because insiders' ex-post payoff rankings exhibit consensus following a good project, insiders speak unanimously in support of the project in the communication stage (denoted CON) and vote to support it in the voting stage when the signal s is G. Conversely, when the signal s is B, insiders do not speak with consensus (denoted NCON), and their votes differ as well. In a special case, all insiders have larger private benefits when the signal s is B, and then they also unanimously support the project; the probability of this special case is far less than 0.5. (The explicit expressions for the outsiders' prior belief and the insiders' information-transmission probabilities were lost in extraction.)
Reasonable Expectation and Strategy of Outsiders
The model assumes outsiders vote unanimously, because the rankings of outsiders' ex-post payoffs exhibit consensus regardless of the new project's quality. Lacking private information, outsiders observe only the information transmitted by insiders' behavior and form reasonable expectations. We suggest outsiders have two possible types of reasonable expectation: trusting insider information transmission, and preventing value destruction. The former means outsiders believe the information transmitted by insiders and judge the project's quality from it. The latter means outsiders believe insiders transmit dishonest information to mislead their judgment; in particular, if the project is bad, insiders may transmit dishonest information for private benefits, so outsiders vote against the project regardless of its quality to prevent the destruction of corporate value. A. Outsiders trust insider information transmission. Using both the outsiders' prior belief and the insiders' information-transmission probabilities, we calculate the outsiders' posterior beliefs via Perfect Bayesian Equilibrium (the explicit posterior expressions were lost in extraction). Based on these results, outsiders form a good-quality expectation and vote to approve the project when insiders speak unanimously in support of it.
Likewise, outsiders form a bad-quality expectation and vote to reject the project when insiders do not speak with consensus in support of it.
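To make the updating in part A concrete, the following is a minimal numerical sketch of the Bayes calculation. The probability that insiders falsely signal consensus on a bad project is set to 0.1 here purely for illustration; the model states only that it is far less than 0.5, so the specific value is our assumption.

```python
# Minimal sketch of the outsiders' Bayesian updating described above.
# Assumed values: prior P(G) = g = 0.5 (as in the model); the probability
# that insiders unanimously signal support (CON) on a bad project is set
# to 0.1 for illustration only (the model says it is far less than 0.5).

g = 0.5                 # prior probability the project is good
p_con_given_good = 1.0  # insiders always signal consensus on a good project
p_con_given_bad = 0.1   # illustrative: rare false consensus on a bad project

# Posterior belief after observing unanimous insider support (Bayes' rule).
p_con = g * p_con_given_good + (1 - g) * p_con_given_bad
p_good_given_con = g * p_con_given_good / p_con
print(f"P(good | CON)  = {p_good_given_con:.3f}")   # ~0.909: vote to approve

# Posterior belief after observing non-consensus (NCON).
p_ncon = g * (1 - p_con_given_good) + (1 - g) * (1 - p_con_given_bad)
p_good_given_ncon = g * (1 - p_con_given_good) / p_ncon
print(f"P(good | NCON) = {p_good_given_ncon:.3f}")  # 0.0: vote to reject
```

With these illustrative numbers, unanimous insider support shifts the outsiders' belief from 0.5 to about 0.91 in favor of a good project, while non-consensus drives it to zero, matching the voting rules stated above.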
B. Outsiders employ the reasonable expectation of preventing value destruction. A coalition between managers and insiders may form when they hope to obtain more private benefits from an investment even if it destroys firm value. Under this expectation, outsiders believe insiders transmit dishonest information to mislead them, so they distrust insider-transmitted information and vote against the project regardless of its quality, in order to prevent the destruction of corporate value.
Outcome of Board Voting
Board voting has two types of outcomes: the institutionally preferred outcome and the institutionally undesirable outcome. We define the institutionally preferred (efficient) outcome as accepting the project exactly when it is good and rejecting it exactly when it is bad; any other outcome is institutionally undesirable (inefficient).
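As a compact restatement of this definition, the classification can be written as a one-line predicate (a sketch; the function name is ours):

```python
def institutionally_preferred(project_is_good: bool, project_accepted: bool) -> bool:
    """Efficient outcome: accept exactly the good projects, reject the bad ones."""
    return project_is_good == project_accepted

# Accepting a good project or rejecting a bad one is efficient;
# the other two cases are institutionally undesirable (inefficient).
assert institutionally_preferred(True, True)
assert institutionally_preferred(False, False)
assert not institutionally_preferred(True, False)   # good project rejected
assert not institutionally_preferred(False, True)   # bad project accepted
```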
Further, combining the strategies of outsiders and insiders that can be supported in Nash equilibrium, we find that outsider-dominated boards are more likely to execute the institutionally preferred outcome and approve only good projects when outsiders trust insider information transmission. When outsiders instead expect value destruction, outsider-dominated boards are still more likely to execute the institutionally preferred outcome in that they always reject bad projects, preventing a coalition of insiders from destroying firm value through poor investments; however, the institutionally undesirable outcome of rejecting a good project cannot be completely eliminated. Which expectation do outsiders actually employ? In the remainder of this paper, we use an experimental method to examine outsiders' decision-making and to explore the effects of appointed outsiders on boards. In short, our experiments provide strong evidence that outsider-dominated boards are more likely to implement institutionally preferred policies.
Experimental Design
We examine board effectiveness using an experimental research technique that enables us to address many of the confounding problems faced by empirical studies. The experiment consists of four central factors; each central factor involved one experimental session and lasted 10 rounds. Similar methods are found extensively in the experimental literature; their purpose is to obtain multiple observations from short game experiments with a limited number of subjects. The experimental subjects were MBA and EMBA students majoring in finance. The subjects were told they would have an opportunity to earn money in a research experiment involving group decision-making. Every subject participated in only one experimental session.
Basic Design
Before the experiment, subjects read a set of instructions (see Appendix I), completed assigned worksheets, and were given the opportunity to ask questions to ensure that they fully understood the game rules. The terms "board", "outsiders", and "insiders" were never mentioned in the instructions, to avoid biasing subjects with prior expectations. Jensen (1993) believes board size should be limited to seven or eight members, so that the marginal cost of coordination and processing problems does not exceed the marginal benefit; Gillette et al. (2003) also consider seven members optimal. This study therefore uses groups of seven. At the start of the experiment, the monitors randomly divided the subjects into groups of seven. Agent type was assigned by random draw: a bucket held seven balls, yellow and blue; those drawing a yellow ball became insiders, and those drawing a blue ball became outsiders.
We divide the experiment into four central factors (sessions) based on the number of outsiders on the board: (1) the no-outsider factor (NO), in which each group has seven insiders and no outsiders; (2) the two-outsider factor (O2), with five insiders and two outsiders; (3) the three-outsider factor (O3), with four insiders and three outsiders; and (4) the four-outsider factor (O4), with three insiders and four outsiders.
In each session, subjects kept the same role (insider or outsider, decided by draw) for all 10 rounds. Because Eckel and Holt (1989) find that the grouping method can affect outcomes, the study employs two grouping methods, random and repeated. In the first five rounds, group membership was random: a new draw determined groups at the end of each round. In the second five rounds, group membership was unchanged. This design also captures the real-world question of whether changed or repeated board membership makes a difference to voting outcomes.
Milgrom and Roberts (1996) argue that the design of communication affects board function. Farrell and Rabin (1996), Forsythe et al. (1999), and Charness and Grosskopf (2004) find that simple conversation can increase the effectiveness of communication and the efficiency of decisions. Therefore, in each round, the experimental design requires communication before voting, in two stages. In the first stage, sub-groups of insiders and outsiders within each group communicate separately, in isolated locations. Before the inside directors begin their discussion, a monitor draws a ball from a bucket to indicate whether the project is good (white ball) or bad (black ball). The bucket contains fifty white balls and fifty black balls, and each ball drawn is returned to the bucket, so each outcome has an equal chance of being selected at every stage. The first-stage meeting is limited to four minutes. If a white ball is drawn, the following activities are omitted. If a black ball is drawn, indicating a bad project, each insider draws again from a bucket containing sixty red balls and thirty green balls (each ball returned after drawing) to be divided into two sub-types: those drawing a red ball become type I1, whose private benefits are larger, and those drawing a green ball become type I2. This proportion follows from the model: if there are x insiders, x = 3, 4, 5, 7, and each individual's chance of drawing a red ball is y, then the derivation requires y to be at least close to 2/3, indicating an appropriate red-to-green ball ratio of 2:1 (the original expression was lost in extraction; see the sketch after this paragraph).
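The expression in the preceding derivation did not survive extraction. The sketch below shows one plausible reading, under our assumption that the quantity of interest is the probability that all x insiders independently draw the high-private-benefit type I1:

```python
# Hedged sketch of the lost derivation: assuming the quantity of interest
# is the probability that all x insiders independently draw type I1
# (probability y each), that probability is y**x.
y = 2 / 3  # 60 red : 30 green balls, as specified in the design
for x in (3, 4, 5, 7):
    print(f"x = {x}: P(all insiders are type I1) = {y**x:.3f}")
# With y = 2/3 this probability stays non-negligible even for x = 7
# (about 0.059), consistent with the paper's 2:1 red-to-green ratio.
```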
Next, insiders return to their group, and the entire group communicates for four minutes. During communication, insiders may not reveal the outcomes of any draws. Because Cooper et al. (1989, 1994) and Forsythe et al. (1999) report that unrestricted discussion can affect experimental outcomes, communication in all stages must obey the following rules: (1) no speech unrelated to the voting; (2) no bodily threats; (3) no discussion of other payments; (4) no discussion between groups.
Following the communication time, voting takes place: each subject casts a vote of "Yes" or "No" on the project. After voting, the monitor reports to each group its majority vote and the project's quality, and subjects' earnings are calculated. All participants obeyed the rules and did not report the communication time as limiting or insufficient.
In accordance with the payoff rankings in Section 3.1, payoffs were designed so that type I1 insiders, who obtain larger private benefits, prefer to accept the project regardless of its quality: they receive at least $0.7 if the project is accepted, versus $0.5 if it is rejected. By contrast, type I2 insiders, who obtain smaller private benefits, prefer to accept only good projects: they receive $0.8 if a good project is accepted, $0.4 if a bad project is accepted, and $0.5 from rejection. Outsider payoffs were designed so that they prefer to accept only good projects: they earn $0.6 from accepting a good project, only $0.1 from accepting a bad project, and $0.4 from rejection. The payoffs for each subject type and all possible outcomes are given in Tables 1-3. <Table 1 is inserted about here> <Table 2 is inserted about here> <Table 3 is inserted about here>
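The payoff schedule just described can be summarized in a small lookup table; this sketch (the names and structure are ours) checks that each type's stated preference follows from the dollar amounts:

```python
# Payoff schedule from the experimental design (values in dollars).
# Keys: (agent_type, project_quality, accepted) -> payoff.
# Note: the design states I1 receives "at least $0.7" on acceptance;
# 0.7 is used here as that floor.
payoffs = {
    ("I1", "good", True): 0.7, ("I1", "bad", True): 0.7,
    ("I1", "good", False): 0.5, ("I1", "bad", False): 0.5,
    ("I2", "good", True): 0.8, ("I2", "bad", True): 0.4,
    ("I2", "good", False): 0.5, ("I2", "bad", False): 0.5,
    ("OUT", "good", True): 0.6, ("OUT", "bad", True): 0.1,
    ("OUT", "good", False): 0.4, ("OUT", "bad", False): 0.4,
}

# I1 prefers acceptance regardless of quality; I2 and outsiders prefer
# acceptance only for good projects, as the design intends.
for agent in ("I1", "I2", "OUT"):
    for quality in ("good", "bad"):
        prefers_accept = payoffs[(agent, quality, True)] > payoffs[(agent, quality, False)]
        print(f"{agent}, {quality} project: prefers accept = {prefers_accept}")
```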
Central Factors and Treatments
We employ four central factors and eight treatments to examine the effect of different proportions of outsiders on the voting behavior of the board. The central factors are categorized by the number of outsiders on the seven-member board: no-outsider factor, NO; two-outsider factor, O2; three-outsider factor, O3; four-outsider factor, O4. The treatments are categorized by mixing protocol: random mixing (RA), where group membership changed after every round but subjects retained their agent type for the entire session, and repeated groups (RE), where group composition remained unchanged for the duration of the session. Each central factor was divided into these two treatments, yielding eight treatments: RANO, RENO, RAO2, REO2, RAO3, REO3, RAO4, and REO4.
Results
We now examine results from the central factors and treatments along two dimensions: the incidence of the institutionally preferred outcome and outsiders' voting patterns. The results show that the higher the proportion of appointed outsiders, the greater the incidence of the institutionally preferred outcome.
Data
Table 4 describes the central factors and treatments and reports average payoffs of insiders and outsiders. We also perform an ANOVA on these means; the differences are significant at the 1% level as well. The results show that higher proportions of appointed outsiders promote shareholder wealth, because outsiders' benefits are aligned with shareholders'.
<Table 4 is inserted about here> We use chi-squared statistics to examine differences in the distribution of the institutionally preferred outcome across central factors. Following all draws, Panel A of Table 6 reports differences significant at the 1% level between NO and the other three central factors, and between O2 and O4. Following bad draws (Panel C of Table 6), the chi-squared statistics indicate differences significant at the 1% level among the central factors. These results confirm that appointed outsiders significantly influence the adoption of the institutionally preferred policy, suggesting that higher proportions of appointed outsiders significantly improve resource allocation. Following good draws, only the differences between O4 and the other three central factors were significant (Panel B of Table 6). These results reflect that outsider-dominated boards tend to block acceptance of projects, preventing a coalition of insiders from destroying firm value through poor investments.
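For readers unfamiliar with the test, a minimal sketch of a chi-squared comparison of outcome frequencies between two central factors follows; the counts are illustrative placeholders, not the experimental data in Tables 5 and 6:

```python
# Sketch of the chi-squared test comparing institutionally preferred
# outcome frequencies across two central factors. The counts below are
# illustrative placeholders, not the actual experimental data.
from scipy.stats import chi2_contingency

#           preferred  not preferred
observed = [[35, 25],    # e.g., factor NO (hypothetical counts)
            [52,  8]]    # e.g., factor O4 (hypothetical counts)

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# A p-value below 0.01 would indicate a significant difference at the
# 1% level, as reported for NO versus the outsider factors.
```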
Incidence of the Institutionally Preferred Outcome in the Central Factors
<Table 5 is inserted about here> <Table 6 is inserted about here> Table 7 depicts the frequency with which the institutionally preferred outcome occurred in the treatments, and Table 8 examines these differences with chi-squared statistics. The results verify that the differences are significant across treatments of different central factors, consistent with Table 5. We now focus on differences across treatments within the same central factor, for example between RANO and RENO. Only the difference between RAO4 and REO4 is significant following good draws; the differences between RANO and RENO, RAO2 and REO2, and RAO3 and REO3 are insignificant following both good and bad draws. We believe that repeated group membership encourages outsiders to trust the information transmitted by insiders' behavior and to alter their voting, increasing the frequency of the institutionally preferred outcome following good draws. The results suggest that outsider tenure has a positive impact on outsiders' effective monitoring of insiders.
Incidence of the Institutionally Preferred Outcome in the Treatments
Relative to the predictions of the theoretical model, the experimental results show that outsider-dominated boards have a strong fraud-proofing function when outsiders employ the expectation of preventing value destruction: when outsiders doubt the messages transmitted by inside directors, they incline toward voting against the project to prevent a bad project from destroying corporate value. For outsider-dominated boards to avoid the risk of rejecting good policies, outsiders must have the necessary time. Outsiders with sufficient tenure, which reduces information asymmetry among directors, are more likely to shift toward trusting insider information transmission and to execute institutionally preferred policies regardless of a project's quality.
<Table 7 is inserted about here> <Table 8 is inserted about here> Table 9 presents outsider votes consistent with the institutionally preferred outcome, that is, voting "Yes" to accept a good project and "No" to reject a bad one. The percentages of outsider votes consistent with the institutionally preferred outcome following good draws are 75.56 percent, 88.89 percent, and 87.22 percent in central factors O2, O3, and O4, respectively. Central factor O2 differs markedly from O3 and O4 in outsiders voting to reject the project following bad draws: in O2, the institutionally preferred vote against a bad project occurs only 64.58 percent of the time, versus 97.92 percent and 98.86 percent in O3 and O4. This shows that lower proportions of appointed outsiders mean less chance of adopting institutionally preferred policies. Table 10 presents chi-squared statistics for outsiders' voting consistency with the institutionally preferred outcome across central factors. Following all draws and bad draws, differences in outsider votes are significant at the 1% level between O2 and both O3 and O4 (Panels A and C of Table 10). These tests highlight the impact of a high proportion of appointed outsiders on board performance: the presence of more outsiders significantly affected their voting following bad draws. When the proportion of outsiders is low (for example, only 28.57%), we suggest that outsiders, unable to affect the voting result, sometimes follow the intentions of insiders and relinquish their duty, casting votes for a bad project even if it destroys firm value. We document that higher proportions of appointed outsiders significantly alter the distribution of outsider votes: boards become more likely to execute institutionally preferred policies and to increase corporate performance by improving resource allocation.
Outsiders Voting Patterns
<Table 9 is inserted about here> <Table 10 is inserted about here>
Conclusions
Arguments concerning the value of outsiders have produced mixed results. We suspect the mixed results stem from impediments to empirical research, including difficulties in measuring the day-to-day effect of board composition on corporate performance, poor disclosure regarding board meetings, and defective proxies for the level of outsider independence. Our model and experiments contribute to research on the true value of outsiders on boards by laying groundwork that controls for such impediments and ensures that outside directors are truly independent.
We construct a theoretical model to explain the effect of different proportions of appointed outsiders on the voting behavior of the board. Based on the model, we design a voting-behavior experiment to examine board effectiveness. We verify that the higher the proportion of appointed outsiders, the more efficient the board and the higher the possibility of adopting institutionally preferred policies, which increase corporate performance by improving resource allocation.
In particular, outsider-dominated boards whose outsiders expect value destruction clearly execute the institutionally preferred outcome with respect to bad projects: they always reject them, preventing an insider coalition from destroying firm value through poor investments. The institutionally undesirable outcome of rejecting a good project, however, cannot be entirely eliminated. Our experiments indicate that outsiders with adequate time to alleviate information asymmetry among directors are more likely to shift toward trusting insider information transmission, altering their voting behavior and reducing the risk of rejecting good policies.
When the proportion of appointed outsiders is low, we find that outsiders unable to affect the voting result sometimes follow insiders' intentions and give up their duty. Higher proportions of outsiders significantly curb this opportunistic behavior, and institutionally preferred allocations arise more often. In East Asia, most countries require listed companies to appoint at least a certain proportion of outsiders; however, the required proportions may be too low. We suggest that outsider-dominated boards are more likely to achieve improvements in governance practice.
2. Each round, the project's draw type is determined by a monitor randomly drawing a ball from a bucket. The bucket contains fifty white and fifty black balls; a white ball represents Draw I, a black ball Draw II. After each drawing the ball is returned to the bucket, so Draw I and Draw II have an equal chance of being selected in each period. 3. If a white ball is drawn, the following activities are omitted. If a black ball is drawn, Type A participants proceed to draw again, to be divided into two sub-groups, Type I1 and I2. This bucket contains sixty red balls and thirty green balls, and each ball drawn is returned to the bucket. Those who draw a red ball become Type I1 and those who draw a green ball become Type I2.
Majority vote:
Whether a group takes on the project depends on that group's majority vote. The project is undertaken if there are at least four Yes votes; with three or fewer Yes votes, the project is rejected.
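The decision rule can be stated in a few lines (a sketch; the function name is ours):

```python
def project_accepted(votes: list[str]) -> bool:
    """Seven-member majority rule: the project passes only with at least
    four 'Yes' votes; 'No' votes and abstentions both count as not in favor."""
    return votes.count("Yes") >= 4

assert project_accepted(["Yes"] * 4 + ["No"] * 3)
assert not project_accepted(["Yes"] * 3 + ["No"] * 3 + ["Abstain"])
```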
Earnings:
Your earnings depend on three events (the enumeration of the three events was lost in extraction). Worksheet: fill in the blanks below, answering either "Yes" or "No" for questions 5-8. 5. If the project type is Draw I, the Type A (B) participants, as a subgroup, agree by consensus to vote ( ). 6. If the project type is Draw II, the Type A participants further divide into two sub-groups, Type I1 and I2; Type I1 (I2) agree by consensus to vote ( ), and the Type B participants agree by consensus to vote ( ). This is the end of the instructions. If you have any questions, please raise your hand and ask them at this time. Table 4. Description of the Central Factors and Treatments. This table describes the four central factors and eight treatments, including the number of groups employed, the distribution of draws, and the number of bribe occurrences for each factor and treatment. The central factors are categorized by the number of outsiders on the seven-member board: no-outsider factor, NO; two-outsider factor, O2; three-outsider factor, O3; four-outsider factor, O4. The treatments are categorized by mixing protocol: random mixing (RA), where group membership changed after every round but subjects retained their agent type for the entire session, and repeated groups (RE), where group composition remained unchanged for the duration of the session. Each central factor was divided into the two treatments, yielding eight treatments: RANO, RENO, RAO2, REO2, RAO3, REO3, RAO4, and REO4. The subgroup-group communication protocol (SG) was used in all treatments: before voting on the project, subjects first communicated only with other agents of their type (outsiders or insiders), after which the entire group communicated. Average payoffs of insiders (outsiders) ($) denote the average payoff per subject per round, by subject type. Significance at the 10 percent, 5 percent, and 1 percent confidence levels is denoted by *, **, and ***, respectively.
| 2018-12-21T13:00:18.167Z | 2008-01-01T00:00:00.000 | {
"year": 2008,
"sha1": "0047bfba9fcae20548611cd5c8b8586cf1415a26",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.22495/cocv5i3p11",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "0047bfba9fcae20548611cd5c8b8586cf1415a26",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
11379657 | pes2o/s2orc | v3-fos-license | Mouse in Vivo Neutralization of Escherichia coli Shiga Toxin 2 with Monoclonal Antibodies
Shiga toxin-producing Escherichia coli (STEC) food contaminations pose serious health concerns and have been the subject of massive food recalls. STEC has been identified as the major cause of the life-threatening complication of hemolytic uremic syndrome (HUS). Besides supportive care, there are currently no therapeutics available. The use of antibiotics for combating pathogenic E. coli is not recommended because they have been shown to stimulate toxin production. Clearing Stx2 from the circulation could potentially lessen disease severity. In this study, we tested the in vivo neutralization of Stx2 in mice using monoclonal antibodies (mAbs). We measured the biologic half-life of Stx2 in mice and determined the distribution phase or t1/2 α to be 3 min and the clearance phase or t1/2 β to be 40 min. Neutralizing mAbs were capable of clearing Stx2 completely from intoxicated mouse blood within minutes. We also examined the persistence of these mAbs over time and showed that complete protection could be passively conferred to mice 4 weeks before exposure to Stx2. The advent of better diagnostic methods and the availability of a greater arsenal of therapeutic mAbs against Stx2 would greatly enhance treatment outcomes of life-threatening E. coli infections.
Introduction
Shiga toxin-producing Escherichia coli (STEC) encompass a group of pathogenic E. coli that represents a major public health concern worldwide. Infections with STEC occasionally result in severe symptoms of bloody diarrhea and hemolytic-uremic syndrome (HUS) [1,2], which is defined as the triad of hemolytic anemia, thrombocytopenia, and acute kidney injury [3]. Shiga toxins (Stxs) play an important role in the pathogenesis of these disorders. There are two types of Stxs produced by STEC, Stx1 and Stx2 [4]. Both are encoded by stx genes on toxin-converting lambdoid temperate bacteriophages [5] and have an AB5 structure [6]. The holotoxin has a molecular weight of about 70 kDa and consists of a single A-subunit of 32 kDa and five identical B-subunits of 7.7 kDa. The A-subunit is an enzymatically active N-glycosidase that inhibits protein synthesis by cleaving an adenine base from the 28S rRNA at position 4324 of the eukaryotic ribosomal 60S subunit, resulting in cell death [7,8]. The B-pentamer contains multiple receptor-binding sites for globotriaosyl ceramide (Gb3) [9] or globotetraosyl ceramide (Gb4) [10] expressed on mammalian cells. Despite their structural similarities, Stx1 and Stx2 exhibit significant differences in biological activities. Epidemiological and molecular typing studies indicate that STEC strains producing Stx2 are more closely associated with HUS than STEC strains producing Stx1 [11,12].
Currently, no specific protective treatment has been developed for STEC-induced HUS other than supportive therapy. The effect of antibiotics on HUS is still controversial. A study of 259 children infected with STEC indicates that antibiotic use during STEC infection enhances production and release of Stxs, which eventually increases the frequency and severity of HUS [13]. However, there is also evidence showing that some STEC strains do not release Stxs in response to therapeutic concentrations of antibiotics such as ciprofloxacin, meropenem, fosfomycin, and chloramphenicol [14]. Plasma exchange has not been shown to affect the course of the disease [15]. Novel strategies designed for disease prevention include vaccines [16], use of toxin receptor mimics [17], small molecules that block Stx-induced apoptosis [18], and antibodies against Stx [19]. Unfortunately, most of these potential therapeutics have not been tried in humans, and none of them have had any impact on the incidence and severity of human cases of STEC-induced HUS. Recently, Stx was observed in the circulation of children with STEC-HUS: Stx could bind to leukocytes for up to 1 week after the diagnosis of STEC-induced diarrhea [20], which indicates the pivotal role of the toxin in the pathogenesis of disease and justifies the use of mAbs against Stx to prevent HUS in patients infected with STEC. Similar to other toxin-induced diseases [21], little endogenous serum antibody is induced against Stxs following STEC infection [22]. Therefore, passive administration of toxin-neutralizing antibodies should be an effective therapy for HUS. A number of Stx-specific mAbs have been developed and tested for their ability to protect animals from Stx-mediated death [23][24][25][26][27][28][29][30][31][32]. However, a detailed toxicokinetic analysis of unmodified Stx2 in the presence or absence of neutralizing antibodies in an animal model has not been fully described in the literature. In this study, we tested and validated a newly developed ELISA for the sensitive detection of Stx2 in mouse sera, determined the half-lives of Stx2 in mice, and monitored the clearance of Stx2 from the circulatory system by mAbs. We also showed the efficacy of pre- and post-treatment of Stx2 intoxication with neutralizing mAbs. This information will be useful for preclinical evaluation of immunotherapeutic reagents against Stx2 as a means of protecting susceptible patients from developing HUS.
Detection of Stx2 in Mouse Serum
Currently, diagnosis of STEC infection is determined primarily through isolation of the pathogen from stool culture. STEC strains are distinguished from other E. coli strains comprising the normal intestinal flora based on chemical markers, such as the unique sorbitol-negative fermentation property of the O157 strain on isolation media [33]. However, this approach is unable to identify non-O157 STEC strains. To determine whether a bacterial isolate is a STEC, the best approach is to examine the production of Stxs. The availability of an assay that could detect Stxs directly in blood may improve the identification of individuals at high risk of HUS during and after a STEC outbreak, because of the close association of Stx with HUS [11,12]. We tried different formats of ELISAs (including direct and indirect ELISA using unlabeled primary and HRP-labeled secondary antibodies, instead of the signal-amplification avidin-biotin complex used in this study) for the detection of Stxs in sera samples and found that our newly developed ELISA [34] was at least 10-fold more sensitive than the other formats tested (data not shown). In this study, the LOD determined for Stx2 spiked in mouse sera was 10 pg/mL, with a quantification range of 10 to 1,000 pg/mL (Figure 1).
Figure 1.
Standard curve of Stx2 spiked in mouse serum. Known standards ranging from 10 to 1,000 pg/mL of Stx2 in control sera (pooled healthy mouse sera) were used to determine the concentration of Stx2 in unknown blood samples. The linear regression of the standard curve has a correlation coefficient (R2) of 1. The LOD of 10 pg/mL was determined by adding three standard deviations to the mean background signal and is denoted here with a dashed line at 5984 relative luminescent counts.
In Vivo Toxicity and Toxicokinetics of Stx2
To determine the toxicity of Stx2 in vivo, we administered the toxin intraperitoneally to Swiss Webster mice. The mouse LD50 of a commercially available Stx2 was determined to be 290 ng/kg, or about 6 ng per average-sized mouse (290 ng/kg corresponds to roughly 5.8 ng assuming a typical 20 g mouse). Intoxication with Stx2 resulted in weight loss, frequent urination (observed as increased water intake and number of wet cages), and ultimately death. Mice that survived Stx2 challenge recovered weight as well as normal urination behavior.
Little is known thus far about the in vivo toxicokinetics of naturally occurring Stx2. Using the sensitive ELISA assay described above, we were able to detect minute amounts of Stx2 in animal sera. Mice treated with 100 ng/mouse of Stx2 via iv were bled and sacrificed over time (2, 5, 10, 20, 30 min and 1, 1.5, 2, 3, 6 and 8 h, with n ≥ 5 per time point). The concentration of unknown samples was determined by ELISA using a standard curve of known samples diluted in pooled mouse sera. The half-lives, consisting of the distribution phase (t1/2 α) and the slower clearance phase (t1/2 β), were determined to be 3 min and 40 min, respectively (Figure 2). We observed no statistically significant difference between the concentrations of Stx2 recovered from sera and plasma (data not shown).
Figure 2.
Biologic half-lives of Stx2 in mouse serum. Stx2 was introduced into mice by iv. Serum was taken at 2, 5, 10, 20, 30 min and 1, 1.5, 2, 3, 6 and 8 h after intoxication, and the Stx2 concentration was determined from standard curves fit by nonlinear regression to a second-order polynomial (Prism 6). The fast distribution phase t1/2 α and slow clearance phase t1/2 β were determined using the same program. The mean values for each time point were plotted along with the standard error of the mean (SEM), with n ≥ 5.
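For illustration, the following sketch shows how a two-phase (biexponential) decay curve of this kind can be fit to serum concentration data. The arrays and starting values are placeholders constructed to mirror the reported half-lives, not the measured data, and the analysis here uses scipy rather than the Prism 6 software named in the caption:

```python
# Sketch: fitting a two-compartment (biexponential) decay model
#   C(t) = A * exp(-alpha * t) + B * exp(-beta * t)
# to serum Stx2 concentrations, then reporting the two half-lives.
# The time/concentration arrays below are illustrative placeholders
# generated to mirror the reported t1/2 values (~3 min and ~40 min).
import numpy as np
from scipy.optimize import curve_fit

def biexponential(t, A, alpha, B, beta):
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

t = np.array([2, 5, 10, 20, 30, 60, 90, 120, 180])            # minutes
conc = np.array([75, 43, 20, 9.5, 7.3, 4.3, 2.6, 1.6, 0.56])  # ng/mL (placeholder)

params, _ = curve_fit(biexponential, t, conc, p0=(100, 0.2, 12, 0.02))
A, alpha, B, beta = params
print(f"t1/2(alpha) = {np.log(2) / alpha:.1f} min")  # distribution phase, ~3 min
print(f"t1/2(beta)  = {np.log(2) / beta:.1f} min")   # clearance phase, ~40 min
```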
Protection of Mice from Stx2 with Monoclonal Antibodies
In previous studies, we developed five mAbs (Stx2-1, Stx2-2, Stx2-4, Stx2-5, and Stx2-6) for the sensitive detection of Stx2 in immunoassays [34] (and unpublished data). These mAbs were also tested for their ability to neutralize Stx2 activity in Vero cells; only mAb Stx2-5 showed significant neutralization activity in the cell-based assays [34]. In this study, we tested these mAbs for in vivo neutralization of Stx2. Mice were treated with different doses of a single mAb or a 1:1:1 combination of anti-Stx2 mAbs (Stx2-1, Stx2-2, and Stx2-5) about 30 min prior to ip administration of a lethal dose (3 ip mouse LD50) of Stx2. The survival of mice treated with mAbs or sterile PBS was plotted over time (Figure 3). In contrast to the Vero cell toxin neutralization assays, mAbs Stx2-1 and Stx2-2 protected mice well, providing complete protection from death with only 5 µg/mouse of mAbs (Figure 3A and 3B). MAb Stx2-5 provided the highest level of protection, showing full protection at 1 µg/mouse (Figure 3C). MAbs Stx2-4 and Stx2-6 did not provide significant protection from Stx2 even at 25 µg mAb/mouse, indicating that the protective effects seen with mAbs Stx2-1, -2, and -5 were not due to the general presence of mAbs (Figure 3D and 3E).
Other studies of antibody protection against botulinum toxin A have shown a substantial additive protective effect from combining two or more mAbs [21,35]. In this study, a combination of the most protective mAbs, Stx2-1, Stx2-2, and Stx2-5, conferred complete protection from Stx2 at 1 µg mAb/mouse (Figure 3F). However, this protection was not significantly greater than that of mAb Stx2-5 alone.
Survival of Mice Treated with mAbs before and after Intoxication with Stx2
To elucidate the window of opportunity for mAb protection, we investigated the efficacy of mAbs before and after toxin exposure. Treating mice iv with a combination of mAbs against Stx2 (3 µg each of mAbs Stx2-1, Stx2-2, and Stx2-5) at 2, 5, 10, 20, and 40 min after Stx2 injection conferred some degree of protection, as shown by increased time-to-death (Figure 4A). To ensure a known quantity of Stx2 was in the bloodstream before mAbs were added, the toxin was given iv. All mice treated with mAbs at 2 min post intoxication (mpi) survived; 60% and 20% of mice survived when treated at 5 and 10 mpi, respectively. All control mice treated with PBS instead of mAbs died within 5 days after intoxication (Figure 4A). Mice that received the mAb combination 30 min before Stx2 treatment showed no signs of intoxication (data not shown). Significant protection was observed when mAbs were administered before toxin exposure. Mice were treated with the same combination of mAbs at 3 to 8 weeks before injection of a lethal dose of Stx2 (18 ng/mouse by iv). All mice survived when treated with mAbs 4 weeks or less before intoxication, while 80% of mice treated with mAbs at 5 and 6 weeks before intoxication survived (Figure 4B and data not shown). Even mice treated with mAbs 7 weeks before intoxication displayed a protective effect, as shown by 20% survival and a slight increase in median survival from 86 h in the PBS control to 110 h (Figure 4B).
Clearance of Stx2 by Monoclonal Antibodies
To test whether the protection of mice from Stx2 by mAbs is due to rapid serum clearance of the toxin, we examined the toxicokinetics of Stx2 in the presence or absence of mAbs. Mice were injected with Stx2 by iv, followed two minutes later by iv introduction of the three-mAb combination (Stx2-1, Stx2-2, and Stx2-5). Sera were obtained at 2, 5, 10, 20, 30 min and 1 h, and the concentration of Stx2 at each time point was determined using the ELISA method described above. Within 3 mpi, the circulating titer of Stx2 fell from 13 ± 1.2 ng/mL in the no-treatment controls to 0.3 ± 0.05 ng/mL when mAbs were added (Figure 5). At 8 mpi, Stx2 fell from 9.3 ± 1.2 ng/mL in nontreated animals to 8 ± 3 pg/mL in mAb-treated animals, suggesting that this combination of mAbs protected mice from Stx2 intoxication by accelerating clearance of the toxin from the bloodstream.
Discussion
Stx2 is a major virulence factor of STEC associated with severe HUS. Detection of Stx is the most reliable method for diagnosis of STEC infections. However, due to the lack of sensitive detection methods, there is currently no report in the literature describing serum Stx levels in humans with STEC infection [3]. In this study, we validated a newly developed ELISA for the sensitive detection of Stx2 in mouse sera (Figure 1). Using this method [34], we were able to detect Stx2 at concentrations as low as 10 pg/mL. This method could be used in the future for the detection of Stx2 in human sera, which may aid in identifying those who might develop HUS. Previously, the biologic half-life of Stx2 had been determined with 125I-labeled Stx2 [30,36,37]; however, iodinated proteins can be biodehalogenated in vivo. Using the highly sensitive ELISA method, we determined the biological half-life of unmodified Stx2 in mice. The distribution phase t1/2 α of 3 min and the clearance phase t1/2 β of 40 min suggest that this toxin is cleared rapidly from the bloodstream (Figure 2), possibly through distribution to the kidneys or the central nervous system, where most Stx2-related damage is observed [3,30,37]. The t1/2 α values of 6 and 4 min observed in rats and mice, respectively, are comparable to those observed in this study. The t1/2 β of 2.6 h was longer in rats and was not previously reported in mice [30,36]. The highly specific mAbs developed for detection assays were tested here for in vivo protection of mice from Stx2. Previous studies have shown that these mAbs recognize different epitopes of Stx2: mAb Stx2-1 binds the A chain, while mAbs Stx2-2 and Stx2-5 mainly bind the B chain [34]. Only mAb Stx2-5 showed significant neutralization activity in the cell-based assays. We show here that mAbs Stx2-1, Stx2-2, and Stx2-5 can each individually protect mice from lethal doses of Stx2 (Figure 3). It has been reported that the ability to neutralize Stx2 in vitro does not necessarily correlate with the ability to neutralize Stx2 in vivo [38]. This discrepancy may be due to different mechanisms in the two systems. In the cell-based assays, the Stx2-specific mAbs accomplish their neutralization activity by blocking the enzymatic activity of the Stx2 A-subunit or by competing with cell receptors for receptor-binding sites on the B-subunit, resulting in reduced toxin entry into cells. In vivo, one important mechanism for antibodies circulating in the bloodstream is to bind the antigen; antibody-antigen complexes are then cleared via Fc receptors in the liver and removed from the circulation [37,39,40]. Thus, it is not necessary for the antibodies to possess the toxin-neutralizing or receptor-blocking capabilities needed in the cell-based assays.
It has been observed that the binding of multiple mAbs to a toxin molecule accelerates its clearance and increases neutralization [35,40,41]. Our results indicate that mAb Stx2-5 protected mice against Stx2 toxicity with as little as 1 μg when administered before intoxication with 3 ip LD50 of Stx2 (Figure 3). Using a combination of the most potent mAbs (Stx2-1, Stx2-2, and Stx2-5) did not increase neutralization as has been observed in antibody neutralization of other toxins [35,40]. This is likely due to the unique structure of the AB5 family of toxins: the single mAb Stx2-5 can bind each of the five B-subunits of the holotoxin, improving neutralization efficacy to about the same level as adding different mAbs.
There are seven subtypes of Stx2 (a through g) identified so far, but Stx2a, Stx2c, and Stx2d are the subtypes most closely associated with HUS [42]. These subtypes are very similar to each other at the amino acid sequence level and are recognized by any one of our mAbs used as antibody pairs in our ELISAs. Even though mAb Stx2-5 alone neutralized Stx2 well in mice, we opted to use a combination of mAbs in our neutralization assays to increase the potential clearance of other Stx2 subtypes. We predict that the combination of these mAbs will be capable of neutralizing Stx2c and Stx2d in addition to the Stx2a toxin tested in this study.
Using the combination of three mAbs against Stx2, we determined the window of opportunity for mAb rescue after systemic intoxication with Stx2. The mAb protection data closely mirrored the Stx2 biologic half-life. Mice intoxicated iv with 3 ip mouse LD50 of Stx2 were completely rescued if mAbs were administered 2 min after the toxin (Figure 4A). Our combination of mAbs cleared Stx2 within minutes of introduction into intoxicated mice (Figure 5). The window of rescue opportunity rapidly closes by 5 min after intoxication (Figure 4A), suggesting that Stx2 in free form is absorbed into target cells rapidly after entering the circulatory system. It has been reported that piglets and mice were fully protected against STEC infection when treated with Stx2-specific antibodies 24 hours after bacterial challenge, shortly after the onset of diarrhea [24,43]. These results suggest that Stx2 enters the bloodstream after the onset of initial symptoms of STEC infection. Such antibodies may also be capable of protecting humans at risk of developing HUS if administered shortly after the onset of diarrhea but before the onset of HUS. We tested the window for neutralization after intoxication to validate the toxicokinetics determined by our ELISA assays (Figure 2). Antibodies can neutralize toxins in the bloodstream but not toxins that have been absorbed by organs. To increase the efficacy of post-exposure therapy, it may be possible to use neutralizing antibody components or molecules small enough to penetrate intoxicated cells and neutralize toxins that have already been absorbed.
We tested how long pre-exposure treatment with a modest dose of mAbs (9 μg/mouse) would substantially protect mice from intoxication and found that 80% of mice were protected even when the mAbs were given 6 weeks prior to intoxication (Figure 4B). Mouse immunoglobulins have an in vivo half-life of about 6-8 days, which correlates well with the observed timing of Stx2 protection [44]. Thus, mAbs can persist in the circulation for a long time, clearing any Stx2 produced over time by pathogenic bacteria in the intestines. This is useful as a preventative measure when ingested food products are known to be contaminated or a patient has tested positive for pathogenic E. coli but has not yet shown severe symptoms. Given that most patients develop HUS within 2 weeks after STEC infection [45,46], a single effective dose of antibodies should be sufficient to prevent or treat the severe HUS complications caused by STEC infection. The use of antibiotics is not currently recommended for combating pathogenic E. coli because antibiotics are likely to induce Stx production [47]. However, the risk of developing HUS might be reduced if the use of antibiotics is combined with antibody therapy.
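As a back-of-the-envelope illustration (not a calculation from the study), simple first-order elimination with the 6-8 day half-life cited above implies that only a small fraction of the 9 μg dose remains in circulation after 6 weeks:

```python
# Persistence of a 9 ug mAb dose under simple first-order elimination,
# using the 6-8 day half-life cited for mouse IgG. Illustrative only;
# the study did not measure antibody levels over time.

def fraction_remaining(t_days: float, half_life_days: float) -> float:
    """Fraction of the initial dose still circulating after t_days."""
    return 0.5 ** (t_days / half_life_days)

dose_ug = 9.0
for half_life in (6.0, 8.0):
    frac = fraction_remaining(6 * 7, half_life)        # 6 weeks
    print(f"t1/2 = {half_life:.0f} d: {frac:.2%} remaining "
          f"(~{dose_ug * frac * 1000:.0f} ng of the {dose_ug} ug dose)")
```

Under these assumptions, roughly tens to a few hundred nanograms of mAb would still be circulating at 6 weeks, consistent with the idea that small residual amounts can still clear newly produced toxin.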
Experimental Materials
Stx2 toxin was purchased from List Biological Laboratories, Inc. (Lot #1621A1, Campbell, CA). Endotoxin levels were tested by List Biological Laboratories and found to be acceptable. Toxin was reconstituted as suggested by the manufacturer into a 100 ng/µL stock (in 50 mM Tris, 100 mM NaCl, 0.1% trehalose), aliquoted, and frozen at −80 °C until use. Monoclonal antibodies against Stx2 (Stx2-1, Stx2-2, Stx2-4, Stx2-5) were prepared as described [34]; Stx2-6 was prepared in the same manner as mAbs Stx2-1 to Stx2-5 (unpublished results). Briefly, antibodies were purified from ascites fluids and diluted in sterile phosphate-buffered saline, pH 7.4 (PBS), to the indicated doses. Female Swiss Webster mice, 4-5 weeks of age, were purchased from Charles River (Portage, MI), fed ad libitum, and housed under standard conditions. Mouse experiments were performed according to animal-use protocols approved by the Institutional Animal Care and Use Committee of the United States Department of Agriculture, Western Regional Research Center.
Determination of Mean Lethal Dose
Groups of at least 10 randomly selected mice were treated by intraperitoneal (ip) injection with 500 µL per dose of serial dilutions of Stx2 (in a range spanning high lethality to no deaths). Mice were monitored for signs of illness or death for up to 14 days post-intoxication. The mean lethal dose (LD50) was calculated by the Weil method and by the Reed and Muench method [48,49].
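For illustration, the Reed and Muench interpolation can be sketched in Python as below, using invented dose-response counts rather than the study's data; the Weil method is not shown.

```python
import math

# dose (ng/mouse) -> (dead, alive); illustrative numbers only
data = {10: (0, 6), 30: (1, 5), 100: (4, 2), 300: (6, 0)}

doses = sorted(data)
cum_dead, cum_alive, pct = {}, {}, {}
running = 0
for d in doses:                     # deaths accumulate toward higher doses
    running += data[d][0]
    cum_dead[d] = running
running = 0
for d in reversed(doses):           # survivors accumulate toward lower doses
    running += data[d][1]
    cum_alive[d] = running
for d in doses:
    pct[d] = 100 * cum_dead[d] / (cum_dead[d] + cum_alive[d])

# interpolate on log10(dose) between the doses bracketing 50% mortality
lo = max(d for d in doses if pct[d] < 50)
hi = min(d for d in doses if pct[d] >= 50)
frac = (50 - pct[lo]) / (pct[hi] - pct[lo])
log_ld50 = math.log10(lo) + frac * (math.log10(hi) - math.log10(lo))
print(f"LD50 ~ {10 ** log_ld50:.1f} ng/mouse")
```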
ELISA for Stx2
ELISA was performed as described previously [34]. Briefly, black NUNC plates were coated with mAb Stx2-1 (100 µL/well of a 5 µg/mL solution in PBS) and incubated overnight at 4 °C. Plates were then treated with 300 µL of blocking buffer containing 3% bovine serum albumin (BSA) in 0.02 M Tris-buffered saline with 0.9% NaCl, pH 7.4, and 0.05% Tween-20 (TBST) and incubated for 1 hour at 37 °C. Next, plates were washed twice with TBST. After toxin standards and samples (100 µL/well in PBS) were added, the plates were incubated for 1 hour at 37 °C and then washed six times with TBST. Next, a biotinylated detection antibody (mAb Stx2-2) was added (100 µL/well of a 100 ng/mL solution in blocking buffer). The plates were incubated for 1 hour at 37 °C, washed six times with TBST, and then 100 µL/well of a 1:20,000 dilution of streptavidin-HRP (Invitrogen, Carlsbad, CA) in blocking buffer was added. The plates were incubated for 1 hour at 37 °C. Finally, the plates were washed six times with TBST and SuperSignal West Pico Chemiluminescent Substrate (Pierce, Rockford, IL) was added. The Stx2 standards used ranged from 10 to 1,000 pg/mL, diluted in pooled mouse sera (Figure 1). The data represent the mean ± SD of three replicates at each toxin concentration and were plotted. Unknown values were determined from the linear regression. The limit of detection (LOD) was defined as the lowest toxin concentration at which the average ELISA reading was three standard deviations above the negative control.
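To make the standard-curve interpolation and LOD criterion concrete, the Python sketch below uses made-up plate readings; the exact regression form used in the study is not specified, so a linear fit on log-log axes is assumed here.

```python
import numpy as np

# Illustrative standards and readings; the study's raw plate data are not
# given, and the regression form is an assumption (log-log linear).
conc = np.array([10, 30, 100, 300, 1000])          # pg/mL standards
signal = np.array([120, 330, 1050, 3100, 10400])   # mean chemiluminescence
blank_mean, blank_sd = 35.0, 8.0                   # negative-control wells

slope, intercept = np.polyfit(np.log10(conc), np.log10(signal), 1)

def to_conc(reading: float) -> float:
    """Interpolate an unknown's concentration from the standard curve."""
    return 10 ** ((np.log10(reading) - intercept) / slope)

cutoff = blank_mean + 3 * blank_sd                 # LOD criterion from the text
lod = min(c for c, s in zip(conc, signal) if s >= cutoff)
print(f"unknown at signal 2400 ~ {to_conc(2400):.0f} pg/mL; LOD = {lod} pg/mL")
```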
Toxicokinetics of Stx2
The biologic half-lives of Stx2 were determined in the presence or absence of mAbs against Stx2. Mice were treated iv with 100 ng of Stx2 per mouse (100 µL of a 1,000 ng/mL stock). Blood from sets of at least 6 mice per time point (2, 5, 10, 20, 30 min and 1, 1.5, 2, 3, 6 and 8 h) was taken by submandibular bleeding into serum or plasma collectors (BD, San Jose, CA). Blood was incubated on ice for at least 1 h and centrifuged for 10 min at 3,000 × g to separate sera from cellular fractions. Sera were then aliquoted and frozen at −80 °C until use. Sera were also collected from untreated mice for use as controls, and pooled mouse sera and buffer were used to dilute Stx2 standards. For the mAb clearance experiment, a 100 µL sample of a 90 µg/mL mAb combination (9 µg of mAbs per mouse, comprising 3 µg each of Stx2-1, Stx2-2 and Stx2-5) in PBS buffer was administered iv 2 min after toxin. Blood samples were collected from sets of 6 mice at each time point (2, 5, 10, 20, 30 min and 1 and 2 h) as described above. Unknown Stx2 concentrations were determined from the standard curves by ELISA. The averages at each time point were plotted ± standard error of the mean (SEM), with standard curves fitted by nonlinear regression to a second-order polynomial using the GraphPad Prism 6 program. Average Stx2 values in sera at the 5 min and 1 h time points were compared with those in plasma; we found no statistically significant difference between plasma and serum values (data not shown). The half-lives were determined by fitting two-phase exponential decay over time using Prism 6.
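The Prism two-phase decay fit can be reproduced in other environments; the Python sketch below fits the biexponential model to synthetic concentration data constructed to be consistent with the reported half-lives (roughly 3 and 40 min), not the study's raw measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two-phase exponential decay model; sample data are synthetic values
# generated to match half-lives of ~3 min and ~40 min.
def biexp(t, a, alpha, b, beta):
    return a * np.exp(-alpha * t) + b * np.exp(-beta * t)

t = np.array([2, 5, 10, 20, 30, 60, 90, 120, 180, 360, 480])      # min
c = np.array([664, 375, 174, 80, 60, 35, 21, 13, 4.4, 0.20, 0.03])  # pg/mL

p0 = (700.0, 0.2, 100.0, 0.02)      # rough starting values for the fit
(a, alpha, b, beta), _ = curve_fit(biexp, t, c, p0=p0)
print(f"t1/2(alpha) = {np.log(2) / alpha:.1f} min, "
      f"t1/2(beta) = {np.log(2) / beta:.1f} min")
```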
Treatment of Mice Post-intoxication or Pre-intoxication with Stx2 mAbs
To simulate post-intoxication treatment, mice were treated iv with 100 µL of 180 ng/mL Stx2. At different time points after toxin injection (2, 5, 10, 20, 40 min), 100 µL per mouse of a combination of mAbs (9 µg/mouse, i.e., 3 µg each of Stx2-1, Stx2-2 and Stx2-5) was administered iv. To simulate pre-intoxication treatment, mice were treated iv with 100 µL of the same Stx2 mAb combination at 3, 4, 5, 6, 7, and 8 weeks prior to iv treatment with 100 µL of 180 ng/mL Stx2. Mice were then monitored for at least 14 days post-intoxication. | 2016-03-01T03:19:46.873Z | 2013-10-01T00:00:00.000 | {
"year": 2013,
"sha1": "cbb563ea109de59c9b0e445e2e42b330c49312bb",
"oa_license": "CCBY",
"oa_url": "http://www.mdpi.com/2072-6651/5/10/1845/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cbb563ea109de59c9b0e445e2e42b330c49312bb",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
88539912 | pes2o/s2orc | v3-fos-license | Relationships Among Wood Variables in Two Species of Ring-Porous Trees
One way of assessing the functional significance of wood-anatomical variables is by examining the relationships among these variables. This paper presents results of factor analysis of wood variables in two species of ring-porous trees (Quercus rubra and Fraxinus americana). Factor analysis of vessel diameter and density, conductive area, and conductivity in the early- and latewood, plus width of the early- and latewood increment, reveals from three to four independent sources of variance. Generally, these can be characterized as diameter-related factors in the early- and latewood, tentatively related to water conduction, and a factor identified with width of the latewood increment and density of the latewood vessels, which may be a generalized representation of growth. Individual correlations among the variables show that variation in ring width is almost entirely variation in width of the latewood portion of the ring and that ring width (or latewood width) varies with the latewood characteristics (being positively correlated with vessel diameter and inversely correlated with vessel density). Vessel diameter and density are inversely correlated, but only in the latewood.
INTRODUCTION
The way in which wood structure should be characterized functionally is far from clear. Among the aspects of the wood that have been cited as significant in water conduction or response to water stress are vessel diameter and density (Carlquist 1975), percent of area taken up by vessels (conductive area; Carlquist 1984), and sum of the vessel diameters to the fourth power (proportional to conductivity; Zimmermann 1983). The latter measure is a representation of flow through a series of pipes in parallel; because of the dependence on the fourth power of the diameter, the larger conduits contribute disproportionately to the flow. Whether flow through vessels can be approximated in this way has been questioned, since vessels are known to twist around the trunk, anastomose, and have constrictions along their length. One way of evaluating the functional significance of these variables in determining flow rates is by measuring flow and comparing measured rates to rates calculated from the wood anatomy. Such measurements have yielded values both considerably less than and approximating calculated values of conductivity (Zimmermann 1983; Salleo 1984; Ellmore and Ewers 1986).
Other approaches to this problem are possible. Baas (1986) mentions that spatial variance can be studied a) within species, b) among species, and c) within local floras. Identifiable patterns of spatial variance (such as that for vessel diameter and density; Carlquist 1975) can be related to broad climatic controls and in this way give information about wood function. Another type of variance that is relatively easy to study in woody plants is temporal variance, and yearly variations in wood structure do appear to be affected by climatic factors (Eckstein and Frisse 1982; Woodcock 1989a).
The focus of the present study is the relationships among wood-anatomical variables of possible functional significance, viewed from the standpoint of their temporal variance. Limiting the study to 20-year sequences from two trees made it possible to obtain values for all the anatomical variables cited above, many of which are quite tedious to measure or calculate. Objectives are evaluation of the various anatomical variables in terms of their interrelationships and identification of the number of sources of variance present within the wood. Of additional interest is the way in which wood characteristics vary with width of the growth increment.
The trees investigated, Quercus rubra L. and Fraxinus americana L., have wood of the ring-porous type. These species were chosen because they are native to the study area (southeastern Nebraska) and are in addition wide-ranging. Because these trees produce two types of wood during the year, all of the variables cited above can be measured in both the early-and latewood. Diagrams of a typical transverse section through these two woods are presented in Figure 1. The distribution by size of the vessels, also presented in Figure 1, shows the two distinct populations of vessels that are present within one annual ring in these two species.
MATERIALS AND METHODS
Breast-height tree cores were obtained from two species of ring-porous trees (Quercus rubra and Fraxinus americana) growing near Lincoln, Nebraska. The individuals cored were both canopy trees approximately 40 years old. The cores were thin-sectioned and mounted for light microscopy. Measurements were obtained by means of a microscope equipped with an ocular micrometer within an area of uniform width (approximately 5 mm) extending across the rings. All the cells within this area were measured; the number of cells thus varied from ring to ring but was in all cases greater than 30. A 20-year sequence (1966-1985) from each individual (one core) is analyzed in each case. The variables measured (or calculated) (see Appendix) are average vessel diameter, vessel density, conductive area, and conductivity in both the early- and latewood. Since yearly variation in conductivity is equivalent to yearly variation in the sum of the diameters to the fourth power calculated with respect to area, the term is used in this sense here (strictly speaking, conductivity is only proportional to the sum of the diameters to the fourth power, since several other quantities figure in the equation). Difficulties in distinguishing early- and latewood in ring-porous trees have been discussed elsewhere (Woodcock 1989b); presence of an abrupt shift in vessel size across the ring or greater contiguity of vessels within the earlywood increment was used to delineate the two different parts of the ring. Where vessels deviated from circular in cross section, the long and short axes were averaged. The other anatomical variables (as, for example, density or conductivity) are calculated with respect to a transsectional area (with the large rays of Q. rubra not included in total area). Width of the entire growth increment and of the early- and latewood is also included in the analysis. Average values of the variables, together with their coefficients of variation, are presented in Table 1.
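As a concrete illustration of how the per-ring variables are derived from the diameter measurements, the Python sketch below computes vessel density, conductive area, and the conductivity index (sum of diameters to the fourth power per unit area) for a hypothetical set of vessels, together with the mean-diameter approximations discussed later in the paper.

```python
import numpy as np

# Hypothetical vessel diameters (um) measured within a transection of
# known area; two size classes mimic early- vs latewood vessels.
diams_um = np.array([210, 195, 240, 225, 60, 55, 48, 52, 70, 65])
area_mm2 = 4.0

d = diams_um / 1000.0                                        # to mm
density = d.size / area_mm2                                  # vessels per mm^2
conductive_area = np.sum(np.pi * (d / 2) ** 2) / area_mm2    # fraction of area
conductivity = np.sum(d ** 4) / area_mm2                     # mm^4 per mm^2

# "Average" versions computed from the mean diameter alone; these match
# the exact values only when diameters vary little.
avg_cond_area = density * np.pi * (d.mean() / 2) ** 2
avg_conductivity = density * d.mean() ** 4
print(f"conductive area: {conductive_area:.4f} (approx {avg_cond_area:.4f})")
print(f"conductivity:    {conductivity:.6f} (approx {avg_conductivity:.6f})")
```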
The statistical treatment consisted of factor analysis (Biomedical Data Program 4M) of the included wood variables in these two species. This is the appropriate type of procedure when the proportioning of the shared variance is of interest, although it should be recognized that the results are only one representation of the data rather than a unique solution. From the set of 10 variables in each tree, a correlation matrix is produced representing the common sources of variance among the variables. Linear transformation of the correlation matrix yields the factors, which represent the independent sources of variance among the variables. All the variables but total ring width are included in the analysis; high correlations between total width and latewood width did not permit transformation of the correlation matrix with this variable included. Orthogonal rotation of the factors emphasizes the high-loading variables and helps in interpretation. This type of analysis permits identification of those variables most closely associated with the different factors. It is also possible in many cases to interpret the factors in terms of function. Other statistical results presented here are correlations among selected variables.
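The BMDP 4M run itself is not reproducible here, but the general workflow (standardize to work from the correlation structure, extract factors, apply an orthogonal varimax rotation, inspect loadings) can be sketched in Python with scikit-learn; the data below are random stand-ins, so the loadings will not match the study's.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# rows = 20 years, columns = the 9 wood variables entered in the analysis
# (total ring width excluded, as in the study); X is stand-in data.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 9))

Z = StandardScaler().fit_transform(X)    # standardize, as with a correlation matrix
fa = FactorAnalysis(n_components=4, rotation="varimax").fit(Z)
loadings = fa.components_.T              # variables x factors
print(np.round(loadings, 2))             # inspect high-loading variables
```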
Factor Analysis
Factor analysis of wood variables in Quercus rubra reveals that four independent axes of variance can be recognized among the variables (Table 2). Two of these factors relate to the earlywood and two to the latewood. Factor 1 is identified most closely with diameter-related characteristics of the earlywood (average vessel diameter, conductive area, and conductivity). Factor 2 is identified primarily with latewood vessel density and secondarily with other characteristics of the latewood (width and vessel diameter). Factor 3 is identified with diameter-related characteristics of the latewood (conductive area, primarily). Factor 4 can be identified with earlywood vessel density. Other points are that latewood width varies closely with latewood anatomical characteristics, whereas this is not so clearly the case with earlywood width.
In Fraxinus americana, three independent axes of variance can be recognized (Table 3), one relating to the latewood and two to the earlywood. Factor 1 is related most closely to latewood vessel density and is also related to other latewood characteristics (conductive area, vessel diameter, and width). Factor 2 is related most closely to earlywood vessel density and other earlywood characteristics (conductive area and conductivity). Factor 3 is identified mainly with earlywood vessel diameter. Factor 1, which represents latewood characteristics, explains approximately half of the total variance in the data set. As is the case in Q. rubra, latewood width is related to latewood anatomical characteristics. Clearly, many of the variables investigated in these two species are closely related. Among the 10 variables, only three to four independent axes of variance are represented. In both species, two independent axes of variance are represented among the earlywood characteristics. The latewood contains essentially one source of variance in F. americana and two sources in Q. rubra. A tentative functional interpretation is as follows. The diameter-related factors probably relate to flow characteristics. In this sense, then, separate flow-related factors can be recognized within the early- and latewood. A third factor that can be recognized in both species relates to latewood characteristics (anatomical variables and latewood width). This factor may be considered a generalized representation of growth (yield), as influenced by total photosynthate produced, although mechanical considerations relating to support may also play a role in determining the amount of wood produced.
Correlations between Selected Variables
Earlywood width, latewood width, and total width. - In these ring-porous woods, width of the entire ring and width of the latewood are very highly correlated, and in fact statistically would be considered the same variable (Table 4). Thus in both species, variation in width of the ring from year to year is almost entirely variation in the amount of latewood produced. In F. americana, width of the earlywood increment varies with latewood width, so that both parts of the increment are positively related to total width. In Q. rubra, on the other hand, the amount of earlywood produced is independent of width and latewood width. This latter finding is consistent with the observation that, in some ring-porous oaks, the amount of earlywood produced is not significantly affected by precipitation amounts, with very dry years being marked by production of earlywood only (Phipps 1967; Woodcock 1989a). If the interpretation of earlywood as an advanced adaptation is correct (Chalk 1937), then these trees have developed a high degree of reliance on this adaptive characteristic. The large vessels of the earlywood are generally thought to ensure adequate flow during the early part of the year when the leaves are expanding, and ring-porosity is coupled to a growth pattern in which the leaves emerge during a relatively short period and the need for water may be particularly high (Lechowicz 1984).
Width and the anatomical variables. - Since width is the most widely used measure of growth in trees, the relationships between ring width and wood characteristics are of special interest. In both species examined here, ring width shows significant relationships to latewood characteristics but is not significantly related to earlywood characteristics (Table 5). In both cases, width is positively correlated with vessel diameter and inversely correlated with vessel density in the latewood.
These variables are, however, all interrelated (they appear on the same factor in the factor analysis). That is, latewood vessel diameter and density also exhibit significant correlations. One way of assessing these relationships is by means of partial correlation analysis. When this is done (Table 6), it can be seen that the significant relationships between variables, with the effects of the other variables controlled for, are between width and latewood density in Q. rubra and between latewood vessel diameter and density in F. americana.
The diameter measures. - Table 7 presents correlations between vessel diameter and three other variables: vessel density, conductive area (percent of cross-sectional area taken up by vessels), and conductivity (sum of the vessel diameters to the fourth power). Vessel diameter is negatively correlated with density in the latewood only. The absence of significant correlations between diameter and density in the earlywood of these two species is somewhat counter to expectation; it is in the earlywood, with its large, relatively closely spaced vessels, that packing constraints would be expected to come into play. Evidently, the earlywood vessels are not sufficiently tightly packed for this to be the case. Carlquist (1977) associates vessel diameter and density with vulnerability to water stress, since vessel size influences susceptibility to embolism and vessel density determines the availability of backup conduits should some conductive elements become nonfunctional. The trade-off between conductive efficiency and safety thus leads to some degree of covariance between these two variables. The general pattern of covariance that can be recognized between these two variables on a spatial basis (along a gradient from mesic to xeric; Carlquist 1975) is seen here only in the latewood, a result that suggests that the earlywood is adapted for efficiency alone. Vessel diameter exhibits a significant positive correlation with conductive area in the early- and latewood of Q. rubra. In F. americana, on the other hand, the relationship is positive in the earlywood and negative in the latewood. Vessel diameter is significantly correlated with the sum of the vessel diameters to the fourth power (conductivity) in the earlywood of both species. Both conductive area and conductivity are calculated from diameter measurements. Although not expressed in the same terms (since conductive area is a percentage measure and conductivity is represented in mm^4 per unit area), they are similar in the sense that conductivity is the sum of the diameters to the fourth power and the variance in conductive area is variance in the sum of the diameters squared. That is, both variables are dependent on a summed representation of vessel diameter. Both conductive area (expressed in absolute terms as the vessel lumina cross-sectional area of Salleo, Lo Gullo, and Oliveri 1985) and conductivity (the theoretical relative conductance of Ellmore and Ewers 1986) have been investigated in studies of hydraulic properties of wood. Results presented here suggest that although these variables, and average vessel diameter, may be related in some cases, they are in general not closely related and should be considered distinct. Since conductive area and conductivity are more difficult to measure than average vessel diameter, it would be appealing to approximate these measures, representing conductive area, for instance, as average vessel diameter times vessel density (Carlquist 1988). The validity of these approximations will depend on the degree of variability in vessel size. The data collected here permit comparison of actual and approximated values (average conductive area and average conductivity, calculated from average vessel diameter) of these variables. In the case of the two species studied here, conductive area and average conductive area are highly correlated (>0.9) in both the early- and latewood. Conductivity and average conductivity show correspondences ranging from 0.94 (P < 0.001) to 0.59 (P = 0.003).
(A better approximation to conductivity may be the diameter of the largest vessel; in bur oak, the diameter of the largest earlywood vessel and conductivity have a correlation coefficient of 0.84 (P = 0.001); Woodcock 1987.)
SUMMARY AND CONCLUSIONS
Several findings are significant with respect to interpretation of ring-porosity as an adaptation: 1) independence of early- and latewood variables in terms of their temporal variance; 2) presence of a significant inverse relationship between vessel diameter and density in the latewood only; and 3) variance of latewood anatomical characteristics with total ring width. Latewood characteristics are thus affected by the same conditions that influence total growth while at the same time displaying a trade-off between efficiency and safety (covariance of vessel diameter and density). Earlywood characteristics, on the other hand, do not display these relationships and may have a different response to the environment, consistent with the idea that this wood may be functionally important during a relatively short period of the year.
Of the several sources of variance that are present, one (identified with latewood variables and width) can be related to yield and others (identified with diameter measures) probably relate to water conduction. The diameter measures, within either the early-or latewood, are largely interrelated, so no clear-cut answer as to the functional significance of average diameter vs. conductive area vs. conductivity is possible on the basis of these results. The patterns of temporal covariance identified here in some cases parallel those seen on the spatial scale and in some cases do not.
The sources of variance present within the wood can be represented by the three or four variables most closely identified with the factors. Choice of variables for further analysis may, however, also be influenced by ease and lack of ambiguity of measurement. In the trees studied here, earlywood vessel diameter and either latewood vessel diameter or ring width are representative of more than half of the total variance. Although conductivity may be the significant factor in representing volumetric flow through a tree and be important in experimental work, difficulties in measuring or approximating this quantity may mean that studies of spatial variance should focus on vessel diameter and the range of vessel sizes present, perhaps in conjunction with features such as grouping of vessels and occurrence of the ring-porous vs. diffuse-porous condition. | 2017-02-17T08:44:35.884Z | 1989-01-01T00:00:00.000 | {
"year": 1989,
"sha1": "87cae34b308996244286a8affa36884844673319",
"oa_license": "CCBY",
"oa_url": "https://scholarship.claremont.edu/cgi/viewcontent.cgi?article=1457&context=aliso",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "87cae34b308996244286a8affa36884844673319",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
6771008 | pes2o/s2orc | v3-fos-license | The prevalence and risk factors of cytomegalovirus infection in inflammatory bowel disease in Wuhan, Central China
Background The etiology of inflammatory bowel disease (IBD) is not clear, and cytomegalovirus (CMV) infection is often associated with IBD. The etiologic link between IBD and CMV infection needs to be studied. The objective of the present study is to investigate the prevalence and risk factors of CMV infection in a cohort of IBD patients from Central China. Methods Two hundred and twenty-six IBD patients (189 with ulcerative colitis (UC) and 37 with Crohn's disease (CD)) and 290 age- and sex-matched healthy controls were recruited. CMV DNA was detected by nested PCR, while serum anti-CMV IgG and anti-CMV IgM were determined by ELISAs. Colonoscopy/enteroscopy with biopsy of diseased tissues and subsequent H&E staining were then conducted in IBD patients with positive anti-CMV IgM. Finally, we analyzed the prevalence and clinical risk factors of CMV infection in IBD patients. Results The prevalence of CMV DNA and the anti-CMV IgG positive rate in IBD patients were 84.07% and 76.11%, respectively, higher than those in healthy controls (59.66% and 50.69%, respectively, P < 0.05). However, the anti-CMV IgM positive rate was not different from that of healthy controls (1.77% vs 0.34%, P = 0.235). In univariate analysis of risk factors, the recent use of corticosteroids was associated with an increase in the CMV DNA and IgM positive rates in UC (P = 0.035 and P = 0.015, respectively), aminosalicylic acid drug therapy was correlated with positivity of CMV DNA and IgG in UC and CMV DNA in CD (P = 0.041, P < 0.001 and P = 0.014, respectively), and treatment with immunosuppressants was correlated with CMV IgM positivity (P < 0.001). Furthermore, severe UC was significantly associated with CMV DNA and IgM positivity (P = 0.048 and P = 0.031, respectively). Malnutrition (albumin < 35 g/L) was also found to be related to recent CMV infection (P = 0.031). In multivariate analysis of risk factors in UC, pancolitis was significantly associated with CMV DNA positivity (P = 0.001), and severe UC and pancolitis seemed to be related to IgG positivity. For CD, only a single factor was associated with CMV positivity in each group, so multivariate analysis was unnecessary. Conclusions The CMV positive rate in IBD patients was significantly higher than in healthy controls. In univariate analysis of risk factors, patients using aminosalicylic acid, corticosteroids, or immunosuppressants, and those with pancolitis or severe IBD, seemed to be more susceptible to CMV infection. However, no risk factor was found to be significantly correlated with CMV infection in the multivariate analysis of risk factors.
Introduction
Inflammatory bowel disease (IBD), including ulcerative colitis (UC) and Crohn's disease (CD), consists of chronic, non-specific inflammatory diseases of the gut with unknown etiology. According to a retrospective survey, the incidence and prevalence of IBD in China are on the rise [1], and IBD has gradually become one of the refractory intestinal diseases. Even though the etiology of IBD is unknown, recent studies have shown that its pathogenesis is related to susceptibility genes, immune dysregulation, and the gut microbiota.
Cytomegalovirus (CMV) is a β-herpesvirus with double-stranded DNA. Worldwide, the current infection rate ranges between 40% and 100% [2]. Two studies in the USA demonstrated that the CMV positive rate was 21-34% in patients with acute severe colitis, 33-36% in corticosteroid-refractory cases, and 10% in active UC [3,4]. In Egypt, the CMV positive rate was 34.8% in corticosteroid-refractory IBD patients [2]. However, the prevalence of CMV in Chinese patients with IBD has not been reported in the literature until now.
CMV infection in IBD patients often complicates clinical diagnosis and treatment. Maher et al. [2] have shown that CMV-positive IBD patients were more likely to have fever, cervical lymphadenopathy, splenomegaly, leucopenia, thrombocytopenia, and pancolitis than CMV-negative ones. Kandiel et al. [3] used antiviral drugs for the treatment of acute severe CMV-positive colitis and achieved a remission rate of 67-100%. However, Lévêque et al. [5] found no relationship between CMV viral load and disease severity in patients with active IBD; of 7 CMV-positive patients treated with immunosuppressants and no antiviral therapy, remission was achieved in 5. de Saussure et al. [6] treated 3 CMV-positive IBD cases with antiviral therapy, and only 1 patient achieved remission. Recently, a study on cytomegalovirus infection in IBD patients undergoing anti-TNF-α antibody treatment demonstrated that active CMV infection did not progress to CMV infection/disease following infliximab therapy; the response to infliximab therapy did not appear to be influenced by, or to influence the course of, CMV infection/disease [7]. These studies help establish a link between CMV infection and refractory IBD. However, due to the small cohorts in some studies, further studies with larger cohorts need to be conducted in order to find a conclusive relationship between CMV and IBD.
Nested PCR of CMV-UL93 is considered a highly sensitive method for the detection of CMV [8]. Serum anti-CMV IgG and IgM are widely used in practice. IgG antibodies against CMV should be assayed at the first visit, when the diagnosis of IBD is confirmed, in order to clarify whether the patient is at risk of primary infection (IgG negative) or of reactivation/reinfection (IgG positive). IgM antibodies to CMV, by contrast, are detected in primary infection, reactivation, or reinfection, indicating that the CMV infection is in an active stage. With primary infection, an early IgM antibody rise occurs and becomes detectable in the blood within the first week of infection. Anti-CMV IgM increases within 2 weeks of infection and can remain positive for 3 months to 2 years. Its sensitivity and specificity for CMV infection can reach up to 100% and 98.6%, respectively [9].
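Since the predictive value of a positive IgM result depends on the prevalence of active infection as well as on sensitivity and specificity, the short Python sketch below applies Bayes' rule to the cited figures (sensitivity 100%, specificity 98.6%) across illustrative prevalences; the prevalences are assumptions for illustration only.

```python
def ppv(sens: float, spec: float, prev: float) -> float:
    """Positive predictive value from Bayes' rule."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

# Sensitivity/specificity cited for anti-CMV IgM; prevalences illustrative.
for prev in (0.02, 0.10, 0.30):
    print(f"prevalence {prev:.0%}: PPV = {ppv(1.00, 0.986, prev):.2f}")
```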
In this study, PCR detection of CMV DNA and serological determination of IgG and IgM were used to investigate the prevalence of CMV infection in IBD patients from central China. Moreover, risk factors for CMV infection in IBD patients were analyzed to explore the relationship between IBD and CMV infection.
Patients and healthy controls
From 2006 to 2011, 226 IBD patients (189 with UC and 37 with CD) were recruited from Zhongnan Hospital of Wuhan University School of Medicine. The diagnosis of UC and CD was based on clinical, laboratory, imaging, endoscopic and histological examinations, in accordance with the Chinese Medical Association diagnostic criteria for IBD [10]. Clinical disease activity of UC and CD was assessed by the Truelove and Witts criteria [11] and the Crohn's disease activity index (CDAI) [12], respectively.
Simultaneously, 290 sex- and age-matched healthy volunteers (controls) who attended the Zhongnan Hospital Medical Center of Wuhan University for routine health examinations were recruited. All volunteers were unrelated individuals from Hubei province and had no history of IBD, chronic infectious diseases, or immune-mediated, ischemic, or radiation-induced diseases. Subjects with a history of corticosteroid or immunosuppressive agent use, drug abuse, or unhealthy living habits were excluded from the study.
Definitions
The patients and controls were classified as current smokers if they had smoked more than 1 cigarette per day within the 6 months before recruitment, and as nonsmokers if they never or rarely smoked. For alcohol drinking before the onset of symptoms, 3 categories were used: frequent drinking, light drinking, and former drinking. Frequent drinking was defined as drinking alcohol on 3 or more days per week for 6 continuous months before recruitment, excluding those who drank to intoxication every day. Light drinking was defined as consuming alcoholic beverages on fewer than 3 days per week. Former drinking was defined as having quit drinking more than 6 months before recruitment. All three categories were classified as drinking; non-drinking was defined as never or rarely drinking. A mixed diet was defined as a diet including both vegetables and meat for at least 6 months before recruitment. The severity of UC was assessed by the Truelove and Witts criteria [11]. Severe: diarrhoea (six or more motions a day) with macroscopic blood in stools; fever (mean evening temperature more than 37.5°C, or a temperature of 37.8°C or more on at least two days out of four); tachycardia (mean pulse rate more than 90 per minute); anaemia (haemoglobin 75% or less, with allowance made for recent transfusion); ESR much raised (more than 30 mm in one hour). Mild: diarrhoea (four or fewer motions a day) with no more than small amounts of macroscopic blood in stools; no fever; no tachycardia; anaemia not severe; ESR not raised above 30 mm in one hour. Moderate: intermediate between severe and mild [11]. The severity of CD was classified by the Best CDAI: index values between 150 and 220 are associated with mild disease, values between 220 and 450 with moderate disease, and values above 450 with severe disease [12]. Drug use was defined as taking the relevant drugs for a period of at least 2 months before recruitment.
Ethics statement
The ethics committee of Zhongnan Hospital of Wuhan University approved the study. Written informed consent was obtained from all participants involved in this study.
Extraction of DNA
5 ml of venous blood was taken from all subjects in anticoagulated tubes with ethylenediamine tetraacetic acid (EDTA), followed by centrifugation. 2 ml of sera was taken and stored at −80 °C for subsequent anti-CMV IgG and IgM assays. Genomic DNA was extracted using the proteinase K and phenol/chloroform method and then stored at −80 °C.
CMV-UL93 fragment detection
The CMV-UL93 fragment was retrieved from NCBI and BLAST and imported into Primer 5.0, following the primer design principles. The outer primers consisted of an upstream primer 5′-GGCAGCTATCGTGACTGGGA-3′ and a downstream primer 5′-GATCCGACCCATTGTCTAAA-3′. PCR conditions were 95°C for 10 min, followed by 40 cycles of 95°C for 30 s, 57°C for 30 s, and 72°C for 60 s, with a final extension at 72°C for 10 min. The inner primers were an upstream primer 5′-TTAGCGCGTGACCTGTTACG-3′ and a downstream primer 5′-TCTAAATTGTTACGCAGTCCG-3′. PCR conditions were 95°C for 10 min, followed by 40 cycles of 95°C for 30 s, 58°C for 30 s, and 72°C for 60 s, with a final extension at 72°C for 10 min. Electrophoresis of the inner (nested) PCR products was then performed on a 2.5% agarose gel to identify the products. Direct sequencing of the PCR products was performed to confirm positive and negative results. DNA-PCR positivity was determined from the electrophoresis of the nested PCR and confirmed by DNA sequencing.
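As a quick sanity check on the primer sets (not part of the study's protocol), the Python sketch below computes GC content and a rough Wallace-rule melting temperature for each primer; the Wallace rule is only a coarse estimate for 20-mers.

```python
primers = {
    "outer_F": "GGCAGCTATCGTGACTGGGA",
    "outer_R": "GATCCGACCCATTGTCTAAA",
    "inner_F": "TTAGCGCGTGACCTGTTACG",
    "inner_R": "TCTAAATTGTTACGCAGTCCG",
}

for name, seq in primers.items():
    gc = sum(seq.count(base) for base in "GC")
    at = len(seq) - gc
    tm = 2 * at + 4 * gc          # Wallace rule: rough Tm in Celsius
    print(f"{name}: {len(seq)} nt, GC {gc / len(seq):.0%}, Tm ~ {tm} C")
```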
Serum anti-CMV IgG and IgM detection
An ELISA kit (DIESSE Diagnostica Senese, Italy) was used to detect serum anti-CMV IgG and IgM in IBD patients and healthy controls. CMV IgG and IgM positivity were defined according to the kit instructions: positive if the optical density (OD) ratio of the sample to the standard threshold value was greater than 1.1, and negative if below 0.9.
Histology and hematoxylin and eosin (H&E) staining
Colonoscopy and/or enteroscopy were conducted in anti-CMV IgM-positive IBD patients, and biopsies from pathological sites were taken. H&E staining for the detection of CMV was done.
Statistical analysis
SPSS 13.0 software (SPSS for Windows version 13.0, Chicago, IL, USA) was used to conduct the statistical analysis. Measurement data are presented as mean ± standard deviation (SD), and enumeration data are expressed as percentages and numbers of cases. Homogeneity of variance between independent samples was assessed by Levene's test. The χ2 (chi-square) test with Yates continuity correction or Fisher's exact test was performed to compare frequencies of risk factors between the IBD patients and healthy controls. Multiple logistic regression analysis was performed to evaluate multiple risk factors for IBD. Odds ratios (OR) and 95% confidence intervals (CI) were calculated. All calculated P-values were 2-sided, and P < 0.05 was considered significant.
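For readers without SPSS, the same tests can be sketched in Python; the 2×2 counts below are back-calculated from the positive rates reported later (84.07% of 226 IBD patients and 59.66% of 290 controls for CMV DNA), and the logistic-regression block runs on stand-in data purely to show how ORs and 95% CIs are derived.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact
import statsmodels.api as sm

# 2x2 table: CMV DNA positive / negative in IBD patients vs controls,
# counts implied by the reported rates (190/226 and 173/290).
table = np.array([[190, 36], [173, 117]])
chi2, p, _, _ = chi2_contingency(table, correction=True)  # Yates correction
odds, p_fisher = fisher_exact(table)
print(f"chi2 p = {p:.2e}, Fisher p = {p_fisher:.2e}, OR = {odds:.2f}")

# Multiple logistic regression on stand-in data: ORs and 95% CIs come
# from exponentiating the coefficients and their confidence bounds.
rng = np.random.default_rng(1)
X = sm.add_constant(rng.normal(size=(226, 3)))   # e.g. 3 candidate factors
y = rng.integers(0, 2, size=226)
fit = sm.Logit(y, X).fit(disp=0)
print(np.exp(fit.params))                        # odds ratios
print(np.exp(fit.conf_int()))                    # 95% CIs
```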
Demographic and clinical profile
As shown in Table 1, patients with inflammatory bowel disease and healthy controls were age- and sex-matched (P > 0.05); their demographic and clinical profiles are included in the table.
CMV-UL93 fragment and CMV IgG, IgM detection
As shown in Table 2, the prevalence of CMV-UL93 and anti-CMV IgG was significantly higher in IBD patients than in healthy controls (84.07% vs 59.66%, P < 0.001; 76.11% vs 50.69%, P < 0.001). However, the prevalence of anti-CMV IgM did not differ from that in healthy controls (1.77% vs 0.34%, P = 0.235). For UC patients, CMV-UL93 and anti-CMV IgG were both significantly higher than in healthy controls (P < 0.001 and P < 0.001, respectively), while there was no difference between UC patients and healthy controls for anti-CMV IgM (P = 0.344). For CD patients, CMV-UL93 and anti-CMV IgG were higher than in controls (P = 0.027 and P < 0.001, respectively), while anti-CMV IgM was not increased compared to healthy controls (P = 0.540). However, in biopsies taken from the pathological sites of the intestinal mucosa of anti-CMV IgM-positive IBD patients, no inclusion bodies were detected by H&E staining.
Univariate analysis of risk factors for CMV positivity in patients with IBD
In UC patients, CMV DNA positivity was mainly associated with the severity of disease activity (P = 0.048) and with the use of 5-ASA/SASP (5-aminosalicylic acid/salicylazosulfapyridine) and corticosteroid therapy (P = 0.041 and P = 0.035, respectively), while other factors, such as age, sex, smoking, alcohol consumption, type of diet, fever, anemia, albumin level, disease location, and treatment with immunosuppressive agents, showed no significant association (P > 0.05). As for CD patients, CMV DNA positivity was positively associated with the use of 5-ASA/SASP (P = 0.014), as seen in Table 3.
In UC patients, the anti-CMV IgG positive rate was associated with the use of 5-aminosalicylic acid (P < 0.001), as shown in Table 4. CD patients on a vegetarian diet had a much lower anti-CMV IgG positive rate than those on a non-vegetarian diet (P = 0.010). Other factors had no statistically significant impact on the anti-CMV IgG positive rate (P > 0.05).
The positive rate of anti-CMV IgM in UC patients was associated with a low albumin level (P = 0.031), severe UC (P = 0.031), the use of corticosteroids (P = 0.015), and the use of immunosuppressive agents (P < 0.001), while other factors did not cause any statistically significant changes in the anti-CMV IgM positive rate (P > 0.05). In CD patients, no statistically significant association with anti-CMV IgM was found, as shown in Table 5.
Multivariate analysis by logistic regression for CMV positivity in IBD
As shown in Table 6, in the multivariate analysis of risk factors in UC, pancolitis was significantly associated with CMV DNA positivity (P = 0.001), and severe UC and pancolitis appeared to be related to IgG positivity (P = 0.021 and P = 0.017, respectively). For CD, only a single factor was associated with CMV positivity in each group, so multivariate analysis was unnecessary.
Discussion
CMV is an opportunistic pathogenic microorganism. In vivo, it can proliferate in epithelial cells, white blood cells, and sperm cells, and it tends to cause latent infection of the salivary glands, mammary glands, kidneys, and white blood cells. In IBD patients, immunosuppressive therapy, impaired absorption of nutrients, and dysfunction of the immune system render them susceptible to CMV infection [13]. This is consistent with the high infection rate of CMV in immunosuppressed individuals, such as those with acquired immunodeficiency syndrome (AIDS), transplant recipients, and cancer patients undergoing chemotherapy, whereas infection is rare in immunocompetent individuals.
The detection methods for CMV infection in IBD patients in this study included DNA detection, serological tests (serum anti-CMV IgM and IgG), and histopathology (inclusion body detection). CMV culture of body fluids or tissue samples is feasible, but it is time-consuming and has low sensitivity, which limits its clinical application [14]. The detection of CMV DNA is considered the most sensitive method [15], but it is associated with false positive results, decreasing its specificity. An increase in serum anti-CMV IgM occurs in recent CMV infection and has high sensitivity and specificity [9], while anti-CMV IgG indicates past CMV infection. The sensitivity of H&E staining is 10% to 87% [3]; CMV inclusion bodies can be found in biopsy specimens from colon with inflamed and ulcerated mucosa [16]. We used the CMV DNA-specific fragment UL93, anti-CMV IgG, and IgM to detect CMV infection in IBD patients and healthy controls. Positive rates in IBD patients were 84.07%, 76.11%, and 1.77% for CMV DNA, anti-CMV IgG, and IgM, respectively, and 59.66%, 50.69%, and 0.34% in healthy controls. CMV-UL93 and anti-CMV IgG were significantly higher in IBD patients than in controls, indicating an association between IBD and CMV.
The positive rate of CMV UL-93 or anti-CMV IgG in the healthy controls was about 50-60%, whereas in IBD patients it was around 70-80%. A study conducted in India showed a CMV DNA positive rate of just 12.70% in IBD patients [17], remarkably lower than in our study. The small number of subjects enrolled (63 IBD patients) could account for the low positive rate in the Indian study. Another study, done in France, showed a 60% positive rate in IBD patients [18], which is consistent with our findings. In developed countries, the anti-CMV IgG positive rate has been found to be above 70% [19].
The anti-CMV IgM positive rate was only 1.77% in our study, lower than the 9.52% reported in an Indian study that included IBD patients with both active disease and disease in remission [17]. The anti-CMV IgM positive rate in healthy controls (0.34%) did not differ from that in IBD patients (P = 0.235), which may be related to the small number of IgM-positive subjects.
The presence of anti-CMV IgM indicated recent CMV infection. However, in biopsies taken from the pathological sites of the intestinal mucosa of anti-CMV IgM-positive IBD patients, no inclusion bodies were detected. This was probably due to the low sensitivity of histological examination. The site from which the biopsy was taken and the amount of tissue retrieved may also influence the sensitivity of finding inclusion bodies. In the next stage, we should collect sufficient biopsies and screen for CMV by immunohistochemistry.
In a multivariate model adjusting for multiple factors in UC, disease location seemed to be significantly associated with CMV infection or reactivation; this may be related to the theory that CMV tends to proliferate in granulation tissue [20]. Pancolitis involves larger areas of ulcerated mucosa, which promotes the proliferation of CMV. Some studies [21] found that CMV was readily discovered in granulation tissue and tissue from deep ulcers, which suggested that CMV could penetrate inflamed mucosa via mononuclear cells and then proliferate in the mucosa. A recent study [22] showed that murine CMV (MCMV) could not induce acute colitis, but latent MCMV infection could increase the severity of dextran sulfate sodium (DSS)-induced colitis. Moreover, acute MCMV infection could significantly increase serum and intestinal natural killer cells, interleukin (IL)-6, TNF-α, and IFN-γ, indicating that CMV infection can modulate mucosal immunity and thereby increase susceptibility to inflammation. CMV infection can also activate oncogenes, kinases, and transcription factors, inducing tumorigenesis, which may be one of the reasons for the higher incidence of colorectal cancer in IBD patients [23]. CMV thus plays a role in the initiation and progression of inflammation in IBD. Treatment with 5-ASAs, corticosteroids, and immunosuppressants was no longer significantly associated with CMV infection in the multivariate analysis.
Currently, there is no absolute indication for antiviral therapy in CMV-positive IBD patients; however, Eddleston recommends antiviral therapy in immunocompetent patients [24]. Pfau [21] found that ganciclovir could reduce the mortality rate and the rate of surgical intervention, while de Saussure et al. [6] showed that antiviral therapy had no effect on the disease course.
In summary, as compared to healthy individuals, IBD patients have a predisposition to CMV infection. No risk factor was found to be significantly correlated with CMV infection in the risk factor analysis. | 2016-05-04T20:20:58.661Z | 2013-02-01T00:00:00.000 | {
"year": 2013,
"sha1": "4782a98a156a94a8b3a44022610d813d2aef51be",
"oa_license": "CCBY",
"oa_url": "https://virologyj.biomedcentral.com/track/pdf/10.1186/1743-422X-10-43",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4782a98a156a94a8b3a44022610d813d2aef51be",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
203345856 | pes2o/s2orc | v3-fos-license | Safety of Enoxaparin as Venous Thromboembolism Prophylaxis After Rhytidectomy
PURPOSE: Patient-reported outcomes after female cosmetic genital surgery have been well documented. Methods for assessing patient-reported outcomes after female cosmetic genital surgery vary widely between studies, and these methods are often very detailed, time-consuming, and difficult to reproduce. This article aimed to assess patient-reported outcomes after female cosmetic genital surgery using a novel and efficient method and survey.
RESULTS: Seventy-seven women underwent female cosmetic genital surgery during the study period. All patients underwent central wedge excision for labia minora hypertrophy with or without extension for clitoral hood hypertrophy. Over a mean follow-up of 37.4 months, the overall postoperative complication rate was 35.1% (27 patients), which included wound dehiscence, asymmetry or redundancy, hematoma, decreased sensation, and dyspareunia, and the revision surgery rate was 27.3% (21 patients). The patient-reported outcomes survey response rate was 50.6% (39 patients), with a mean age of 30.0 ± 11.4 years, a mean body mass index of 22.2 ± 3.6 kg/m², a mean time since surgery of 55.6 months, a revision surgery rate of 25.6% (10 patients), and an overall complication rate of 35.9% (14 patients), which included wound dehiscence, asymmetry or redundancy, decreased sensation, and dyspareunia. With regard to satisfaction with outcome, despite the high complication and revision surgery rates, 97.4% (38 patients) felt that overall the surgery was a good experience and were satisfied with the results, and only 2.6% (1 patient) did not. When compared with preoperative assessment, patient-reported outcomes after female cosmetic genital surgery showed significant improvement with regard to physical well-being.
CONCLUSIONS: This novel and efficient method and survey can be used to assess patient-reported outcomes after female cosmetic genital surgery with respect to 4 important domains. Despite a high potential complication and revision surgery rate, the vast majority of patients who undergo female cosmetic genital surgery feel that it is a good experience, are satisfied with the results after surgery, and show significant improvement in patient-reported outcomes with regard to physical well-being, psychosocial well-being, and sexual well-being.
Affiliation: Michigan Medicine, Ann Arbor, MI
PURPOSE: Venous thromboembolism (VTE) is a recognized and highly morbid complication of plastic surgical procedures. Although rare after cervicofacial rhytidectomy, it is a potential complication of this procedure and significantly more likely in instances of combined procedures. We are concerned that some surgeons may elect not to give deep venous thrombosis (DVT) prophylaxis postoperatively in rhytidectomy or combined-procedure patients, out of concern about the potential for hematoma at the facelift site. We aimed to examine whether postoperative VTE prophylaxis with enoxaparin increases the risk of postoperative bleeding complications after these procedures. Thirteen of these patients (15%) received postoperative DVT prophylaxis with enoxaparin 40 mg within the 24 hours after surgery (range, 6.5-19.8 hours). The rate of hematoma was 7.7% in the group that received enoxaparin and 6.8% in the group that did not; the difference was not statistically significant (P = 1.0). The groups were otherwise similar, except that the group receiving enoxaparin had a higher mean body mass index than the group that did not (28.2 versus 25.0; P = 0.01). No VTE was observed in either group, and the mean Caprini score was similar between groups (4.5 versus 4.6; P = 0.66). In multivariate logistic regression controlling for age, gender, and body mass index, enoxaparin administration was not associated with hematoma development (odds ratio = 1.30; P = 0.84; 95% confidence interval = −2.24 to 2.76).
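A quick way to check the reported comparison is Fisher's exact test on the implied counts; the Python sketch below back-calculates those counts from the stated percentages (13 patients being 15% of a cohort of roughly 87, so about 1 hematoma among 13 enoxaparin patients and 5 among 74 controls), which is an assumption, since the abstract reports only rates.

```python
from scipy.stats import fisher_exact

# Counts back-calculated from the reported rates (assumption, not stated
# explicitly in the abstract): 7.7% of 13 ~ 1; 6.8% of 74 ~ 5.
table = [[1, 12],   # enoxaparin: hematoma, no hematoma
         [5, 69]]   # no enoxaparin: hematoma, no hematoma
odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, Fisher exact P = {p_value:.2f}")
```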
CONCLUSIONS:
In patients undergoing cervicofacial rhytidectomy, administration of enoxaparin 40 mg beginning ≥6 hours after surgery does not seem to significantly increase the rate of hematoma requiring intervention.
Affiliation: Hofstra Northwell School of Medicine, New York, NY
PURPOSE: Minimally invasive cosmetic procedures are very popular, with over 17 million procedures performed in 2017 [1]. Botulinum toxin and injectable fillers are the 2 most popular procedures because they help patients achieve a younger, more attractive appearance. Numerous studies have indicated that patients and physicians alike are highly satisfied with the results of botulinum toxin and injectable fillers. However, it remains unclear how the public responds to individuals after they are treated with these procedures. This study intends, first, to identify whether botulinum toxin and hyaluronic acid fillers impact the way the public perceives a patient and, second, to measure the impact of the public's change in perception by assessing whether the public would behave differently toward a patient after treatment with botulinum toxin and hyaluronic acid fillers.
METHODS:
A total of 40 patients were recruited for this Institutional Review Board-approved study. Eligible patients were over 18 years old and had not received any cosmetic procedures in the past year. Patients were divided into 2 treatment groups. One group received 1 syringe of Juvéderm applied to their lips, and the other group received 50 units of botulinum toxin applied to their glabella, forehead, and crow's feet. Each patient answered a survey about their interactions with others before treatment and returned for follow-up in 1 week to take the same survey. Demographics and surgical history were recorded, and before-and-after photographs were taken. The photographs were used to create a crowdsourced survey which asked respondents to assess patients on different personality traits and to indicate how likely they would be to engage in a particular action with the patient.
RESULTS:
A total of 1,000 survey responses were received. On average, the public perceived patients as significantly more attractive, trustworthy, intelligent, youthful, naturally beautiful, and likeable following treatment with botulinum toxin and Juvéderm (P < 0.05 for all). The public was also more likely to invite patients to social events and to ask the patient on a date following treatment with botulinum toxin and Juvéderm (P < 0.05 for all). There were no significant changes in the public's likelihood to hire a patient, ask them for help with a work project, or lend the patient money following either treatment. Patients also reported that they felt more likely to be asked on a date following both treatments.
CONCLUSIONS:
This study suggests that treatment with minimally invasive cosmetics such as botulinum toxin and Juvéderm may impact the way the public both perceives and interacts with patients. Patients may be perceived more favorably in many ways. However, minimally invasive procedures are unlikely to impact how individuals interact with patients in a professional capacity. | 2019-09-17T03:02:13.855Z | 2019-08-01T00:00:00.000 | {
"year": 2019,
"sha1": "b9e1ef7c3cd4f9221f0dcf6241711b46d503d4e0",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1097/01.gox.0000584240.38837.ec",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ad0b08f7171dcec5899b88ac4869c60f2c05e2a8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237678940 | pes2o/s2orc | v3-fos-license | Education and bioethics teaching in times of pandemic
Introduction: Due to the suspension of face-to-face activities during the COVID-19 pandemic, Higher Education Institutions had to discuss and plan alternative actions in an attempt to adjust to emerging educational demands, in order to offer remote accessibility to the academic community and, consequently, reduce social and digital exclusion. Development: With that in mind, this article aims at offering a reflection on the teaching of bioethics from the perspective of social justice and education. Within this context, the students' socioeconomic profile cannot be ignored in the planning of online education, since it directly affects students' access to academic activities through the use of computers and the internet. Therefore, this article proposes the use of moral intelligence skills as learning goals, in addition to revising and contextualizing didactic contents in light of problems that pre-existed the new reality. Moreover, it proposes a reflection on how bioethics may contribute to the discussions on the increase in social inequalities during this moment of crisis. Conclusion: The reflections presented in this article can be used in remote, face-to-face, and hybrid teaching contexts.
INTRODUCTION
In March 2020, the World Health Organization (WHO) declared the new coronavirus (SARS-CoV-2) outbreak a pandemic, leading to the adoption of several measures in an attempt to flatten the contamination curve of the world's population. One of them was the recommendation for social distancing, followed by social isolation, which, among other consequences, led to the need to interrupt activities in several sectors considered non-essential to society. Consequently, all educational institutions had their in-person classes suspended.
In Brazil, with the suspension of educational activities, the Ministry of Education (MEC) issued several ordinances, and the last one, N. 544, of June 17, 2020, authorized the replacement of disciplines that were taught in person by classes taught in digital environments, using information and communication technologies (ICTs) or other conventional means 1 .
Undoubtedly, although Emergency Remote Learning (ERL) and Distance Learning (DL) are already a reality for some Higher Education Institutions (HEIs), for countless others this shift has brought several challenges 2. Universities in the health area had to reinvent themselves in relation to the educational process and, almost simultaneously, have discussed and implemented the best options to offer remote access and alleviate both social and digital exclusion, aggravated by the pandemic.
The discipline of Bioethics also had to reinvent itself since, in person, it is characterized by dialogic discussions and is often approached through active-learning strategies or moral practices. How can we adapt the virtual teaching of Bioethics in times of pandemic without aggravating inequities? This question guided our research, which resulted in the following objective: to carry out a purposeful reflection on remote teaching in Bioethics in times of pandemic, from the perspective of education concerning values and social justice.
SOCIO-DIGITAL INCLUSION AND SOCIAL JUSTICE IN HIGHER EDUCATION IN TIMES OF PANDEMIC
The contemporary discussion of education aimed at citizenship and values cannot be separated from the debate on the inclusion of a cultural, racial, ethnic, gender and social diversity in the university. This citizenship education, based on democratic, egalitarian and equitable assumptions, envisions a fairer society with pedagogical and curricular choices linked to values that are intrinsically related to the students' moral education.
The expansion of public universities into the country's interior and the implementation of affirmative action through the quota system brought changes to the socioeconomic profile of medical students, which cannot be ignored when planning online education during the pandemic. A good example of this situation is Universidade de Campinas (UNICAMP), which has, since 2005, implemented the Affirmative Action and Social Inclusion Program (Paais, Programa de Ação Afirmativa e Inclusão Social), aiming at the inclusion of high school students from public schools. Since 2018, the Institution has observed a higher percentage of admission of brown and black students from the public school network 3 . In 2019, in the medical course of the same institution, it was found that this percentage reached 85%, changing the students' socioeconomic profile, with a predominance of strata C1 and B2 3 , where 30.9% of the medical students declared a family income of 1 to 3 minimum wages and 3.9% of up to 1 minimum wage 6 .
Also, according to the UNICAMP survey during the pandemic, it was observed that the majority (80%) of students accessed the classes via computer, notebook, or cell phone; that only 10% accessed classes exclusively by cell phone, and 10% accessed them by tablet 3 . The main accessibility problems were: unstable internet connection and/or access exclusively via mobile networks; greater difficulty in following activities transmitted through web conferences and virtual meetings, as well as difficulties in accessing activities on digital platforms and image applications 3 . These data, although not generalizable, highlight the relevance of observing these students, who entered an elite course and who cannot be ignored when planning online education, mainly in a period in which social inequalities demand more equitable choices, especially from educational institutions, which cannot become an additional cause of exclusion for groups that are already socially oppressed. All institutions, but especially the higher education ones (the object of our research), must be committed to establishing a culture of inclusion and legitimization of diversity and to training for citizenship, in an attempt to minimize inequalities and increase inclusion.
Equality is not exclusively related to the distribution of goods among individuals (such as, for instance, distributing mobile devices to socially vulnerable students and concluding that this isolated action would be the solution for the issue of remote access to education); equality is also closely associated with recognition and the concrete conditions of inclusion. Next, we will describe the skills that constitute the Moral Intelligence framework according to Puig 11,12 .
One of the purposeful reflections of this article is that these skills can be used as learning objectives in Bioethics disciplines and can be planned and stimulated in pedagogical strategies or in moral practices developed in the classroom, whether in-person or virtual ones.
Moral practices, for Puig 10 , can be considered as: "an established course of cultural events that allows us to face significant, complex or conflicting situations from a moral point of view". Also, according to the author, moral practice is a means of teaching and learning that problematizes usual life situations; it is a situation drawn from social practice that has been thought through and made available for learning 10 . Later, we will establish its correlation with ERL and DL. It should be noted that, from Puig's perspective, there is no possibility of moral construction without the presence of contextualized moral problems 12 .
BIOETHICS TEACHING STRATEGIES IN A VIRTUAL ENVIRONMENT
With the pandemic, the entire discussion about ERL and DL, which would otherwise have taken decades, was accelerated due to the urgency of the situation. This led the Ministry of Education (MEC) to authorize the replacement and adaptation of in-person disciplines into classes that use digital media for as long as the pandemic situation lasts 1 .
Before continuing our purposeful reflection on the teaching of Bioethics in a virtual environment, one needs to differentiate between ERL and DL. The DL modality occurs when students and teachers are not together at the same time 14 .
The term ERL, on the other hand, refers to the rapid, temporary shift of in-person classes to remote delivery in response to the emergency. This teaching modality is considered to be remote because it requires a temporary geographic distance between students and teachers, and it was adopted at different levels of education by educational institutions around the world so that school activities were not interrupted in the midst of the pandemic 15 .
As part of the reflexive proposition, we can use, in Bioethics disciplines, moral practices related to the following skills: self-knowledge, empathy, moral judgment, dialogical skills, critical understanding and self-regulation. Some of the disciplines already held these discussions in their syllabuses; what is needed now is to discuss, in medical courses, how to advance the insertion of the several spectra of human diversity -biological, subjective, ethnic-racial, gender, sexual orientation, socioeconomic, political, environmental, cultural, ethical -provided for in the National Curriculum Guidelines (NCGs) 21 . It is necessary to improve this debate on the social responsibility of medical schools, not only in this period of pandemic, but also in the sense of promoting social justice to reduce inequities 21 . All these topics should be contextualized within the current moment, and the discussions of contents that already existed in the curricula before the pandemic should be adapted, such as: the beginning and end of human life, principles of bioethics, secrecy and confidentiality, and the professional relationship with the patient.
Added to these attempts to readjust the teaching-learning contents and objectives, it is also necessary to discuss socio-digital inclusion strategies with the teaching staff and the HEIs. It is noteworthy that these measures cannot be occasional or understood as a complete solution for the question of remote access to education; close and continuous monitoring of the students identified as being in a situation of social vulnerability is essential, so that they can be offered pedagogical, emotional, socioeconomic and digital support.
FINAL CONSIDERATIONS
The Covid-19 pandemic, due to its urgent nature, accelerated the inclusion of remote teaching in education; at the same time that it brought advances regarding the incorporation of information and communication technologies into curricula in the health area, it also brought many uncertainties. For the discipline of Bioethics it was no different, not only regarding questions about socio-digital, economic and health inequalities, but also regarding learning, due to the loss of contact and face-to-face discussions, which are essential for the construction of moral personality. The challenge now is to incorporate learning objectives consistent with moral practices and contents in Bioethics disciplines and to maintain dialogical and participatory skills in the virtual environment.
We hope that our purposeful reflections can contribute to new considerations on the teaching of Bioethics, so that the bioethical problems and dilemmas that have multiplied in society at this time of pandemic can be examined under the lens of social justice, thus assisting the moral development of our students.
AUTHORS' CONTRIBUTION
Waldemar Antônio das Neves Júnior was in charge of the study concept, data curation, formal analysis, investigation, methodology and writing of the manuscript.
Lumaira Maria Nascimento Silva da Rocha Marques and Michelle Cecille Bandeira Teixeira participated in the investigation, methodology, writing and content review of the manuscript. | 2021-08-27T17:03:35.830Z | 2021-07-16T00:00:00.000 | {
"year": 2021,
"sha1": "b61e90939fe9e2969e53bf76f41649dbf95533bb",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/j/rbem/a/RHsdwscHFhkMdTDxs6mDcLQ/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b54cbce9c489548a25b262ab098c18c05e05fdcb",
"s2fieldsofstudy": [
"Education",
"Philosophy"
],
"extfieldsofstudy": [
"Sociology"
]
} |
259631533 | pes2o/s2orc | v3-fos-license | The Platformisation of Scholarly Information and how to Fight It
The commercial control of academic publishing and research infrastructure by a few oligopolistic companies has crippled the development of the open access movement and interfered with the ethical principles of information access and privacy. In recent years, vertical integration of publishers and other service providers throughout the research cycle has led to platformisation, characterised by datafication and commodification similar to the practices on social media platforms. Scholarly publications are treated as user-generated contents for data tracking and surveillance, resulting in profitable data products and services for research assessment, benchmarking and reporting. Meanwhile, bibliodiversity and equal open access are denied by the dominant gold open access model, and the privacy of researchers is being compromised by spyware embedded in research infrastructure. This article proposes four actions to challenge the platformisation of scholarly information, after a brief overview of the market of academic journals and research assessments and their implications for bibliodiversity, information access, and privacy: (1) Educate researchers about commercial publishers and APCs; (2) Allocate library budget to support scholar-led and library publishing; (3) Engage in the development of public research infrastructures and copyright reform; and (4) Advocate for research assessment reforms.
Introduction
Platformisation is understood as "the penetration of the infrastructures, economic processes, and governmental frameworks of platforms in different economic sectors and spheres of life" (Poell et al., 2019). For creators (e.g. artists, game developers, musicians), platforms are essential for the hosting and promotion of their works; without platforms, they may not be able to reach a wide audience and earn a living. When some platforms become dominant in a market (e.g. Spotify, YouTube), switching to alternatives can become unviable. Platformisation is mainly characterised by datafication and commodification: platforms generate advertising incomes by tracking personal data, and some also sell packaged data to third parties. Many creators, who produce the contents, barely make ends meet as platforms capture much of the revenues and profits (Giblin & Doctorow, 2022). Increasingly, creators develop their work and marketing strategies to align with the algorithms and standards, yielding to the monopolistic powers of the platforms (Nieborg & Poell, 2018). In the last decade, platform studies have mainly focused on the cultural production and platform governance of social media; the data surveillance practices and platformisation of scholarly information have only attracted attention very recently (Deutsche Forschungsgemeinschaft [DFG], 2021; Ma, 2022; Williamson, 2021; Wood, 2015).
Similar to cultural production, the platformisation of scholarly information has two major features: one is the datafication and the commodification of user contents (scholarly publications) and personal data, and the other is the loss of negotiating powers in creating standards, values and norms of knowledge production (Ma, 2022). The datafication and commodification of scholarly publications are similar to the practices of platforms such as Spotify and YouTube: data products and services are derived from the traffic, including citations, downloads, and behavioural data (DFG, 2021). There are two main revenue sources: one based on subscriptions and sales of publications, and the other is data products and services including a wide range of metrics for benchmarking, ranking and reporting, as well as the sale of packaged data (Lamdan, 2023). What distinguishes platformisation of scholarly information from social media platforms is that, first, the data products and services are mostly sold right back to research institutions and universities-that is, the content producers. Data products and services (e.g. Journal Citation Reports, SciVal) are then used to assess the quality and impact of research, meaning that the data products and services can significantly influence the norms and values of research. Second, the copyright (or exclusive publishing rights) of the contents is often transferred to the publishers, meaning that researchers and research institutions have no control over how their publications are disseminated, or whether they are archived or preserved, whilst the data derived and captured are owned by the platforms.
As some publishers become platform owners, they boost and boast the quantity of scholarly information with minimal concerns about quality. This is because more publication- and citation-based data can be generated if there are more publications and interactions (Ma, 2023; Pooley, 2022). While these companies do not produce the contents of scholarly information or conduct peer review, they generate revenues and profits by selling access (subscriptions or article processing charges (APCs)) and data services and products on top of their value-added services such as copyediting and typesetting. Ma (2022) argues that information is platformised when platforms transform the ways by which (1) information is produced, curated, and disseminated and (2) personal data are tracked, packaged and sold. The platformisation of scholarly information, however, entails weakened negotiation powers of libraries to obtain and grant access to scholarly information. The platformisation of scholarly information also means that data about research activities are being tracked and collected and then shared with or sold to third parties (Lamdan, 2023).
The platformisation of scholarly information should be of utmost concerns for research libraries for two main reasons: firstly, the ethical principles concerning information access, as well as privacy and confidentiality of librarians and information professionals (American Library Association [ALA], 2021a; CILIP, 2018; International Federation of Library Associations and Institutions [IFLA], 2012) are breached; secondly, the open access movement can be sabotaged when commercial platforms take control of what and how scholarly information is organised, disseminated and accessed. The following section will provide a brief overview of the market of academic journals and research assessments and their implications for bibliodiversity, information access, and privacy, followed by four actions to fight the platformisation of scholarly information: (1) Educate researchers about commercial publishers and APCs; (2) Allocate library budget to support scholar-led and library publishing; (3) Engage in the development of public research infrastructures and copyright reform; and (4) Advocate for research assessment reforms.
The Market of Academic Journals
The majority of academic journals are published by commercial publishers. Over the decades, some (not all) publishers have increased subscription fees and/or APCs at rates much higher than inflation, and some track and spy on research activities (DFG, 2021; Wood, 2015). The Big Deals publishers, Elsevier, Springer Nature, Wiley, Taylor & Francis and the American Chemical Society (ACS), each publish over 2000 journals (Fyfe et al., 2017) and together they occupy over 50% of the market share (Stoy et al., 2019). Together, their subscription costs exceed 75% of total expenditures on journal publications in Europe, with the median price per article ranging from €1,344 to €2,658 (Table 1). In 2021, the Association of Scientific, Technical and Medical Publishers reported that the estimated growth of new scholarly journals is 2-3% annually and that the global market is expected to reach the value of $28 billion by 2023 (Bhosale, 2022).
The gold open access (GOA) option in hybrid journals introduces extra revenue streams for academic publishers. Fully open access (OA) journals are less expensive than hybrid journals, averaging around 59% of hybrid average APCs in 2022 (Pollock, 2022). The bigger publishers charge higher APCs, while some smaller journals charge no fees (Table 2). Seventeen journals each generated revenue in the range of $10-44.7 million between 2015 and 2020. Table 3 shows the nine publishers with the highest APC revenues. It is estimated that more than two-thirds of all revenue (68%) goes to the 6% of journals that charge more than $2,000 per article (Crawford, 2021).
There is no question that academic publishing is a big business for a small number of publishers whether in terms of subscription fees or APCs. Until recent years, the business model had been to expand the catalogues to increase revenues and profits, which is a cause of the serials crisis. However, some of these companies are not just publishers: they also provide products and services embedded in the research infrastructure and expand their business through vertical integration with a focus on data businesses (Andrews, 2020). The Innovations in Scholarly Communication: Changing Research Workflows diagram created by Jeroen Bosman and Bianca Kramer 1 shows that Elsevier's products, including Mendeley, Scopus, SSRN, CellPress, SciVal, PlumX Metrics are used in the research process, discovery, analysis, writing, publication, outreach and assessment 2 (see also Figure 1). Researchers and research institutions are dependent on these commercial publishers and their products. These companies exploit the need for information access and take advantage of metrics-based research assessments. To a certain extent, the business of scholarship is becoming a solely data-driven commodified business, resembling that of the giant internet companies which extract data and profits through monopolising the infrastructure.
The Choke Point: Research Assessments
Research assessments are necessary for academic recruitment, tenure and promotion, and acquiring funding at the individual level, on the one hand, and the allocation of research budgets (e.g. block grants) and strategic planning at the institutional level, on the other. In principle, the criteria for research assessments should be aligned with the values, missions, and norms of the scholarly community and are set to assure the quality and impact of scholarly work (see, for example, Larivière & Sugimoto, 2019). Nevertheless, they are currently heavily dependent on publication-and citation-based metrics despite their limitations and misuses. Journal impact factor (JIF), CiteScore, h-index, field-weighted citation impact (FWCI), and source-normalised impact per paper (SNIP) are some of the most used metrics in research assessments such as university rankings and national research assessment exercises. At the same time, they can also be used in decisions related to redundancy and closing of subject areas and departments in universities. For instance, forty-seven researchers at the University of Liverpool were notified that their jobs were at risk in January 2021 and the criteria used for redundancy include grant income targets and Scopus's FWCI (Else, 2021).
University rankings, journal rankings and lists of highly cited researchers are created using metrics.
Researchers are hence pushed to publish in publications indexed in major indexing services, Web of Science or Scopus, meaning that they are less likely to submit articles to journals without a track record of citations. The trust in citations and citation-based metrics entails that the legitimacy of knowledge is held in the hands of commercial indexes largely consisting of English language journals in Western countries, with a strong focus on STEM. At the same time, metrics are also embedded in search algorithms of Google Scholar and other indexing services, which perpetuates the importance of citations and citation-based metrics.
Further, the stronghold of metrics in research assessment exercises reinforces the power of platforms involving data providers and commercial publishers, while stifling the growth of alternative publishers including non-profit, scholar-led, and library publishers. Laakso et al. (2021) have shown that journals affiliated with academic institutions or scholarly societies, or those publishing social sciences and humanities research, represent a larger share of vanished open access journals, partly because they struggled to attract submissions and subscriptions, not being indexed on WoS or Scopus, the presumptive authorities of research quality and knowledge.
It is evident that the misuses of metrics in research assessments have negatively influenced research culture and knowledge production (Wellcome, 2020). More broadly, metrics can perpetuate systemic and structural inequalities in knowledge production (Ma, 2022) and reinforce power over knowledge production in the so-called scientific periphery (Beigel, 2021). The responsible metrics movement 3 attempts to avert these effects and reinforces the importance of peer review in evaluating the quality and impact of research outputs. What has been discussed less, however, is metrics (data products) in the context of platformisation, especially the power and control seized by the few monopolistic publishers and data providers. The use of metrics in comparing and benchmarking everything from individual achievement to university performance becomes the choke point in the further development of open research infrastructure, while consolidating the market share and power of platforms.
The Loss of Bibliodiversity, Information Access and Privacy
To a large extent, platforms such as Scopus and Web of Science wield power over what is considered as knowledge (or information) by including and excluding journals and publishers (Ma, 2023). Their authority and legitimacy are granted by research assessments, by the very fact that researchers in many parts of the world are evaluated based on publications indexed on these platforms. Publications not indexed on these platforms are deemed lower quality, and sometimes even predatory (Mills et al., 2021). However, these perceptions can be misguided by the dominance of English language publications and the overemphasis on citations and citation-based metrics. In fact, the platformisation of scholarly information will further lead to the loss of bibliodiversity 4 and create a monoculture (see, for example, Demeter & Toth, 2020) because the indexing criteria are essentially adverse to bibliodiversity and multilingualism in knowledge production. There are also systemic biases that lead to the rejection of research findings from non-Western countries. As a researcher of indigenous African food crops recalled, her publications were rejected by traditional journals "[N]ot because the research was not good, but because they regarded the crops I was writing about as weeds." 5 Platformisation does not only interfere with the norms, values, and diversity of research; it also affects information access. The open access movement is primarily concerned with scholarly information and the reason is a simple one: if research is publicly funded, then scholarly information should be publicly accessible. The ideal of open access can be traced to scientific internationalism as "a result of progressive and egalitarian commitments to the universality of knowledge and its service to the common good" in the late 19th and early 20th century (Wang, 2022, p. 57). Currently, the dominance of the gold open access model, especially in the traditional journals with the highest APCs, is hindering access to publishing for authors who cannot pay. At the same time, universities, libraries, researchers, and the general public should be dumbfounded that publicly funded research becomes the property of private companies, which charge access or subscription fees when publication is not supported by APCs, because most publishing contracts require the transfer of copyright or the granting of an exclusive licence to publish.
Further, if the platforms cease operation due to business decisions, there is no guarantee that all scholarly information can be accessed continuously. Although there are safeguard measures such as LOCKSS, 6 it is absurd that publicly funded research outputs are not centrally preserved and that research communities, libraries, and the general public have little power to restore access. Wiley's removal of 1,300 ebooks from academic libraries in Autumn 2022 should be a cautionary tale (Library Association of Ireland, 2022). It is deeply frustrating that the fruits of research are bestowed upon platforms that have no interest in upholding the values and mission of research or libraries, only in maximising profits. Information access should be guaranteed when the labor of research, writing, and peer review is provided through public funds.
Last, the right to privacy and confidentiality has been upheld in libraries to encourage all members of the community and society to access information without the fear of surveillance or repercussions. Data collection and tracking by platforms fundamentally violate privacy and confidentiality; in fact, these data can be leaked or sold to third parties, including government agencies and departments. Although libraries are not collecting or sharing these data, they should actively oppose these practices. For instance, ALA (2021b) has issued a resolution in response to data surveillance by vendors, including the clause "in every circumstance the library user's information is protected from misuse and unauthorised disclosure, and ensuring that the library itself does not misuse or exploit the library user's information." Platformisation, however, can undermine the privacy of all those who access information when there are no alternatives to platform products and services.
Educate Researchers about Commercial Publishers and APCs
Most researchers are not aware of the business models of commercial publishers, nor do they know about the budgetary issues faced by academic libraries. The majority cannot tell the differences between the green, gold and diamond models of open access and, in truth, they usually do not bother until there are compliance issues due to funding mandates or when the open access quota has been used up under a transformative agreement.
For decades, the so-called 'publish or perish' academic culture and, eventually, the push for high citations and high impact have left little room for researchers to consider the epistemic and ethical aspects of academic publishing. For many, the first rule of thumb is to produce as many publications as possible and to publish in high impact journals in order to attract citations. These practices hinge on the use of metrics in research assessment. Researchers tend to pay little, if any, attention to the academic publishing market and publishers' practices.
The very fact that some publishers are making gigantic profits is not well acknowledged amongst researchers. It is also very unlikely that they are informed about the surveillance activities embedded in products and services throughout the research lifecycle (see, for example, Fried, 2022). When researchers try to survive in a highly competitive academic job market, they do not register the reality that the chase after high impact publications has implications for inequalities in global knowledge production and the loss of bibliodiversity. Most also do not know about librarians' contributions in facilitating information access and negotiating subscription or read-and-publish (i.e., transformative or transitional) contracts.
Scholarly communication and related roles in academic libraries aim to provide advice and guidance on research data management, research impact and some also include bibliometric services. By and large, these activities are to support researchers at various stages of the research process with considerations of research assessment frameworks and institutional development plans. Transformative (or transitional) agreements have been negotiated with the best interests of researchers in mind. However, it is apparent that the platformisation of scholarly information is affecting research culture, research integrity and, most importantly, the authority as to what is knowledge or information. It is hence of utmost importance that librarians educate senior university management and researchers about commercial publishers and APCs.
Allocate Library Budget to Support Scholar-Led and Library Publishing and Open Infrastructure
For libraries, transformative agreements have been negotiated in good faith to support open access. Librarians understand the need for researchers to increase visibility and citations, and they are also keen to promote the benefits of open access. However, the open access movement seems to have taken a wrong turn with the increasing dominance of the gold open access model, especially considering the increases in APCs over the last few years. There is a danger that transformative agreements will exacerbate the so-called serials crisis: the gold open access model does not alleviate the pressure on library budgets when libraries feel obligated to support researchers to read and publish articles in traditional, paywalled journals. Meanwhile, publishers outside of the big deals may lose the subscriptions required for their survival, similar to the situation where local businesses become unviable due to monopolisation by big companies. The more libraries succumb to the pressure and control of the big deals publishers, the less negotiating power can be retained for a balanced and healthy knowledge production and scholarly communication ecosystem.
In the world of academic publishing, libraries can play a role in leveraging their powers by allocating a portion of their budget to support open access programmes other than transformative agreements or APC support. The 2.5% commitment initiative proposes that academic libraries commit to investing 2.5% of their total library budget to support a common open infrastructure (Lewis, 2017), which involves the following (Lewis et al., 2018):
1) Open infrastructure projects and organisations such as DSpace, Fedora, Omeka, Open Journal Systems (OJS), the Digital Preservation Network, LOCKSS, the Directory of Open Access Journals (DOAJ), CrossRef, and advocacy organisations like SPARC or the Confederation of Open Access Repositories.
2) Hardware, software and staff that support institutional repositories, including funds to external organisations that support locally installed systems or host repositories.
3) Platforms that support open content such as ArXiv and HathiTrust.
The long-term goal of the 2.5% commitment is to divert and repurpose library budgets for the common open infrastructure which would be feasible for libraries with larger budgets. Similarly, the preparedness model for the future of open scholarship (Goudarzi et al., 2021) calls for the examination of 'local first' and 'build vs. buy' decisions in terms of time and resourcing, as well as effects on staffing and interoperability of shared systems. There are existing examples where library budgets are allocated to support scholar-led and library publishing that support diamond open access monographs, journals, and open educational resources. KU Leuven, for example, has diverted less than 1% of the entire operating budget to support open scholarship initiatives, including contributions to diamond OA programmes, as well as the running of the mission-driven university press (Verbeke & Mesotten, 2022). The Library Publishing Coalition has put together useful resources and training materials on their website. 7 The independent expert report commissioned by the European Commission (Johnson, 2022) shows a clear willingness to deliver a non-profit publishing service, Open Research Europe (ORE). The development requires considerations of organisational and financial models, involving social value proposition, size and sale, operating model, legal form, governance, and financing. A common open infrastructure is a long-term investment starting with allocating library budget to scholar-led and library publishing and support for nonprofit open infrastructure initiatives.
Engage in the Development of Public Research Infrastructures and Copyright Reform
The development of public research infrastructures is not simply about moving scholarly works from one platform to another. The complexity is rooted in the long history of scholarly and scientific publishing and scholarly communication (Blair, 2010; Csiszar, 2018). Publishers have long held important positions and functions in the knowledge production system. The invention of the internet and the oligopoly of publishers, however, have called for changes in the development of the research infrastructure. For example, what would be fair compensation to publishers for their services? Are academic journals still necessary when articles can be published on an open platform (see Brembs et al., 2021)? With appropriate copyright reform, the development of public research infrastructures does not necessarily entail the demise of publishers. Fundamentally, there are considerations about, first, the ownership of knowledge: whether knowledge should be regarded as a public good when it is publicly funded; and second, the ownership of the personal data currently being harvested by some publishers and data companies.
Recently, there have been strong advocates for public access to research. The White House Office of Science and Technology Policy (OSTP) released a statement on 25 August 2022 that there should be no delay or barrier for research findings to be made available to the public. 8 The Action Plan for Diamond Open Access published by Science Europe 9 advocates for an ecosystem that respects the cultural, multilingual, and disciplinary diversity of scholarly publications. These directives recognise the very nature of publicly funded research as a public good. However, there is still a lack of understanding of digital tracking and data mining on commercial platforms. The dangers of further platformisation of scholarly information using machine learning techniques demand more attention and awareness in the development of public research infrastructure. Public research infrastructures would value privacy and do not need to collect users' data at all.
The development of public research infrastructures also demands changes in copyright laws. ALLEA (All European Academies) has issued a statement that supports rights retention and further changes in copyright law, indicating developments in EU countries including the 2019 Directive on Copyright in the Digital Single Market and the Secondary Publication Rights. 10 It is also possible to reconsider the intellectual property rights of publicly funded research as a public good or public resource, meaning that the ownership-copyright-should not be held by commercial or private entities. Lamdan (2023) suggests that, at the bottom line, the first-sale doctrine can be applied to digital resources, meaning that "library-like online platforms can lend materials, and law should also ensure that digital information purchasers can enjoy at least some of the intellectual property rights that physical ownership conveys" (p. 140).
Advocate for Research Assessment Reforms
Librarians can play an active role in advocating for responsible metrics and research assessment reform. On the one hand, they can educate university management and researchers about the appropriate uses of metrics and the role of metrics in the platformisation of scholarly information. On the other hand, librarians can highlight the tension between research assessment and open research. For instance, the Science Europe Open Science Conference 2022 11 has a strong focus on research assessment reform with the aim of encouraging and supporting open research.
The appropriate and responsible uses of metrics are important for research culture and research integrity. A positive research culture is collaborative and supportive. Healthy competition can lead to innovation and productivity. However, the overuses and misuses of metrics-based research assessment are not conducive to research culture. In the recent Wellcome Report (2020) on research culture, nearly 60% of the respondents disagreed that metrics had a positive impact on research culture, pointing instead to a hypercompetitive environment. Studies have shown clear evidence that researchers are motivated and rewarded to chase after the number of publications and citations, and they sometimes forgo interesting and complex research ideas (Ma & Ladisch, 2019; Müller & de Rijcke, 2017). The stronghold of metrics-based research assessment is a part of the business models of publishers turned data analytics companies. Advocating responsible uses of metrics is not only essential for supporting a collaborative and positive research culture, but also an antidote to the 'data cartels' (Lamdan, 2023).
Relatedly, there are many research integrity issues related to the hypercompetitive research culture. The chase after publications and citations can lead to honest mistakes that result in research publications of lower quality and sometimes retraction. There have also been reports of fraudulent research, fabricated data and images, and citation cartels (Biagioli & Lippman, 2020). Retraction Watch 12 and PubPeer 13 are examples of watchdog organisations. The increasing instances of misconduct and malpractices have raised concerns about research integrity as negative consequences of research assessments.
The criteria of research assessments have significant implications for the market of information and open research. Predatory journals take advantage of the overemphasis on the number of publications in research careers. Similarly, established commercial publishers increase subscription fees and APCs at will, even though they do not pay for the labour of content production, nor do they compensate the work of peer review. Research assessment reform can push for the recognition of publications in diamond open access and green open access journals with no embargo period. This change is not only beneficial for research culture; it can also lead to the reallocation and repurposing of budgets to support a diverse publishing environment, including scholar-led and library publishing and institutional repositories. Advocating for research assessment reform is necessary to avert the power and control of the big deals and data analytics companies. DORA, 14 for example, provides toolkits and tips for implementing responsible metrics.
Conclusion: Support Bibliodiversity and a Healthy Knowledge Production Ecosystem
Since the launch of the Budapest Open Access Initiative in 2002, the open access movement has gained momentum. Preprint servers in biomedical research, especially during the Covid-19 pandemic, have been essential for scientific collaboration and have contributed to the rapid development of vaccines and cures. Open access was once not possible because of the limitations of print materials confined to physical locations; however, it is still not commonly practised even with today's widespread use of the Internet. Over the years, different open access models have been proposed: green, gold, diamond (or platinum). The open access movement has somewhat taken a wrong turn towards the gold route, reinforcing the market share of a few commercial publishers, because researchers are locked in to publish in prestigious journals, and libraries are locked in to provide access through either subscriptions or transformative agreements.
There is an urgent need for researchers, librarians, university management, funders and the general public to understand the very fact that some (not all) publishers-turned-platforms do not treat knowledge as a public good, nor do they have ethical concerns for open access or data privacy. Rather, they deploy technologies of control that create a hypercompetitive environment with the purpose of increasing the volume of publications, because the higher the number of publications, the more data can be collected for data products and consultancy services that can be sold right back to research institutions. Meanwhile, they deny access to those who are less privileged in the knowledge production ecosystem, particularly researchers who are not affiliated with resourceful research institutions. The open access movement cannot succeed when platforms hold power and control over not only scholarly information, but also data about researchers and research activities. The fight for the ethical principles of information access and privacy and against the platformisation of scholarly information is critical and pressing. | 2023-07-11T19:34:56.502Z | 2023-06-07T00:00:00.000 | {
"year": 2023,
"sha1": "c7cceb2c56f7faf69ec9399278fba47cc055e1bc",
"oa_license": "CCBY",
"oa_url": "https://liberquarterly.eu/article/download/13561/16327",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5d45059a4e225142c75f35270ff6d14dbe6c70dd",
"s2fieldsofstudy": [
"Computer Science",
"Sociology"
],
"extfieldsofstudy": []
} |
219989240 | pes2o/s2orc | v3-fos-license | Factors influencing adoption of improved structural soil and water conservation measures in Eastern Ethiopia
Agriculture remains the leading sector contributing enormously to economic development in Ethiopia. Despite its significant contribution to livelihoods, the sector faces persistent challenges due to the depletion of natural resources and soil erosion, which have resulted in diminishing crop and livestock productivity. In order to curb the effects of land degradation, the Government of Ethiopia has been taking serious measures to expand Soil and Water Conservation (SWC) practices throughout the country. Despite the efforts made, the adoption of new practices by farmers has been generally low. This study was aimed at assessing factors influencing smallholder farmers' decisions on the use of improved structural SWC practices in Haramaya district, eastern Ethiopia. A multi-stage sampling technique was used to select 120 farm households and 248 plots. A structured interview schedule was used to collect primary data. Descriptive and inferential statistics and a Multinomial Logit (MNL) regression model were used to analyze the data. The key findings showed that a host of socio-demographic, economic and institutional factors significantly affected smallholders' decisions to adopt improved structural SWC. In this study, we found that education, farming experience, plot area, distance of the plot from the dwelling unit, number of economically active household members, and extension contact were the significant predictors of using improved SWC structures. Based on our findings, we conclude that improved SWC measures should be scaled up through a concerted effort of extension workers, local administration and other relevant non-state actors. In particular, the extension system should encourage rural communities towards sustainable management and use of natural resources. Moreover, the need to create learning opportunities through facilitating appropriate educational and training programs for farmers and focusing on proper management of available economically active household members should be emphasized.
Background
Agriculture remains to be the leading sector that contributes enormously to economic development in Africa (Belachew et al. 2020;Collier and Dercon 2014). More importantly, throughout the Sub-Saharan Africa (SSA) region, the sector is hailed as the main engine of economic growth and poverty reduction. Despite its significant contribution to livelihoods, the sector faces a persistent challenge due to depletion of natural resources and soil erosion (Belachew et al. 2020;Kagoya et al. 2017), climate change induced phenomena, and scarcity of modern/productive inputs, to mention but a few. As an agrarian nation, Ethiopia's fast-growing economy is facing similar challenges due to lingering soil erosion and land degradation (Asnake et al. 2018;Fontes 2020). This has resulted in reduced crop and livestock productivity and increased food insecurity and poverty.
Soil degradation, especially in the highlands of Ethiopia, continues to be a serious threat to subsistence agriculture, which is the backbone of the economy. It has to be noted that 90% of the population lives in the highlands, where land is continually cultivated and, as a result, is highly prone to soil erosion and land degradation (Daniel and Mulugeta 2017). The situation in Haramaya district, the study area, is not different from the rest of the country. Haramaya district faces food production problems due to both physical and man-made causes. The man-made problems include overgrazing, overcultivation, deforestation and inappropriate agricultural practices. The physical factors include climate change, intensity of rainfall, topography and others. These have resulted in enormously degraded land, which seriously threatens smallholders' welfare in the district.
In order to curb the effects of land degradation, the Government of Ethiopia (GoE) has been taking serious measures. One of the strategies has been expanding SWC practices throughout the country (Adimassu et al. 2014). The GoE, through its Productive Safety Net Program (PSNP) and other initiatives, has been promoting terracing, soil and stone bunds, mulching, composting, etc. on individual and communal lands (Yitayal and Adam 2014). Such practices have proved to be effective in reducing soil erosion and improving soil nutrient availability (Haregeweyn et al. 2015). However, the effectiveness of the government's efforts to promote improved structural SWC measures has not been adequately studied across the various agro-ecological zones of the country. Hence, scanty empirical evidence exists on the status of adoption and impact of improved structural SWC measures across various contexts.
Recent empirical investigations highlight the usefulness of SWC practices in enhancing productivity and improving smallholder livelihoods (e.g., Haregeweyn et al. 2015;Karidjo et al. 2018). Nevertheless, in spite of the efforts made to popularize the use of such measures, their adoption is not widespread among farmers in Ethiopia (Asnake et al. 2018;Kirubel and Gebreyesus 2011). This is partly due to the lack of active participation of smallholders. In Haramaya district, both traditional and modern SWC structures have been practiced by some farmers, but not to a satisfactory level. More importantly, the rate of adoption of improved structural SWC practices in the district has not been sufficient to safeguard smallholder livelihoods against crop loss, food insecurity and abject poverty.
There are several studies documenting the socio-demographic, economic, institutional and biophysical factors that influence farmers' decisions to use improved agricultural technologies (for instance, Daniel and Mulugeta 2017;Yitayal and Adam 2014). However, research on the adoption of improved SWC practices among smallholders is limited. In Haramaya district, such studies have not yet been sufficiently conducted to provide policy-informative recommendations. Such investigations are vital for selecting relevant conservation methods and interventions to encourage active participation, as well as for designing and implementing appropriate policies and strategies (Asnake et al. 2018). Therefore, the current study was conducted to identify factors affecting the adoption of improved structural SWC practices among smallholder farmers in the study area.
Description of the study area
This study was conducted in Haramaya district, which is located 510 km from Addis Ababa along the main road towards Harar town. It is one of the 19 food insecure districts of east Hararghe zone of Oromia regional state. It has 33 rural kebeles. The district lies between 9° 09′ and 9° 32′ N latitude and 41° 50′ and 42° 05′ E longitude to the west of Harar town. It is bordered by Dire Dawa Administrative Council in the north, Kombolcha district in the north east, Harari Peoples' National Regional State in the east, Fedis district in the south east, Kurfachele district in the south west and Kersa district in the west. Of the total area of 521.63 km², 36.1% is arable or cultivable, 2.3% is pasture, 1.5% is forest, and the remaining 60.1% is considered built-up, degraded or otherwise unusable. Of its total area, 90% is mid-highland while the remaining 10% is lowland (Haramaya District Agricultural and Rural Development Office (HDARDO) 2014).
The total population of the district was 271,018, of which 138,282 were men and 132,736 were women, with an average family size of five. The majority of the population (96.7%) are Muslims, 2.7% practice Orthodox Christianity, and the remaining follow other religions. The predominant soil types of the district are Rigo soils (Haramayan series, 60%) and heavy black clay soils (Vertisols, 40%). The soil texture of the district is sandy loam (HDARDO 2014).
Rainfall in the district is bimodal, with a mean annual rainfall of 492 mm and annual totals ranging from 118 mm to 866 mm. The short season (Badheessa) usually starts in March and ends in May, and the long season (Ganna) occurs between June and September. Relative humidity varies from 60 to 80%. Minimum and maximum annual temperatures range from 6 °C to 12 °C and 17 °C to 25 °C, respectively (Haramaya District Finance and Economic Development Office (HDFEDO) 2014).
Agriculture is the mainstay of the population of the district. It is carried out by those who have land and livestock. Some landless households are engaged in sharecropping and other non-agricultural income generating activities such as daily laboring and petty trading. The dominant crops grown in the district are sorghum, maize, potato, sweet potato, haricot beans, vegetables and khat. Vegetables and khat are the two major cash crops grown in the area (HDARDO 2014).
Food crop production is commonly poor because of land fragmentation, the shortage of motor pumps and the diversion of attention to the cash crop khat. Livestock are also valuable components of the farming system, contributing enormously to achieving household food security. The major livestock production practices are: cattle production for milk, animal fattening, small ruminants (sheep and goats), poultry, and donkeys for transport. The main problems in the district's livestock production are shortage of feed because of the degradation and scarcity of grazing lands (HDARDO 2014).
Agricultural extension services are important to assist farmers by identifying and analyzing their production problems and by making them aware of opportunities for improvement. They play a significant role in increasing crop production by promoting the use of improved seeds, fertilizers, chemicals and other improved farming practices. Currently, the focus of the extension services in the district is on crops, livestock and natural resources in an integrated development approach. There are 126 Development Agents (DAs), who live within the Kebeles (i.e., the lowest administrative units) and provide extension services to the farmers. The farmer-DA ratio is one important issue which needs attention (HDARDO 2014).
Sampling technique
A multi-stage sampling technique was used to select study sites and draw households for the study. First, Haramaya district was selected purposively due to its high soil erosion problem. Then, three Kebeles were selected randomly. Finally, Probability Proportional to Size (PPS) and simple random sampling were used to draw sampled households from each Kebele. To identify households, a list of names of the household heads was taken from the District Office of Agriculture and Natural Resources as well as the records of DAs. In all the sampled Kebeles the upper, middle and lower slope reaches of the watershed were covered during data collection. This study applied a simplified formula provided by Yamane (1967) to determine the required sample size: n = N / (1 + N e²), where n is the sample size, N is the population size, and e is the level of precision (0.09). Based on this formula, a total of 120 sample households and 248 plots were used in this study (Table 1).
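The Yamane (1967) calculation is easy to reproduce. Below is a minimal Python sketch; the population figures are illustrative placeholders, not the actual sampling frame of the three Kebeles, which is not reported here.

```python
import math

def yamane_sample_size(population: int, precision: float = 0.09) -> int:
    """Yamane (1967) simplified formula: n = N / (1 + N * e^2)."""
    return math.ceil(population / (1 + population * precision ** 2))

# Illustrative frames only: as N grows, n approaches 1 / e^2 ≈ 123,
# which is consistent with the 120 households used in the study.
for N in (1000, 5000, 50000):
    print(f"N = {N:>6} -> n = {yamane_sample_size(N)}")
```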
Data collection and analysis
Both qualitative and quantitative data were collected from primary and secondary sources through field observation, structured interview schedule, and Focused Group Discussions (FGDs). Qualitative data were collected from elders, selected farmers and key informants, who have adequate knowledge and information about the past and present condition of the study area. The knowledge and information from these sources include natural resources, agricultural production, land use, land management practices, causes, extents, and consequences of soil erosion, SWC practices, local labor organization and institutional support. The quantitative primary data include household characteristics (age, education, farming experiences, family size, marital status), farm characteristics (number of plots, source of farm plot, slope, soil fertility, soil colour, farm size, distance of farm plots from home), perception on soil erosion, causes, extents and consequences of soil erosion, SWC practices, labor availability, land tenure issue, agricultural extension and credit. Secondary data were reviewed from published and unpublished sources.
Qualitative data were analyzed through interpretation and conceptual generalization. For quantitative data, both descriptive statistics and the standard Multinomial Logit (MNL) model were implemented in STATA 11 software. The MNL model was used in this study to assess factors affecting farmers' adoption of improved SWC practices because the dependent variable takes more than two values: (1) traditional or no conservation strategy, (2) improved soil bund, (3) improved stone bund, and (4) improved check dam. Households and plots were used as units of analysis because the focus of the study was on SWC technologies that were observed at the plot level and the dependent variable was also measured at the same level. This level of analysis is advantageous because it captures more spatial heterogeneity and also helps to control for plot level characteristics and hence helps to minimize the omitted variable bias that would confound household level analysis (Saratakos 1999). Investigating the factors affecting farmers' decisions on adoption of improved SWC technologies inherently requires a multivariate analysis. Attempting bivariate modeling excludes useful economic information contained in the interdependent and simultaneous adoption practices (Wagayehu and Drake 2003). However, the use of such bivariate models to analyze factors affecting farmers' decisions to adopt technologies and best practices is still prevalent. For instance, in a recent study in Kenya and Ethiopia, Ng'ang'a et al. (2020) used a Probit model to understand factors influencing farmers' decisions on soil carbon sequestration. Likewise, a binary Logit model was used in a study that looked into determinants of adoption of SWC in the Ethiopian highlands (Mekuria et al. 2018). Asfaw and Neka (2017) also implemented binary Logit to find out the predictors of adoption of SWC measures in Wereillu district, Northern Ethiopia. Other empirical works related to adoption of SWC measures that employed binary choice models (i.e., Logit/Probit) include Moges and Taye (2017) and Tarfasa et al. (2018), who analyzed farmers' choice of, and preference for, soil and water management practices in various developing countries.
In our investigation, it was more appropriate to treat adoption of improved SWC measures as a multiple choice decision. Hence, a MNL model was used to estimate the coefficients and marginal effects of farmers' adoption of improved soil bund, improved stone bund, and improved check dam in the study area. The use of such models is not uncommon in adoption studies with a dependent variable that has many categories. For example, Sileshi et al. (2019) employed a Multivariate Probit model to analyze the determinants of adoption of SWC measures in Deder, Goro Gutu, and Haramaya districts of eastern Ethiopia. A similar model was also used in a very recent study investigating factors affecting adoption of SWC practices in northwest Ethiopian highlands (Belachew et al. 2020). Mengistu and Assefa (2019) also used Multivariate and Ordered Probit to understand farmers' decision process associated with watershed management in Gibe basin of southwest Ethiopia. Further, Multinomial Logit (MNL) was used to assess determinants of smallholder farmers' decision in the Muger Sub-basin of the Upper Blue Nile basin of Ethiopia (Amare and Simane 2017).
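For readers who want to replicate the estimation outside of STATA, the sketch below shows how a comparable MNL model could be fitted with Python's statsmodels. The data frame, variable names and values are all hypothetical stand-ins for the study's plot-level dataset; only the four-category outcome coding follows the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 248  # number of plots, as in the study

# Hypothetical plot-level predictors (names and ranges are illustrative)
df = pd.DataFrame({
    "education": rng.integers(0, 9, n),       # schooling of household head (years)
    "experience": rng.integers(1, 45, n),     # farming experience (years)
    "plot_area": rng.uniform(0.05, 1.5, n),   # plot size (ha)
    "plot_distance": rng.uniform(1, 60, n),   # walking minutes from dwelling
    "active_members": rng.integers(1, 7, n),  # economically active members
    "ext_contact": rng.integers(0, 2, n),     # extension contact dummy
})
# Outcome: 0 = traditional/no conservation, 1 = improved soil bund,
#          2 = improved stone bund, 3 = improved check dam
y = rng.integers(0, 4, n)

X = sm.add_constant(df)
mnl = sm.MNLogit(y, X).fit(disp=False)  # category 0 serves as the base outcome
print(mnl.summary())
print(mnl.get_margeff().summary())      # marginal effects, as reported in the paper
```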
In this study, adoption is regarded as the existence of one or more improved structural SWC structures on a farmer's plot. The independent variables, hypothesized to be related to the dependent variable, were carefully chosen based on previous empirical research (Table 2). Prior to running the MNL model, and as recommended by Gujarati (1995), multicollinearity among the continuous explanatory variables was assessed using the Variance Inflation Factor (VIF) and Tolerance Level (TOL). Similarly, in order to see the degree of association among dummy and discrete variables, the Contingency Coefficient (CC) was computed. The results of these tests showed the absence of multicollinearity problems in the dataset.
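A minimal way to reproduce these diagnostics in Python is sketched below; variance_inflation_factor comes from statsmodels, the contingency coefficient is computed as C = sqrt(chi2/(chi2 + n)) from a cross-tabulation, and the variable names are again placeholders.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from scipy.stats import chi2_contingency

df = pd.read_csv("swc_plots.csv")  # hypothetical file, as above

# VIF and tolerance for the continuous explanatory variables
cont_vars = ["educ", "experience", "plot_area", "plot_distance"]
cont = df[cont_vars].values
for i, name in enumerate(cont_vars):
    vif = variance_inflation_factor(cont, i)
    print(f"{name}: VIF = {vif:.2f}, TOL = {1.0 / vif:.2f}")

# Contingency coefficient for a pair of dummy/discrete variables
tab = pd.crosstab(df["sex"], df["offfarm"])
chi2, _, _, _ = chi2_contingency(tab)
n = tab.values.sum()
print("Contingency coefficient:", np.sqrt(chi2 / (chi2 + n)))
```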
Descriptive results
The results of the descriptive analyses of the personal and demographic, economic, biophysical, institutional and behavioral characteristics of the sampled farm households are given in Table 2. The results showed that 85% of the respondents are male household heads who possess a very low level of education. However, they have large family sizes (six members on average) and rich farming experience (23 years on average). It is widely acknowledged that family size and composition affect the amount of labor available for farm, off-farm and household activities. They also determine the demand for food. Similarly, more experienced farmers are found to be able to identify soil erosion problems better than less experienced farmers (Shiferaw and Holden 2008).
Looking at the economic variables, the data showed that only 34.2% of the sample households are engaged in off-/non-farm activities. Off-/non-farm activities have served farmers in the study area as sources of additional income, mainly to purchase food crops and other non-food commodities. Involvement in petty trading and wage labor accounted for 29.2% and 5.0% of off-farm employment opportunities, respectively. The majority of the respondents (about 93%) possess livestock (measured in TLU). The number of economically active household members who live in and work for the household also determines the labor available in the household, which in turn may determine the type of SWC measures used by the farm household. Households with abundant labor may decide to use conservation measures which require more labor but are effective and efficient.
Concerning biophysical characteristics, it is unquestionable that SWC measures require some area that would otherwise be used for the cultivation of crops or allocated to other purposes. Hence, it is assumed that farmers with larger farm plots are more likely to use improved SWC measures to reduce soil erosion and conserve water on their plots than farmers with small farm plots (Semgalawe 1998). The survey result showed that the average farm plot size for the sample households is 0.43 ha, which indicates a serious shortage of farmland in the study area. Slope is one of the farm attributes that can aggravate land degradation in general and soil erosion in particular. Farmers whose farms are in areas more prone to soil erosion are expected to experience more soil erosion and therefore recognize the impact of topsoil loss more easily than farmers whose farms are located on gentle slopes. In this study, 15.7%, 39.8%, and 44.5% of the plots were located on flat, medium and very steep slopes, respectively. It is therefore expected that the steeper the slope of the farmland, the higher the probability that farmers adopt improved SWC technologies. Distance between farm plots and the homestead also matters, as a considerable amount of time can be lost walking long distances. In addition, it is easier for farmers to take care of their farm, to construct and maintain structural SWC practices, and to apply manure on fields near their homesteads than on fields that are far away. As indicated in Table 2, about 15% of the farms are located more than 20 minutes away from the homestead. During the FGDs, it was indicated that leaving crop residues on the cultivated field enhances soil fertility. However, when the land is located far away from the homestead, other people may take the residues for home use (fuel), for animal feed, for fencing and even for sale. Thus, if the farm field is located near the farmhouse, it is easier to manage and can receive better attention.
The issue of tenure security is among the institutional variables considered in this study. Farmers in the study area have four major sources of land: (1) inheritance from family, (2) allocation by Kebeles, (3) sharecropping, and (4) renting. The survey result revealed that more than 90% of the respondents feel secure about their landholding. Further, it was found that 76% of the respondents believe that land belongs to the government; 89% expect to use the land throughout their lifetime; 94% think that they have the right to pass the land on to their children; and 93% believe that they can decide to invest in SWC. Land tenure has important implications for agricultural development in general and SWC in particular (Woldeamlak 2006). Land tenure arrangements in rural Ethiopia have undergone frequent changes since the 1974 revolution. The 'Land-to-the-Tiller' land reform proclamation of the Dergue regime declared that land cannot be sold or mortgaged. Then, in 1995, a new constitution was enacted, under which farmers were given the right to use their land indefinitely, although selling or mortgaging land remains prohibited (Kebede 2006). It is generally concluded that a more secure tenure system provides the necessary incentives for farmers to adopt SWC measures on their farm plots (Tesfaye 2011).
Another institutional characteristic is contact with Development Agents (DAs). A good relationship with DAs helps farmers become aware of improved SWC practices that reduce the hazards associated with soil erosion. The DAs can provide technical information and advice as well as training on improved SWC practices. In the survey, we found that about 43% of the farmers interacted with DAs at least once a month.
Farmers' perception on severity and causes of soil erosion
During the survey, sampled farmers were asked to classify their farm plots according to their perception of the degree of the erosion problem (i.e., the extent or severity of soil erosion on the plots). The respondents were given three alternatives, low, medium and high, to indicate the severity of the problem. The findings, depicted in Table 3, showed that the largest group of farmers (45.8%) experienced frequent and severe soil erosion problems, whereas 41.5% of the respondents encountered mild/medium soil erosion problems. The remaining 12.5% of respondents characterized the occurrence of soil erosion on their plots as 'low', occasional or limited. A chi-square test was performed to assess whether there was a statistically significant difference among the three groups of responses. The result, χ² = 14.10 with p < 0.05, indicated significant differences in farmers' perceptions of the severity of the soil erosion problem they encountered on their farm plots. Moreover, since 87.3% of the respondents reported medium to high levels of soil erosion, it can be concluded that the threat of soil erosion is real in the study area, and one can reasonably expect the majority of the farmers to implement some kind of SWC measure to safeguard their farm plots from the adverse effects of soil erosion.
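The mechanics of such a test can be reproduced with scipy; note that the exact design behind the reported χ² = 14.10 is not specified in the text, so the counts below are placeholders that merely illustrate a goodness-of-fit test over the three severity classes.

```python
from scipy.stats import chisquare

# Placeholder plot counts per perceived severity class (high, medium, low);
# the paper reports shares of 45.8%, 41.5% and 12.5% over the sampled plots.
observed = [114, 103, 31]
stat, p = chisquare(observed)   # H0: the three classes are equally likely
print(f"chi2 = {stat:.2f}, p = {p:.4g}")
```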
Soil erosion is a naturally occurring process on all land. The agents of soil erosion are water and wind, each contributing a significant amount of soil loss each year in the study area. The role of water in eroding the land is greatest during the rainy season, whereas wind causes erosion during the dry/windy season. Among the interviewed farmers, about 36% and 30% ranked cultivation of steep slopes and poor agricultural practices, respectively, as the main causes of land degradation (Table 3). The respondents also indicated heavy rainfall and continuous cultivation as additional factors contributing to soil erosion in the study area.
Farmers' perception on structural SWC measures
The variables considered here relate to the respondents' perceptions of the risks and comparative advantages of SWC technologies. These perceptions are important factors influencing households' participation in improved/new SWC practices. The relative superiority of a technology in terms of its advantages enables farmers to form a favorable perception of it, which in turn encourages a decision in favor of its adoption. In order to gain essential information and insight into farmers' decisions on the adoption of improved SWC practices, it is therefore important to examine their perception of each practice they employ. In this study, a five-point Likert scale was used for this purpose, and the results are depicted in Table 4.
As indicated in Table 4, almost all the respondents indicated that traditional structural SWC measures are more flexible than introduced SWC structures. On the other hand, more than 70% of the farmers stated that improved soil bunds increase soil fertility; more than 90% of the sampled households agreed that improved stone bunds need more inputs/materials; and more than 90% of the respondents stated that improved check dams require frequent maintenance. On balance, these perceptions imply that farmers in the study area are generally ready to implement improved structural SWC measures, provided the associated input and maintenance requirements can be met. This calls for a more concerted effort by the government and other development partners to promote such SWC structures.
Factors affecting use of improved structural SWC measures
The results of the Multinomial Logit (MNL) analysis conducted to assess factors affecting smallholder farmers' adoption of improved structural SWC measures are given in Table 5. Twelve explanatory variables entered the MNL model. As can be seen in the lower part of Table 5, the MNL model is significant, with reasonable explanatory ability. Overall, the econometric analysis indicated that educational level, farming experience, number of economically active household members, contact with extension service providers, plot area, and plot distance from the dwelling significantly affect farmers' decisions on the use of improved structural SWC measures. However, these variables affect the use of one, two or all of the conservation structures with different signs, magnitudes and significance levels. In what follows, we discuss these significant predictors of farmers' use of improved structural SWC measures in the study area.

Table 5: Multinomial Logit (MNL) model estimation results. Source: own analysis from survey data, 2017. Dependent variable = existence of an improved structural SWC structure on the farm plot. *, **, *** denote significance at p < 0.1, p < 0.05, and p < 0.01, respectively.
Educational level of household head
Education level of the household head was found to be positively and highly significantly associated with the use of improved soil bunds, stone bunds and check dams. More precisely, our estimation results showed that a one-year increase in education increases the probability that a household uses an improved soil bund, stone bund and check dam by 0.55%, 0.3%, and 0.6%, respectively. This result implies that household heads with relatively better formal educational attainment are more likely to use appropriate improved structural SWC practices and are better able to anticipate the consequences of soil erosion than non-educated farmers. In addition, they have a better understanding of their environment and of the risks associated with cultivation of marginal lands. Our finding is in line with earlier empirical evidence obtained from different parts of the country (e.g., Anley et al. 2007; Tizale 2007). Education level of farmers was also found to be strongly associated with their perception to invest in SWC technologies in the north-western highlands of Ethiopia (Moges and Taye 2017). More importantly, our result corroborates the findings of recent studies conducted in Ethiopia that documented the positive and significant effect of education in fostering adoption of introduced SWC measures (Asfaw and Neka 2017; Belachew et al. 2020; Sileshi et al. 2019). Households' educational status was also found to raise awareness about SWC practices as well as enhance their adoption in Southern Africa (Mozambique, Malawi and Zambia) (Mango et al. 2017). Hence, it is of utmost importance to promote adult education and training among rural communities in order to enable them to make informed decisions pertaining to conservation of natural resources and their sustainable use.
Farming experience
Farmers' experience in agriculture is another important factor related to the use of improved technologies and best practices. In this study, we found that the farming experience of the household head is positively and significantly related to the adoption of improved structural SWC measures in the study area. The result indicates that experienced farmers tend to appreciate the value of improved conservation strategies more than inexperienced farmers. In relation to this result, Shiferaw and Holden (2008) asserted that experienced farmers are more capable of detecting soil erosion problems than inexperienced farmers. Similarly, Fekadu et al. (2013) pointed out that farmers with more farm experience have a higher chance of participating in conservation measures. As observed from our econometric analysis, a one-year increase in farming experience increases the probability of farmers' adoption of improved soil bund, stone bund and check dam conservation by 0.6%, 0.1% and 0.2%, respectively. Nevertheless, we stress that young and less experienced farmers deserve equal, if not more, attention in the process of adoption and diffusion of improved structural SWC measures.
Extension contact
Extension service on SWC practices was found to have a positive effect on the adoption of improved soil bunds. However, it did not affect the adoption of improved stone bunds or check dams. This suggests that extension service providers in the study area need to embark vigorously on the promotion of improved SWC structures. Other stakeholders should also support the diffusion of improved SWC measures by educating, financing and encouraging farmers in the area. Farmers who receive extension messages on SWC from Development Agents will be more encouraged to use improved SWC practices on their farm plots than those who do not have the opportunity to interact with extension personnel. Similarly, Yitayal and Adam (2014) and Tizale (2007) reported that households with access to extension services and information have a better understanding of the land degradation problem and of soil conservation practices, and hence may perceive SWC practices to be profitable. As observed from the model result, receiving extension messages on SWC practices increases the probability of using an improved soil bund by 0.52%.
In recent studies, access to extension service was found to have a significant effect on the adoption of SWC practices in different parts of the country: northwest Ethiopian highlands (Belachew et al. 2020;Moges and Taye 2017); Wereillu district, northern Ethiopia (Asfaw and Neka 2017); Gibe basin, southwest Ethiopia (Mengistu and Assefa 2019); Lemo district, southern Ethiopia (Bekele et al. 2018); and, Gusha Temela watershed, Arsi, Ethiopia (Biratu and Asmamaw 2016). Likewise, contact with extension service providers was found to have positive effect on the adoption of SWC measures in Techiman Municipality of Ghana (Darkwah et al. 2019), in Tanzania (Lasway et al. 2020;Shrestha and Ligonja 2015), and in the Rwizi catchment of south western Uganda (Mugonola et al. 2013).
Plot area
The MNL model result indicated that plot area has a positive and significant effect on the likelihood of adopting all types of improved structural SWC practices. This is because farmers with larger farm plots are more likely to be able and willing to use improved SWC measures to reduce land degradation problems on plots located on sloping land. This result is in line with empirical studies that have shown a positive and significant effect of plot area on the decision to use conservation measures (for instance, Amsalu and De Graaff 2007; Kassa et al. 2013). Hence, plot size promotes conservation. The result shows that as plot area increases by one hectare, the probability of deciding to use improved soil bunds, stone bunds and check dams increases by 0.28%, 1.84% and 4.26%, respectively.
Plot size was also found to have a positive effect on farmers' perception to invest in SWC technologies in northwestern highlands of Ethiopia (Belachew et al. 2020;Moges and Taye 2017;Teshome et al. 2016), in eastern Ethiopia (Sileshi et al. 2019), and in Lemo district, southern Ethiopia (Bekele et al. 2018). It was also found to exert a positive and significant effect on the adoption of soil management practices in southwestern Uganda (Mugonola et al. 2013) and in West African Sahel (Kpadonou et al. 2017).
Distance of the plot from dwelling
Distance of the plot from the farmer's dwelling was negatively related to the adoption of improved check dams and improved stone bunds. The model output indicated that as the distance of the plot from the household's dwelling increases by 1 km, the probability of using improved check dams and improved stone bunds decreases by 1.65% and 1%, respectively. This result is in line with the findings of Derajew et al. (2013). Plot distance from the homestead was also found to have a negative and significant effect on farmers' perception to invest in SWC technologies in the northwestern highlands of Ethiopia (Moges and Taye 2017). It was also found to negatively affect adoption of SWC measures in South Wollo zone, northern Ethiopia (Asfaw and Neka 2017).
Number of economically active household members
SWC activities demand labor, which is critically scarce during peak periods of crop production and livestock rearing. In this study, the number of economically active household members who participate in improved structural SWC was found to be positively and significantly related to the adoption of improved soil bunds. The model result indicated that as the number of economically active household members increases by one person, the probability of using an improved soil bund increases by 0.13%. However, we did not find any significant relationship between this variable and adoption of the other improved structural SWC measures. This result is in line with the findings of Tadesse and Belay (2004). Availability of adequate labour was also found to positively influence farmers' participation in SWC activities in Gusha Temela watershed, Arsi, Ethiopia (Biratu and Asmamaw 2016). In general, household size has also been shown to have a positive effect on adoption of SWC in the northwest Ethiopian highlands (Belachew et al. 2020), in the Gibe basin, southwest Ethiopia (Mengistu and Assefa 2019), in Ghana (Darkwah et al. 2019), and in Kondoa, Tanzania (Shrestha and Ligonja 2015).
Conclusion and policy implications
Farmers' conservation decisions are shaped by several factors. In order to understand the factors affecting adoption of improved structural SWC measures at the smallholder farm level, this study was conducted in Haramaya district of eastern Ethiopia using 248 randomly sampled plots. Quantitative and qualitative data were collected from primary and secondary sources. The results of the MNL analysis indicated that education level, farming experience, plot area, plot distance from the dwelling, number of economically active household members, and extension contact significantly affect the use of improved structural SWC strategies on farm plots. Hence, development policy and program interventions designed to enhance agricultural productivity through promoting structural SWC measures in the study area need to take into account these important variables with respect to the type of innovation and farmers' preferences.
The findings of this study showed the importance of education among household characteristics. Therefore, stakeholders who work on SWC programs and projects should use educated farmers as models to demonstrate the importance of improved SWC measures to others. Likewise, agricultural extension services have been in place in the study area for more than three decades. However, the findings of this study indicated that the contribution of extension services to the adoption of improved conservation technologies by farmers is not satisfactory. Thus, there is a need to emphasize the conservation of resources in the existing extension system in order to enhance the use of improved conservation measures by farmers.
The results of the study also indicated that plot area increases the probability of using improved structural SWC measures. Thus, programs working on SWC should focus on farmers with relatively larger farm plots as entry points, so that the practices can then spread to owners of small farms. Such model farmers can be used as proponents of improved structural SWC measures. The study also revealed that the number of economically active household members has a significant and positive association with improved soil bund adoption. Therefore, extension planners should give attention to proper management of labor in order to attain SWC goals.
Distance of the plot from the household dwelling also showed a significant and negative relationship with the improved check dam and improved stone bund conservation strategies. Therefore, district development planners and implementers should take this into account during program planning and implementation in order to realize the required results. Finally, the results of the study indicated that the farming experience of the household head affects the use of improved SWC measures. Thus, the district bureau of agriculture and natural resources and other relevant stakeholders should focus on farmers with relatively more farming experience in order to scale up the practices and benefit younger and relatively less experienced farmers through a trickle-down effect. | 2020-06-24T15:10:16.959Z | 2020-05-07T00:00:00.000 | {
"year": 2020,
"sha1": "74ea4293747f967a0a14b154d95402b879e96db9",
"oa_license": "CCBY",
"oa_url": "https://environmentalsystemsresearch.springeropen.com/track/pdf/10.1186/s40068-020-00175-4",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "74ea4293747f967a0a14b154d95402b879e96db9",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Business"
]
} |
1905949 | pes2o/s2orc | v3-fos-license | Quasi-Fixed Points and Charge and Colour Breaking in Low Scale Models
We show that the current LEP2 lower bound upon the minimal supersymmetric standard model (MSSM) lightest Higgs mass rules out quasi-fixed scenarios for string scales between 10^6 and 10^11 GeV unless the heaviest stop mass is more than 2 TeV. We consider the implications of the low string scale for charge and colour breaking (CCB) bounds in the MSSM, and demonstrate that CCB bounds from F and D-flat directions are significantly weakened. For scales less than 10^10 GeV these bounds become merely that degenerate scalar mass squared values are positive at the string scale.
Introduction
For many years, string and unification scales were thought to be high (≳ 10^16 GeV). The perturbative heterotic formulation of string theory had the fundamental string scale Λ_s ∼ O(10^17) GeV, close to M_Planck ∼ 10^19 GeV, because of its constrained description of the gravitational interaction. The grand unification (GUT) scale was around Λ_GUT ∼ 10^16 GeV, motivated by the apparent convergence of the gauge couplings when they were evolved to this value. Recently, however, attention has turned to models that have lower string and/or unification scales [1,2,3,4,5,6], and this has raised some interesting questions to do with the renormalisation group evolution of parameters.
The most immediate is of course whether gauge or Yukawa unification is still possible, or even necessary, with a lower string scale. One example that achieves gauge unification at the string scale [2] has the couplings experience power law 'running' [2,4,5] above a compactification scale due to the presence of additional Kaluza-Klein modes. A Kaluza-Klein spectrum with the same ratios of gauge beta functions as those in the MSSM leads to logarithmic running up to the compactification scale, with power law unification taking place very rapidly thereafter [2]. An example that does not achieve gauge unification is 'mirage' unification [6]. In mirage unification the gauge couplings at the string scale receive moduli dependent corrections that behave as if there were continued logarithmic running above the string scale up to unification at the usual Λ_GUT; 'mirage unification' refers to this fictitious unification.
A particularly attractive choice for the string scale (albeit one that is not immediately accessible to experiment) is Λ_s ∼ 10^11 GeV [3]. In this case the hierarchy between the weak scale and the Planck scale arises without unnaturally small ratios of fundamental scales. It was also noted in the first reference of [3] that Λ_s ∼ 10^11 GeV gives neutrino masses of the right order. We return to this model below and refer to it as the Weak-Planck (WP) model.
In this paper we consider two other related issues in the Minimal Supersymmetric Standard Model (MSSM) with a low string scale. The first concerns the top quark Quasi-Fixed Point (QFP).
The QFP is characterised by a focusing of some MSSM parameters to particular ratios as the renormalisation scale Λ is decreased towards the top quark mass m_t [7,10,12]. Formally, it is defined to be the point in parameter space where there is a Landau pole in the top Yukawa coupling h_t at the string or GUT scale (whichever is the lower).
In practice, however, this focusing behaviour can occur for a large but finite h_t(Λ_s), still treatable by perturbation theory. The coupling h_t focusses to some value at m_t independent of h_t(Λ_s), provided it is large enough. In low scale models, with their foreshortened logarithmic running, one naturally expects this behaviour to be very different. If the pole is at Λ_s < Λ_GUT, we expect the quasi-fixed value of the top Yukawa at m_t to be larger than for the usual GUT scale unification. Conversely, for a given value of the top mass and tan β at the weak scale, the model will be further from the QFP for Λ_s < Λ_GUT. We shall determine the QFP prediction for h_t(m_t), on which experimental constraints from LEP2 can be brought to bear in order to empirically constrain Λ_s assuming the QFP scenario. In particular, we consider the empirically derived lower bound upon the lightest CP-even MSSM Higgs mass, which in the canonical GUT scenarios has been shown to be a strong restriction upon the QFP scenario [12].
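This focusing can be illustrated with a rough one-loop toy integration (not the two-loop calculation used in this paper), keeping only the dominant h_t and g_3 terms in the top Yukawa RGE; the numerical inputs below are approximate and for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def g3(t, g3_mt=1.17, t_mt=np.log(175.0)):
    # One-loop MSSM running of g_3 (beta coefficient b_3 = -3):
    # 1/g3^2(t) = 1/g3^2(m_t) + 6 (t - t_mt)/(16 pi^2), with t = ln(mu/GeV)
    return (g3_mt**-2 + 6.0 * (t - t_mt) / (16 * np.pi**2)) ** -0.5

def rge(t, ht):
    # dh_t/dt = h_t (6 h_t^2 - 16/3 g_3^2) / (16 pi^2), electroweak terms dropped
    return ht * (6 * ht**2 - (16.0 / 3) * g3(t)**2) / (16 * np.pi**2)

# Run down from a string scale of 10^11 GeV to m_t for several inputs:
for ht_s in (1.5, 3.0, 5.0):
    sol = solve_ivp(rge, [np.log(1e11), np.log(175.0)], [ht_s], rtol=1e-8)
    print(f"h_t(Lambda_s) = {ht_s} -> h_t(m_t) = {sol.y[0, -1]:.3f}")
```

For large enough inputs, the low-scale outputs cluster around a common quasi-fixed value, which is the behaviour exploited throughout this section.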
The second issue we consider is the possibility of minima that break charge and colour lying along F and D flat directions [10,13,14,15,16,17,18]. The constraints found by requiring that there be no such (CCB) minima are dependent on the distance from the QFP. They are most severe at the QFP itself [10,15,16] and indeed, in the usual MSSM at the QFP, CCB constraints exclude half the parameter space. With a lower string scale it seems likely that such constraints will generally be less restrictive, for two reasons. First, a given point in (weak-scale) parameter space will be further from the QFP, as noted above. Second, the CCB minima are generated radiatively when the mass-squared parameter for H_2 becomes negative. When there is a lower string scale there is less 'room' for a minimum to form at vacuum expectation values (VEVs) much greater than the weak scale. (More specifically, there are positive mass-squared contributions to the potential along the flat direction that become dominant at lower VEVs.) We shall demonstrate that this is indeed the case and that for Λ_s ≲ 10^10 GeV the CCB constraint (at least along the F and D flat directions) is merely that scalar mass-squared values are positive.
We will throughout be discussing these aspects by assuming that there is the standard logarithmic running of the MSSM up to a scale, Λ_s, that we rather loosely refer to as the string scale. This scale may be much lower than Λ_GUT. We define the QFP to be where the top Yukawa has a Landau pole at this point, since any variation in the Yukawa couplings above Λ_s is expected to be drastically changed by string physics. As for the CCB bounds, we derive them on the soft breaking parameters at Λ_s, since this is close to the scale at which we expect the supersymmetry breaking parameters to be derived in any fundamental string model (although we will have more to say on this in due course).
The Quasi-Fixed MSSM
The QFP constraint [7,10,12], i.e. that the top Yukawa coupling h_t has a Landau pole at the string scale, gives important predictions for the couplings and masses of supersymmetric particles. We now examine the prediction for h_t(m_t) numerically, paying special attention to its dependence on the string scale. Fermion masses and gauge couplings are set to their central values in ref. [19], except for α_s(M_Z), which is varied to show the induced uncertainty. Below m_t, we run using a 3-loop QCD ⊗ 1-loop QED effective theory with all superpartners integrated out.
In order to illustrate the quasi-fixed behaviour we first make a rough calculation. To this end, we approximate the superparticle spectrum to be degenerate at m_t, allowing us to use the (two-loop) MSSM renormalisation group equations above that scale. Fig. 1 illustrates the quasi-fixed behaviour for two values of the string scale. The h_t(m_t) QFP prediction can be turned into a prediction of the MSSM parameter tan β (the ratio of the two neutral Higgs VEVs) through the relation

m_t(m_t) = h_t(m_t) (v/√2) sin β,   (2.3)

and the known value [19] of the top quark mass, m_t = 175 ± 5 GeV. We obtain the running top mass m_t(m_t) from m_t by employing the 1-loop QCD correction, thus assuming that supersymmetric corrections to it are small. v refers to the Standard Model Higgs VEV of 246.22 GeV. Low values of 1 < tan β < 3 result from eq. (2.3) when a quasi-fixed value h_t(m_t) > 1.05 is used. The range of tan β relevant here is constrained by the non-observation of the lightest MSSM Higgs boson at LEP2 [12]. The current limits [20] exclude m_{h^0} < 107.7 GeV for the low tan β < 3 scenario. Quasi-fixed tan β predictions are illustrated in Table 1, where they are displayed with estimated uncertainties for the WP and GUT quasi-fixed scenarios. The uncertainties are induced by those quoted in the h_t(m_t) predictions in eqs. (2.1), (2.2).
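As a quick numerical check of eq. (2.3) as reconstructed above (a sketch under that assumption, with an approximate assumed running top mass, not the paper's own calculation), the relation can be inverted for tan β:

```python
import math

v = 246.22        # Standard Model Higgs VEV in GeV
mt_run = 166.0    # assumed running top mass m_t(m_t) in GeV (approximate 1-loop QCD value)
ht = 1.05         # quasi-fixed top Yukawa coupling at m_t

sin_beta = math.sqrt(2) * mt_run / (ht * v)         # invert m_t(m_t) = h_t v sin(beta)/sqrt(2)
print("tan(beta) =", math.tan(math.asin(sin_beta))) # ~2.2, inside the quoted 1 < tan(beta) < 3
```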
Here, we set h_t(Λ_s) = 5, close to its Landau pole and near the edge of perturbativity. In ref. [21], the limit h_t < 3 was used to define a perturbative regime, and we will use the point h_t(Λ_s) = 3 as an estimator of sensitivity to h_t(Λ_s). A central value of α_s(M_Z) = 0.119 [19] was used. We display the results for m_t = 170, 175, 180 GeV to illustrate the large dependence upon the top mass. We use the two-loop diagrammatic result in ref. [22] to calculate the MSSM lightest Higgs mass with the state-of-the-art program FeynHiggsFast. Corrections to the values of h_t(m_t) displayed in Fig. 1 from including sparticle thresholds are expected to be small, because the majority of the change in h_t(µ) occurs in the running between Λ_s and 1000 GeV, which is identical in both cases. We therefore use the prediction for h_t(m_t) as calculated with a degenerate sparticle spectrum at m_t. To within small errors, this value should still be applicable for a non-degenerate spectrum, which is what we assume here.
Ideally, we would now perform a parameter scan through the low energy supersymmetry breaking parameters in order to determine the maximum value of m_{h^0} consistent with the QFP. This is impractical, however, and we resort to using a benchmark point in low energy supersymmetry breaking parameter space. The value of m_{h^0} obtained at the benchmark is in practice very close (within one GeV) to a more general upper bound on m_{h^0} [22], given an upper bound on sparticle masses. For generality, this benchmark corresponds to non-universal SUSY breaking parameters. For a given value of Λ_s, tan β is predicted by the QFP as in Fig. 1. We then set µ and the stop mixing parameter

X_t = A_t − µ cot β.   (2.4)

As is argued in [22], X_t ≈ 2m_{t̃_2} corresponds to the maximal-mixing case, where m_{h^0} is maximised. A_t is then specified by eq. (2.4), and therefore the gluino mass will be set by the QFP prediction of A_t/M_3. For Λ_s = 2 × 10^16 GeV, for example, we obtain A_t/M_3 = −0.59 [10]. However, to the order in perturbation theory used here, the Higgs mass is independent of the gluino mass. Fixing M_A then sets B through the relation [11]

M_A^2 = m̄_1^2 + m̄_2^2 = 2Bµ/sin 2β.

The two electroweak symmetry breaking conditions are [11]

M_Z^2/2 = (m̄_1^2 − m̄_2^2 tan^2 β)/(tan^2 β − 1),    sin 2β = 2Bµ/(m̄_1^2 + m̄_2^2),

where m̄_i^2 = m_{H_i}^2 + µ^2 plus loop corrections. Together, they determine the Higgs mass soft breaking parameters m_{H_1}^2 and m_{H_2}^2 (conservatively assumed to be uncorrelated and free). Following the authors of ref. [22], the maximum value of m_{h^0} is assumed to be acquired by taking M_2 = 100 GeV, M_A = 1000 GeV, µ = −100 GeV and m_{t̃_2} = 2000 GeV in order to get a conservative estimate. The dependence of the upper bound on m_{h^0} on this last parameter is logarithmic, and therefore only slowly increasing as m_{t̃_2} increases; to obtain a sizeable effect on the bound, unnaturally high values of m_{t̃_2} would have to be taken. Using the above procedure, the soft breaking parameters that m_{h^0} depends upon most sensitively are fixed near the weak scale without reference to any further unification assumptions, such as minimal supergravity for example.
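For illustration, and assuming the tree-level M_A relation as reconstructed above (a sketch, not output of the paper's two-loop machinery), the benchmark values fix B numerically:

```python
import math

MA, mu = 1000.0, -100.0   # GeV; benchmark values quoted in the text
tan_beta = 2.2            # representative quasi-fixed value (assumed)

sin_2beta = 2 * tan_beta / (1 + tan_beta**2)
B = MA**2 * sin_2beta / (2 * mu)   # from M_A^2 = 2*B*mu / sin(2*beta)
print(f"B = {B:.0f} GeV")          # ~ -3.8e3 GeV for these inputs
```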
The resulting upper bounds are shown in Fig. 2 for m_t = 170–180 GeV and h_t(Λ_s) = 3–5. If we take m_t = 175 GeV, the QFP is ruled out for any Λ_s > 10^5 GeV. As noted above, the h_t(Λ_s) = 3 curves give an estimate of the uncertainty in the QFP prediction. The figure shows that this dependence is small for Λ_s > 10^9 GeV but that it increases for Λ_s < 10^9 GeV. However, we note that in this latter range, h_t being less than 5 (but still in the quasi-fixed regime) actually strengthens the upper bound upon m_{h^0}. h_t(Λ_s) = 5 thus gives a reasonably accurate bound for Λ_s > 10^9 GeV and a conservative one for Λ_s < 10^9 GeV.
Analytic CCB Bounds at low string scales
We now turn to the discussion of CCB bounds. Unphysical CCB minima present some of the most severe bounds for supersymmetric models [13,14,10,15,16,17,18]. Indeed, for a number of models it has been found that they exclude much of the parameter space not already excluded by experiment; for example the MSSM where supersymmetry breaking is driven by the dilaton [14], SUSY GUTs at the low tan β quasi-fixed point (QFP) [10], M-theory in which supersymmetry breaking is driven by bulk moduli fields [16,17] and several other string/field theory scenarios [17,18]. All of the above work, however, assumed a logarithmic evolution of the gauge couplings with unification at a high scale ≥ 10^16 GeV.
In this section we shall be considering the effect of truncating this logarithmic evolution at a low string scale. For completeness, we first recall the three types of CCB minima that can occur in supersymmetric models:

• D-flat directions which develop a minimum due to large trilinear supersymmetry breaking terms.
• F and D flat directions corresponding to a single gauge invariant.
• F and D flat directions which correspond to a combination of gauge invariants involving H_2 [23,24].

Since the first type are important at low scales [13] and the second type are only important when there are negative mass-squared terms at the GUT scale, we shall concentrate on the constraints coming from the last type of minimum. These occur at intermediate scales due to the running H_2 mass-squared, even if all the mass-squared values are positive at the GUT scale. Hence the resulting constraints are very dependent on renormalisation group running at high scales and are particularly interesting from the point of view of models with a lower string scale. As discussed above, our initial expectation is that the CCB bounds will be far less severe than in the usual versions of the MSSM. We will consider the F and D-flat direction in the MSSM corresponding to the operators L_iH_2 and L_iL_3E_3, where the suffices on matter superfields are generation indices. With a suitable choice of VEVs along this direction, parametrised by a, the potential (neglecting a small D-term contribution) depends only on the soft supersymmetry breaking terms; we refer to this potential as eq. (3.3). In the usual MSSM we can reasonably assume that, since the CCB minimum forms at VEVs corresponding to a ≫ 1, the largest relevant mass, and therefore the appropriate scale at which to evaluate the parameters, is φ = h_{U_33} h_2^0 ≡ h_t h_2^0. This minimises the top quark contributions to the effective potential at one loop, and further corrections to the potential are assumed to be small. Once we lower the string scale, however, we encounter the problem that the CCB minimum moves towards low scales, so that this approximation breaks down. Evidently, from eq. (3.3), this happens precisely where the positive m_{L_ii}^2 + m_{L_33}^2 + m_E^2 terms begin to dominate, and so we do not anticipate that CCB minima will be formed when a < 1. In order to check this, however, our approach will be to construct the constraints using the above assumption on φ and observe that they become far less restrictive as we move to moderately low string scales, say Λ_s ∼ 10^8 GeV. We then check the approximate one-loop analytic results with a more accurate two-loop numerical analysis at certain parameter points and observe numerically that CCB minima do not reappear as we move to very low string scales where a < 1.
In the above potentials, h_2^0 = −a^2 µ/h_{E_33}, so that eq. (3.3) takes the form of eq. (3.4), governed by a coefficient A that contains the LLE combination of mass-squared parameters (also evaluated at φ) appearing in the potential, together with

φ̂ = φ/Λ,   (3.5)

where Λ is an arbitrary scale which we shall take to be the usual unification scale Λ_GUT ∼ 10^16 GeV. The bound is therefore governed by A, B and the corresponding parameters for the LLE, LH_2 direction described above, or for the equally dangerous LQD, LH_2 direction.
To estimate the bound, we now adapt the results of refs. [15,16]. At large values of a ≫ 1 the potential is governed by the first term. Whatever the string scale may be, we require that m_2^2 be positive there and negative at M_W (for successful electroweak symmetry breaking). A CCB minimum radiatively forms close to the value φ_p where A first becomes negative (typically at a scale of a few × µ/h_{E_33}) [15,16].
In refs. [15,16] it was shown that once we are able to estimate φ_p the bound follows fairly easily, and this was done for models with degenerate gaugino masses. Bounds were derived for all non-universal scalar masses and couplings. In the present case, however, the gauge couplings and the gaugino masses are also non-degenerate at the string scale Λ_s.
This makes a general analytic treatment of the RGEs extremely difficult, so in order to simplify matters we shall henceforth assume the 'GUT gaugino relation'. That is, we assume that at the scale Λ_s we have the usual GUT expression for the gaugino masses,

M_1(Λ_s)/α_1(Λ_s) = M_2(Λ_s)/α_2(Λ_s) = M_3(Λ_s)/α_3(Λ_s).   (3.8)

This relationship has the useful property that the gaugino masses as well as the gauge couplings would be degenerate if we continued the evolution of the MSSM RGEs up to Λ_GUT. We shall call this fictitious degenerate value M_a(Λ_GUT) = M_{1/2}. Note that eq. (3.8) is only valid to one-loop order, and indeed in this section we present analytic results to one-loop order only (contrary to the last section). Although eq. (3.8) may seem like a rather brutal requirement, it holds for a number of interesting cases, for instance in models with power law unification as shown in ref. [5]. In these models the scale Λ_s in our analysis should really be interpreted as the compactification scale at which the first Kaluza-Klein states appear in the spectrum, rather than the string scale, which is where we expect the real gauge unification to take place after a short period of power law 'running'. An assumption such as degenerate soft terms at the compactification scale Λ_s is consistent with, for example, the Scherk-Schwarz mechanism of supersymmetry breaking.
Eq. (3.8) is also expected to hold in the mirage unification models of ref. [6] when there is no S/T mixing and in the limit T + T̄ → ∞. In this limit the gaugino masses take a simple form in which we use a subscript 0 to represent values at the usual Λ_GUT unification scale (i.e. α_0 ≈ 1/25), and in which we have neglected terms of order α_a m_{3/2}, which is consistent to one-loop accuracy. In this case we have M_{1/2} = √3 m_{3/2} sin θ.
Eq. (3.8) allows us to adapt the expressions of ref. [15] with only a modest amount of effort, by writing the parameters at Λ_s in terms of their values at Λ_GUT. In order to proceed, we next spend a little time discussing the analytic solutions to the renormalisation group running. The solutions of all the parameters may easily be expressed in terms of those combinations with infra-red QFPs: R = h_t^2/g_3^2, A_t and 3M^2 = m_2^2 + m_{U_33}^2 + m_{Q_33}^2. These may be written as functions of a single variable r, built from the running of α_3 and normalised so that r = 1 corresponds to the GUT scale; taking α_3(m_t) = 0.108 means that 0.37 < r < 1. If the string scale is at Λ_s = 10^11 GeV as in the WP model, then the corresponding value of r_s ≡ r(Λ_s) is r_s = 0.82. It is useful to define the auxiliary quantity of eq. (3.12). Solving for R in terms of its value R_s at the string scale (we use a subscript s to denote string-scale values), we find an expression whose QFP value, where the Yukawa couplings blow up at the string scale, is given by eq. (3.14). We also, for later use, define the distance from the real QFP. This can be rewritten in terms of a fictitious renormalisation of R down from a Λ_GUT-scale value R_0, i.e. defining eq. (3.17). This is the usual expression for R (cf. ref. [16]); however, it should be noted that R_0 is here merely a parameter that is negative in the region 1/R_QFP > 1/R_s > 0. In the usual MSSM with unification at the GUT scale, this would of course be an unphysical (non-perturbative) region. For A_t and M^2 we now define the distance from the usual QFP (i.e. where couplings blow up at the usual unification scale Λ_GUT),

ρ = R/R_QFP,   (3.18)

and also a corresponding quantity σ. We then obtain expressions for Ã_t = A_t/M_{1/2} and M̃^2 = M^2/M_{1/2}^2 in terms of their fictitious values Ã_0 and M̃^2_0 at Λ_GUT. It is important to note that A_t and M^2 retain their QFP behaviour, since when σ = 1 (or R_s → ∞) they are both independent of their values at the string scale Λ_s. In addition, factors of 1/(1 − ρ_s) cancel, so that there is no divergent behaviour at the usual QFP. Also note that this QFP is at lower tan β than in the usual MSSM unification. We can estimate the difference in tan β at the QFP by using eqs. (3.14) and (3.16), which give tan β_QFP ≈ 1.2 in the WP model with Λ_s = 10^11 GeV, in agreement with the full two-loop numerical result presented in Fig. 1.
With all parameters expressed in terms of GUT scale parameters, we are now simply able to apply the bounds derived in ref. [16] for non-universal SUSY breaking directly. Consider for example the LH_2, LLE direction. The cosmological bounds in this case are given by eqs. (3.24)–(3.25), where ρ_p is the value of ρ at the scale φ_p, and for µ = 500 GeV. (The small dependence of f and g on µ, which we must choose by hand, is discussed in ref. [16].) To a good approximation the value of ρ_p is given by [16]

1/ρ_p = 1 + 1/(2R_0) = 1 + 3.17 (sin^2 β − sin^2 β_QFP).   (3.27)

In order to relate the quantities to their string scale values, we use the one-loop RGE solutions for A and B, eqs. (3.28)–(3.29). The general behaviour of the bounds is clearly similar to that in the usual unification scenario. The bounds are on a particular combination of the string-scale mass-squared parameters (involving 2m̃^2_s), and are most restrictive at the QFP, decreasing as tan β increases. Away from the QFP there is a quadratic dependence on Ã_s, with a minimum at Ã_s = O(1).
We can now see why the bounds at low scales are far less severe than in the MSSM with unification at the GUT scale. First, close to the QFP, the bound is reduced for Λ_s = 10^11 GeV: the non-degeneracy of gauge couplings and gauginos contributes negatively to the bound even at the QFP. Second, away from the QFP, the bound asymptotes to the values obtained with

ρ_p = 1/(1 + 3.17 cos^2 β_QFP) ∼ 0.57.   (3.32)

However, the quantity multiplying M̃^2_s in the bound is now (σ_p − 1), which is a larger negative factor than (ρ_p − 1).
We now further specialise to the mirage unification models with V_0 = 0, which have degenerate A-terms and degenerate scalar masses at the string scale. For the WP model value of Λ_s ∼ 10^11 GeV, there are no CCB minima appearing along the LH_2, LLE direction except close to the QFP (tan β ≲ 3) or for negative scalar mass-squared values (m^2_s < 0). At the QFP we find that the bound at Λ_s = 2 × 10^16 GeV is m̃^2_s ≳ 0.95, but it drops rapidly towards smaller values of Λ_s, as shown in Fig. 4. A full numerical determination of the bounds for specific points in parameter space is in accord with Figs. 3 and 4. It also shows that the bounds are in fact not overly sensitive to the precise values of α_1 and α_2 at Λ_s, since the running is dominated by α_3.
Moreover, this behaviour is expected to be a general feature resulting from the low string scale pushing the CCB minimum to low scales. For example, we can analyse the bound at large tan β, where eq. (3.32) holds. Choosing M̃^2_s = 0 and adjusting A_s to make A_0 = M_{1/2}, one finds that, away from the QFP, there are no CCB minima for any positive choice of non-universal mass-squared parameters at the string scale for Λ_s ≲ 10^10 GeV. In other words, for these intermediate and low string scales one may always adjust A_s to remove CCB minima. Conversely, choosing a large enough value of A_s forms a CCB minimum at any Λ_s.
For Λ_s ≲ 10^7 GeV the analytic approximations we have been using break down, for the reasons outlined above. Specifically, instead of evaluating the parameters at the renormalisation scale φ = h_t h_2^0, it is now more accurate to evaluate them at the scale φ = g_2 l (in the LLE, LH_2 direction), since this would be the largest relevant mass. Using this definition for φ, we find numerically that minima do not reappear when Λ_s is lowered still further, as expected due to the dominance of the positive m_{L_ii}^2 + m_{L_33}^2 + m_E^2 contribution to the potential at low VEVs.
Summary
To summarise, we have examined constraints on the MSSM coming from the QFP scenario and CCB bounds when the string scale is lower than the canonical unification value of 10^16–10^17 GeV. The quasi-fixed behaviour is weakened somewhat as the scale is reduced, i.e. weak-scale MSSM parameters retain more information about their high energy boundary conditions. Very strict bounds upon the string scale are obtained from the LEP2 lower bound upon the lightest Higgs mass in the QFP scenario. Current limits exclude the QFP scenario for string scales between 10^6 and 10^11 GeV for m_t = 175 ± 5 GeV. This range of exclusion will increase by the end of the running of LEP2, as the bounds improve. Run II of the Tevatron is expected to decrease the errors upon m_t significantly, with important implications for the range of Λ_s ruled out in the quasi-fixed scenario. For example, an error of 1 GeV upon m_t would rule out the QFP scenario for all Λ_s > 10^5 GeV. CCB bounds also give important constraints upon the quasi-fixed scenario. We provided an analytic treatment of CCB bounds with lower string scales, which we confirmed with a more accurate numerical check. It is clear from our results that lowering the string scale significantly weakens the CCB bounds. As an example, we considered the most restrictive case of the QFP. In this case the lower bound upon string-scale, degenerate, scalar mass-squared values m̃^2_s is weakened by 30% in the WP model, Λ_s = 10^11 GeV. Remarkably, for tan β > 2 and Λ_s < 10^10 GeV, the CCB bound is merely m^2 > 0 for any non-universal pattern of supersymmetry breaking.
Although we have concentrated on a particular subset of models (i.e. those that preserve the 'GUT gaugino relation'), we argue that our conclusions hold more generally. As the string scale is lowered, provided that all mass-squared values are initially positive, the CCB minima are inevitably pushed to lower VEVs. At these low scales, the negative m_2^2 term no longer dominates the potential along the most dangerous F and D-flat directions.
Figure 2: Theoretical upper bound on the lightest MSSM Higgs mass in the quasi-fixed scenario with varying string scale Λ_s. Bounds for quasi-fixed top Yukawa couplings h_t(Λ_s) = 3, 5 and α_s(M_Z) = 0.119 are shown. The copies of each curve are for m_t = 180, 175, 170 GeV from top to bottom respectively. For m_t = 175 GeV and h_t(Λ_s) = 3, we have displayed the variation due to the error on α_s(M_Z) = 0.117–0.122 via the lighter dashed curves. The area underneath the experimental limit has been excluded for the MSSM by LEP2. See text for a description of the other MSSM parameters used.
Figure 1: Prediction of the low energy top Yukawa coupling h_t(m_t) for string scale input h_t(Λ_s). Two string scales, Λ_s = 10^11 and 2 × 10^16 GeV, are used. The pair of lines represents the range produced by varying α_s(M_Z) = 0.115–0.122 (the upper lines corresponding to higher α_s(M_Z)). The almost horizontal parts of the lines represent the QFP regime, where the low-scale value is insensitive to the input h_t(Λ_s).
Table 1: tan β prediction for a top-Yukawa QFP at the GUT scale or the WP scale, tabulated against Λ_s (GeV) for m_t = 170, 175 and 180 GeV.

The m_{h^0} predicted by the benchmark as Λ_s is varied is displayed in Fig. 2. Uncertainties induced by the 1σ error on α_s(M_Z) are shown for one particular case; this uncertainty is larger for higher Λ_s, but always less than 0.5 GeV and much smaller than the uncertainty induced by the empirical error on m_t. In fact, we see from the figure that the QFP is ruled out to better than 1σ for the range 10^6 < Λ_s/GeV < 10^11 (2.7). | 2014-10-01T00:00:00.000Z | 1999-09-21T00:00:00.000 | {
"year": 1999,
"sha1": "83787f5b4921f88a39b128989071ef57715b7270",
"oa_license": null,
"oa_url": "https://iopscience.iop.org/article/10.1088/1126-6708/2000/07/037/pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "83787f5b4921f88a39b128989071ef57715b7270",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
904839 | pes2o/s2orc | v3-fos-license | Predicting Train Occupancies based on Query Logs and External Data Sources
On dense railway networks, such as in Belgium, train travelers are frequently confronted with overly occupied trains, especially during peak hours. Crowdedness on trains leads to a deterioration in the quality of service and has a negative impact on the well-being of the passenger. In order to stimulate travelers to consider less crowded trains, the iRail project wants to show an occupancy indicator in their route planning applications by means of predictive modeling. As there is no official occupancy data available, training data is obtained by crowd-sourcing using the iRail web app and the mobile Railer application for iPhone. Users can indicate their departure and arrival stations, at what time they took a train, and classify the occupancy of that train into one of three classes: low, medium or high. While preliminary results on a limited dataset show that the models do not yet perform sufficiently well, we are convinced that with further research and a larger amount of data, our predictive model will be able to achieve higher predictive performance. All datasets used in the current research are, for that purpose, made publicly available under an open license on the iRail website and in the form of a Kaggle competition. Moreover, an infrastructure is set up that automatically processes new logs submitted by users in order for our model to learn continuously. Occupancy predictions for future trains are made available through an api.
INTRODUCTION
In Belgium, as well as in other countries with dense railway networks, train travelers are frequently confronted with overly occupied trains. In 2016, the ceo of the national railway company suggested peak-load pricing in the hope that a more uniform load over the course of the day could be achieved. As an alternative, travelers who have the luxury of taking an earlier or later train could be informed about the crowdedness of that train. To that extent, the iRail initiative, an independent non-profit project founded by Pieter Colpaert to stimulate digital creativity concerning mobility in Belgium, wants to introduce a feature that indicates the occupancy of each train in its data feeds.
A system that accurately predicts the occupancy level of a train in the near future can have positive implications, as the capacity of that train could be adapted, if possible, according to these predictions. This results, on the one hand, in a decreased probability of crowded trains, and thus an increase in the quality of service. On the other hand, a decrease in operational costs can be realized by reducing the capacity of trains that are expected to have a low occupancy.
The continuously increasing use of smart cards for automated fare collection offers a unique opportunity to understand passenger behavior at a massive scale. Unfortunately, such an automated system is not yet used in Belgium, so the Belgian railway company does not have real-time occupancy data at its disposal. iRail can therefore only rely on usage statistics of its api, feedback from its users, and other public datasets.
The most popular user agents reusing this api are the Railer app for iPhone, the BeTrains app for Android, and the iRail web app. These are classic route planning applications following a straightforward user story: a user selects a departure stop, a destination stop and a desired time of departure; the app then suggests up to 6 possible itineraries. Other user agents exist, such as Next Train (an app for the Pebble smart-watch), chat bots, data harvesters and search engine bots. As they only represent a minor part of the query logs, they are discarded in this research. Colpaert et al. [4] showed that, when enough query log data is gathered, it looks similar to actual travel demand, as illustrated in Figure 1. In this paper, we go one step further: we try to classify the occupancy level of a train using these query logs and external data sources.
RELATED WORK
Tirachini et al. [13] provide many interesting insights concerning the impact of high occupancy levels in public transport systems along different dimensions. First, when the occupancy level of a train is low, passenger transfer occurs smoothly and passenger-related disruptions that impose unexpected delays are less likely to happen. As the number of passengers increases, some users need to stand inside vehicles, hindering the movement of other passengers. This in turn results in an increase in riding time or an increase in the probability that an unexpected delay arises. Puong [11] showed that the boarding time in uncrowded conditions averages 2.3 seconds per passenger; this increases to 4.4 seconds per passenger when the number of standees per door reaches a threshold of 15 or more. Milkovits [9] showed that this effect is even more significant for the alighting time, explained by the difficulty alighting passengers have walking among too many standees. Second, high occupancy levels can give rise to a phenomenon called train bunching: when a train is too full, not all passengers can board it, leading to an increase in waiting time for these passengers and a higher number of expected passengers for the next train. Finally, crowding has a significant negative impact on the passengers' well-being. Authors have documented increased anxiety, stress, feelings of exhaustion and perceptions of risk to personal safety, amongst others [7,1].
A few private initiatives exist to predict occupancy scores. As an example, the Open Capacity project is a consultancy firm that creates occupancy scores for public transport agencies. It does this by measuring passenger load using existing public transport data sources, such as weight sensors, cctv cameras, door sensors, and ticketing information. The Dutch railway system introduced a feature in its app to report the occupancy of a train. Three scores are possible, based on the sentiment of the passenger: positive, neutral and negative. The railway system uses this commuter feedback as a transparent means to research occupancy on trains. When many high occupancies are reported on a certain train due to a structural problem, its capacity is increased if possible.
Over the past years, a few research attempts have tried to map the occupancy levels of public transport. Nuzzolo et al. propose Short Term Occupancy Prediction (STOP) [10], a system that predicts the number of passengers on a bus in the near future using available real-time information from passengers' smart cards. The system was evaluated by integrating it in the bus management system of the public transport company in Santander, using data collected over a limited timespan of one day. The system achieved a mean squared error of 3.25 in predicting the number of occupants on a bus, which corresponds to a relative root mean squared error of 46% due to the rather small capacity of a bus.
Silva et al. conducted an analysis of data collected by the Transport for London railway system [12]. Hundreds of millions of smart-card readings over 70 days between February 2011 and February 2012, each containing a time stamp, location code and event code, were used to create regression models that predict the number of passengers on a train. In this research, real-time information was used as well, leading to a small increase in predictive performance as the forecasting horizon decreases. RMSEs of 6.76 and 6.82 are reported, using 5-fold cross-validation, for 1-minute-ahead and 30-minute-ahead forecasts respectively. The authors were also able to quantify the effects of a shock in the system, such as a line segment or station closure.
Zhang et al. [14] collected over 6.5 million records in China by using crowdsensing over a timespan of five months. Crowdsensing is the collection of data through different kinds of sensing devices carried by a large mass of users. In contrast to crowd-sourcing, which was used to collect our data, this collection happens automatically and requires no human input. They tackled two tasks using machine learning techniques: (i) predict whether a certain passenger will take public transport within a given week and (ii) forecast the number of passengers on a bus. For the classification task, F1 scores of around 0.43 are reported; for the regression task, an RMSE of around 25. This research made two main contributions. First, it showed that weather and semantic trajectory information (such as the number of companies within a certain radius of a station) have a positive impact on the predictive performance of the machine learning model. Second, the results show that the eXtreme Gradient Boosting (XGBoost) algorithm outperforms other prominent algorithms on both tasks.
While the discussed research attempts provide many interesting insights, there are some fundamental differences with our research. First, the exact number of passengers was always available, so a regression problem could be solved. Second, the number of samples used is several orders of magnitude larger than in this research, thanks to smart card and crowdsensing mechanisms. Third, real-time information about the trains was often used. While this information could indeed be very useful in predicting occupancy, it no longer enables a railway company to increase or decrease the size of a train, since the train must already have departed for this information to be available. Moreover, this information is not available to predict the occupancy of a train in the future.
GATHERING FEEDBACK
First, we ran a questionnaire to retrieve an initial list of trains that are structurally occupied. The questionnaire was disseminated by the Belgian railway company (sncb) over Twitter 11 and helped us to gather 334 trains that usually have a high occupancy. With this initial data, the iRail api was extended with two features. The first is an occupancy indicator, which reports the occupancy on the following levels:
• Low occupancy: there are plenty of seats left.
• Medium occupancy: it is hard to find a seat and it is difficult to sit together.
• High occupancy: there are no seats left and people have to stand up.
• Unknown occupancy: the occupancy of the train is currently not known.
A second feature introduced in the api is the ability to post feedback. On a specific departure of a train, a user would then be able to specify the occupancy level. This feature, launched in August 2016, was picked up by the Railer App and the iRail.be web app by September, as can be seen in Figure 2. This led to 3818 feedback entries by the 19th of December 2016.
PREPARING THE DATASET
Of the 3818 collected records up until the time of writing, 256 contained wrong information (such as wrongly formatted station and vehicle ids) and could not be parsed, resulting in a dataset of 3562 rows. All these invalid records occur at the start of our dataset and can probably be explained by the fact that the occupancy indicator was still being tested during that time. An occupancy log entry contains the following information:
• Querytime: the time at which the record (or log entry) was sent to the system. From this timestamp, many different features such as the seconds since midnight, the day of the week and the month are extracted (a sketch of this extraction follows the list below). Moreover, two binary variables indicate whether a morning or evening jam is ongoing. These variables are equal to one when it is not a weekend day and the departure time is from 6 to 10 AM or from 3 to 7 PM respectively.
• Vehicle: a structured identifier of the train the user is taking. This identifier is composed of three components: the vehicle type, the line number (line category) and the hour of the departure time from the first station on the line. The departure hour is added to the line number and the vehicle type is prepended. As an example, the IC500 line comprises the intercity trains going from Oostende to Eupen, while IC507 is the train leaving Oostende for Eupen at 07:40 AM.
• From & To: an identifier of the station from which the user departs or where the user wants to go. It is important to note that it is the departure and arrival location of the user, not the train.
• Connection: a uri that links to connection information of that train, such as delay time and the stations where it stops.
• Occupancy: the reported occupancy level (low, medium or high). This is the target variable.
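As a concrete illustration, the time-based features above can be derived along these lines (a minimal Python sketch of our own; the field name and ISO timestamp format are assumptions, not the actual iRail schema):

from datetime import datetime

def time_features(querytime: str) -> dict:
    # Derive the time features described above from a log entry's query time.
    t = datetime.fromisoformat(querytime)  # assumed format, e.g. "2016-10-03T08:15:00"
    is_weekday = t.weekday() < 5           # Monday = 0 ... Sunday = 6
    return {
        "seconds_since_midnight": t.hour * 3600 + t.minute * 60 + t.second,
        "day_of_week": t.weekday(),
        "month": t.month,
        # Jam flags: weekdays only, 6-10 AM and 3-7 PM; boundary handling is our choice.
        "morning_jam": int(is_weekday and 6 <= t.hour < 10),
        "evening_jam": int(is_weekday and 15 <= t.hour < 19),
    }

print(time_features("2016-10-03T08:15:00"))
# {'seconds_since_midnight': 29700, 'day_of_week': 0, 'month': 10, 'morning_jam': 1, 'evening_jam': 0}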
All categorical variables are one-hot encoded: a variable is mapped to a binary vector with length equal to the number of categories, where all elements are zero except at the index corresponding to the category of that sample. The from- and to-station identifiers could be one-hot encoded as well, but this leads to an explosion of the dimensionality of the features and a deterioration of the predictive performance of the model. Therefore, information from two external sources was used: first, a file from iRail with the name, identifier and coordinates of each station in Belgium; second, a static file published by the Société Nationale des Chemins de fer Belges (the Belgian railway company, i.e., sncb) containing the number of passengers visiting a station on a weekday, Saturday or Sunday. This number of visitors and the coordinates of the from- and to-station were used as features. Moreover, using these coordinates, we requested different weather parameters, such as the weather type (which required one-hot encoding), the temperature and the humidity, through an api. A calendar api was used to provide a holiday type feature.
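For instance, a categorical weather type could be encoded as follows (an illustrative sketch; the category set is hypothetical):

categories = ["sunny", "rain", "snow", "fog"]  # hypothetical weather types

def one_hot(value: str) -> list:
    # Binary vector with a single 1 at the index of the sample's category.
    return [int(value == c) for c in categories]

print(one_hot("rain"))  # [0, 1, 0, 0]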
To gather additional data, the connection URI provided in the feedback data was used. For each sample, we extracted the delay of the train at that departure time and created a vector with a length equal to the total number of stations in our dataset. For every sample in our dataset and every station in Belgium, we calculated the following function f:

f(v, c, s) = k, if v will stop in s in k stations from c;
f(v, c, s) = −k, if v stopped in s, k stations ago from c,

with v the vehicle identifier, c the current station identifier and s the station for which we want to calculate whether or not v stops there. As an example, the first three stations of the IC507 from Oostende to Eupen are Oostende, Brugge and Gent-Sint-Pieters. If the log entry is created from Brugge (i.e., the second station on the line), then the indices corresponding to Oostende, Brugge and Gent-Sint-Pieters contain a -2, -1 and 1 respectively. Applying this procedure to each sample in the dataset results in a very sparse matrix. Again, to avoid an explosion in dimensionality, and thus a deterioration of the generalization capability of the model, these vectors are not used directly as features. For each station (the columns of this sparse matrix), we count its frequency, i.e., the number of times it occurs in the matrix, which equals the number of non-zero elements in the corresponding column. Then, for each train ride (a row of the matrix), the sum of frequencies and a weighted sum of frequencies are calculated and used as features. For the weighted sum of frequencies, we multiply each frequency by the inverse of its element in the matrix. The intuition behind this is that in the morning a lot of commuters board the train at smaller stations and alight at a larger station; close to these larger stations, this value becomes large. Since the size of our dataset is still rather small compared to the total number of different lines in Belgium, these features were also calculated with the visitors per station reported by SNCB in 2015 instead of their frequencies in our dataset, in order to get a less biased view of the crowdedness of a station.
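The following sketch renders our reading of f and of the derived frequency features (the station list is illustrative and the code is not the authors'):

import numpy as np

stations = ["Oostende", "Brugge", "Gent-Sint-Pieters", "Brussel-Zuid", "Leuven"]
col = {s: i for i, s in enumerate(stations)}

def stop_vector(stops, current):
    # f(v, c, s): positive offsets for upcoming stops, negative for stops already
    # passed; offsets chosen to reproduce the Brugge example (-2, -1, 1).
    v = np.zeros(len(stations))
    c = stops.index(current)
    for pos, s in enumerate(stops):
        v[col[s]] = (pos - c) if pos > c else (pos - c - 1)
    return v

M = np.array([stop_vector(["Oostende", "Brugge", "Gent-Sint-Pieters"], "Brugge")])
print(M[0][:3])  # [-2. -1.  1.]

def frequency_features(M):
    freq = np.count_nonzero(M, axis=0)           # occurrences of each station column
    out = np.zeros((M.shape[0], 2))
    for r in range(M.shape[0]):
        nz = M[r] != 0
        out[r, 0] = freq[nz].sum()               # sum of frequencies
        out[r, 1] = (freq[nz] / M[r, nz]).sum()  # frequency times inverse of the element
    return out

print(frequency_features(M))  # [[ 3.  -0.5]]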
In total, 1270 features are used in the model, including all binary variables resulting from one-hot encoding. To measure the quality of a feature, we calculated the feature importances using XGBoost [3], which is also used to create our predictive model. XGBoost calculates the feature importance by counting in how many trees of the constructed forest a certain feature occurs, taking into account the depth of the nodes where the feature occurs. A bar plot of the 40 most important features and their corresponding values can be found in Figure 3. We can clearly see that the number of seconds since midnight of a train departure (i.e., the departure time expressed in seconds) is the most important feature, followed by the calculated frequency features, the number of visitors per day and the humidity in the departure and arrival station. The delay features, which are real-time information and thus cannot be used to predict the occupancy of future trains, do not have a significant impact on the model and can therefore be discarded. We tried to apply two prominent feature selection techniques, Boruta [6] and LASSO [5], but they did not increase the predictive performance of our model.
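For reference, XGBoost's built-in importance scores can be obtained as follows (a hedged sketch on synthetic data; the "weight" metric simply counts splits per feature and is only one of XGBoost's importance measures, so it does not exactly reproduce the depth-aware counting described above):

import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = rng.integers(0, 3, size=200)  # three occupancy classes

model = xgb.XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)
scores = model.get_booster().get_score(importance_type="weight")
print(sorted(scores.items(), key=lambda kv: -kv[1]))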
MACHINE LEARNING AND RESULTS
The designed machine learning approach is composed of multiple steps, depicted in Figure 4. In a first phase, logs that have the same vehicle identifier and from-station identifier on the same day are grouped together. The equality of these parameters also implies a similar query time, since the departure hour is incorporated in the vehicle identifier. Then, the mode of the labels is calculated. This enables a simplistic form of anomaly detection, as a wrong label can be corrected if more correct labels are given for that train on that day. Moreover, the labels are mapped to integers, where low equals 1, medium equals 3 and high equals 5, in order to calculate a mean score. Of the 3562 records collected from 1 September 2016 until 19 December 2016, 506 duplicate logs were combined with others, resulting in 3056 samples. For each of these samples, a feature vector was calculated, as explained in Section 4. The feature extraction failed for 25 records because of external api errors, resulting in a 3032 × 1270 matrix on which to train our predictive model. The predictive model consists of two components. In the first component, a neural network is trained for a regression task using the feature vectors and the calculated mean score. The neural network consists of three hidden layers with 750, 250 and 100 neurons and dropout rates of 0.33, 0.25 and 0.1 after each layer respectively. Then, the out-of-sample predictions of this neural network, i.e., predictions for samples on which the model was not trained, are used as an extra feature for our final XGBoost model. Since the XGBoost algorithm has many hyper-parameters that can take values in large ranges, the search space is too large to feasibly optimize with a brute-force GridSearch technique. Therefore, a Bayesian optimization library, BayesOpt [8], which supports hyper-parameter optimization, was used. Moreover, to deal with the imbalance in our dataset (41% low, 28.6% medium and 30.4% high), more weight was given to medium and high samples. We also experimented with smote [2] to balance our dataset, but it hardly increased the precision and recall scores for the less populated classes, while significantly deteriorating the overall predictive performance.
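The two-stage setup can be sketched as follows (synthetic data; scikit-learn's MLPRegressor stands in for the dropout network, which it approximates only loosely, and the class weights are our own rendering of the reweighting idea):

import numpy as np
import xgboost as xgb
from sklearn.model_selection import cross_val_predict
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((300, 20))
y_label = rng.integers(0, 3, size=300)  # 0 = low, 1 = medium, 2 = high
y_score = np.array([1, 3, 5])[y_label]  # the 1/3/5 mean-score target

# Stage 1: regress the mean score; cross_val_predict returns out-of-sample
# predictions, so stage 2 never sees predictions made on training folds.
nn = MLPRegressor(hidden_layer_sizes=(750, 250, 100), max_iter=300)
oof = cross_val_predict(nn, X, y_score, cv=5)

# Stage 2: XGBoost on the augmented features, upweighting medium and high
# to counter the 41/28.6/30.4 class imbalance.
w = {0: 1.0, 1: 41.0 / 28.6, 2: 41.0 / 30.4}
clf = xgb.XGBClassifier(n_estimators=100)
clf.fit(np.column_stack([X, oof]), y_label,
        sample_weight=[w[c] for c in y_label])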
In order to evaluate the predictive performance of our model, we measured the mean and standard deviation of the accuracy, together with the mean and standard deviation of the precision and recall for each class, over twenty trials of 3-fold, 5-fold and 10-fold cross-validation on the 3032 × 1270 data matrix. The results can be found in Table 1. For completeness, a confusion matrix averaged over these twenty trials is plotted for 10-fold cross-validation in Figure 5. As expected, the predictive performance increases slightly when the number of folds is increased. With 10-fold cross-validation, we achieve an average accuracy of 54.012%, which is quite low but already better than random guessing or always predicting low occupancy. Moreover, there is a large difference between the precision and recall of the three classes, even with higher weights for the medium and high occupancy classes. The low recall and precision scores for the medium occupancy class could perhaps be explained by the fact that people do not know the definition of the medium occupancy class well and tend to classify the occupancy of a medium-filled train as low or high.
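The protocol amounts to repeated stratified cross-validation, along these lines (an illustrative sketch on synthetic data rather than the 3032 × 1270 matrix):

import numpy as np
import xgboost as xgb
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)
X = rng.random((300, 21))  # e.g. the features plus the stage-1 score
y = rng.integers(0, 3, size=300)

accs, cms = [], []
for trial in range(20):  # twenty trials, as above
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=trial)
    pred = cross_val_predict(xgb.XGBClassifier(n_estimators=50), X, y, cv=cv)
    accs.append((pred == y).mean())
    cms.append(confusion_matrix(y, pred))

print(np.mean(accs), np.std(accs))  # mean and standard deviation of the accuracy
print(np.mean(cms, axis=0))         # confusion matrix averaged over the trials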
Although the results are not yet what we expected, we are convinced that, given more data, they will improve. This is supported by the increasing trend of the learning curve of our model, which can be seen in Figure 6.
Benchmarking
A first effort to create a public benchmark with this data was made by launching a Kaggle competition: a machine learning competition in which every contestant uses the same data to create and test a machine learning model. Data from July 2016 till October 2016 serves as the training set and data from the end of October 2016 till the end of December 2016 serves as the test set. The Kaggle competition provides a leaderboard listing the scores contestants achieved with their approaches. Contestants often gladly share their approaches, allowing us to extract interesting insights and implement them in our system.
Architecture and Web resources
In order to make the predictions generated by our model available to everyone and to enable data re-use, an api was set up that allows users to query the occupancy of a certain train from a station on a certain day. It continuously polls the iRail api to check for new occupancy records. When such a record is found, it is automatically processed and added to our MongoDB NoSQL database. Every night, two processes are run: on the one hand, our predictive model is re-trained with the newly collected data; on the other hand, the hyper-parameters are tuned using Bayesian optimization. The api is accessible through the following ip: 193.190.127.247
CONCLUSION AND FUTURE WORK
In this paper, the first steps towards a system that can predict the occupancy level of a train in the near future based on query logs are presented. Such a system can have a significant positive impact on the quality of service while decreasing operational costs. We discussed the different phases of constructing such a system: (i) adding functionality to a widely used application in Belgium in order to collect data through crowd-sourcing; (ii) extracting numerical features from these raw JSON logs; and (iii) creating a predictive model on this extracted data. Moreover, an API was created to expose the predictions of our model and a Kaggle competition was set up to enable collaborative benchmarking.
We conclude that, in this early phase, our predictive model, which is trained on a limited amount of data, is good at predicting trains with a low occupancy. This comes as no surprise, as the low occupancy of trains outside peak hours is easy to predict and as it is the most populated class (currently, around 41% of all samples have the low occupancy label). When more samples are collected, we are convinced that the system's predictive performance will increase. The strength of the approach in this paper is that the data used can be gathered for any public transport system. At this moment, data has only been collected over a limited timespan. The current dataset thus contains only a limited number of samples, but it is growing steadily with more than 1000 query logs per month. | 2017-04-12T00:33:07.908Z | 2017-04-03T00:00:00.000 | {
"year": 2017,
"sha1": "85bb7e0186d4f2c187cccd4c624c28fbcac23b31",
"oa_license": "CCBY",
"oa_url": "https://biblio.ugent.be/publication/8518282/file/8518283.pdf",
"oa_status": "GREEN",
"pdf_src": "ACM",
"pdf_hash": "85bb7e0186d4f2c187cccd4c624c28fbcac23b31",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
9991477 | pes2o/s2orc | v3-fos-license | Standard supersymmetry from a Planck-scale statistical theory
We outline three new ideas in a program to obtain standard physics, including standard supersymmetry, from a Planck-scale statistical theory: (1) The initial spin 1/2 bosonic fields are transformed to spin 0 fields together with their auxiliary fields. (2) Time is defined by the progression of 3-geometries, just as originally proposed by DeWitt. (3) The initial (D-1)-dimensional "path integral" is converted from Euclidean to Lorentzian form by transformation of the fields in the integrand.
In earlier work it was shown that a fundamental statistical theory (at the Planck scale) can lead to many features of standard physics 1-3 . In some respects, however, the results had nonstandard features which appear to present difficulties. For example, the primitive supersymmetry of the earlier papers is quite different from the standard formulation of supersymmetry which works so admirably in both protecting the masses of Higgs fields from quadratic divergences and predicting coupling constant unification at high energy. Also, the fact that the theory was originally formulated in Euclidean time seems physically unsatisfactory for reasons mentioned below. Here we introduce some refinements in the theory which eliminate these two problems. The ideas in the following sections respectively grew out of discussions of the first author with Seiichirou Yokoo (on the transformation of spin 1/2 to spin 0 fields) and Zorawar Wadiasingh (on the transformation of the path integral from Euclidean to Lorentzian form).
Transformation of Original Spin 1/2 Fields Yields Standard Supersymmetry
In Refs. 2 and 3, the action for a fundamental bosonic field was found to have the form at energies that are far below the Planck energy m_P (with ℏ = c = 1) and in a locally inertial coordinate system. This is the conventional form of the action for fermions, described by 2-component Weyl spinors, but it is highly unconventional for bosons, because a boson described by ψ_b would have spin 1/2. We can, however, transform from the original 2-component field ψ_b to two 1-component complex fields φ and F by writing
Substitution then gives
where ∂^µ = η^{µν} ∂_ν, η^{µν} = diag(−1, 1, 1, 1), and V is a 4-dimensional normalization volume. This is, of course, precisely the action for a massless scalar boson field φ and its auxiliary field F. With the fermionic action left in its original form, we now have the standard supersymmetric action for each pair of susy partners. There is a major point that will be discussed at length elsewhere, in a more complete treatment of the present theory: the above transformation works only for ω + |p| ≥ 0, since otherwise the sign of the integrand would be reversed. However, a stable vacuum already requires ω ≥ 0, so we must define time for would-be negative-frequency fields in such a way that this condition is satisfied.
Time is Defined by Progression of 3-Geometries in External Space
In our earlier work, the time coordinate x^0 was initially defined in exactly the same way as each spatial coordinate x^k, so x^0 was initially a Euclidean variable. For reasons given in the following section, however, this does not seem to be as physically reasonable as a picture in which time is Lorentzian when it is first defined. In this section, therefore, we move to a new picture in which the initial "path integral" Z_E still has the Euclidean form but there is initially no time. We are then confronted with the well-known situation in canonical quantum gravity 4, where the "wavefunction of the universe" is a functional of only 3-geometries, with no time dependence. Roughly speaking, cosmological time is then defined by the cosmic scale factor R (except that there can be different branches for the state of the universe, corresponding to, e.g., expansion and contraction, as well as different initial conditions). More precisely, the progression of time is locally defined by the progression of local 3-geometries. An analogy is a stationary state for a proton with coordinates X passing a hydrogen atom with coordinates x. The time-independent Schrödinger equation can be written with Ψ required to satisfy a product ansatz; the equation for ψ then involves, in its first term, a local proton velocity. For a state in which the proton is moving rapidly, and in which ℏ²/2m_p ∇²_p ψ is relatively small, one then has an "internal time" defined within a stationary state 5. Similarly, one can define time as a progression of 3-geometries, just as proposed 40 years ago by DeWitt, whose formulation of canonical quantum gravity (following the classical canonical decomposition of Arnowitt, Deser, and Misner, and the work of Dirac, Wheeler, and others) involves the local canonical momentum operator which corresponds to the proton momentum operator −iℏ∇_p in the analogy above. After introducing the 3-dimensional metric tensor in the way described in Refs. 1-3, and the gravitational action in a way that will be described in a more complete treatment, we move from the original path-integral quantization to canonical quantization, with a state whose time is defined essentially in the same way as in the analogy.
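Although the display equations have been lost from this copy, the standard semiclassical construction that the analogy refers to can be sketched as follows (our reconstruction in the usual notation, not necessarily the authors' exact equations). Writing

\Psi(X, x) = e^{iS(X)/\hbar}\, \psi(x; X), \qquad v \equiv \frac{\nabla_p S}{m_p},

and neglecting the term (\hbar^2/2m_p)\,\nabla_p^2 \psi, the stationary-state equation reduces to

i\hbar\, v \cdot \nabla_p\, \psi = (h - \epsilon)\, \psi \;\equiv\; i\hbar\, \frac{\partial \psi}{\partial t},

where h is the Hamiltonian of the light subsystem and \epsilon a constant energy shift: the hydrogen atom obeys an effective time-dependent Schrödinger equation, with the "internal time" t defined along the proton's classical trajectory by dX/dt = v.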
Transformation of 3-Dimensional "Path Integral" Changes Euclidean Factor e^{−S} to Lorentzian Factor e^{iS}
A Euclidean path integral with the form of (9), but with time included, is formally transformed into a Lorentzian path integral through an inverse Wick rotation of x^0. The resulting action S^D_L has the usual form of a classical action, and it leads to the usual description of quantized fields via path-integral quantization. In other words, the standard equations of physics follow from S^D_L, and are therefore formulated in Lorentzian time. The Euclidean formulation, in either coordinate or momentum space, is ordinarily regarded as a mere mathematical tool which can simplify calculations and make them better defined.
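For orientation, the textbook form of this rotation for a single scalar field reads (a standard identity, given here in place of the unrecoverable display equations). With x^0_E = i x^0, so that d\tau = i\, dt and (\partial_\tau \varphi)^2 = -(\partial_t \varphi)^2,

S_E = \int d\tau\, d^3x \left[ \tfrac{1}{2} (\partial_\tau \varphi)^2 + \tfrac{1}{2} (\nabla \varphi)^2 + V(\varphi) \right] = -\, i\, S_L,

S_L = \int dt\, d^3x \left[ \tfrac{1}{2} (\partial_t \varphi)^2 - \tfrac{1}{2} (\nabla \varphi)^2 - V(\varphi) \right],

so that e^{-S_E} = e^{\, i S_L}.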
Hawking, on the other hand, has suggested that Euclidean spacetime may actually be more fundamental than Lorentzian spacetime. In his well-known popular book, he says (Ref. 6): "So maybe what we call imaginary time is really more basic, and what we call real is just an idea that we invent to help us describe what we think the universe is like." And in a more technical paper he states (Ref. 7): "In fact one could take the attitude that quantum theory and indeed the whole of physics is really defined in the Euclidean region and that it is simply a consequence of our perception that we interpret it in the Lorentzian regime." However, there is a fundamental problem with this point of view, because the factor e^{iS^D_L} in the Lorentzian formulation results in interference effects, whereas the factor e^{−S^D_E} in the Euclidean formulation does not. Also, a formal transformation from t_E to t_L mixes all of the supposedly more fundamental Euclidean times in the single Lorentzian time that we actually experience. Finally, it appears difficult to formulate a mathematically well-founded and physically well-motivated transformation of a general path integral from Euclidean to Lorentzian spacetime.
Here we adopt a very different point of view: (1) Nature is fundamentally statistical, essentially as proposed in Refs. 1-3, but the initial path integral (or partition function) does not contain time as a fundamental coordinate. Instead, time is defined by the local 3-space geometry (or, more generally, the (D-1)-space geometry). (2) It is, however, still necessary to transform from the Euclidean form (9), with e^{−S}, to the Lorentzian form (18), with e^{iS} (but also with no time coordinate, so that D → D − 1 in (18)), and this is our goal in the present section.
Consider a single complex scalar field φ with a 3-dimensional "Euclidean path integral" Z_E. In a discrete picture, the operator A is replaced by a matrix with elements A(x, x′), which can be diagonalized to A_{k,k′} = a_k δ_{k,k′}. The Gaussian integrals over Re φ_k and Im φ_k may then be evaluated as usual at each k. Here, and in the earlier papers, two representations of the path integral are taken to be physically equivalent if they give the same result for all operators A (including those which produce zero except for arbitrarily restricted regions of space and sets of fields). For example, we might define a path integral Z′ with fields φ′ and φ̄′ which are treated as independent and which each vary along the real axis. It is then appropriate to include the formal Jacobian, with a value of 1/2, which would correspond to a transformation from Re φ and Im φ to these fields. Since Z′ agrees with Z_E for any operator A, we regard Z_E and Z′ as being physically equivalent. Now let us define a "Lorentzian path integral" Z_L with the same diagonalization of A. Then Z_E can be replaced by Z_L, which involves the original operator A and the original spatial coordinates x, but a different form for the integrand. This replacement is possible because time is introduced only after Z is in Lorentzian form.
The transformation from Z_E to Z_L can be regarded as a transformation of the fields in the integrand, with the lines along which Re φ and Im φ are integrated each being rotated by 45° in the complex plane 9.
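The one-dimensional prototype of this contour rotation is the standard Gaussian-to-Fresnel identity (shown for illustration; it is not one of the paper's numbered equations). Substituting x = e^{-i\pi/4} u, so that x^2 = -i u^2, gives for a > 0

\int_{-\infty}^{\infty} dx\, e^{-a x^2} = e^{-i\pi/4} \int_{-\infty}^{\infty} du\, e^{\, i a u^2} = \sqrt{\pi/a}.

Applying the same 45° rotation to the integration lines of Re φ_k and Im φ_k converts each Gaussian factor e^{-a_k |φ_k|^2} into the oscillatory factor e^{\, i a_k |φ_k|^2}, i.e., e^{-S} into e^{iS}, with the constant phase e^{-i\pi/4} per integration absorbed as normalization under the equivalence criterion above.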
Outline of Broad Program: From a Planck-Scale
Statistical Theory to Standard Physics with Supersymmetry
The ideas above are part of a broad program to obtain standard physics, including supersymmetry, from a description at the Planck scale which is purely statistical. The major steps in the complete program are as follows:
(1) The fundamental statistical picture gives a (D−1)-dimensional "Euclidean action" for bosons only (and with no time yet).
(2) Random fluctuations then give a "Euclidean action" with bosons, fermions, and a primitive supersymmetry.
(3) Transformation of the integrand in the "path integral" changes the "Euclidean factor" e^{−S} to the "Lorentzian factor" e^{iS}.
(4) The 3-dimensional gravitational metric tensor g_kl and SO(N) gauge fields A_k (and their initial, primitive supersymmetric partners) result from rotations of the vacuum state vector, in both 3-dimensional external space and (D−4)-dimensional internal space.
(5) Time is defined by the progression of 3-geometries in external space.
(6) The Einstein-Hilbert action for the gravitational field (as well as the cosmological constant), the Maxwell-Yang-Mills action for the gauge fields, and the analogous terms for the gaugino and gravitino fields are assumed to arise from a response of the vacuum that is analogous to the diamagnetic response of electrons.
(9) Transformation of the initial spin 1/2 bosonic fields, followed by definition of standard gaugino and gravitino fields, gives standard supersymmetry.
(10) One finally obtains an effective action which is the same as that of standard physics with supersymmetry, except that particle masses, Yukawa couplings, and self-interactions are assumed to arise from supersymmetry breaking and radiative corrections.
A more complete treatment will be given in a much longer paper. | 2007-11-24T23:00:34.000Z | 2007-11-24T00:00:00.000 | {
"year": 2008,
"sha1": "09f54583a94d809d08af74327e41d70ed41d27ec",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0711.3816",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "09f54583a94d809d08af74327e41d70ed41d27ec",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
229282902 | pes2o/s2orc | v3-fos-license | Is Gluten the Only Culprit for Non-Celiac Gluten/Wheat Sensitivity?
The gluten-free diet (GFD) has gained increasing popularity in recent years, supported by marketing campaigns, media messages and social networks. Nevertheless, real knowledge of gluten and GF-related implications for health is still poor among the general population. The GFD has also been suggested for non-celiac gluten/wheat sensitivity (NCG/WS), a clinical entity characterized by intestinal and extraintestinal symptoms induced by gluten ingestion in the absence of celiac disease (CD) or wheat allergy (WA). NCG/WS should be regarded as an “umbrella term” including a variety of different conditions where gluten is likely not the only factor responsible for triggering symptoms. Other compounds aside from gluten may be involved in the pathogenesis of NCG/WS. These include fructans, which are part of fermentable oligosaccharides, disaccharides, monosaccharides and polyols (FODMAPs), amylase trypsin inhibitors (ATIs), wheat germ agglutinin (WGA) and glyphosate. The GFD might be an appropriate dietary approach for patients with self-reported gluten/wheat-dependent symptoms. A low-FODMAP diet (LFD) should be the first dietary option for patients referring symptoms more related to FODMAPs than gluten/wheat and the second-line treatment for those with self-reported gluten/wheat-related symptoms not responding to the GFD. A personalized approach, regular follow-up and the help of a skilled dietician are mandatory.
Introduction
Over the last 30 years, the gluten-free diet (GFD) has gained increasing popularity, associated with exponential growth in the sales of gluten-free (GF) products [1]. The global market for GF food, driven by North America and Europe but now spreading across the Asia-Pacific countries (APAC), was valued at USD 3.88 bn in 2016 and is foreseen to expand to USD 6.47 bn by 2023, at a compound annual growth rate (CAGR) of 7.60% [2] (Figure 1).
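As a quick arithmetic check (ours, not the source's), seven years of compounding at the stated CAGR does connect the two valuations:

3.88 \times (1 + 0.076)^{2023 - 2016} = 3.88 \times 1.076^{7} \approx 3.88 \times 1.67 \approx 6.47 \text{ (USD bn)}.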
In the USA, a follow-up analysis of the National Health and Nutrition Examination Survey (NHANES) revealed that self-adoption of a GF diet without a diagnosis of celiac disease (CD) tripled from 2009-2010 (prevalence 0.52%) to 2013-2014 (prevalence 1.69%) [3], and NPD's Dieting Monitor, which tracks nutrition-related issues of consumers, reported in 2013 that nearly 30 percent of adults claimed to cut down on or avoid gluten [4]. Italy, where bread and pasta are the foundation of food culture, is in the vanguard of the European GF sector, with the range of products jumping from 280 in 2001 to the current 6500 and a market amounting to EUR 320 million, of which only EUR 215 million is dispensed on prescription for celiac patients, and with a continuing launch of innovative products containing little or no gluten.
The GFD is recommended as lifelong treatment for CD. However, neither government awareness campaigns and initiatives nor the improvement of diagnostic tools and the increasing prevalence of CD [5] account for the overwhelming adoption of a GF lifestyle. Clinical application of GFDs continues to escalate as a therapeutic option for non-celiacs who seem to react negatively to gluten ingestion, are trying to lose weight [6] or simply want to reduce bloating after meals [7]. The reasons given for a self-imposed GFD include irritable bowel syndrome (IBS) and lactose intolerance [8]. Other persons spontaneously limit or eliminate gluten intake as a "healthy" dietary regimen without previous clinical tests, owing to the widespread consumer interest in free-from products and the growing adoption of specific eating patterns in pursuit of health and wellness [9]. In an Australian cross-sectional population survey [10], symptomatic wheat avoidance was highly correlated with dairy avoidance, female gender, and lesser and greater receptiveness to conventional and complementary medicine, respectively. While perception of the potential harm and expected benefits of gluten consumption/avoidance is high, real knowledge of gluten and GF-related implications for health is scarce. An American survey [11] found that over 30% of respondents had no specific reason for adopting a GF regimen, while less than 10% self-reported gluten sensitivity (GS); the other reasons were a healthy lifestyle, improvement of intestinal health or the presence of a gluten-sensitive family member. Different factors lie at the basis of the GF movement, mostly driven by non-scientific sources of information. While Google searches containing "low carb" and "low fat" have declined since 2004, worldwide searches for "gluten" showed a sharp upward trend, reaching the peak of food concerns from 2005 to 2014 and remaining generally steady since then. In Italy, an increase in the number of searches was observed until mid-2019, followed by a decline. This far exceeds lactose, genetically modified organisms (GMO) and palm oil, with ratios approaching 16:1, 6:1 and 2:1, respectively. Marketing campaigns aim at extending the appeal of GF to every health-conscious consumer despite the high cost of products. Moreover, athletes and celebrities, together with mass media messages and social network platforms, all contribute to increasing awareness of gluten intolerance and fuel the interest in dietary treatments. Consumers commonly select GF products from aisles in major supermarkets and health food shops [12]; for many consumers, front-of-package claims are more important determinants of GF product choices [13] than nutritional labeling [14].
A significant proportion of prolamins is represented by repetitive sequences of glutamine and proline. The various wheat varieties differ in terms of prolamin molecular weight and microstructure (junction density, branching rate, lacunarity). These characteristics influence the strength of the network and the dough quality and, in turn, determine the tensile and cooking properties [19,25,27]. Due to its unique biochemical and functional features (water-binding and visco-elastic properties, gas retention), gluten is essential for baking but is also widely used as an additive in processed food.
Besides its commercial value, the detrimental effects of gluten on human health have been described, mediated by immunological or toxic reactions [28]. Due to the high number of glutamine- and proline-rich periodic sequences, gluten peptides are highly resistant to gastric and intestinal proteolytic degradation, thus giving rise to potentially immunogenic fragments. In addition, gluten alters intestinal permeability, promotes oxidative stress, exerts cytotoxic and pro-inflammatory effects and negatively affects the microbiome; cell apoptosis is increased and cell differentiation is reduced [29]. In celiac patients carrying HLA DQ2/DQ8 haplotypes, gluten triggers an innate, as well as adaptive, Th1-driven immune response, amplified by transglutaminase-mediated synthesis of negatively charged glutamate residues from glutamine [26]. Since the Neolithic agricultural revolution, 10,000 years ago, ancient grasses have been domesticated and spread from the Fertile Crescent of the Middle East westward through Europe [30]. Agricultural techniques increased the abundance and availability of wheat, but it is only in the past 500 years that the gluten content of foods containing wheat has significantly increased. Modern hexaploid wheat cultivars have three different genomes (A, B and D) and evolved from the original diploid wheat, called einkorn (Triticum monococcum), through thousands of years of selective breeding and the development of tetraploid varieties [31] (Figure 2c). It has been posited that this genetic evolution, introducing new sequences into the wheat genome, could potentially have led to an increase in toxic and immunogenic epitopes responsible for the increased prevalence of CD [32] and, in general, of gluten-related disorders. A high-quality genome sequence was established from the reference wheat Chinese Spring, which made a complete set of gluten protein genes available from a single hexaploid cultivar [33,34]. Nevertheless, the large number of different wheat cultivars around the world, the high allelic variation in gluten genotypes among cultivars and the large number of immunogenic gluten epitopes make it difficult to draw firm conclusions, and the real contribution of modern wheat breeding practices to the increased prevalence of CD is still a matter of debate. Data on the reduced immunogenicity of old wheat genotypes because of the absence of the D-genome [32] have not been confirmed by more recent studies [35][36][37][38], and the health-promoting properties of ancient grains emerging from recent studies appear to rely on features other than low immunogenicity. The macro- and micronutrient contents of ancient grains seem to decrease the risk of cardiovascular disease and metabolic syndrome, ameliorate the glycolipid profile and reduce oxidative stress and the level of pro-inflammatory cytokines. Furthermore, their consumption has been reported to curtail the extent and severity of IBS-related symptoms [39].
In the absence of convincing evidence for a role of wheat breeding in the increasing prevalence of gluten-related diseases, changes in per capita consumption of wheat flour and in the use of vital gluten as a processed food additive have been postulated as alternative explanations [40].
Is There a Role for Microbiota?
In recent years, the impact of gut microbiota on the loss of gluten tolerance has received increasing attention. The intestinal microbial communities represent a complex ecosystem, which plays a central role in modulating both innate and adaptive immune responses [41,42]. They are also involved in the maintenance of mucosal barrier function, which is a crucial mediator between our body and the external environment, and prevent the entry of toxic/immunogenic molecules across the intestinal wall [43,44].
In both stools and mucosal biopsies of celiac patients, a shift toward Bacteroides, Clostridium and Escherichia coli, with reduction in protective Bifidobacteria, Firmicutes and Lactobacilli, in comparison with non-celiac controls, has been described [45,46] as being partially restored by a GFD [47][48][49]. Both genetic makeup and environmental factors contribute to shaping the composition and diversity of the intestinal microbiota. Infants with a high genetic risk of developing CD harbor a higher proportion of Firmicutes and Proteobacteria and a lower proportion of Actinobacteria [50], resulting in an increased prevalence of pathogenic bacteria compared to those with a low risk [51]. According to the hygiene hypothesis, the decreased infectious pressure observed in industrialized countries over the last several decades should prevent the development of a functional immune system during early childhood, leading to an imbalance between pro-inflammatory and anti-inflammatory responses. Additional main drivers of microbial gut colonization, such as mode of delivery, infant feeding practice and antibiotic use, were not confirmed as risk factors for CD [52][53][54][55]. Although most studies report major differences in the composition of microbiota between celiac patients and healthy controls, a specific microbial profile cannot be identified in CD [56]. Evidence on the causal relationship between dysbiosis and disease occurrence is highly heterogeneous and controversial due to inter-individual variability, small sample sizes and different methodologies, which all hamper the interpretation of results [57]. Finally, it is still unclear whether an altered microbiota in CD patients is the cause or the consequence of mucosal inflammation [58]. The exact mechanisms by which a dysbiotic status could contribute to CD development are also still unknown and include the processing of gluten peptides, activation of innate immune response and modulation of intestinal permeability [59,60].
Gluten Consumption
The phenomenon of globalization is driving a revolution in food systems (supply, marketing and distribution) as well as in dietary patterns. Major changes in food culture are closely associated with urbanization, increasing incomes, capital flow and market liberalization, and are characterized by dietary convergence, a phenomenon occurring as a result of increased reliance on a narrow base of staple foods, among which the dominant staple grain is wheat [61]. Palatability, ease of large-scale cultivation, industrial food processing and low prices have all contributed to the global spread of wheat gluten consumption. Wheat production has increased sharply since 1955, showing an impressive tenfold increase in the annual rate of yield improvement, particularly in the 1960s and more gradually afterwards, thanks to a technology shift commonly labeled the "green revolution" [62,63]. The green revolution resulted in the development of rust-resistant, semi-dwarf, high-yield wheat. Between 1980 and 2013, the world's annual wheat yield increased by 1.41% [64]. Currently, North America maintains the leading position in the wheat gluten market, followed by Europe. The abundance of applications in the food industry and the high demand for high-fiber and meat-free foods among an increasingly health-conscious and vegan/vegetarian population are considered key factors boosting the growth of the wheat gluten industry in Western countries [65]. The global wheat protein market was estimated to be valued at USD 2.04 billion in 2017 and was foreseen to grow at a compound annual growth rate (CAGR) of 4.8% from 2017, reaching USD 2.58 billion by 2022 [66].
In highly populated, developing countries, particularly those in the Asian region, the growing middle class, adopting Western-style diets with a higher content of wheat products, has contributed to increasing wheat consumption [64]. Recently, global changes in consumption patterns and consumer attitudes during coronavirus lockdowns, in particular the boom in home baking, drove a sharp increase in wheat consumption: the Spanish Minister of Agriculture, Luis Planas, revealed that sales of flour quadrupled during the third week of lockdown, and Nielsen data showed that in March 2020 retail flour sales in France, the US and Italy increased by 140, 154 and 185 percent, respectively, compared with the same period in 2019.
Vital wheat gluten (VWG) is obtained from wheat flour by removing soluble fibers and starch fractions and recovering gliadins and glutenins [67]. VWG is widely used as an additive in bakery products and pasta dough to increase yields and improve rheological, microstructure and quality characteristics [68,69]. Its visco-elasticity and range of functional properties, at a lower price than competitors such as milk and soy proteins, have contributed to spreading its use in the food industry, leading to a tripled consumption since 1977, consistent with the epidemiology of CD [40].
Enzymatic breakdown of gliadin from wheat by intestinal pepsin, leucine aminopeptidase and elastase generates morphine-like peptides, also known as gluten exorphins [75]. In healthy volunteers, early research showed that gluten exorphins induced a significant increase in gastrointestinal transit time, reversible after administration of the opioid antagonist naloxone [75,76].
In rodents, orally administered gluten exorphin A5 suppressed the endogenous pain-inhibitory system induced by socio-psychological stress and modified spontaneous behavior and learning/memory processes during several laboratory stressors, indicating that the peptides may cross the blood-brain barrier [77]. It has been suggested that the effects of food exorphins could be amplified if they are absorbed in excess through a disrupted mucosal barrier [78].
Not Only Gluten
Although general attention has focused on gluten as the only culprit of symptoms occurring in patients on a gluten-containing diet, a variety of substances belonging to the non-gluten components of wheat are potentially harmful, including wheat α-amylase/trypsin inhibitors (ATIs), wheat germ agglutinins (WGAs) and fructans. Moreover, glyphosate, a non-selective herbicide extensively used in farming against weeds that interfere with agricultural crops, could also play an important role.
Wheat α-Amylase/Trypsin Inhibitors (ATIs)
ATIs are a family of at least 11 proteins belonging to the non-gluten protein fraction. They are classified into monomeric, dimeric and tetrameric forms and represent 2-4% of total wheat protein content [79]. ATIs are contained in the endosperm of wheat seeds, where they play a multifunctional role: natural defense against insects and parasites, by inhibiting enzymes with amylase- and trypsin-like activities, and regulation of starch metabolism during seed development and germination [80,81]. Identified as major allergens in baker's asthma, as well as stimulators of innate immunity, ATIs promoted a strong innate immune response by engaging the TLR4-MD2-CD14 complex both in human and murine macrophages, monocytes and dendritic cells (DCs) and in vivo after oral or systemic challenge in mice. Furthermore, in duodenal biopsies from celiac patients in remission, ATIs induced an increase in IL-8 mRNA expression as well as a further increase in 33mer-induced IL-8 expression [82]. In line with these results, in gluten-sensitized mice expressing HLA-DQ8, ATI ingestion was recently shown to increase the inflammatory response to dietary gluten. Conversely, in ATI-fed control mice, a TLR4-mediated intestinal barrier dysfunction without mucosal damage was observed. In both cases, ATI-degrading lactobacilli decreased the inflammatory effects, suggesting new therapeutic strategies for wheat-related disorders [83]. ATIs are present and retain bioactivity in processed or baked foods. Wheat breeding practices aimed at developing high-yield, highly pest-resistant crops have led to an increased amount of ATIs in modern hexaploid wheat varieties; modern gluten-containing staples have been found to have higher levels of TLR4-activating ATIs than most gluten-free foods, and in mice, oral ingestion was shown to increase intestinal inflammation by activating gut and mesenteric lymph node myeloid cells [84]. Recently, a central role for ATIs has been proposed in the pathogenesis of NCG/WS within the context of a new theory which suggests a decrease in butyrate-producing intestinal bacteria as an initial trigger of the pathogenic cascade [85].
Wheat Germ Agglutinin (WGA)
A role for WGA as potentially responsible for many of wheat's difficult-to-diagnose ill effects has been postulated. WGA belongs to the lectin group, a superfamily of carbohydrate-binding proteins present in a variety of plants with a protective role against external pathogens. It is a homodimer composed of two identical subunits, and each protomeric unit consists of four structurally homologous domains with a high degree of amino acid homology. Four interlocking disulfide bonds result in a compact, stable protein highly resistant to degradation [86]. It is present in its highest concentrations in the germ tissue of wheat kernels (up to 0.5 g/kg) [87], especially in whole wheat. Through thousands of years of selective wheat breeding to obtain increasingly higher protein content, the concentration of WGA lectin in wheat has increased proportionately, offering additional pest resistance and contributing to wheat's global dominance as one of the world's favored monocultures. WGA may adversely affect gastrointestinal function in various ways: it binds specifically to carbohydrates expressed by human enterocytes and immune cells and to the glycocalyx, the sialic acid coating of the epithelial layer. In human basophils, WGA induced interleukin 4 (IL-4) and interleukin 13 (IL-13) release [88]. In an experimental model of human intestinal immune/epithelial cell interaction, it exhibited toxic and inflammatory effects by disrupting epithelial integrity and inducing the synthesis of pro-inflammatory cytokines, including interleukin 1, interleukin 6 and interleukin 8, by peripheral blood mononuclear cells (PBMCs) [89]. In murine spleen cells, WGA induced a T and B cell-independent production of interleukin 12 (IL-12) and, in turn, the production of interferon gamma (IFN gamma) by T/natural killer lymphocytes [90]. In WGA-treated murine peritoneal macrophages, the production of the pro-inflammatory cytokines TNF alpha, interleukin 1 beta (IL-1 beta), IL-12 and IFN gamma was reported [91]. Human data on the in vivo immune-stimulatory activity of WGA are lacking [92], both in healthy subjects [93] and in CD patients. However, the presence of IgG and IgA antibodies to WGA, not cross-reacting with gluten antigens, has been described at higher levels in CD than in patients with other intestinal diseases and healthy subjects. For this reason, a correlation with the pathogenesis of CD has been suggested [94]. Nevertheless, antibodies to wheat albumin and globulin [95], as well as to other dietary antigens such as casein, beta-lactoglobulin and ovalbumin, have also been reported in celiac patients [96], and the role of WGA in the pathogenesis of CD remains elusive.
In rodents, WGA displayed anti-nutrient effects reducing digestibility and utilization of dietary proteins: it mimicked the effects of epidermal growth factor (EGF), inducing cellular hyperplastic and hypertrophic growth [97]. In rats, it also caused damage to the intestinal brush border membrane, reduction in surface area, acceleration of cell losses, shortening of villi via binding to the villous surface [98] and cytoskeleton degradation, contributing to cell death and increased turnover.
Fructans and Other FODMAPs
Owing to their small size, FODMAPs are osmotically active and rapidly fermented by gut bacteria in the large intestine [102]. The combination of osmotic activity, with fluid retention within the intestinal lumen, and gas production by fermentation of oligosaccharides and polyols induces a variety of symptoms, including bloating, abdominal pain and diarrhea [103].
Fructans increase the tolerance of wheat to drought and cold [104]. Their content in wheat is highly variable and depends on the final product; no significant difference was found between wheat breads and the gluten-free counterparts (approximately 1% in both) [105,106]. Furthermore, gluten-free products, like corn, can have quite a large amount of FODMAPs, mainly fructans, galactans and fructose [105].
Wheat Glyphosate
Glyphosate is a non-selective herbicide and, since the late 1970s, one of the most extensively used in farming against weeds that interfere with agricultural crops like soy, corn and wheat [107]. A role as a causal factor for the worldwide increase of CD incidence was initially proposed, based on the glyphosate effects on intestinal microbiota, micronutrient absorption, enzymatic detoxification and serotonin signaling, as well as on the increased risk of non-Hodgkin's lymphoma in celiac patients [108]. Nevertheless, the paper received criticism for being merely speculative. Other studies based on in vivo and in vitro animal models [109,110] and on cultured human and rat intestinal cell lines [111] have postulated a negative impact of glyphosate on intestinal microbiota, barrier properties and motility.
A strong limitation of these studies is that most of them, due to obvious ethical reasons, have been conducted on experimental models. In the absence of robust evidence, the causative link between glyphosate and gluten-related intestinal disorders remains hypothetical.
Gluten-Related Disorders
Wheat proteins are recognized as environmental triggers of two well-established immune-mediated disorders, CD and wheat allergy (WA). Furthermore, a gluten-related condition much debated in recent years is non-celiac gluten sensitivity (NCGS).
Celiac Disease (CD)
CD is a chronic small bowel enteropathy occurring in genetically susceptible individuals where dietary gluten peptides elicit both innate and adaptive Th1-driven immune responses, amplified by transglutaminase-mediated synthesis of negatively charged glutamate residues from glutamine [26]. Access of immunogenic gluten peptides to the small intestine lamina propria is fostered by gluten-induced up-regulation of zonulin, a modulator of intestinal tight junctions [112]. Besides the CD-predisposing HLA DQ2/DQ8 haplotypes, genome-wide association studies have identified 39 non-HLA loci affecting CD [113]. Environmental factors may also be of importance for CD development. A correlation with some viral infections, especially during early childhood, has been suggested, including rotavirus, reovirus, enterovirus A and B and acute respiratory infections [114][115][116]. Based on the evidence that the microbiota affects the immune response [117] and indications of microbiota alterations in celiac patients, a correlation between dysbiosis and the risk of developing CD has been postulated [118][119][120]. The role of other environmental factors, such as infant feeding practices, mode of delivery, age of gluten introduction and amount of gluten in early life or exposure to antibiotics, has not been confirmed or has given contradictory results [54,[121][122][123][124][125][126].
CD is diagnosed more frequently in females, with an F:M ratio of 2:1 on average, and an onset following a bimodal age distribution, with an initial peak in the first 2 years of life and a second peak in the second or third decade, although about 25% of all diagnoses occur at the age of 60 years or more [127]. Clinical presentation ranges from virtually asymptomatic cases, despite typical mucosal damage (silent CD), to severe malabsorption and includes a variety of mild to severe intestinal and/or extraintestinal symptoms [128], especially involving iron and bone metabolism, the central and peripheral nervous systems and the reproductive system. Multiple autoimmune diseases have been described in association with CD, most commonly autoimmune thyroiditis, type 1 diabetes and liver and rheumatologic disorders [129]. Diagnosis of CD relies on the assessment of specific circulating antibodies and on the demonstration of duodenal mucosal damage (ranging from lymphocytic enteritis to severe villous atrophy); in selected cases, HLA DQ typing is recommended. According to the European guidelines, duodenal biopsies can be avoided in symptomatic children with high-titer serology [130]. Owing to a misleading clinical presentation and/or lack of clear-cut diagnostic tests in many patients, diagnosis of CD requires time and expertise to properly combine clinical, serologic, histologic and genetic data. About 1% of patients, especially those with late diagnosis, low adherence to diet and HLA DQ2 homozygosis, develop pre-malignant or malignant complications (refractory CD, ulcerative jejunoileitis, enteropathy-associated T cell lymphoma (EATL), small bowel adenocarcinoma) [131,132] or hyposplenism [133].
Wheat Allergy (WA)
WA has a prevalence in the range of 0.2% to 1% [134]. Although it is more common in children [135], most of whom outgrow it by the age of 16 years [136], symptoms may occur at any stage of life, including later adulthood. The prevalent immune mechanism is IgE mediated, but non-IgE-mediated reactions are also described [137,138], characterized by chronic infiltration of eosinophils and lymphocytes in the gastrointestinal mucosa (Figure 3).
WA is a classic food allergy characterized by cutaneous, gastrointestinal or respiratory manifestations. These include wheat-dependent, exercise-induced anaphylaxis (WDEIA), which results from the combination of wheat ingestion and physical exercise, baker's asthma and rhinitis, occurring after inhalation of wheat and cereal flours, which is one of the most common occupational allergies, and contact urticaria [139]. Children with WA mainly display moderate-to-severe atopic dermatitis; wheat ingestion may also elicit IgE-mediated urticaria, angioedema, bronchial obstruction, nausea and abdominal pain, or even severe systemic anaphylaxis [140]. In adults, the most common variant is WDEIA, where symptoms range from urticaria to systemic reactions, including anaphylaxis [141]. Multiple allergens are involved in WA [142]: sera from patients with baker's asthma and rhinitis react with amylase inhibitors, germ agglutinin, peroxidase and non-specific lipid transfer proteins (LTPs) [143]; WDEIA is induced by ω5-gliadins [144]; IgE from patients with atopic dermatitis, urticaria and anaphylaxis are reactive with α, β, γ, ω-gliadins, and low and high molecular weight subunits. Over 50% of patients with urticaria have IgE to ω5-gliadin [145].
The first-level diagnostic tests for WA are in vitro specific immunoglobulin E (sIgE) assays and skin prick tests (SPTs), which, however, have a low predictive value. Functional tests (bronchial challenge test in baker's asthma and food challenge in food allergy) are considered the diagnostic gold standard for WA [146], but they are impractical and potentially dangerous. Molecular-based allergy (MA) diagnostics and a flow cytometry-assisted basophil activation test (BAT), an in vitro functional test for the diagnosis of immediate type allergy for patients at risk of severe anaphylactic reactions, are a novel diagnostic approach to allergic disorders that in some cases may represent an effective alternative to the in vivo functional tests [141,146].
Non-Celiac Gluten/Wheat Sensitivity (NCG/WS)
For the sake of simplicity, the wide range of intestinal and extra-intestinal symptoms occurring after the ingestion of gluten-containing food in subjects who do not have either CD or WA has been collectively defined as NCGS, more recently renamed non-celiac wheat sensitivity (NCWS) [147,148]. The occurrence of gluten-related disturbances beyond CD was initially reported in 1980 [149] and later in 2000 [150], but it was only in 2011 that NCG/WS took center stage as part of the spectrum of gluten-related disorders [151]. Since then, rising public interest and a growing body of research have fueled constant debate regarding this issue, with an overwhelming discrepancy between media messages and scientific citations [17]. The internet, the popular press, marketing claims and celebrities endorsing their gluten-free choices represent common sources of information with no reliable scientific evidence. The clinical picture is heterogeneous and non-specific, ranging from "IBS-like" symptoms (diarrhea, constipation, bloating, nausea and epigastric pain) to extra-intestinal manifestations (malaise, anxiety, fibromyalgia, skin rash, tiredness and chronic fatigue, "foggy mind" and headache) [152]. Owing to the lack of specific biomarkers, prevalence data in the general population are highly variable, ranging between 0.6% and 10.6% [92]. Recently, in NCGS, a significant increase in anti-gliadin IgG2 antibodies was described in comparison with healthy controls, and an increase in anti-gliadin IgG4 antibodies was reported in comparison with CD and healthy controls, suggesting their potential role as diagnostic biomarkers [153]. Furthermore, evidence was provided for an overexpression of selected miRNAs in the intestinal mucosa and peripheral blood leukocytes (PBLs) of NCWS patients compared to symptomatic controls with functional dyspepsia or CD. Hsa-miR-30e-5p proved to be the best predictor of NCWS vs. CD in biopsies and vs. controls in PBLs [154]. The absence of validated diagnostic criteria also explains the high rate of self-diagnosis [155,156] as well as the impact of patients' perception of symptoms and of the nocebo effect on the interpretation of study results [157]. In an attempt to establish the actual role of gluten, double-blind placebo-controlled (DBPC) gluten challenge trials have been suggested. Molina-Infante and Carroccio, in order to evaluate the accuracy of this approach, analyzed 10 of these trials including 1312 adults. The studies differed regarding the duration of the gluten challenge (1 day-6 weeks), the washout period (3 days-2 weeks), the daily gluten dose (2-52 g) and the kind of placebo administered (gluten-free products, xylose, whey protein, rice or corn starch containing fermentable carbohydrates). Most of the trials reported that gluten was able to significantly aggravate symptoms when compared to placebo, but only 38 out of 231 patients (16%) specifically reacted to gluten. Moreover, a nocebo effect (similar or increased symptoms after placebo administration) was observed in 40% of the patients [158].
The heterogeneity of these studies should also be highlighted, because it potentially affected the results. These data prompt some doubts as to the role of gluten as a "trigger" food, because more than 80% of NCGS cases diagnosed on the basis of a positive response to a GFD cannot be formally diagnosed after a DBPC trial (not performed in all the studies according to the protocol recommended by the Salerno Experts [147]). This sheds light on the possible importance of the so-called "nocebo" effect, which cannot be excluded in studies involving a dietary approach.
Apart from gluten, potential dietary triggers include non-gluten wheat components such as ATIs, WGA and fructans. ATIs are highly resistant to intestinal proteolytic degradation and have been identified as strong activators of innate immune responses in human and murine macrophages, monocytes and DCs, eliciting the release of proinflammatory cytokines via the activation of TLR4 [82]. In mice, ATIs showed an additive effect on pre-existing low-level intestinal inflammation, with stimulatory activity increasing from the proximal intestine to the ileum and colon [84]. Involvement of an adaptive immune response through the migration of DCs to mesenteric lymph nodes and interaction with primed T cells could exacerbate the ongoing inflammation [84]. In vitro studies and in vivo animal models showed that WGA induces the release of pro-inflammatory cytokines and epithelial barrier disruption [89]. In vivo human studies are needed to better support the role of ATIs and WGA as triggering factors of NCG/WS. Unlike ATIs and WGA, fructans induce a variety of IBS-like symptoms, including bloating, abdominal pain and diarrhea, due to a combination of osmotic activity with fluid retention within the intestinal lumen and gas production by fermentation [102,159]. In a DBPC study, Biesiekierski et al. reported that patients with NCGS did not exhibit statistically significant effects after gluten was added to the diet in the presence of a low content of FODMAPs, indicating that symptoms may be due to fructans rather than gluten [160]. Another DBPC crossover study in patients with self-reported NCG/WS showed that fructans (rather than gluten) were more likely to induce symptoms, with no effect of gluten challenge [161]. However, Volta et al. argued that the authors had enrolled self-diagnosed NCG/WS patients, that some extra-GI symptoms typical of NCG/WS had not been included in the evaluation, that anti-gliadin IgG antibodies had not been assessed, and that only the prevalence of Hashimoto's thyroiditis had been reported as an autoimmunity marker [162,163]. Hence, numerous patients with NCG/WS could have a diagnosis of IBS.
Exposure to hidden ingredients such as chemical additives and preservatives, commonly added to processed food as antimicrobial agents or to improve appearance, flavor or texture, might contribute to generating intestinal symptoms by inducing pro-inflammatory cytokines, altering gut microbiota composition and disrupting the mucosal barrier [164].
In the absence of a definite mechanism of action, the pathogenesis of NCG/WS remains a matter of debate; IgE-independent WA involving mast cells, eosinophils and other immune cells [165] has been postulated on the basis of past or current history of food allergy [166], eosinophil infiltration of the intestinal mucosa and in vitro basophil activation induced by food antigens in patients with NCG/WS diagnosed by DBPC challenge [167]. Furthermore, an increase in mucosal lymphocytes has been detected in some NCG/WS patients [167,168]. In particular, infiltration with innate lymphocyte-1 cells producing IFN-γ and responsive to a wheat-free diet has been described in the rectal mucosa of patients [169]. Current evidence suggests a complex interplay among systemic immune response, impaired intestinal barrier function and dysbiosis [170]. The early findings of a reduced intestinal permeability [168] in NCG/WS patients have not been confirmed: further studies have definitely shown intestinal epithelial damage leading to compromised barrier function [171,172] and microbial translocation from the lumen to the intestinal mucosa, resulting in a systemic, mainly innate, immune response [173,174]. In wheat-sensitive patients, altered expression of markers of an innate immune response has been described [168,175]. Positive serology for native gliadin in a proportion of patients [176] suggested a concomitant role of an adaptive immune response [173]. In wheat-sensitive individuals, Uhde et al. demonstrated increased serum markers of systemic innate immune activation as well as B cell response to microbial antigens associated with markers of intestinal epithelial cell damage as indicators of the translocation of microbial products across the intestinal mucosa, reversible on a GFD [177]. Based on the recognized involvement of an impaired intestinal permeability in the pathogenesis of NCG/WS [112,178] and on the studies regarding the immune-stimulating activity of ATIs [82,84], a new hypothesis has been formulated implicating the Western diet and lifestyle which, inducing dysbiosis with low levels of intestinal butyrate-producing bacteria, could lead to a vicious circle involving a disrupted intestinal barrier function, microbial lipopolysaccharide (LPS), decreased intestinal alkaline phosphatase (IAP) and intact ATI translocation [85].
In recent years, the overlap between NCG/WS and IBS has drawn increasing attention [92,179]; an Italian multicenter study found IBS in about 50% of these patients [152]. With an estimated 11.2% worldwide prevalence [180], IBS is the most prevalent functional gastrointestinal disorder (FGID) [181], causing a significant impairment of patients' quality of life and productivity with a high social and economic impact [182]. Shared symptoms between IBS and NCG/WS are abdominal pain, altered bowel habits, bloating and/or extra-intestinal symptoms [183][184][185]. Food is regarded as a precipitating factor of symptoms by many IBS patients [186,187]. In the absence of specific tests, the diagnosis of IBS [188] essentially relies on symptom assessment, standardized in the Rome IV Criteria [189]. In recent years, a low-FODMAP diet (LFD) involving a global restriction of FODMAP intake followed by gradual re-introduction, according to individual tolerance and under the supervision of an expert dietician [190], has been widely employed for IBS treatment [191] and it is currently regarded as effective in reducing IBS symptoms, according to several studies [192][193][194][195].
Unfortunately, only a few studies dealing with LFDs have been based on randomized placebo-controlled double-blind trials [190]. A recent systematic review and meta-analysis of randomized controlled trials (RCTs) examining the efficacy of an LFD and GFD in IBS provided evidence that an LFD is more effective than a GFD in reducing IBS symptoms [194], although the evidence is of very low quality. The authors attributed the very low quality of evidence regarding LFD efficacy to the heterogeneity of the studies, i.e., the different types of comparators used in the different studies and the low number of patients reporting global symptom improvement (189 out of 397 patients, whereas the GRADE system would require at least 300 patients) [196]. The authors also underlined that these problems could be solved if further trials were carried out using similar comparator groups in order to provide more data [194]. Unfortunately, there is a problem of economic resources, because it is quite difficult to find subjects interested in financing such studies [190].
Differentiating IBS from NCG/WS can be cumbersome and needs to take into account the overlapping clinical picture, the lack of specific biomarkers, the putative role of the same dietary triggers and the influence of patients' perceptions. While waiting for validated biomarkers able to provide a differential diagnosis, some authors suggest that patients' opinions on the role of gluten in precipitating their digestive symptoms could be used as criteria to distinguish NCG/WS from IBS, although the gluten-related nocebo effect and the questionable reliability of patients in identifying the dietary culprit of their symptoms should be taken into account. With the aim of overcoming the limitation of a diagnosis merely based on exclusion criteria and of standardizing the diagnostic procedure, an international group of experts elaborated the Salerno Experts' criteria for the diagnosis of NCG/WS, based on a double-step approach. In Step 1, after exclusion of CD and WA, patients start a six-week gluten-containing diet and report their symptoms according to a modified version of the Gastrointestinal Symptom Rating Scale (GSRS). Then they start a GFD for at least six weeks. A decrease of at least 30% of the baseline score is considered a positive response.
Step 2 includes a one-week challenge (GFD and gluten or placebo) followed by a one-week washout of strict GFD and a crossover to the second one-week challenge. A variation of symptoms of at least 30% between gluten and placebo challenge discriminates a positive from a negative result [147] (Figure 4).
Figure 4. The Salerno Experts' two-step diagnostic protocol for NCG/WS [147]. GFD: gluten-free diet.
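To make the two 30% decision rules above concrete, here is a minimal sketch in Python (our illustration only — the function names and the choice of denominator in the Step 2 rule are our assumptions, not details specified by the Salerno protocol as quoted):

```python
def step1_positive(baseline_score: float, gfd_score: float) -> bool:
    # Step 1: response is positive if the GSRS-style symptom score
    # drops by at least 30% of the baseline after >= 6 weeks of GFD.
    return (baseline_score - gfd_score) / baseline_score >= 0.30

def step2_positive(gluten_score: float, placebo_score: float) -> bool:
    # Step 2: positive if symptoms vary by at least 30% between the
    # gluten and placebo arms of the crossover challenge.
    reference = max(gluten_score, placebo_score)
    return reference > 0 and abs(gluten_score - placebo_score) / reference >= 0.30
```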
Although a DBPC approach represents the gold standard for a rigorous identification of NCG/WS [147], it is cumbersome and impractical for clinicians. Furthermore, wheat is commonly identified as a trigger when IBS patients are specifically interviewed [197][198][199]. The clinical effects of the aforementioned compounds could explain the overlapping symptoms of NCG/WS and IBS, as well as the causative role of wheat in a subgroup of IBS patients, and their symptomatic improvement after wheat elimination [92,183,200]. In this regard, the term "wheat-sensitive IBS" has been coined to describe patients who meet the Rome IV criteria for IBS and report gluten/wheat-related symptoms [92]. In everyday practice, it is not easy to clearly distinguish NCG/WS from "wheat-sensitive IBS", which represents a "gray zone" where different concepts and symptoms can overlap. As a consequence, choosing the most appropriate dietary intervention within this overlap is challenging. However, the usefulness of such a distinction from a clinical point of view could be relatively unimportant. The GFD can be considered in patients with self-reported gluten/wheat-dependent symptoms, especially if associated with extra-intestinal manifestations, most likely not induced by fructans [6,92,201]. In non-responders or partial responders to a GFD, an LFD could be considered as a second-line treatment. Moreover, we think that in patients not reporting gluten/wheat as a trigger of their symptoms and referring symptoms more related to FODMAPs other than fructans, an LFD could be the first dietary option. In any case, irrespective of the type of dietary approach, the patients' preferences must be taken into account.
Conclusions
In the last 30 years, the GFD and related GF products have gained increasing popularity. These have been supported by marketing campaigns, athletes and celebrities, media messages and social networks. Nevertheless, real knowledge of gluten and GF-related implications for health is scarce in the population.
The role of potential causative factors in the increasing prevalence of CD is under debate. A role for modern wheat breeding practices has not been confirmed; per capita vital gluten consumption, variation in the amount and types of wheat intake and agronomic practices affecting wheat protein content have been proposed as contributors to the toxicity of wheat in genetically susceptible individuals [40]. Although general attention has focused on gluten as the only culprit of symptom occurrence in non-celiac patients on a gluten-containing diet, the role of a variety of compounds (ATIs, WGA, fructans and glyphosate) belonging to the non-gluten components of wheat appears to be prevalent.
Despite the wide acceptance of the term by the scientific community, the existence of NCG/WS as a distinct entity has been questioned; it could more properly be regarded as a collective term for a variety of different conditions where gluten is directly involved only in a small minority of patients [158]. Likewise, the pathogenesis seems to be multifactorial, including innate immune response, altered mucosal barrier function and dysbiosis [202]. In the absence of specific diagnostic markers, and under the influence of marketing and media claims, a high rate of self-diagnosis occurs [152,156].
A GFD might be an appropriate dietary approach for patients with self-reported gluten/wheat-dependent symptoms. An LFD should be the first dietary option for patients referring symptoms more related to FODMAPs than gluten/wheat, and the second-line treatment for those with self-reported gluten/wheat-related symptoms who do not respond to a GFD. In any case, a personalized approach, regular follow-up and the intervention of an expert dietician are recommended. | 2020-12-17T06:16:33.801Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "810a8490c15ee591620a14f8c80da1fae8b7b743",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/12/12/3785/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "772497dd0125831db0686e8866a63ae912156a93",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233210564 | pes2o/s2orc | v3-fos-license | On trans-Sasakian $3$-manifolds as $\eta$-Einstein solitons
The present paper deliberates on the class of $3$-dimensional trans-Sasakian manifolds that admit $\eta$-Einstein solitons. We have studied $\eta$-Einstein solitons on $3$-dimensional trans-Sasakian manifolds where the Ricci tensors are of Codazzi type and cyclic parallel. We have also discussed some curvature conditions admitting $\eta$-Einstein solitons on $3$-dimensional trans-Sasakian manifolds, and the case where the potential vector field is torse-forming. We have also given an example of a $3$-dimensional trans-Sasakian manifold admitting an $\eta$-Einstein soliton to verify our results.
Introduction
In 2016, G. Catino and L. Mazzieri [7] introduced the notion of the Einstein soliton, which can be viewed as a self-similar solution to the Einstein flow, where g is the Riemannian metric, S is the Ricci tensor and r is the scalar curvature. It can be easily seen that the Einstein soliton is analogous to the Ricci soliton, which is also generated by a self-similar solution of the famous geometric evolution equation, the Ricci flow. It is now a well-known fact that the study of Ricci solitons has made a tremendous contribution to solving the long-standing Thurston geometrization conjecture.
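For reference, the Einstein flow and the Einstein soliton equation as standardly written in the literature (a reconstruction consistent with the notation above, since the display math did not survive extraction; not a quotation of the original) are:

```latex
% Einstein flow and Einstein soliton, standard forms (reconstruction):
\frac{\partial g}{\partial t} = -2\left(S - \frac{r}{2}\,g\right),
\qquad
\mathcal{L}_{X}\, g + 2S + (2\lambda - r)\,g = 0 .
```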
Similarly, it is also interesting to study the Einstein soliton from various directions to solve many physical and geometrical problems. However, in this paper we consider a slight perturbation of the Einstein soliton by η ⊗ η, called the η-Einstein soliton. The mathematical expression for the η-Einstein soliton [1] is given by the following equation, where L_ξ denotes the Lie derivative along the direction of the vector field ξ, S is the Ricci tensor, r is the scalar curvature and λ, µ are real constants. The η-Einstein soliton is called shrinking if λ < 0, steady if λ = 0 and expanding if λ > 0. In particular, if µ = 0, the η-Einstein soliton reduces to the Einstein soliton (g, ξ, λ). J. T. Cho and M. Kimura [6] introduced the concept of the η-Ricci soliton and later C. Calin and M. Crasmareanu [5] studied it on Hopf hypersurfaces in complex space forms. A Riemannian manifold (M, g) is said to admit an η-Ricci soliton if for a smooth vector field V, the metric g satisfies the following equation, where L_ξ is the Lie derivative along the direction of ξ, S is the Ricci tensor and λ, µ are real constants. It is to be noted that if the manifold has constant scalar curvature, then the data (g, ξ, λ − r/2, µ) of the equation (1.2) satisfies the equation (1.3), i.e., the η-Einstein soliton reduces to an η-Ricci soliton. Hence we can remark that the two notions are different for manifolds of non-constant scalar curvature, and if the scalar curvature of the manifold is constant then the concepts of η-Ricci soliton and η-Einstein soliton coincide.
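For reference, the defining equations (1.2) and (1.3), as they standardly appear in the η-soliton literature (again a reconstruction rather than a quotation, chosen to be consistent with the remark that substituting λ → λ − r/2 in (1.3) recovers (1.2); the soliton vector field is taken to be ξ in both):

```latex
% (1.2) eta-Einstein soliton (reconstruction):
\mathcal{L}_{\xi}\, g + 2S + (2\lambda - r)\,g + 2\mu\,\eta \otimes \eta = 0,
% (1.3) eta-Ricci soliton (reconstruction):
\mathcal{L}_{\xi}\, g + 2S + 2\lambda\, g + 2\mu\,\eta \otimes \eta = 0 .
```

Indeed, replacing λ by λ − r/2 in (1.3) yields (1.2), matching the constant-scalar-curvature remark in the text.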
The paper is organised as follows: After a brief introduction, in section 2, we recall some basic knowledge of trans-Sasakian manifolds. Section 3 deals with 3-dimensional trans-Sasakian manifolds admitting η-Einstein solitons, and the nature of the soliton is discussed. In this section, we have constructed an example of a 3-dimensional trans-Sasakian manifold satisfying an η-Einstein soliton. In section 4, we have studied η-Einstein solitons in 3-dimensional trans-Sasakian manifolds in terms of Codazzi-type and cyclic parallel Ricci tensors and characterized the nature of the manifold. Sections 5, 6, 7, 8 are devoted to the study of some curvature conditions R · S = 0, W_2 · S = 0, R · E = 0, B · S = 0, S · R = 0 admitting η-Einstein solitons in 3-dimensional trans-Sasakian manifolds. In the last section we have studied torse-forming vector fields on 3-dimensional trans-Sasakian manifolds admitting η-Einstein solitons.
Preliminaries
An n-dimensional smooth Riemannian manifold (M, g) is said to be an almost contact metric manifold [3] if it admits a (1, 1) tensor field φ, a characteristic vector field ξ, a global 1-form η and a Riemannian metric g on M satisfying the following relations for all vector fields X, Y ∈ T M, where T M is the tangent bundle of the manifold M. Also it can be easily seen that φ(ξ) = 0, η(φX) = 0 and the rank of φ is (n − 1).
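The defining relations referred to above are the standard almost contact metric conditions; they are quoted here from the general literature (e.g. Blair [3]) since the display is not reproduced in the text:

```latex
% Standard almost contact metric relations (reconstruction, not from this text):
\varphi^{2}X = -X + \eta(X)\,\xi, \qquad \eta(\xi) = 1,
\qquad \eta(X) = g(X, \xi),
\qquad g(\varphi X, \varphi Y) = g(X, Y) - \eta(X)\,\eta(Y).
```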
The geometry of the almost Hermitian manifold (M × R, G, J) gives rise to the geometry of the almost contact metric manifold (M, g, φ, ξ, η), where G is product metric of the product manifold M × R with the complex structure J defined by for all vector fields X on the manifold M and smooth function f on the product manifold M × R. An almost contact metric manifold (M, g, φ, ξ, η) is called a trans-Sasakian manifold if the product manifold (M × R, G, J) belongs to the class W 4 [9]. The notion of trans-Sasakian manifolds was introduced by J. A. Oubina [13] and later J. C. Marrero [11] completely characterized the local structures of trans-Sasakian manifolds of dimension n ≥ 5. The expression for which an almost contact metric manifold (M, g, φ, ξ, η) becomes a trans-Sasakian manifold is given by for all X, Y ∈ T M and for some smooth functions α, β on the manifold M . Then such kind of manifold is called a trans-Sasakian manifold of type (α, β). In particular trans-Sasakian manifolds of type (0, 0), (α, 0) and (0, β) are called cosymplectic, α-Sasakian and β-Kenmotsu manifolds respectively. In what follows, by a trans-Sasakian 3-manifold, we mean a 3-dimensional trans-Sasakian manifold (M, g, φ, ξ, η) of type (α, β) and we will use the notation (M, g) to denote it throughout this article. Now from the above expression (2.7) it can be derived that for all vector fields X, Y in T M . Again in a trans-Sasakian 3-manifold (M, g) the Ricci tensor is given by Furthermore, if the functions α, β are constants then, in a trans-Sasakian 3-manifold (M, g) the following relations hold, S(X, ξ) = 2(α 2 − β 2 )η(X), (2.15) for all vector fields X, Y in T M and where R is the curvature tensor and S is the Ricci tensor.
Let us define the Riemannian metric g on M by g(e i , e j ) = 1, for i = j 0, for i = j for all i, j = 1, 2, 3. Now considering e 3 = ξ, let us take the 1-form η, on the manifold M , defined by Then it can be observed that η(ξ) = 1. Let us define the (1, 1) tensor field φ on M as Using the linearity of g and φ it can be easily checked that Hence the structure (g, φ, ξ, η) defines an almost contact metric structure on the manifold M . Now, using the definitions of Lie bracket, after some direct computations we get Again the Riemannian connection ∇ of the metric g is defined by the well-known Koszul's formula which is given by Using the above formula one can easily calculate that Thus from the above relations it follows that the manifold (M, g) is a trans-Sasakian 3-manifold. Now using the well-known formula Z the non-vanishing components of the Riemannian curvature tensor R can be easily obtained as Hence we can calculate the components of the Ricci tensor as follows S(e 1 , e 1 ) = 0, S(e 2 , e 2 ) = 0, S(e 3 , e 3 ) = −8.
Therefore in view of the above values of the Ricci tensor, from the equation (1.2) we can calculate λ = −2 and µ = 6. Hence we can say that the data (g, ξ, −2, 6) defines an η-Einstein soliton on the trans-Sasakian 3-manifold (M, g). Also we can see that the manifold (M, g) is a manifold of constant scalar curvature r = −8 and hence the theorem (3.1) is verified.
Next we consider a trans-Sasakian 3-manifold (M, g) and assume that it admits an η-Einstein soliton (g, V, λ, µ) such that V is pointwise collinear with ξ, i.e., V = bξ, for some function b; then from the equation (1.2) it follows that Then using the equation (2.8) in the above we arrive at Replacing Y = ξ in the above equation yields Again taking X = ξ in (3.9) and by virtue of (2.15) we arrive at Using this value from (3.10) in the equation (3.9) and recalling (2.15) we can write Now taking exterior differentiation on both sides of (3.11) and using the famous Poincaré's lemma, i.e., d^2 = 0, we finally arrive at (3.12) r = 2λ + 2µ + 4(α^2 − β^2).
In view of the above (3.12), the equation (3.11) gives us db = 0, i.e., the function b is constant. Then the equation (3.8) reduces to for all X, Y ∈ T M. Hence we can state the following.
The purpose of this section is to study η-Einstein solitons in trans-Sasakian 3-manifolds having certain special types of Ricci tensor, namely Codazzi-type Ricci tensor and cyclic parallel Ricci tensor.
η-Einstein solitons on trans-Sasakian 3-manifolds
satisfying R(ξ, X) · S = 0 and W 2 (ξ, X) · S = 0 Let us first consider a trans-Sasakian 3-manifold which admits an η-Einstein soliton (g, ξ, λ, µ) and the manifold satisfies the curvature condition R(ξ, X) · S = 0. Then ∀X, Y, Z ∈ T M we can write In view of (2.12) the previous equation becomes Putting Z = ξ in the above equation (5.3) and recalling (2.4) obtain for all X, Y ∈ T M . Since g(φX, φX) = 0 always and for non-trivial case α 2 = β 2 , we can conclude from the equation ( Our next result of this section is on W 2 -curvature tensor. It is an important curvature tensor which was introduced in 1970 by Pokhariyal and Mishra [14]. For this let us recall the definition of W 2 -curvature tensor as follows Definition 5.1. The W 2 -curvature tensor in a trans-Sasakian 3-manifold (M, g) is defined as Now assume that (M, g) is a trans-Sasakian 3-manifold admitting an η-Einstein soliton (g, ξ, λ, µ) and also the manifold satisfies the curvature condition W 2 (ξ, X)·S = 0. Then we can write In view of (3.3) the above equation (5.7) becomes which implies Replacing X = ξ in (5.6) and then using equations (2.12), (5.9) and (5.10) we obtain . Taking inner product of (5.11) with respect to the vector field ξ yields Using (5.11) and (5.12) in the equation (5.8) and then taking Z = ξ we arrive at which in view of (2.4) implies for all X, Y ∈ T M . Since g(φX, φX) = 0 always, we can conclude from the equation (5.13) that either A = B or, 2B = r 2 − λ − β. Thus recalling the values of A and B it implies that either µ = β or, Now for the case µ = β, proceeding similarly as the equation (5.5) we can say that the manifold becomes an Einstein manifold. Again combining (5.14) with (3.5) we get (5.15) r = 2λ + 2β.
Therefore we can state the following Theorem 5.2. Let (M, g) be a trans-Sasakian 3-manifold admitting an η-Einstein soliton (g, ξ, λ, µ). If the manifold satisfies the curvature condition W 2 (ξ, X) · S = 0, then either the manifold becomes an Einstein manifold or it is a manifold of constant scalar curvature r = 2λ + 2β.
Einstein semi-symmetric trans-Sasakian 3-manifolds
admitting η-Einstein solitons
Definition 6.1. A trans-Sasakian 3-manifold (M, g) is called Einstein semi-symmetric [15] if R.E = 0, where E is the Einstein tensor, given by E(X, Y) = S(X, Y) − (r/2) g(X, Y) for all vector fields X, Y ∈ T M, where r is the scalar curvature of the manifold.
Now consider a trans-Sasakian 3-manifold is Einstein semi-symmetric i.e; the manifold satisfies the curvature condition R.E = 0. Then for all vector fields X, Y, Z, W ∈ T M we can write In view of (6.1) the equation (6.2) becomes Replacing X = Z = ξ in the above equation (6.3) and then using (2.12), (2.13) we arrive at So, now in view of (2.15) the above equation (6.4) finally yields for all Y, W ∈ T M . This implies that the manifold is an η-Einstein manifold. Hence we have the following Lemma 6.1. An Einstein semi-symmetric trans-Sasakian 3-manifold is an η-Einstein manifold.
7 η-Einstein solitons on trans-Sasakian 3-manifolds satisfying B(ξ, X) · S = 0 In 1949, S. Bochner [4] introduced the concept of the well-known Bochner curvature tensor merely as a Kähler analogue of the Weyl conformal curvature tensor but the geometric significance of it in the light of Boothby-Wangs fibration was presented later by D. E. Blair [2]. The notion of C-Bochner curvature tensor in a Sasakian manifold was introduced by M. Matsumoto, G. Chūman [12] in 1969. The C-Bochner curvature tensor in trans-Sasakian 3-manifold (M, g) is given by 4 . Let us consider a trans-Sasakian 3-manifold (M, g) which admits an η-Einstein soliton (g, ξ, λ, µ) and also the manifold satisfies the curvature condition B(ξ, X) · S = 0. Then ∀X, Y, Z ∈ T M we can write which implies Also taking X = ξ in (7.1) we obtain Using equations (2.12), (3.4) and (7.5) in the above equation (7.6) yields In view of (7.7) the equation (7.3) becomes Replacing Z = ξ in the above equation and recalling (2.4), finally we arrive at for all vector fields X, Y ∈ T M . Hence from (7.8) we can conclude that either or, µ = β. Also for µ = β proceeding similarly as equation (4.6) it can be easily shown that the manifold becomes an Einstein manifold. Again if µ = β using (3.7) in the equation (7.9) we have (7.10) r = 10λ + 2µ + 12β − 8, which implies that the manifold becomes a manifold of constant scalar curvature. Therefore we can state the following Theorem 7.1. Let (M, g) be a trans-Sasakian 3-manifold admitting an η-Einstein soliton (g, ξ, λ, µ). If the manifold satisfies the curvature condition B(ξ, X) · S = 0, then either the manifold is an Einstein manifold or it is a manifold of constant scalar curvature r = 10λ + 2µ + 12β − 8.
Now let (g, ξ, λ, µ) be an η-Einstein soliton on a trans-Sasakian 3-manifold (M, g) and assume that the Reeb vector field ξ of the manifold is a torse-forming vector field. Then ξ being a torse-forming vector field, by definition from equation (9.1) we have (9.2) ∇_X ξ = f X + γ(X)ξ, ∀X ∈ T M, f being a smooth function and γ a 1-form. | 2021-04-13T01:15:49.084Z | 2021-04-10T00:00:00.000 | {
"year": 2021,
"sha1": "a31f4c1cce4361428ebc0be127d59d293930e067",
"oa_license": "CCBYNCND",
"oa_url": "https://journals.pnu.edu.ua/index.php/cmp/article/download/4275/5755",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "f6fb6fb533d301a43c85922fe569b44540b9b90c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
186483819 | pes2o/s2orc | v3-fos-license | THE SWIMMING POWER. NEW METHOD TO TRANSFER THE POWER FROM DRYLAND TO THE WATER
Abstract: The influence of a strength-training method on the swimming power of 20 master athletes was studied; the athletes were divided into two groups – strength training (n = 10, ST) and swimming training (n = 10, SW). Training sessions were conducted for 6 weeks and comprised swimming training in the SW group, and strength training followed by maximal-velocity swimming in the ST group. Results in both groups were assessed on the basis of maximal mechanical external power (MMEP), using an ergometer to measure force, velocity and power in the water. In the ST group a significant increase in MMEP was observed (5.79%; p < 0.05), together with an increase in force (11.70%; p < 0.05) and a decrease in velocity (4.99%; p < 0.05). In the SW group, decreases in MMEP, force and velocity were found (7.31%, 4.16% and 4.45%; p < 0.05). The study showed that a method based on combining strength training (on dry land) with subsequent fast swimming substantially increases swimming power in master athletes. Keywords: ecological validity, field testing, in-water power test, performance, strength training.
Introduction. The metabolic demands of swimming competitions are very different; indeed, the aerobic and anaerobic systems [1] are related to the race time (from 20 sec for 25 m to 900 sec for 1500 m). Nevertheless, the performance of swimmers has been continuously improved due to the enhancement of technique [2], the evolution of the facilities [3] and the improvement of the physical skills of the athletes [2]. The swimming action recruits many muscles for propulsion, mechanical power, and to contrast drag [4]. Therefore, muscle strength plays a crucial role in increasing swim velocity [5]. Although some authors [6,7] have shown that adjustments related to technical movements performed in «dry conditions» using overloads may be useful to improve the technique of swimmers in the water, this has not been confirmed by swimming coaches in the field. Currently, two methods are mainly used for strength training purposes in swimmers: «dry methods», namely sessions out of the water composed of exercises with general loads [8][9][10], or «simulating» the swimming movements [11]. The simulation approach is carried out with «aquatic methods» training sessions, where the swim is overloaded with tethers [12] or tools that increase the dragging force [4].
However, the actual effectiveness of these methods is not yet entirely clear [1], as it appears difficult to increase the strength or the power of swimmers through loaded «aquatic methods» sessions [13,14]. Similarly, strength-increasing methods based on «dry methods» showed some limits in the «transferability» to specific technical swim movements [9,15]. Recently, several in-water methods [5,16] were used to assess the strength and the power of swimmers through the assessment of drag, providing conflicting results [5,16,17]. Strength and power estimates from swimming velocity do not seem adequate [8,18,19], because swimming velocity is related to muscle power, and to both the propulsion efficiency and the drag coefficient of the swimmer [5]. In rare cases the use of a tethered test has been reported, with some limitations (the swimmer cannot effectively advance in the water, and thus the technical gestures are altered).
Among the different methods of training alternating dry weights and swimming, we chose the method proposed by Prof. Cometti [20]. Despite never having been studied with scientific rigor, its principles are clear. This innovative method aims to improve the swimmer's performance, using an approach potentially valid even in other disciplines (such as team sports). The Cometti method [21] stimulates the muscle fibers using a weight of about 80% of one repetition maximum, immediately followed by the execution of technical movements specific to each discipline (in our case swimming [20,21]). The goal would be to stimulate the muscle fiber with an overload that is impossible to reproduce in water because of the lack of «stable points of resistance». Therefore, the aim of this study was to verify the effect of a Cometti training method, based on mixing a «dryland phase with overloads» with a series of fast swimming, on swimming power, using a specific semi-tethered swimming test.
Participants
Twenty senior male master swimmers belonging to the same team were recruited for the study and randomly assigned to either the strength training (ST, n = 10) or swimming training (SW, n = 10) groups. Their main anthropometric data, as well as their best performances on 100 meter crawl, were recorded. In order to be included in the study, participants had to: 1) participate in at least 90% of the training sessions (see the following chapter about the training program), 2) have regularly competed during the previous competitive season, and 3) possess a medical clearance. There were no dropouts from the experiments and no injuries occurred during the experimental training or testing sessions. Indoor field tests were completed in a certified swimming pool. Baseline tests started at 5:00 p.m. (26.5 ± 0.12 °C, water temperature), while post-assessments were carried out at 5:00 p.m. (26 ± 0.16 °C, water temperature). The participants were healthy and clear of any drug consumption. The groups were homogeneous with regard to their training status (more than 10 years of competitive background). Each subject was fully informed and trained about the test procedures and everyone gave written informed approval to participate in the study in accordance with the guidelines of the Muscle, Ligament and Tendons Journal [22]. All experimental procedures were approved by the University Human Research Ethics Committee, which followed the ethical principles laid out in the 2008 revision of the Declaration of Helsinki.
Testing
A parallel, two-group, randomized, longitudinal (pre-test/post-test), single-blind experimental design was used. After baseline measurements, participants were randomly allocated to either the strength training (ST) or swimming training (SW) groups with an allocation ratio of one-to-one [23]. The independent variable was «training type», so no control group was used. The study lasted 6 weeks (from September to November, in the preseason) and consisted of one testing session (pre- and post-training) before and after the training weeks. No additional strength, power and/or plyometric training was completed by the subjects outside the training intervention of the present study.
Training outcomes
Before and after (test–retest) the training period, participants performed one testing session of semi-tethered swimming to assess Maximal Mechanical External Power (MMEP). Before each testing session, participants were instructed not to eat for at least three hours before testing and not to drink coffee or beverages containing caffeine for at least eight hours before physical testing. Tests were completed at the same time of the day, with the operators unaware of the participant's allocation.
Maximal Mechanical External Power Test
The test consisted of 15 m all-out front-crawl swims across the pool while pulling a different load during each trial; the reliability of the test has been shown in previous studies to be very high (Intraclass Correlation Coefficient > 0.80), as shown by Dominguez-Castells et al. [24]. After a standardized 800 m warm-up, the test started with a load of 45 N. The load increased by 25 N for each trial. Swimmers rested for 5 min between 2 consecutive trials. The protocol ended when the swimmer was not able to complete a trial. Data related to the first and last 2.5 m were discarded to consider only constant-speed conditions [24]. The MMEP parameters of interest were acquired by means of a dedicated custom ergometer designed and built by Tecnologicamente S.r.l. (Italy) with the collaboration of the workshop of the Department of Mechanical, Chemical and Materials Engineering of the University of Cagliari (Italy). The ergometer used for the experimental sessions was linked to the swimmer using a belt, as described in the following.
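As an illustration of the protocol logic (our sketch, not the authors' code; function names and the velocity values are hypothetical), MMEP can be computed as the maximum of load × constant-phase velocity across the incremental trials:

```python
def mmep_from_trials(trials):
    """trials: list of (load_N, mean_velocity_m_s) pairs, one per 15 m swim;
    power per trial is load * velocity, and MMEP is the maximum across trials."""
    return max(load * v for load, v in trials)

loads = [45 + 25 * i for i in range(5)]        # 45 N start, +25 N per trial
velocities = [1.04, 0.97, 0.90, 0.82, 0.74]    # illustrative values only
print(mmep_from_trials(list(zip(loads, velocities))))  # watts
```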
This device is basically composed of a 28″ wheel (acting as a drum with a winding circumference of 2092 mm), a cable, two sensors (force and speed) and an electronic apparatus necessary to properly condition and transmit the data to a Personal Computer. The wheel is equipped with a disc brake (Shimano disc, 160 mm diameter, and Hayes Nine brake caliper) and a reflective encoder wheel with 72 pulses per turn read by an optical speed sensor (Optek OPB704). A 500 N miniature tension-compression load cell (F2220, Tecsis GmbH, Germany) was hosted inside an aluminum cylinder (160 mm long, 47 mm diameter, with a nose cone to minimize the hydrodynamic resistance effects) that acts as a waterproof case and was connected to the swimmer through a belt equipped with a system composed of a light aluminum bar and four twines. The load cell signal is conditioned and powered by a Mecos train 2038 module embedded in the cylindrical aluminum case.
Prior to the tests, a calibration curve of hydraulic pressure vs. resistant force was obtained using calibrated weights (corresponding to a 10–150 N force range). Both force transducer and speed sensor signals were properly acquired by a National Instruments DAQ Module USB 6009 (8 channels, 14-bit, 48 kS/s). A custom routine was developed in the National Instruments LabView® environment to collect and store data in the form of ASCII files during the trial. The resulting files were then post-processed with a Matlab™ 10 software routine that transforms the raw data into a four-vector text file containing time, traveled distance, instantaneous force, and speed values.
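A minimal post-processing sketch (ours, assuming a whitespace-delimited four-column file as described; the trimming and averaging choices are illustrative, not the authors' routine):

```python
import numpy as np

def trial_mean_power(path, trim_m=2.5, length_m=15.0):
    """Load (time, distance, force, speed), keep only the constant-speed
    segment by discarding the first and last 2.5 m, and average P = F * v."""
    t, dist, force, speed = np.loadtxt(path, unpack=True)
    keep = (dist >= trim_m) & (dist <= length_m - trim_m)
    return float(np.mean(force[keep] * speed[keep]))  # watts
```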
Training program
The training program was performed over six weeks, divided into three sessions for both groups according to the Cometti method [21]. All participants (ST–SW), after 15 minutes of standardized warm-up, carried out the same set of exercises in water, comprising several sprints in front crawl at maximum velocity with balanced sets and recovery. Each swimming session had a duration of approximately 2 hours and was repeated 5 times per week. During the swimming training the same distance was performed by both groups (ST–SW).
In particular, the ST group performed, as suggested by Cometti [20,21], the strength training program during swimming training (mixed: weight training–swim at maximum velocity and vice versa). The one repetition maximum (1RM) test on bench press was conducted to determine maximal upper body strength, as recommended by Padulo et al. [25], one week before the training. In particular, during the exercise with weights (85% 1RM [6]) or body load, subjects were asked to perform 6 fast repetitions [6] according to the Cometti method [20,21]. To minimize the effect of the passive recovery [20,21] in between weight training and swimming exercises (~5 s), each participant was encouraged by the coach.
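For clarity, the load prescription described above amounts to a trivial computation (our sketch; the measured 1RM value is hypothetical):

```python
def session_load_kg(one_rm_kg: float, fraction: float = 0.85) -> float:
    # 85% of the bench-press one repetition maximum, lifted for 6 fast reps
    return round(one_rm_kg * fraction, 1)

print(session_load_kg(80.0))  # -> 68.0 kg for a hypothetical 80 kg 1RM
```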
Statistical analyses
Normality of the data was verified using the Shapiro–Wilk test. The null hypothesis of no difference between groups was tested using multiple unpaired t-tests. A two-way mixed analysis of variance (ANOVA) was used on each continuous dependent variable. The independent variables included one between-participants factor, training intervention, with two levels (ST and SW), and one within-participant factor, time, with two levels (pre-test and post-test). ANOVAs were used to test the null hypothesis of no difference in the change over time between ST and SW (training intervention × time interaction), and the null hypothesis of no difference in the change over time in response to the training intervention (main effect for time). With this statistical design, the following variables were analyzed: MMEP (Watt), force (N) and speed (m·s−1). The effect sizes were also calculated (eta squared, η2) for better interpretation of the results, and a p-value < 0.05 was considered significant. Test–retest reliability [26] was satisfied in a previous study [24] using the Intraclass Correlation Coefficient (ICC). Statistical analysis was performed using SigmaPlot™ software 11.0 (Systat Software, Tulsa, OK).
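The authors used SigmaPlot; an equivalent open-source sketch of the same two-way mixed design (our illustration only — the long-format column and file names are assumptions) could be:

```python
import pandas as pd
import pingouin as pg

# Long-format data with hypothetical columns:
# subject, group (ST/SW), time (pre/post), MMEP
df = pd.read_csv("mmep_long.csv")  # assumed file layout
aov = pg.mixed_anova(data=df, dv="MMEP", within="time",
                     between="group", subject="subject")
print(aov[["Source", "F", "p-unc", "np2"]])  # np2: partial eta squared
```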
Discussion
The results showed that the transfer is effective in improving MMEP in male master swimmers and might represent a technique useful to achieve better performance. In the last years, several authors [27,28] investigated new methods to improve swimming performance. In particular, throughout the history of swimming, several reasons have limited scientific knowledge in water sports. Many technical approaches were constrained by the environment, which requires special equipment; in fact, it is still difficult to validate the different training methods so far tested in swimming [14]. Dominguez-Castells et al. [24] showed for the first time a new interesting method for assessing mechanical power output as a reliable predictor of the performance of swimmers [24]; for this aim, the Dominguez-Castells methodology has been used in the present investigation.
Our findings are partly in agreement with the results of Morouco [14], who showed the existence of a relationship between dry-land strength and power measured during swimming performance [14,29]. In dry conditions, Morouco studied upper- and lower-limb muscle strength and revealed high associations between swimming performance and muscular strength [29]. From our point of view, our work differs from that performed by Morouco in two key points: introducing fast movements with weights (85% 1RM), and mixed training (weight training immediately followed by swimming sprints).
Force
Considering the effects of this method [20,21], the results indicate that mixed training increased strength in the ST group by 9.03 N (an 11.70% increase). This effect could be emphasized especially for short time trials or several track competitions (e.g. 50–100 m) where the results are often highly contested with close finishes. The present study results are in line with Schmidtbleicher et al. [30] and Padulo et al. [6], who have shown that few repetitions with maximal loads (> 80% 1RM) induced recruitment of fast-twitch motor units [6,30] and increased muscle strength (10.20%, p < 0.05) for ST, compared to repetitions with low loads and free speed. This interesting improvement of strength in ST, obtained with a mixed training model, can be analyzed as a further deepening of the understanding of strength development in swimmers [13]. It seems that the adaptations to swim training intensity stimulate mostly the mechanisms that trigger aerobic capacity, limiting the development of the contractile muscle structure.
Conversely, the 4.16% decrease in SW indicates that swimming alone does not sufficiently stimulate the motor units. Indeed, swimmers train over many miles daily, and in the case of master swimmers this is more evident, at low intensity. Moreover, the force applied in water requires particular sensitivity and gradation of effort [31]. The training for an enhanced MMEP, emphasizing neural adaptations, led to significant changes in the rate of force development using weight training. These results showed an increase in the rate of force development and thus in power production, rather than an increased swimming workout. For this reason the MMEP was not changed in the SW group. These results in the SW group can be related to lower stimulation of muscle strength with only swimming as the main training activity.
Velocity
The swimming speed was measured during the semi-tethered test at maximal load, because it is a crucial component of the power value obtained. Concerning the maximum speed, the MMEP showed a drop of ~5% (0.04 m·s−1) in both groups (ST–SW), which represented a decrease with respect to the pre-test value resulting from the training interventions. Moreover, the velocity reported small differences (~5%) with no significant effects in both groups; in ST (min/max: 0.74–1.04 m·s−1) this effect showed a shift to a lower speed of maximal power output (5.73%, p < 0.05). In accordance with Morouco et al. [29], the velocity must not be assessed as a negative effect on swim performance, because this velocity represents the ratio between power output and force in MMEP.
Maximal-mechanical-external-power (MMEP)
The results showed (Figure 2C) an increase of 4.04 W in MMEP, representing 5.73% of pre-test values in ST. Improvements of MMEP in ST could be related to force production [32] more than in SW. The increased MMEP in ST is in agreement with the effect of explosive movements on the neuromuscular system [33]. In this regard, MMEP showed more accuracy in relationship to ecological validity, because in our semi-tethered test the swimmers performed 15 meters of swimming with external loads. As confirmed by Dominguez-Castells et al. [24] and Morouco et al. [29], power tests in swimming were altered when each subject was constrained to swim without advancing.
Combined effects of the variables studied
The innovative method suggested by Cometti highlights that for water sports, mixed training (land and water) is favorable to stimulate muscle strength, owing to the combination with movements performed in dry conditions (weight training) without other resistance such as drag. In addition, the various phases of eccentric/concentric [34] contractions during exercise on land are not altered by the hydrostatic pressure. On the same topic, di Prampero showed that the greatest fraction of the energy expenditure is utilized to overcome water resistance or drag [17]. The 6 weeks of explosive-type strength training resulted in considerable improvements in selected neuromuscular characteristics, although a large volume of endurance training was performed at the same time. A hypothesis is that training-induced alterations in neural control during stretch-shortening cycle exercises (such as running and jumping [35]) might take place in both voluntary activation and inhibitory and facilitatory reflexes [36].
From our point of view it is not clear whether the MMEP and strength increases in ST, obtained through an intense 6-week workout combining «weight and swim training», have to be considered an important value to satisfy the search for higher power. But this again emphasizes how difficult it is to develop strength in the sport of swimming, as reported in the considerations of other authors [13,37]. | 2019-06-13T13:17:59.844Z | 2014-10-23T00:00:00.000 | {
"year": 2014,
"sha1": "9d01a7a098b67177e7edefa81f12de4bfaf84c6c",
"oa_license": "CCBY",
"oa_url": "http://tmfvs-journal.uni-sport.edu.ua/article/download/106569/101665",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "4a4b81fe2a996bfce15b0e0592f9330cdb4d7d47",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Environmental Science"
]
} |
239715199 | pes2o/s2orc | v3-fos-license | Reply on RC1
: 1st sentence, “may cause” — I think the literature is pretty conclusive that warming does cause snow to melt earlier. Abstract should define what you mean by the 20th percentile of snowmelt days — this is meaningless to someone only reading the abstract. What do you mean by colder places are more sensitive than warmer places? In what way? Earlier snowmelt? If there’s no snow, of course it wouldn’t be sensitive to that. “climate
snow -a lot of the "snowmelt days" marked with purple circles in Figure 1 look like rain storms to me. The South Fork of the Tolt mostly gets rain, but also rain on snow. How do diurnal cycles that are identified but aren't really snow melt impact your results?
We agree that the auto-correlation metric can be confusing and can be better explained. We will add clarifications about its meaning in the context of Figure 1. Regarding the effect of rainstorms, we have set up several checks in our method to limit false-positive snowmelt days. First, we apply a more restricted monthly and site-specific window of lagged correlations based on clear-sky snowmelt-driven diel cycles only (see lines 140-145). This limits rainfall coming at a time different from typical snowmelt (or ET) causing a false-positive melt day. Second, the rainstorm needs to have a specific diel cycle that will highly correlate with solar radiation. On a completely cloudy day, solar radiation will have a diurnal cycle like a clear-sky day, so a rainstorm that produces a snowmelt-like response (depending on the watershed's surface and subsurface connectivity and the rainfall histogram) may potentially produce a false positive. On a partly cloudy day, where rainfall occurred but there were clear-sky conditions either before or after the event, the chances of a rainfall-induced diel cycle being highly correlated with solar radiation are likely minimal, as the shape of the solar radiation diel cycle can have several discrete changes.
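For readers who want the gist of the classification step, a minimal sketch of the lagged-correlation test (our illustration; the lag window, threshold, and function names are hypothetical, not the paper's exact implementation):

```python
import numpy as np

def is_snowmelt_day(q_hourly, sw_hourly, lags=range(2, 9), r_min=0.8):
    """Classify one day as snowmelt-driven if hourly streamflow q correlates
    strongly with solar radiation sw at some plausible lag (hours).
    Toy version: arrays have 24 elements and lags wrap around midnight."""
    best_r = max(np.corrcoef(np.roll(sw_hourly, lag), q_hourly)[0, 1]
                 for lag in lags)
    return best_r >= r_min
```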
However, our method cannot guarantee that a rainfall-driven diel cycle will not be picked up (though we argue that the chances are small). To address the reviewer's concern, we propose screening the days that our method classifies as snowmelt-driven by whether precipitation occurred that day or not, using daily NLDAS precipitation. This will allow us to detect whether there is a chance that a snowmelt day was wrongly picked up. We think this screen will provide an unbiased assessment of the effect of rainfall on the streamflow diel signals identified by our method. The Tolt River example is an important one because it has a lower number of snowmelt days. We will better highlight in Figure 1E the two examples of diel patterns that are not recognized as snowmelt and are screened out by our method. Figure 1F may be misleading because the diel cycles are not observable in the line graph. We will better discuss this figure and highlight the strengths and weaknesses of the method.
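The proposed precipitation screen is simple enough to state in a few lines (our sketch; series names and the wet-day threshold are assumptions, not the reply's specification):

```python
import pandas as pd

def flag_rainy_melt_days(melt_day: pd.Series, precip_mm: pd.Series,
                         wet_threshold_mm: float = 1.0) -> pd.Series:
    """melt_day: boolean daily series from the diel-cycle method;
    precip_mm: daily NLDAS precipitation. Flags the detected snowmelt
    days that should be inspected for possible rain contamination."""
    return melt_day & (precip_mm >= wet_threshold_mm)
```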
1b. As an alternate approach to when snowmelt is significant, you could look at the power spectra of your time series. See Figure 6 in Lundquist and Cayan 2002. The days with a sharp increase in power at the once per day cycle indicate snowmelt, whereas rain exhibits a much more red spectra. I know that power spectra are commonly used by oceanographers and not hydrologists, so your method is likely easier to understand, but it would be nice to have an independent method to check.
We appreciate the reviewer's recommendation about the power spectra, but do not feel that it would be an improvement over our custom method, which adjusts for seasonal and basin-specific lags. We think that a power-spectrum method would also struggle to distinguish between rainfall and snowmelt diel cycles. Please see the previous comment about Figure 1E and how we propose to check our method for rainy days.
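For completeness, the reviewer's suggested spectral check could be sketched as follows (our illustration; the frequency bands and peak ratio are arbitrary choices, and at least several days of hourly data are needed for the bands to contain spectral estimates):

```python
import numpy as np
from scipy.signal import periodogram

def has_diel_peak(q_hourly, peak_ratio=5.0):
    """True if power near 1 cycle/day stands well above the low-frequency
    background, as expected for snowmelt rather than rain (red spectrum).
    q_hourly: multi-day hourly streamflow series."""
    freqs, power = periodogram(q_hourly, fs=24.0)  # fs in samples per day
    band = (freqs > 0.9) & (freqs < 1.1)           # around 1 cycle/day
    background = (freqs > 0.1) & (freqs < 0.7)
    return power[band].max() >= peak_ratio * power[background].mean()
```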
1c. In particular, I recommend a clearer discussion of the strengths and weaknesses of this approach. It will miss rain-on-snow (signal dominated by rain), as well as early melt into dry soil (no streamflow response). It may also misclassify rain with a diurnal structure as snowmelt. Therefore (and you allude to this multiple times in the manuscript but should make it clearer), the method is best at detecting melt in non-rainy locations with fairly saturated soils. With that in mind, which of your basins do you trust the signal from the most?
The reviewer makes good points about what the method can and cannot do.
As detailed in previous answers, we will add an independent method to check for rainstorms. One way that we will strengthen our argument is to subset the basins where we are most confident that rain is not occurring and streamflow is tightly coupled to snowmelt. This additional analysis will be discussed in the text and shown in a supplemental figure or table.
1d. Section 3.1 explains how well the DOS_20 is related to simpler magnitude metrics (DOQ_25 and DOQ_50) but doesn't really justify why the DOS_20 is helpful beyond those metrics - can you better explain what we gain by doing this extra analysis? This section also identifies some rain-dominated rivers wherein these metrics appear less correlated. Is this because the method breaks down? Or can we learn important information from this change in relationship?
DOS20 aims to capture snowmelt-streamflow connectivity; however, it does not imply anything about the contribution (volume) of snowmelt to streamflow. As such, this metric can potentially be implemented as a relatively easy way to benchmark hourly hydrological and land-surface models beyond typical daily streamflow metrics or point-scale continuous SWE measurements. Specifically, we see potential to use this information to validate the snowmelt dynamics of a model. We will be more specific about the value of DOS20 in the discussion.
About the value of section 3.1, we believe there are two key points. First, the diel method is more uncertain under rainier conditions, as it may misclassify rainfall events as snowmelt (which we now propose to check), and second, under rainier conditions the timing of streamflow volume is likely to be more strongly controlled by the timing of rainfall than by the timing of snowmelt, and thus those sites deviate from the 1:1 line in DOS20 vs. DOQ25 and DOQ50.
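For reference, the streamflow-timing metrics being compared here can be computed from a daily series as sketched below (DOQ25/DOQ50 = day of water year by which 25%/50% of the annual volume has passed); the series itself is illustrative.

import numpy as np

q_daily = np.random.default_rng(1).gamma(2.0, 5.0, size=365)  # illustrative daily flow
cum_frac = np.cumsum(q_daily) / q_daily.sum()
doq25 = int(np.searchsorted(cum_frac, 0.25)) + 1   # day of water year
doq50 = int(np.searchsorted(cum_frac, 0.50)) + 1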
2) You need to more explicitly discuss the difference between a stream's climate sensitivity of snowfall changing to rainfall vs. a climate sensitivity of earlier snowmelt.
2a. Many of the earlier papers on streamflow sensitivity to climate change highlighted basins in the transitional rain-snow zone as being most sensitive because snowfall shifts to rainfall. From my own experience, the diurnal cycle in streamflow is particularly hard to detect in these basins because rain-induced runoff is such a larger signal than snow-induced runoff, especially when both happen more or less at the same time. Therefore, I imagine that your snowmelt index uniquely does not work well in these basins (e.g., the Tolt example in your paper, or the NF American River example in Lundquist and Cayan 2002 Fig. 6). I could imagine that for these basins, you could even get DOS_20 moving later in the season with warming if early season events are all rain and only a later, nonrainy period exhibits snowmelt.
We agree that a better discussion of the effects of changes from snow to rain on our results is merited. We trained a simple model to predict the date of DOS20 based only on basic climatological information. This model shows, as the reviewer suggests, smaller sensitivity of DOS20 to climate variation (Figure 5) in warmer and cloudier locations. However, it shows consistent trends toward earlier DOS20 from our simple inter-annual regression-based metric, even in the warmest and rainiest basins.
We also agree that this effect could be better discussed with regard to Figure 6. The largest differences between NoahMP and the STS method were in the sunny, cold basins, which would be the least likely to see shifts from snow to rain or to be biased by the rainier basins.
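A schematic of the simple climatological model mentioned above, under assumed predictors: pool DOS20 across sites and years, regress it on basic climatology, and read the climate sensitivity off the fitted coefficients (the space-for-time logic). The predictor set and values below are placeholders, not the paper's actual data.

import numpy as np
from sklearn.linear_model import LinearRegression

# Columns: mean winter temperature (degC), mean humidity, cloudiness index;
# one row per site-year (four illustrative rows shown).
X = np.array([[-8.0, 0.45, 0.30],
              [-5.0, 0.55, 0.40],
              [-2.0, 0.65, 0.55],
              [ 0.5, 0.75, 0.70]])
y = np.array([150.0, 138.0, 124.0, 118.0])  # DOS20, day of water year

model = LinearRegression().fit(X, y)
print(model.coef_[0])  # shift in DOS20 (days) per degC: the climate sensitivity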
2b. I imagine that including rain-on-snow or rain-dominated basins would bias your correlations with humidity because these tend to be more humid basins but also may have spurious results.
We try to incorporate as much site and inter-annual variability in the dataset as possible to increase the predictive power of the space-for-time approach, as historically cold sites will transition into warmer, more humid sites with rainier conditions. That being said, we recognize the challenges in reliably capturing snowmelt events where rainfall is important (as discussed in the previous major comment). It is relevant to highlight that the sites in the Pacific Northwest (#24, 25, and 31) that have low snowfall contributions (as highlighted in Figure 3) are ultimately not used for the sensitivity analysis, and thus do not impact the conclusions drawn in the study. Nonetheless, this will be further clarified and discussed in the revised version of the manuscript, which will include an analysis of the importance of rainfall-caused diel fluctuations.
2c. I encourage the authors to think about rainfall vs snowfall and snowmelt sensitivities separately and to decide if they want to address both in this paper or only focus on the latter. Then, be very clear about this decision in the paper discussion.
It is not easy to disentangle the two, but we agree that our method is better suited to answer questions about snowmelt sensitivity and that should be the focus of the paper. However, we recognize our empirical analysis reflects both the effect of changing precipitation partitioning and snowmelt sensitivities.
3) You need to more clearly evaluate how well your NoahMP-WRF model set up is simulating streamflow timing in the current climate before examining the results of its climate sensitivity.
3a. It appears that you have a biased simulation of NoahMP-WRF - if the historic runoff date is off by 50 days (see line 260), the model is either simulating too much rain and too little snow or melting snow way too early. It's hard to draw conclusions on sensitivity when using a biased model. Of course, if the model has less snow than the real world, it will be less sensitive to that snow disappearing. The paper would be much more meaningful if you included some evaluation of your NoahMP-WRF simulations - how do they compare to baseline observations and to other models run over the domain (similar western US climate-change papers)?
The reviewer makes a good point, and we will improve and better highlight the description of the model performance. Just to clarify, these simulations, made by the National Center for Atmospheric Research (NCAR) and presented by Liu et al. (2017), have been previously tested in terms of their meteorology and snow components (Liu et al., 2017; Scaff et al., 2020). We do agree with Dr. Lundquist that one should make sure the model reliably represents a particular system before looking at its sensitivity to climate change. Nonetheless, this type of simulation has been used for climate change analyses (Musselman et al., 2017, 2018), although to our knowledge its runoff component has not been tested. Furthermore, the NoahMP model underlies the US National Water Model (https://water.noaa.gov/about/nwm), and thus its relevance to policy and research is high.
Detailing the exact biases of past NoahMP simulations is beyond the scope of this study, but we will describe previous efforts in this arena. We will improve our discussion and analysis to demonstrate that NoahMP-WRF predicts an earlier historical DOQ25 compared to our STS method and historical observations (current Figure 6A), whereas predictions of DOQ50 are more similar between the methods and observations historically (Figure 6B). A key finding is that NoahMP DOQ50 is less sensitive to change than the STS method in the snowier basins, where the STS method should be more reliable.
3b. Also, if the NoahMP-WRF simulations perform better in certain regions (if I'm correct, these were only carefully vetted for Colorado), you may also want to focus your analysis on those regions separately. Do you get closer agreement in areas where the model represents snow processes more accurately? Might a check of space-for-time sensitivity against model sensitivity be a good check for model fidelity?
For the historical DOQ25, the NoahMP-WRF model actually performed best at the rainier sites (see the circled blue symbols in Figure 6a) and a few other sites classified as 'cloudy' and 'partly cloudy', whereas the Rocky Mountain sites, characterized by 'sunny' snowmelt events, were the most biased (see circles in Figure 6a). This suggests that the timing of streamflow volume is better represented in areas where snowmelt processes are less important, though other variables like topographic (and thus climatic) gradients can also be important.
Discussion should be better streamlined and organized. This may be a good place to address major comments 1-3 above.
We will improve the discussion based on Dr. Lundquist suggestions, which will hopefully address her main concerns.
Minor:
Abstract: 1st sentence, "may cause" -I think the literature is pretty conclusive that warming does cause snow to melt earlier. Abstract should define what you mean by the 20th percentile of snowmelt days -this is meaningless to someone only reading the abstract. What do you mean by colder places are more sensitive than warmer places? In what way? Earlier snowmelt? If there's no snow, of course it wouldn't be sensitive to that.
We will change the abstract to read "climate change will cause …", and provide a more meaningful introduction to DOS20. We will clarify what we mean by "cold sites are more sensitive", which refers to the fact that the timing of early streamflow volume changes the most at cold sites compared to warmer sites.
Line 120: "DAYMET dataset (daymet.ornl.gov), which in turn is based on ground observations" - it's interpolated from existing ground observations - worth specifying, as sometimes this is far from the truth.
We will change it to read as suggested by the reviewer. We appreciate the references. Stewart is already mentioned in the discussion, and we will add Lundquist et al. (2004). | 2021-10-22T15:26:22.347Z | 2021-09-03T00:00:00.000 | {
"year": 2021,
"sha1": "efc0a3318aeb9cdaf1e84f255a740d929d55f230",
"oa_license": "CCBY",
"oa_url": "https://angeo.copernicus.org/preprints/angeo-2021-37/angeo-2021-37.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "078aed79f687425cf32be9f38a50fcb8e7a9f293",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
244835417 | pes2o/s2orc | v3-fos-license | Production of Polyhydroxyalkanoates in Unsterilized Hyper-Saline Medium by Halophiles Using Waste Silkworm Excrement as Carbon Source
The chlorophyll ethanol-extracted silkworm excrement can hardly be biologically reused or fermented by most microorganisms. However, some halophiles from extreme environments have been reported to utilize a variety of inexpensive carbon sources to accumulate polyhydroxyalkanoates (PHAs). In this study, using Nile red staining and gas chromatography assays, two haloarchaeal strains endogenous to silkworm excrement, Haloarcula hispanica A85 and Natrinema altunense A112, were shown to accumulate poly(3-hydroxybutyrate) up to 0.23 g/L and 0.08 g/L, respectively, when using the silkworm excrement as the sole carbon source. The PHA production of the two haloarchaea showed no significant decrease in unsterilized silkworm excrement medium compared to sterilized medium. Meanwhile, CFU experiments revealed that more than 60% of cells were the target PHA-producing haloarchaea at the time of the highest PHA production, and the addition of 0.5% glucose to the open fermentation medium largely increased both the ratio of target haloarchaeal cells (to nearly 100%) and the production of PHAs. In conclusion, our study demonstrates the feasibility of using endogenous haloarchaea to utilize waste silkworm excrement effectively. The introduction of halophiles could provide a potential route to open fermentation, further lowering the cost of PHA production.
Introduction
The massive use of traditional, nonbiodegradable plastics results in severe environmental pollution and large consumption of nonrenewable fossil resources [1]; biodegradable plastics have therefore become a worldwide hotspot of concern. Polyhydroxyalkanoates (PHAs) are biodegradable polyesters produced by microbes to store excess carbon when growth is limited by nitrogen or phosphorus but carbon sources are sufficient [2]. In contrast to traditional petroleum-based plastics, PHAs have similar material properties but are completely degradable in the natural environment and also have many advantages, such as biocompatibility and gas-barrier properties [3]. They can be used as medical materials, such as bone nails and vascular stents, without causing adverse reactions in humans [4]. At present, the production of PHAs still has many limitations, chiefly its high cost; notably, open fermentation using mixed substrates such as food waste has been sustained for 65 days, demonstrating the great potential of halophilic open fermentation [22].
In order to lower the cost of PHA production, much research has been conducted on whey, waste oils, molasses, etc. [23]. However, the utilization of silkworm excrement has not been reported yet. Our previous experiments found that well-known PHA-accumulating bacteria, such as the strains Ralstonia eutropha H16 and Halomonas venusta, could not grow when using silkworm excrement as the sole carbon source [24]. Therefore, this study attempted to isolate and screen endogenous halotolerant PHA-accumulating microorganisms from the silkworm excrement and investigate their open fermentation characteristics when using chlorophyll ethanol-extracted silkworm excrement as the sole or partial carbon source, which can not only improve PHA synthesis and reduce its cost but also realize waste recycling.
Isolation and Identification of the Endogenous Halotolerant Microorganisms in Silkworm Excrement
Our previous study showed that a culture medium with less than 10% sodium chloride cannot inhibit the growth of most endogenous microorganisms in silkworm excrement [24], while a medium with 15% sodium chloride can. Two promising PHA-producing strains, Ralstonia eutropha H16 and Halomonas venusta, were shown to be unable to produce PHAs from silkworm excrement [24]. Therefore, in this study, to screen candidate strains for open fermentation with non-sterilized silkworm excrement as the carbon source, the isolation and identification of endogenous microorganisms from silkworm excrement were carried out using AS-165 medium with a 15% sodium chloride concentration, and 16S rDNA identification was performed on all the different colonies isolated. The results showed that a total of 14 different microorganisms were isolated, of which 10 were bacteria and four were archaea (Table 1). All the strains obtained were 100% identical in 16S rDNA sequence to identified strains in the database, except for the strains Halorubrum aidingense A28 (99.88%), Haloarcula hispanica A85 (99.72%), Halomonas janggokensis B6 (98.89%), and Brachybacterium paraconglomeratum BZ6 (99.72%).
Screening of Endogenous PHAs Accumulating Strains in Silkworm Excrement
After 48-h (for bacteria) and 120-h (for archaea) incubation of the above 14 halotolerant microorganisms in the MGL medium, the fermented cells stained with Nile red were analyzed by fluorescence microscopy. The results showed that only four endogenous microorganisms of the silkworm excrement had significant red fluorescence compared with the genetically engineered PHA synthesis-deficient strain ∆EC, namely, strains A85 (Haloarcula hispanica), A112 (Natrinema altunense), BSF4 (Halomonas salina), and B6 (Halomonas janggokensis) (Figure 1). Further, the PHA accumulation of the above four strains was quantified by gas chromatography. The results showed that all the PHA-accumulating strains initially identified by the Nile red method were able to accumulate PHAs, but their accumulation did not seem to be linearly related to the fluorescence intensity observed by microscopy; the PHA accumulations of strains A85 (0.68 ± 0.05 g/L), A112 (0.69 ± 0.04 g/L), and BSF4 (0.72 ± 0.02 g/L) were higher and close to each other, while that of strain B6 (0.15 ± 0.03 g/L) was significantly lower (Table 2).
After determining the PHA accumulation capacity of the four strains, we further optimized the culture conditions of the selected strains in terms of temperature, salinity, and pH; the parameter gradient settings for each condition are described in the Methods. The results showed that, for strain A85, over the temperature range of 30 to 50 °C, growth at 37 °C and 42 °C appeared to show no significant difference and was better than at the other temperatures. The optimal growth temperature for strain BSF4 was 42 °C, and the optimal growth temperature for strains A112 and B6 was 37 °C for both (Figure 2A). When the salinity range was from 50 to 300 g/L, the optimum growth salinity of strain A85 was 200 g/L, strain A112 grew best at salinities of 100 g/L and 150 g/L, strain B6 had an optimum growth salinity of 100 g/L, and strain BSF4 had optimum growth salinities of 50 g/L and 100 g/L (Figure 2B). When the pH range was from 5 to 9, the optimal growth pH was 6.5 for strain A85, 5 and 6.5 for strain B6, 7 and 8 for strain A112, and 7 for strain BSF4 (Figure 2C).
Based on the optimization of the culture conditions for the above four microorganisms, and in order to reduce the cost of industrial fermentation, the growth conditions chosen were 37 °C, 200 g/L, and pH 6.5 for strain A85; 37 °C, 150 g/L, and pH 7 for strain A112; 37 °C, 100 g/L, and pH 7 for strain BSF4; and 37 °C, 100 g/L, and pH 6.5 for strain B6.
Utilization of Silkworm Excrement by Strains
The PHA accumulation of these four microorganisms was tested using the silkworm excrement as the sole (SE medium) or partial (SM medium) carbon source. Firstly, the results showed that the amount of PHA accumulation increased after optimization of the culture conditions. Strain A85 had the highest PHA production of 0.96 ± 0.06 g/L after 96 h of fermentation using glucose as the carbon source. Additionally, the PHA production on the SM medium was high, up to 75% of that on the MGL medium, at 0.72 ± 0.03 g/L. However, the PHA production on the SE medium with the silkworm excrement as the sole carbon source was lower, at 0.23 ± 0.02 g/L. Meanwhile, strain A112 showed the highest PHA production after 96 h of fermentation using glucose as the carbon source, at 0.71 ± 0.02 g/L. The PHA production on the SM medium was 0.46 ± 0.05 g/L, up to 65% of that on the MGL medium. The PHA production with the silkworm excrement as the sole carbon source was 0.08 ± 0.01 g/L (Table 3). Both haloarchaea showed the highest PHA accumulation in the medium with glucose as the sole carbon source and could accumulate PHAs using the silkworm excrement as the sole carbon source. However, in contrast to the haloarchaea, the two bacteria BSF4 and B6 could not accumulate PHAs using the silkworm excrement as the sole or partial carbon source but could use glucose as the carbon source to accumulate PHB; their highest PHA production was observed after 72 h of fermentation, at 1.79 ± 0.03 g/L for strain BSF4 and 0.17 ± 0.02 g/L for strain B6 (Figure 3A,C,E,G).
Along with the detection of PHA production, we also investigated the growth of the four strains in the silkworm excrement medium using the CFU method. Although the two bacteria could not accumulate PHAs using the silkworm excrement, their growth appeared superior to that of the archaea in the SE medium. The CFU values of strains BSF4 and B6 in the SE medium showed no significant difference from those in the MGL medium (Figure 3F,H), whereas the growth of the two haloarchaea in the MGL medium was significantly better than in the SE or SM medium (Figure 3B,D). In addition, even in the glucose-containing SM medium, the two bacteria could not grow (Figure 3F,H).

Feasibility Study of the Open Fermentation Process
In this study, the two haloarchaea A85 and A112 were tested as potential candidates for open fermentation using the silkworm excrement as the carbon source, due to their significant advantages in terms of yield, salinity tolerance, etc. The open fermentation was performed in the unsterilized SE and SM mediums for 72-144 h. The results of the gas chromatography analysis showed that the two haloarchaea could both accumulate PHAs using the silkworm excrement (Figure 4), and no accumulation of PHAs could be detected in the natural fermentation of the silkworm excrement medium without seed culture inoculation (Figure 4F). Strain A85 accumulated the highest amount of PHAs after 96 h of fermentation in the SM medium, up to 0.81 ± 0.05 g/L, and produced 0.31 ± 0.01 g/L with the silkworm excrement as the sole carbon source; PHA production was slightly increased compared to the sterilized fermentation. The highest production of PHAs by strain A112, after 120-h incubation, was 0.58 ± 0.04 g/L in the SM medium and 0.08 ± 0.01 g/L in the SE medium; its PHA synthesis was similar to that of the sterilized fermentation (Table 4). Although the yield of strain A112 was not as high as that of strain A85, it can be seen from the gas chromatograms that strain A112 accumulated PHAs with a high 3-HV content of about 13.06 ± 0.03%, while strain A85 accumulated PHAs with a lower 3-HV content of 4.49 ± 0.04% (Figure 4).
To study the microbial community composition in the open fermentations, the fermented medium at the time points of the highest PHA production (4 and 6 days, respectively) was used for CFU analysis. More than 20 colonies per sample were randomly selected and sequenced for identification (Figure 5A,B). The results showed that, for strain A85, four halophilic microorganisms were identified in the SE medium, namely Haloarcula hispanica, Natrinema altunense, Halorubrum cibarium, and Gracilibacillus orientalis; among them, strain A85 (Haloarcula hispanica) accounted for 63.50 ± 1.87%, while in the SM medium all the colonies tested were strain A85. For strain A112, five halophilic microorganisms were identified in the SE medium, namely Natrinema altunense, Halorubrum saccharovorum, Marinococcus halotolerans, Alkalibacillus halophilus, and Bacillus qingdaonensis, of which strain A112 (Natrinema altunense) accounted for 62.83 ± 1.55%; in the SM medium, the number of strains decreased, and only Natrinema altunense and Marinococcus halotolerans were identified, with strain A112 accounting for a higher percentage of 92.73 ± 2.58% (Figure 5C). The above results indicated that the two strains of halophilic archaea had a relatively strong ecological advantage in open fermentation, and both were able to maintain about 63% of the colony proportion in the silkworm excrement medium; notably, the addition of glucose significantly enhanced the environmental percentage of the seeded microorganisms.
Discussion
The chlorophyll ethanol-extracted silkworm excrement is an environmental pollutant that is difficult to reuse [6,8]. Two haloarchaea, strains A85 and A112, isolated and identified from the silkworm excrement in this study, were shown to convert this inexpensive carbon source to PHAs under high-salt conditions. After optimization of the culture conditions, the two haloarchaea were proven to be able to accumulate PHAs using chlorophyll ethanol-extracted silkworm excrement as the only carbon source.
High-salt stress also offered the possibility of open fermentation of the silkworm excrement by these two haloarchaea.
Previous research on high-salt open fermentation of agricultural waste for PHA production has been based on microorganisms of the genus Halomonas, including the strain Halomonas campaniensis LS21 for PHA production from converted food waste [22], the utilization of food processing waste as carbon source by the strain Halomonas halophila [11], and the open continuous fermentation study of the strain Halomonas TD01 [21]; the strains Halomonas TD01 and Halomonas halophila achieve high PHA production. However, in our study we found that the endogenous halophilic bacteria identified in the silkworm excrement, Halomonas salina BSF4 and Halomonas janggokensis B6, were both unable to accumulate PHAs using the silkworm excrement as the sole or partial carbon source (Figure 3E,G). On the other hand, the haloarchaea A85 and A112 were better able to use the silkworm excrement for PHA production. There are reports of PHA production by halophilic archaea using industrial wastewater; the halophilic archaea Natrinema altunense and Haloterrigena jeotgali identified from Chott El Jerid Lake were shown to accumulate PHAs using industrial sugar wastewater, with Natrinema altunense being the same species as strain A112 in this study. However, in that study, the accumulation of PHAs was about 0.15 g/L with 2 g/L of glucose added, while strain A85 in this study could accumulate up to 0.3 g/L of PHAs in the medium with silkworm excrement as the only carbon source (Table 4). The accumulation of PHAs was significantly increased when glucose was added as a partial carbon source, and a copolymer with a certain 3-HV content was accumulated.
In this study, a total of 14 halotolerant bacteria or haloarchaea were isolated; two halotolerant bacteria and two haloarchaea could accumulate PHAs in a medium with glucose as the carbon source, as confirmed by the Nile red staining and GC assays, but only the two archaea could accumulate PHAs using the silkworm excrement as the only carbon source. Nile red staining is a common means of detecting PHAs, but because Nile red can also bind to the cell membrane, the method is prone to false-positive results in the absence of a proper negative control. In the present work, a PHA synthase deletion mutant strain, ∆EC, was introduced, and the results showed that it could serve as a suitable negative control for the preliminary determination of PHA accumulation; the results were essentially consistent with those confirmed by the GC method. Most studies have used PCR with degenerate primers for the PHA synthase gene as a primary screening method for PHA-accumulating bacteria, but in our study the PCR method had a much higher false-positive rate than the Nile red staining method [24].
It has been reported that archaea are better able to utilize inexpensive carbon sources than bacteria and are able to accumulate PHAs using unrelated carbon sources [16], which is consistent with our results. Most of the above microorganisms grew significantly better in the medium with glucose as the sole carbon source than with silkworm excrement. It is noteworthy that, even though the SM medium contained a higher concentration of glucose, strains BSF4 and B6 could not grow in it, while they could grow in the medium with the silkworm excrement as the sole carbon source; it seems that the combination of glucose and silkworm excrement produced an inhibitory factor that hindered the growth of the bacteria but did not significantly affect the growth of the archaea. This may be related to the physiological and metabolic characteristics of haloarchaea, but the exact mechanism has not been elucidated.
The two haloarchaea A85 and A112 accumulated PHAs with different characteristics: strain A85 could accumulate a high amount of PHAs (0.96 ± 0.06 g/L) but with a lower 3-HV content (6.65 ± 0.18%), while strain A112 had a relatively high 3-HV content (15.26 ± 1.01%) although its PHA amount was low (0.37 ± 0.02 g/L) (Table 3); this might be related to differences in their 3-HV synthesis-related genes [25]. Among the known archaeal PHA-producing strains, the highest reported 3-HV fraction belongs to the strain Haloferax mediterranei, with a maximum content of about 9.33 ± 0.13 mol% [26], whereas the highest content for strain A112 in this study was 15.26 ± 1.01% in the MGL medium. The halotolerant bacteria isolated in this study could only accumulate PHB in the MGL medium, which is also in accordance with related reports.
In this study, the two PHA-producing haloarchaea isolated and identified under high-salt conditions made open fermentation of the silkworm excrement possible. In a pre-experiment, we tested the microbial growth in silkworm excrement at different salinities with a CFU method [24]; the results showed that on AS-165 agar plates with 10% NaCl, the number of countable microorganisms per mL of the non-sterilized SE extracts was 2 × 10⁵ CFU/mL, while the number of such endogenous microorganisms detected on 15% NaCl AS-165 plates was substantially lower. The optimum growth salt concentrations of the two halophilic archaea isolated in this study were 15% and 20% NaCl, respectively, which met the salinity requirements for open fermentation established in the pre-experiments.
The results of the open fermentation experiments with the silkworm excrement as the sole carbon source showed that, after a 96-h fermentation, strains A85 and A112 were able to occupy the main ecological dominance (their CFU fractions reached more than 60%), and the addition of glucose had a significant effect on improving the ecological dominance of the two archaea: after the addition of 5 g/L glucose, strain A85 had almost undetectable symbionts at the end of fermentation, while strain A112 increased its CFU percentage from 62.83 ± 1.55% to 92.73 ± 2.58% (Figure 5C). This result provides an important basis for optimizing the conditions of subsequent fermentation development.
In addition, the PHA production of the two archaea did not differ significantly between the non-sterilized and sterilized silkworm excrement mediums, and strain A85 even showed a slight improvement, the reason for which has not been elucidated. It may be related to collaborative symbiosis of microorganisms under carbon-poor conditions; in the open fermentation culture, the symbionts in the fermentation product of strain A85 were mainly strain A112, Halorubrum aidingense, and Gracilibacillus orientalis, while the symbionts in the fermentation product of strain A112 did not contain PHA-producing strains, being mainly Halorubrum saccharovorum, Marinococcus halotolerans, Alkalibacillus halophilus, and Bacillus qingdaonensis. Whether open fermentation co-culture of the two strains can optimize the current fermentation method still needs further study.
Pretreatment of Silkworm Excrement and Culture Mediums
The chlorophyll ethanol-extracted silkworm excrement used in this study was provided by Fengming Chlorophyll Company Limited (Haining, Zhejiang Province, China). In order to quantify the nutrients in the water-soluble fraction of silkworm excrement, the total sugars and total nitrogen of the solution extracted from silkworm excrement were determined by the phenol-sulfuric acid method [27] and the Kjeldahl nitrogen determination method [28], respectively. Briefly, 5 g of chlorophyll ethanol-extracted silkworm excrement sample was shaken with 100 mL of 15% sodium chloride solution at room temperature for 2 h at 200 rpm, and the extract was obtained by filtration through analytical filter paper. The total sugar content was measured as 9.425 ± 0.04 g/L, and the total nitrogen content was 0.88 ± 0.05 g/L, which converts to approximately 5.5 ± 0.31 g/L of crude protein.
To facilitate comparison of the effect of the silkworm excrement as a carbon source for microbial fermentation, this study replaced the main carbon source glucose in the MGL medium with a mass of silkworm excrement of equal carbon content, approximately 53 g/L; this medium was named SE. The medium in which the MGL medium was mixed with the SE medium in equal volumes was called SM.
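The two back-of-envelope calculations behind these recipes can be reproduced as below. The 6.25 nitrogen-to-protein factor is the conventional Kjeldahl conversion, and the 10 g/L glucose content assumed for the MGL medium is not stated in this excerpt; it is chosen because it reproduces the reported ~53 g/L figure.

total_N = 0.88                    # g/L Kjeldahl nitrogen of the extract
crude_protein = total_N * 6.25    # = 5.5 g/L, as reported above

sugar_per_g = 9.425 / 50.0        # g soluble sugar per g excrement (5 g per 100 mL)
glucose_in_MGL = 10.0             # g/L, assumed
excrement_needed = glucose_in_MGL / sugar_per_g
print(round(excrement_needed))    # ~53 g/L, the SE medium loading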
The PCR was performed in a 20-µL reaction volume containing 10 µL of 2× Accurate Taq Master Mix (Vazyme), 1 µL of forward primer (100 µmol/L), 1 µL of reverse primer (100 µmol/L), 1 µL of template DNA, and ddH2O up to 20 µL. The PCR program was as follows: 95 °C for 3 min; 30 cycles of 95 °C for 30 s, 54 °C for 30 s, and 72 °C for 90 s; and a final extension at 72 °C for 10 min. The PCR products were analyzed by 1.0% agarose gel electrophoresis. The full-length 16S rDNA PCR products were cloned into a T-vector (pGEM-T, Promega, USA), and the plasmids isolated from positive colonies were sequenced on an ABI 3730XL platform for further identification. All the strains were stored at −80 °C in 25% glycerol for further study.
Detection of PHA Production Using a Microscopy Approach and Gas Chromatography
The primary screening of PHA-producing halophiles was performed using the Nile red method [30]. The target strains were inoculated into MGL medium and cultured until the late logarithmic or stationary phase; 1 mL of culture was harvested by centrifugation (60 s, 13,000 rpm), the supernatant was discarded, and the cell pellet was resuspended in 15% NaCl solution. Each 40-µL aliquot of cells was stained by adding 10 µL of Nile red solution (0.1 mg/mL) and incubating for 20 min in a lightproof tube. The cells were observed with fluorescence microscopes (EVOS FL, Thermo Scientific; Leica DM4, Wetzlar, Germany). To exclude false positives introduced by nonspecific staining of the cell membrane by Nile red, we used the PHA synthase-deficient mutant halophilic strain Haloferax mediterranei ∆EC as the negative control; the strain ∆EC was obtained from Xiang's Lab at the Chinese Academy of Sciences (Beijing, China) [26].
The quantification of PHA production of the target strains was performed using gas chromatography (GC) [31]. The fermented cells were harvested by centrifugation and freeze-dried to collect the powder. A certain amount of powder (approximately 80 mg) was put into an esterification tube, and 4 mL of esterification solution was added, consisting of 2 mL of chloroform and 2 mL of 3% (v/v) concentrated sulfuric acid in methanol containing benzoic acid (1 g/L); the esterification reaction was carried out at 100 °C for 4 h. The organic phase was analyzed by gas chromatography using an Agilent Technologies 7890A chromatograph with an injection volume of 1 µL, an inlet temperature of 200 °C, a detector temperature of 220 °C, and an initial column temperature of 80 °C held for 1.5 min, followed by a ramp to 140 °C at a rate of 30 °C/min and a second ramp at 40 °C/min. A PHBV standard (Sigma-Aldrich, catalog no. 403121, 12 mol% PHV content) was used as the standard control.
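An illustrative internal-standard calculation for such a GC assay is sketched below: monomer masses are estimated from the ratio of analyte to benzoic acid peak areas. The response factors and peak areas are hypothetical placeholders that would in practice be calibrated against the PHBV standard; the 2 mg of internal standard follows from 2 mL of the 1 g/L benzoic acid solution.

def monomer_mass_mg(area_analyte, area_istd, istd_mg, response_factor):
    # Internal-standard quantification: analyte mass scales with the
    # analyte/internal-standard peak-area ratio.
    return response_factor * (area_analyte / area_istd) * istd_mg

istd_mg = 2.0                                            # 2 mL x 1 g/L benzoic acid
m_3hb = monomer_mass_mg(5200.0, 3100.0, istd_mg, 1.2)    # hypothetical areas/factors
m_3hv = monomer_mass_mg(780.0, 3100.0, istd_mg, 1.1)
hv_fraction = m_3hv / (m_3hb + m_3hv)                    # 3-HV mass fraction of the PHBV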
Optimization of Culture Conditions
The seed cultures of the target strains at the late log phase were inoculated at a ratio of 1:10 into AS-165 medium separately and cultivated at 37 °C and 200 rpm. The AS-165 medium was used as the basic medium for investigating the optimum sodium chloride concentration, pH, and culture temperature of the PHA-accumulating strains, with the following gradient settings: temperature (30, 37, 42, and 50 °C); salinity (5, 10, 15, 20, 25, and 30%); and pH (5, 6, 6.5, 7, 8, and 9). The OD600 was used to characterize the growth status of the microorganisms. Finally, the specific growth rates (u) were calculated for the exponential growth period (u = 0.693/td, where td is the doubling time in h).
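The growth-rate relation used here, u = 0.693/td, follows directly from exponential growth; as a small illustration (with made-up OD600 readings), td can be estimated from the slope of ln(OD600) versus time during the exponential phase.

import numpy as np

t = np.array([0.0, 4.0, 8.0, 12.0])        # h, within the exponential phase
od = np.array([0.10, 0.18, 0.33, 0.60])    # illustrative OD600 readings
u, _ = np.polyfit(t, np.log(od), 1)        # specific growth rate u (1/h)
td = 0.693 / u                             # doubling time, consistent with u = 0.693/td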
Feasibility Study on the Open Fermentation Process
The SE medium without sterilization was used to study the possibility of open fermentation of the silkworm excrement, and the accumulation of PHAs was quantified by the GC method under the same conditions as for sterilized fermentation. The growth of halophiles was followed using the CFU method instead, owing to interference from the native color of the SE medium. The seed cultures were inoculated into the SE medium at a ratio of 1:10 for fermentation; at the endpoint of the fermentation, the fermented broth was diluted in an appropriate proportion (10⁻⁵-10⁻⁷) and spread on AS-165 medium plates. More than 20 colonies were randomly picked for 16S rDNA identification. The ratio of the dominant strain was calculated by dividing the number of target colonies by the total number of picked colonies.
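The CFU bookkeeping described here amounts to the small calculation below; the colony counts and the 0.1 mL plated volume are illustrative assumptions.

colonies = 42
dilution = 1e-6                   # within the 10^-5 to 10^-7 range used
plated_ml = 0.1                   # assumed plated volume
cfu_per_ml = colonies / (dilution * plated_ml)   # 4.2e8 CFU/mL of broth

picked, target = 22, 14
dominance = target / picked       # ~63.6%, cf. the ~63.5% reported for A85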
Statistical Analysis
The results in this study were expressed as the means ± SD. Sequence homology was analyzed by the BLAST service (National Center for Biotechnology Information. http://blast.ncbi.nlm.nih.gov/Blast.cgi, accessed on 24 October 2021) in the National Center for Biotechnology Information (NCBI) [32]. Statistical data analysis was performed using the one-way ANOVA method. p < 0.05 was considered statistically significant. Three independent experiments were performed for each result. | 2021-12-03T16:20:13.500Z | 2021-11-25T00:00:00.000 | {
"year": 2021,
"sha1": "1974ef1120d5baab50ff466d4bf370861dc41c8c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/26/23/7122/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "59662bd4d713f52354c3fe0a7c93da1879e126a0",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16547095 | pes2o/s2orc | v3-fos-license | Facial profile parameters and their relative influence on bilabial prominence and the perceptions of facial profile attractiveness: A novel approach
Objective To evaluate the relative importance of bilabial prominence in relation to other facial profile parameters in a normal population. Methods Profile stimulus images of 38 individuals (28 female and 10 male; ages 19-25 years) were shown to an unrelated group of first-year students (n = 42; ages 18-24 years). The images were individually viewed on a 17-inch monitor. The observers received standardized instructions before viewing. A six-question questionnaire was completed using a Likert-type scale. The responses were analyzed by ordered logistic regression to identify associations between profile characteristics and observer preferences. The Bayesian Information Criterion was used to select variables that explained observer preferences most accurately. Results Nasal, bilabial, and chin prominences; the nasofrontal angle; and lip curls had the greatest effect on overall profile attractiveness perceptions. The lip-chin-throat angle and upper lip curl had the greatest effect on forehead prominence perceptions. The bilabial prominence, nasolabial angle (particularly the lower component), and mentolabial angle had the greatest effect on nasal prominence perceptions. The bilabial prominence, nasolabial angle, chin prominence, and submental length had the greatest effect on lip prominence perceptions. The bilabial prominence, nasolabial angle, mentolabial angle, and submental length had the greatest effect on chin prominence perceptions. Conclusions More prominent lips, within normal limits, may be considered more attractive in the profile view. Profile parameters have a greater influence on their neighboring aesthetic units but indirectly influence related profile parameters, endorsing the importance of achieving an aesthetic balance between relative prominences of all aesthetic units of the facial profile.
INTRODUCTION
A growing body of evidence suggests that perceptions of facial profile attractiveness have changed and will continue to change over time. 1,2 For the lips, some authors have suggested that fuller, more prominent lips may be perceived as more youthful and consequently, more desirable from an aesthetic viewpoint. 1,2 If this change in perception is true, particularly since modern orthodontics is partly demanded and undertaken to improve facial attractiveness, it would have potentially important consequences to both orthodontic treatment planning and hard and soft tissue surgery, which can influence lip prominence.
A study by Auger and Turley, 1 which assessed periodical fashion magazines spanning over 100 years, found that perceptions of the ideal female facial profile have changed throughout the 20th century. Ideals of facial beauty appear to have changed with a trend toward more protrusive lips and increased vermilion display.
Another study by Nguyen and Turley 2 examined fashion magazine photographs of male models over the last 65 years from publications such as Harper's Bazaar, Vanity Fair, Vogue, and Cosmopolitan. Their findings showed that the perceptions of the male model profile have changed significantly with time, especially with respect to the lips. There has been a trend towards increasing lip protrusion, lip curl, and vermilion display. However, facial convexity and measurements in the region of the forehead and nose, including the nasofrontal angle, nasal tip angle, and nasal base angle, have remained unchanged over time. Linear measurements of the upper and lower lips to the E-line have significantly reduced with time, suggesting an increase in lip protrusion. Additionally, a significant decrease in the interlabial angle with time (representing increased lip projection/lip curl) was observed. The labiomental angle has also increased with time, again suggesting an increased lip curl. However, vertical facial heights, i.e., the upper, middle, and lower, did not change significantly with time. The authors surmised that fuller lips were perceived to be more youthful.
Age changes in the lips are well documented and are perceived as a natural flattening of the facial profile with age, indicated by less protrusive lips and a flatter soft tissue profile with increased age. [3][4][5] Furthermore, there is increasing ethnic diversity among fashion models; for example, African models have more voluptuous lips. Yehezkel and Turley 6 evaluated changes in the profiles of African-American women presented in fashion magazines during the 20th century. The photographs of women were divided into six groups corresponding to the decade in which they were published. Twenty-six variables were measured, and significant between-group differences (p < 0.01) were found for the anteroposterior lip position, the nasolabial angle, and the interlabial angle, with increased fullness and more anteriorly positioned lips in the more recent decades. No significant differences were found for the nasofrontal angle, the nasal tip angle, and the relationship of the chin to the upper face (total facial angle). A low mean total facial angle (convex profile) was consistently observed, and a number of subjects may have had a Class II skeletal relationship. The authors concluded: "Esthetic standards for the African American female profile have changed during the 20th century, and similar to standards for the white profile, show a trend towards fuller and more anteriorly positioned lips". 6 Thus, it is questionable whether facial aesthetic standards of the past are applicable to present-day aesthetic facial analysis.
Meanwhile, the lay public and professionals have different profile preferences. 7-10 Hall et al. 11 published a study designed to assess the perceived optimal profiles of African-Americans versus white Americans. A survey was conducted using profile silhouettes of 30 African Americans and 30 white patients, ranging in age from 7 to 17 years. Twenty white orthodontists, 18 African-American orthodontists, 20 white laypersons, and 20 African-American laypersons evaluated the profiles. The preference of each rater for each of the 60 profiles was scored on a visual analog scale. Eighteen cephalometric variables were measured for each profile, and statistical analyses were performed on the profiles. The results showed that the following six cephalometric variables were significant: the Z-angle, skeletal convexity at A-point, upper lip prominence, lower lip prominence, nasomental angle, and mentolabial sulcus. All raters preferred the African American sample to have a greater profile convexity than they preferred for the white sample. 11 The raters preferred the African-American sample with upper and lower lips that were more prominent than the white sample. However, only the choice of African-American orthodontists in the African-American sample was significantly different for this parameter. White orthodontists gave the highest mean scores for the profile chosen; whereas, African-American laypersons gave the lowest scores.
When a patient is assessed in profile, the positions of the profile features are usually assessed in relation to each other. The lips are most often related to the relative prominence of the nose and chin. 12 More specifically, the evaluation of lip prominence may be undertaken in relation to certain reference lines, 12 which include the E-line (Ricketts 13,14 ), S-line (Steiner 15,16 ), Z-line (Merrifield 17 ), H-line (Holdaway 18,19 ), Subnasale-Pogonion line (Burstone 20,21 ), and Riedel plane (Riedel 22 ). Alternatively, the prominence of the lips may be related to a true vertical line passing through subnasale. 12 The purpose of this investigation was to evaluate the relative importance of bilabial prominence in relation to overall facial profile attractiveness and the relative prominences of other facial profile parameters in a normal population, using a different approach from traditional attractiveness perception research.
MATERIALS AND METHODS
Previous research in this field has predominantly used one of two methodologies: profile silhouette manipulation or digital photographic manipulation. In traditional attractiveness research, a facial profile is chosen or created, and then only one facial parameter is incrementally altered to create a series of images, which are then rated in terms of attractiveness by a group of observers. In the present study, a novel methodology was used. The facial profiles of normal male and female subjects were used, but the observers were asked specific questions regarding the subjects' attractiveness in order to rate each profile parameter that may influence the perceptions of attractiveness (e.g., do you think the chin is too far forward, just right, or too far back?). The responses were analyzed in relation to the angular and linear aesthetic analysis of each subject image to assess whether any trends were apparent (Figures 1 and 2). Unaltered images were used to most closely simulate real-world scenarios of how observers discern facial profile attractiveness. To make inferences about a population from a sample, a minimal number of representative individuals should be included. While university students may not always be adequately representative of the overall population, they were willing participants, and this population provided enough numbers for group separation in terms of age, gender, and ethnic background. Since the scatter of the data from previous studies was not defined and there is no industry-accepted value of clinical significance for the profile characteristics under investigation, a sample size of at least 40 was recommended. The present study was part of an ongoing project that had commenced in the year 2000, for which images had been obtained. After 2005, ethical permission from King's College London was required for the use of students' images in this study; had this been a new project, ethical committee approval would have been deemed mandatory. Where identifiable images of dental students were used without anonymization, their written informed consent for participation in the study was obtained.
In the first phase of this investigation, profile photographs of King's College London students in their 3rd to 5th years were taken in a standardized manner by one operator (first author) following a standardized protocol: the students' spectacles and any head coverings were removed, and right-sided profile photographs were taken against a plain background. Participants held a ruler parallel to their face to indicate the facial midline with respect to the photographer, allowing linear measurements to be recorded during analysis of the images without the risk of magnification errors invalidating the measurements. Thirty-eight students (28 female, 10 male) agreed to take part in the photographic acquisition phase; an example is shown in Figure 3.
In the second phase, the aforementioned photographs were shown to an unrelated group of first-year students. [Figure 4 shows the questionnaire response options; for each feature, e.g., the forehead, nose, and chin: 5 = much too prominent, 4 = a little too prominent, 3 = ideal prominence, 2 = a little retrusive, 1 = much too retrusive.] Most of these students had just arrived for their first year and thereby did not know the individuals in the stimulus photographs. However, three of these observers did indicate cognizance of at least one individual and were excluded from the study. We obtained 42 completed results. The photographs were individually viewed on a 17-inch computer monitor, with the observers receiving standardized instructions before viewing. The observers were asked not to talk about their experimental experience with their colleagues. No time limit was set for completion of the questionnaires, but most observers required approximately 30 minutes. The questionnaire consisted of six questions (Figure 4), each with five possible answers, arranged in a Likert-type scale for questions 1 and 3-6 and as a simple list for question 2. The measurements were tabulated into a Microsoft Excel spreadsheet and forwarded to a professional biostatistician (MS) for analysis.
Statistical methodology
Statistical analysis was performed using the statistical program Stata, version 12.1 (StataCorp LP, College Station, TX, USA). Questions 1 and 3-6 were analyzed by ordered logistic regression, 23 because the response variable had more than two values and was categorically ordered (i.e., a larger value corresponded to a higher response). Question 2 was just a selection of a feature rather than a graded response, so a simple frequency analysis was used with the null hypothesis that all features had an equal likelihood of being chosen. The variables in question 2 were not ordered in a natural manner, as in questions 1 and 3-6.
Then, the Bayesian Information Criterion (BIC) was used to select the subset of variables that had the greatest effect on how the observers answered each question. 24 The BIC is a criterion used to select the model that is most valid for the data: from the range of all possible models, it identifies the best model for the data set by penalizing models whose parameters only add complexity rather than validity. Bayesian probability is the name given to several related interpretations of probability that treat probability as a degree of partial belief rather than a frequency, which allows the application of probability to a greater variety of propositions.
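A minimal sketch of this pipeline in Python is given below, using statsmodels' ordered (logit) model and a manual BIC computation; the data, variable names, and subset sizes are illustrative placeholders rather than the study's actual measurements. Exponentiating the fitted slope coefficients yields odds ratios of the kind reported in Tables 1 and 4-7.

import numpy as np
import pandas as pd
from itertools import combinations
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 3)), columns=['pr', 'll', 'nfr'])
y = pd.Series(pd.Categorical(rng.integers(1, 6, size=200), ordered=True))

def bic(subset):
    # BIC = k*ln(n) - 2*log-likelihood; lower is better.
    res = OrderedModel(y, X[list(subset)], distr='logit').fit(method='bfgs', disp=False)
    return len(res.params) * np.log(len(X)) - 2 * res.llf

subsets = [s for r in (1, 2, 3) for s in combinations(X.columns, r)]
best = min(subsets, key=bic)   # variable subset with the lowest BIC
# np.exp(slope estimates) gives the odds ratios (OR > 1: higher ratings).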
RESULTS
The BIC inferred that the zero-meridian line to the pronasale (pr), zero-meridian line to the labrale inferius (ll), labrale inferius to Rickett's E-line (lle), zero-meridian line to the soft tissue pogonion (poe), nasofrontal angle (nfr), upper lip curl (ulc), and lower lip curl (llc) had the greatest effect on how observers answered question 1 (Table 1). Figure 5 shows the influence of these variables on each response. The plots are probabilities associated with each value of the independent variable.
There were 38 images viewed by 42 observers, giving 38 × 42 = 1,596 responses; with five response choices per question, the expected count per choice under the null hypothesis is 1,596 / 5 = 319.2. In Figure 6, the vertical line represents the value 319.2, which corresponds to the null hypothesis (Tables 2 and 3).
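The question 2 analysis can likewise be sketched as a chi-square goodness-of-fit test against the equal-likelihood null; the observed counts below are hypothetical placeholders for the real frequencies in Table 2.

```python
# Chi-square goodness-of-fit for question 2: under the null hypothesis
# each of the five features is equally likely to be chosen, so the
# expected count per feature is (38 * 42) / 5 = 319.2.
from scipy.stats import chisquare

observed = [410, 180, 390, 350, 266]   # hypothetical counts, summing to 1596
expected = [38 * 42 / 5] * 5           # 319.2 per feature
stat, p = chisquare(observed, f_exp=expected)
print(f"chi2 = {stat:.1f}, p = {p:.3g}")
```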
The BIC inferred that the lip-chin-throat angle (lcta) and ulc had the greatest effect on how observers answered question 3 (Table 4). Figure 7 shows the influence of these variables on each response.
The BIC inferred that the zero-meridian line to the labrale superius (ul), ll, nasolabial angle (nla), lower component of the nasolabial angle (lnla), and mentolabial angle (mla) had the greatest effect on how observers answered question 4 (Table 5). Figure 8 shows the influence of these variables on each response.
The BIC inferred that the labrale superius to the Rickett's E-line (ule), ll, poe, nla, lnla, nasofacial angle (nfa), submental length (sml), and llc had the greatest effect on how observers answered question 5 (Table 6). Figure 9 shows the influence of these variables on each response.

The BIC inferred that ul, ll, nla, lnla, mla, nfa, sml, and ulc had the greatest effect on how observers answered question 6 (Table 7). Figure 10 shows the influence of these variables on each response.
DISCUSSION
An odds ratio (OR) describes the strength of association between two variables. In the odds ratio representation, a ratio of 1 indicates no effect; if OR > 1, an increase in the parameter value indicates an increase in the response, and if OR < 1, an increase in the parameter corresponds to a decrease in the response. For example, in the OR representation in Table 1, increased values of ll, lle, ulc, and llc are related to increased attractiveness, as are decreased values of pr, poe, and nfr. This is consistent with previous works by Auger and Turley, 1 Nguyen and Turley, 2 Yehezkel and Turley, 6 and Hall et al., 11 which showed an increasing preference towards a more protrusive lip position. However, the present study also found that an increased distance from the lle was associated with increased attractiveness according to the responses to question 1, which contradicts both Auger and Turley's 1 and Nguyen and Turley's 2 findings. Furthermore, Hall et al. 11 used a different age group (7−17 years) that did not overlap with the age group of observers in this study, and age-related changes of the lips have been demonstrated. 3,4 Analysis of the question 2 results revealed that many observers perceived the prominence of both the nose and lips to be more important than the prominence of the forehead or chin, or any other profile feature. Figure 6 illustrates this graphically, and Table 3 numerically describes the variance from the null hypothesis that all profile features have an equal likelihood of being chosen. The importance of lip prominence in question 2 again supports the research cited above. In question 3, there was an OR value of 1.18 for ulc, which was even higher than the OR for ulc in question 1 (OR = 1.13); this supports the conclusions of Auger and Turley 1 and Nguyen and Turley. 2 In contrast, question 4 suggests that a less protrusive upper lip is a positive aesthetic value when observers rate the prominence of the nose. Lower lip prominence, within normal limits, was again associated with greater perceived attractiveness. Question 5 asked observers to rate the prominence of the lips; again, the ORs displayed preferences towards increased lower lip prominence but also for decreased llc.
Overall, the OR was most positive (OR = 1.20) for the ll prominence and the llc (OR = 1.18) in question 1, the ulc in question 3 (OR = 1.18), and the ll prominence in question 5 (OR = 1.15). The lowest OR (signifying a decrease in the likelihood of the response) was 0.84, for the ulc in question 5.
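For reference, an ordered-logit coefficient and its reported OR are related by simple exponentiation, as the snippet below illustrates for the largest OR above.

```python
# OR = exp(beta): an OR of 1.20 for ll means each 1 mm increase in ll
# multiplies the odds of a higher (more favourable) response by 1.20;
# an OR below 1, such as 0.84, divides them instead.
import math

beta_ll = math.log(1.20)   # the coefficient implied by OR = 1.20
print(f"beta = {beta_ll:.3f}, OR = {math.exp(beta_ll):.2f}")
```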
The plots demonstrate the contribution of each variable (e.g., ulc or nla) to each level of response (answering 1−5 on the questionnaire) within each of the questions (1 and 3−6). For example, if question 6 is considered, then of the 15 possible variables, the response is best described statistically by the subset of 8 variables in Figure 10 and Table 7 (i.e., the ul, ll, nla, lnla, mla, nfa, sml, and ulc). A trend can be discerned: as the response progresses from 1 to 5, the contribution of the ulc decreases while that of the ul increases. Thus, when observers considered the chin position in the profile view to be much too retrusive (response 1), it was more likely because of the value of the ulc and less likely because of the value of the ul. Similarly, with response 5 in question 6, if the observer believed the chin to be much too prominent, their response was most likely influenced by the ul and least likely influenced by the ulc. The plots involve average marginal effects, i.e., the effect of a unit change in the parameter, holding the other variables constant and averaging over all observations. Numerically, if response 1 for question 6 is considered, then a 1-unit change in the ulc (i.e., a 1 mm increase) will change the probability of observing a very retrusive chin by 0.015, or 1.5%. There were other discernible trends in the plots.

Question 1 - If the observer rated the image attractive or very attractive, they were most likely to have been influenced by the ll and llc values and not by the poe or pr. If the observer rated the image unattractive or very unattractive, it is likely that they were influenced most by the pr and poe values and not by the ll or llc values.
Question 3 - With regard to the position of the forehead in the profile view, if the observer responded that the forehead was retrusive or very retrusive, it is most likely that they were influenced by the lcta and least likely influenced by the ulc. If the observer responded that the forehead was too prominent or much too prominent, it is most likely that this response was influenced by the ulc value and least likely influenced by the lcta value.

Question 4 - With regard to the position of the nose in the profile view, if the observer responded that the nose was retrusive or very retrusive, it is most likely that they were influenced by the ul and least likely influenced by the ll. If the observer responded that the nose was too prominent or much too prominent, it is most likely that this response was influenced by the ll value and least likely influenced by the ul value.
Question 5 -With regard to the position of the lips in the profile view, if the observer responded that the lips were retrusive or very retrusive, it is most likely that they were influenced by the ule, poe, and llc and least likely influenced by the ll. If the observer responded that the lips were too prominent or much too prominent, it is most likely that this response was influenced by the ll value and least likely influenced by the ule, poe, and llc values as measured on the stimulus photographs.
Question 6 - Question 6 has been interpreted in the aforementioned text. However, with regard to the position of the chin in the profile view, observers displayed positive ORs for the ll, lnla, nfa, and sml and negative ORs for the ule, poe, nla, and llc.
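The average marginal effects underlying these per-question plots can be sketched as follows; `predict_prob` is a hypothetical stand-in for the fitted ordered-logit model's predicted probability of a given response level.

```python
# Average marginal effect (AME) of `var` on the probability of a given
# response level: bump the variable by one unit for each observation,
# hold everything else constant, and average the change in the
# predicted probability over all observations.
def average_marginal_effect(rows, predict_prob, var, level, delta=1.0):
    changes = []
    for row in rows:
        bumped = dict(row)
        bumped[var] = row[var] + delta
        changes.append(predict_prob(bumped, level) - predict_prob(row, level))
    return sum(changes) / len(changes)
```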
To the best of our knowledge, an analysis of this detail has not been undertaken previously; most analyses stop at the overall model without examining the responses to each item. Further investigation is required to analyze the merit of the trends described, which would potentially provide useful data for facial aesthetic analysis. Bars on the plots represent the 95% confidence intervals (refer to Figures 1 and 2).
CONCLUSION
Perceptions of facial profile attractiveness are multifactorial. This investigation provides support for the hypothesis that more prominent lips, within normal limits, are perceived to be more attractive in the profile view. It appears that profile parameters have a greater influence on their neighboring aesthetic units (e.g., the nasolabial angle has a considerable influence on perceptions of nasal prominence). However, the results also provide evidence that more distant profile parameters also influence perceptions of indirectly related profile parameters (e.g., the nasofrontal angle on chin prominence). This further endorses the importance of achieving aesthetic balance between the relative prominences of all the aesthetic units of the facial profile.
"year": 2014,
"sha1": "7e4338e31109eaa04b3a9da1373efbff36f69696",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc4130914?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "7e4338e31109eaa04b3a9da1373efbff36f69696",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Analysis of Social Responsibility Implementation Plan to Profitability Companies in Bank Syariah
This paper describes Corporate Social Responsibility (CSR), one expression of a company's concern for its environment. Companies today aim to maximize profits but are also required to accommodate the needs of the community and their stakeholders. This study was conducted to provide empirical evidence of the influence of CSR on a company's profitability. The research was conducted on PT Bank Syariah Mandiri (BSM). The data used are the financial statements issued by the company each year during the period 2002-2013. Using simple regression analysis, the results showed that during 2002-2013 BSM cooperated with LAZNAS BSM/partner organizations in distributing the company's alms funds and implementing humanitarian programs. The routine programs that BSM has carried out, in synergy with LAZNAS, include the People Partner program; Micro Development, the development and economic empowerment of the people through capital assistance, training and mentoring of individual businesses; educational assistance (scholarships) to those in need, including learning facilities; the Community Development Program; religious programs; public facilities; and others. Moreover, the results of this study indicate that the implementation of CSR significantly affects the company's profitability.
Introduction
The world of business no longer pays attention only to a company's financial records (the single bottom line) but also to social and environmental aspects, the so-called triple bottom line. The synergy of these three elements is the key concept of sustainable development. The concept of the triple bottom line, or "3P" (profit, people, and planet), formulated by John Elkington and cited in Wibisono (2007: 56), explains that if a company wants to be sustainable, then in addition to pursuing profit it must also make a positive contribution to society and actively participate in protecting the environment. Some companies have in fact already been carrying out CSA (Corporate Social Activity), or corporate social activities. Although not named CSR, such action is close to the concept of CSR, representing a form of corporate participation in and concern for social and environmental aspects. Through the concept of corporate social investment, the Ministry of Social Affairs has since 2003 been recorded as a government agency active in developing the concept of CSR and advocating for it among national companies. In the public marketplace of ideas, the term "corporate governance" has recently been described as "the set of processes, customs, policies, laws and institutions affecting the way in which a corporation is directed, administered or controlled." 12 Yet the substance attributed to this definition has changed quite dramatically over the past years, shifting from a functional, economic focus on agency problems within a private law sphere to a public policy approach that seeks to protect investors and nonshareholder stakeholders. The evolution in the perception of corporate governance reflects broad changes in the socio-legal view of business corporations (Gill, 2008). Law No. 40/2007 states that social and environmental responsibility is the company's commitment to participate in sustainable economic development in order to improve the quality of life and of the environment, in ways beneficial to the company itself, the local community and society at large. Law No. 40/2007 also states that a company conducting its business activities in, or related to, the field of natural resources is required to implement social and environmental responsibility. Islamic banking CSR programs should actually touch the basic needs of the community to create equitable economic prosperity for society. Based on the arguments above, this research is intended to analyze the implementation of the company's social responsibility funds and their effect on profitability at PT Bank Syariah Mandiri.
Literature Review
Corporate social responsibility (CSR) disclosure has received an increasing amount of attention in both the academic and business fraternities. Such disclosure encompasses the provision of information on human resource aspects, products and services, involvement in community projects/activities and environmental reporting (Mohd-Ghazali, 2007).
Environment:
The overall level of environmental disclosure by Malaysian companies remained low despite government efforts and campaigns to improve the environment. Casual evidence suggests, however, that some companies contributed to improving the environment but did not disclose the fact in their annual reports. As a result, the users of the annual reports do not know this and may conclude that the company did nothing towards protecting the environment (Jamil, Alwi & Mohamed, 2002). A company will acquire social legitimacy and maximize its long-term financial strength through social programs, that is, through the application of CSR; something similar can be expected of banking companies, in this case Islamic banking. Agency theory and signalling theory are rooted in the idea of asymmetric information, which holds that in some economic transactions inequalities in access to information distort the normal market for the exchange of goods and services. Chauvey, Giordano-Spring, Cho and Patten (2015) measure both the space and the quality of CSR disclosures, including in the latter a measure based on informational quality attributes as discussed by the International Accounting Standards Board, the Financial Accounting Standards Board, and the Global Reporting Initiative. They find significant increases in the space allocated to CSR disclosure, as well as some evidence of increased quality, although the informational quality of the disclosures remains quite low and fewer firms are including negative performance information in their reports. Finally, they document that differences in disclosure space and quality in 2004 appeared to be associated with legitimacy-based variables and that those relations remained largely unchanged in 2010. As such, it appears that the NRE's goals of increased transparency remain unmet (Chauvey, Giordano-Spring, Cho & Patten, 2015).
Profitability: Using a quasi-natural experiment that mandates a subset of listed firms to issue corporate social responsibility (CSR) reports, Shi, Hung and Wang (2015) examine the effect of mandatory CSR disclosure on market information asymmetry in China, estimating information asymmetry using high-frequency trade and quote data. They find that, contrary to the criticism that mandatory CSR disclosure lacks credibility and relevance in emerging markets, mandatory CSR reporting firms experience a decrease in information asymmetry subsequent to the mandate. In addition, the decrease in information asymmetry is more pronounced among firms with greater political/social risks, poorer information environments, and better CSR reporting quality. Additional analyses suggest that, relative to mandatory CSR disclosure, voluntary CSR disclosure is part of a firm's political/social strategy and has higher CSR reporting quality. However, the effect of voluntary CSR disclosure on information asymmetry is limited unless CSR reporting is widespread (Shi, Hung, & Wang, 2015). There has been little attempt to develop a model that can evaluate corporate responsibility (CR) in diverse environments with differing regulatory and market settings. Ce (2011) attempts to fill this gap by developing a conceptual framework that focuses on legal systems and is tested empirically in the European context. The results confirm the validity of this conceptual framework in the European environment: the analysis of corporate environmental responsibility (CER) shows a direct correlation between intense regulation and high corporate ratings. With regard to CSR, the findings are more ambiguous; while civil and German civil laws significantly influence CSR, common and Scandinavian civil laws do not. Furthermore, mean test results indicate that corporate social rating averages differ only slightly across the countries of Europe, despite a variety of legal systems, and the article suggests ways to interpret this finding. Second, that study adds to the sparse but growing literature assessing the links between CR and corporate financial performance in the European market; most relevant research focuses on the US market. One reason why earlier studies have not considered the effect of financial performance on CR in the European market is that CR data for European firms are produced by very few rating agencies and are not widely available (Ce, 2011). In this study, the company's profitability is measured by the return on assets ratio derived from the financial data that are the object of the research over the period 2002 to 2013. This ratio is important for determining the profitability of a company. Return on assets is a measure of the effectiveness of the company in generating profits by exploiting its assets. ROA is calculated as follows:
Return on Assets (ROA) = Net Income / Total Assets
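As a minimal illustration, the ratio can be computed directly from two figures taken from an annual report; the numbers below are hypothetical and are not BSM's actual financials.

```python
def return_on_assets(net_income: float, total_assets: float) -> float:
    """Return on Assets = Net Income / Total Assets."""
    return net_income / total_assets

# Hypothetical figures (billions of rupiah):
print(f"ROA = {return_on_assets(651.2, 63_965.4):.2%}")   # -> about 1.02%
```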
CSR was the subject of an agreement of the World Business Council for Sustainable Development (WBCSD) in Johannesburg, South Africa, in 2002, which aimed to encourage all companies of the world to create sustainable development by working with the company's employees, their families, communities and the local community as a whole to improve quality of life. More specifically, Bott (2014) shows (1) that the breadth of CSR disclosure (using two different measures of disclosure extensiveness) has grown dramatically, (2) that there is no significant change in the relation between legitimacy variables and differences in CSR disclosure, and (3) that differences in CSR disclosure (using either of the breadth measures) were not significant in explaining differences in the market value of firms in the late 1970s and continue to be insignificant today. In general, those results suggest that CSR disclosure, while more extensive today than it was three decades ago, fails to provide information that is relevant for assessing firm value (Bott, 2014). According to Nafarin (2007: 46), a budget is a financial plan drawn up periodically on the basis of programs that have been approved.
A budget is a written plan of the activities of an organization, drawn up quantitatively and generally expressed in units of money for a certain period. Based on the proportion of corporate profits and the amount of the CSR budget, Sudharto (2008) (2004) indicates that the relationship of CSR with financial performance (as seen from the profitability ratios ROA, ROE, and ROS) is positive and statistically significant. This means that there is a positive association between CSR and profitability.
Methodology
This research was conducted at PT Bank Syariah Mandiri, at the branch office at Jalan Andi Djemma 4, Palopo, South Sulawesi province, Indonesia. To analyze and interpret the data properly, accurate and systematic data are needed so that the results obtained can describe the situation of the object under study correctly. In the data collection phase, the techniques used in this study were as follows. Population and sample: the population used is Bank Syariah Mandiri's annual reports for the past 12 years; because this population is small, the entire population also serves as the sample in this study. The type of data used in this research is quantitative data, i.e., data obtained and presented in the form of figures, namely the calculated budget and realization amounts from year to year. In addition, this study also uses data from interviews, in the form of statements from informants, in this case bank officials with expertise in the field; from these, the author obtained information on the social responsibility activities undertaken by Bank Syariah Mandiri, based on the annual reports that have been published. The analysis used in this study is simple linear regression analysis, using SPSS version 21. Before performing the linear regression analysis, the classical regression assumptions were tested in order to obtain good results; the classical assumption tests used were the normality test on the regression model and the heteroskedasticity and autocorrelation tests. Descriptive statistics provide a picture or description of the data in terms of the average value (mean), standard deviation, variance, maximum, minimum and range. Simple linear regression analysis measures the influence of the independent variable on the dependent variable, and the dependent variable is predicted using the independent variable. In simple linear regression there are classical assumptions that must be met, namely that the residuals are normally distributed and that there is no heteroskedasticity or autocorrelation in the regression model.
Y = a + bX + e
Where: Y = company profitability; X = implementation of CSR; a = constant; b = regression coefficient; e = error term
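A sketch of this model and the classical assumption checks described below, using Python instead of SPSS, might look as follows; the twelve yearly values are hypothetical placeholders for the 2002-2013 figures.

```python
# Hedged sketch of the analysis pipeline. The data below are
# hypothetical (the paper's descriptive statistics suggest log-scaled
# values), not BSM's actual figures.
import numpy as np
import statsmodels.api as sm
from scipy.stats import spearmanr
from statsmodels.stats.stattools import durbin_watson

csr = np.array([19.0, 19.6, 20.2, 20.7, 21.1, 21.5,
                21.9, 22.3, 22.7, 23.1, 23.5, 24.0])  # X: CSR implementation
roa = np.array([24.2, 24.3, 24.8, 25.0, 25.3, 25.4,
                25.7, 26.0, 26.1, 26.5, 26.6, 27.1])  # Y: profitability

X = sm.add_constant(csr)          # adds the intercept a to Y = a + bX + e
fit = sm.OLS(roa, X).fit()
print(fit.summary())              # reports a, b, t statistics, R2, adjusted R2

# Heteroskedasticity check as described in the paper: Spearman's rho
# between the independent variable and the unstandardized residuals.
rho, p = spearmanr(csr, fit.resid)
print(f"Spearman p = {p:.3f} (p > 0.05 suggests no heteroskedasticity)")

# One common autocorrelation check (the paper does not name its test):
print(f"Durbin-Watson = {durbin_watson(fit.resid):.2f}")
```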
Research Findings
The descriptive statistics in this study aim to provide an overview of the processed data, consisting of frequency, mean and standard deviation; a descriptive data table follows. Based on the descriptive research data in Table 1, it can be seen that all the variables have 12 observations; the company profitability variable has an average value of 25.58 with a standard deviation of 1.31, while the CSR implementation variable has an average value of 21.66 with a standard deviation of 1.82. Hence, although significant, the influence of return on assets on CER remains relatively limited, and the cash-to-assets variable had no significant impact on CSR. These results offer some support to the "firms with slack resources" theory (Ce, 2011). As more and more multinational companies expand their operations globally, their responsibilities extend beyond not only the economic motive of profitability but also to other social and environmental factors, and their corporate responsibilities now include cross-national issues beyond their domiciled country as well. Various stakeholders are beginning to emphasize or expect more social responsibilities from companies.
a. Normality Test:
The normality test results showed that the residuals are normally distributed, as shown by the P-P plot, in which the points lie close to the diagonal line. The result of the normality test with the P-P plot diagram is as follows: from the graph it can be seen that the points spread around and follow the diagonal line, so the residuals can be considered normal.
b. Heteroskedasticity test:
A good regression model is free of heteroskedasticity. The means used to determine whether the model is free of heteroskedasticity is Spearman's rho, correlating the independent variables with the unstandardized residual values at a 0.05 significance level: if the significance of the correlations between the independent variables and the residuals is greater than 0.05, it can be said that a heteroskedasticity problem does not occur in the regression model.
Hypothesis Testing
a. Simple regression test. Regression analysis was performed after the classical assumptions had been met, i.e., the data entered are normal and free from multicollinearity and heteroskedasticity, so the results will not be biased. Regression analysis was performed to determine the effect of the independent variable on the dependent variable. In this study the analysis is simple regression with the enter method, inserting the whole set of variables so that the influence of the independent variable on the dependent variable can be seen. From the results table, the following can be explained: 1) The constant (a) is 12.977, meaning that if the value of CSR implementation is 0, then the level of profitability equals 12.977. 2) The regression coefficient of the CSR implementation variable is 0.582, which means that every 1% increase in CSR implementation increases the rate of profitability by 0.582%. The t test is used to determine whether the independent variable has a significant (real) effect on the dependent variable, in this case whether the CSR implementation variable significantly influences the profitability of the company. The degree of significance used was 0.05. If the significance value is smaller than the degree of confidence, we accept the alternative hypothesis, which states that the independent variable affects the dependent variable. The hypothesis tested is: H1: the implementation of corporate social responsibility funds has a significant effect on the profitability of the company. This is in line with Sundgren and Schneeweis (1988), who found a significant positive correlation between CSR and company profitability.
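The decision rule can be sanity-checked directly: with n = 12 observations and two estimated parameters (constant and slope), the residual degrees of freedom are 10, whose one-tailed critical t at α = 0.05 is approximately 1.812, the t-table value used in the conclusion.

```python
# One-tailed critical t value at alpha = 0.05 with df = 12 - 2 = 10.
from scipy.stats import t
print(f"t_crit = {t.ppf(0.95, df=10):.3f}")   # -> about 1.812
```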
Analysis of the correlation coefficient and the coefficient of determination (R2):
The correlation coefficient R shows how strong the correlation or relationship is between the dependent and independent variables. The correlation coefficient R is said to be strong if it is above 0.5 and close to 1. The output of the model summary is as follows: a. R in simple linear regression analysis shows the simple (Pearson) correlation, i.e., the correlation between the independent variable and the dependent variable. The R value in the table is 0.810, meaning that the correlation between the CSR implementation variable and the company profitability variable is 0.810; this indicates a very close relationship, because the value is close to 1. b. R Square (R2), the square of R, shows the value of the coefficient of determination. This figure is converted into a percentage, representing the percentage contribution of the independent variable's influence on the dependent variable. The R2 value of 0.656 means that the CSR implementation variable contributes 65.6% of the influence on the company's profitability, while the rest is influenced by other variables not entered in this model. Adjusted R Square, the adjusted R-square value of 0.622, also shows the contribution of the independent variable's influence on the dependent variable; Adjusted R Square is usually used to measure the contribution of influence when the regression uses more than two independent variables. c. The standard error of the estimate, a measure of prediction error, has a value of 0.80651, meaning that the error in predicting profitability is 0.80651.
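The reported Adjusted R Square can be verified from R2, the sample size, and the number of predictors:

```python
# Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1),
# with R^2 = 0.656, n = 12 years and k = 1 predictor.
r2, n, k = 0.656, 12, 1
adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(f"adjusted R^2 = {adj:.3f}")   # -> 0.622, matching the model summary
```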
Conclusion
The purpose of this study was to examine how PT Bank Syariah Mandiri implements its social responsibility funds and their influence on the company's profitability in the period 2002-2013. Based on the discussion of the empirical results described above, the researchers drew the following conclusions. In implementing CSR, BSM cooperates with LAZNAS BSM/partner organizations in distributing the company's alms funds and implementing humanitarian programs. BSM realizes that Corporate Social Responsibility (CSR) is important in supporting the growth of the company. The bank consistently implements CSR as a form of corporate concern and as an appreciation of the people who have given their trust to and support for the Islamic banking business. The regular programs that the Islamic bank has carried out are: 1) Synergy Together with LAZNAS, through the People Partner program and Micro Development, the development and economic empowerment of the people through capital assistance, training and mentoring of individual businesses, and Educating People, providing educational assistance (scholarships) to those in need and supporting the continuity of teaching and learning activities, with help that also includes learning facilities; 2) the Community Development Program; 3) religious programs; 4) public facilities, etc. This study indicates that CSR implementation has a significance of 0.001; t-count > t-table (4.368 > 1.812), and the significance value is below 0.05 (0.001 < 0.05), which indicates that the CSR implementation variable significantly affects the profitability of the company. This shows that the higher the CSR costs incurred by PT Bank Syariah Mandiri, the better the company's relationships with the surrounding communities, the environment and consumers become, which allows an increase in sales (Januarti, 2005). Further ensuring the well-being of employees will make them more loyal and passionate in doing their jobs, so that the company's long-term objectives can be achieved.
Research Limitations:
This study has many limitations that call for improvement in further research. The limitations experienced in this study are the small number of samples and the single object of research; the findings therefore cannot be generalized and do not represent all existing companies. The study also uses only one dependent variable, profitability, calculated using return on assets (ROA). These results provide additional empirical evidence about the phenomenon of CSR, especially in Indonesia. Based on the limitations described above, further research is expected to increase the number of samples and extend the observation period so that the findings can be generalized. Future research can replace or supplement the profitability proxy, e.g., with ROE, ROI or ROS, and add independent variables or use other variables that potentially contribute to the profitability of companies, such as CSR performance (measured by the KLD index) as a measure of CSR.
"year": 2016,
"sha1": "0b7dd6f951a0fe5b068487173ec3c11b6b60758b",
"oa_license": "CCBY",
"oa_url": "https://ojs.amhinternational.com/index.php/jebs/article/download/1250/1224",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "ec1c871715118b837d620695ab65e18fc8e8d6bb",
"s2fieldsofstudy": [
"Business",
"Environmental Science"
],
"extfieldsofstudy": [
"Economics"
]
} |
Janus Kinase Signaling: Oncogenic Criminal of Lymphoid Cancers
Simple Summary Janus kinases (JAKs) are non-receptor tyrosine kinases that pass signals from extracellular ligands, via transmembrane receptors, to downstream effectors. Increasing evidence has suggested that JAK family aberrations promote lymphoid cancer pathogenesis and progression by mediating gene expression via the JAK/STAT pathway or noncanonical JAK signaling. Here we review how canonical JAK/STAT and noncanonical JAK signaling are represented and deregulated in lymphoid malignancies and how JAK can be targeted for therapeutic purposes. Abstract The Janus kinase (JAK) family members are known to respond to extracellular cytokine stimuli and to phosphorylate and activate signal transducers and activators of transcription (STAT), thereby modulating gene expression profiles. Recent studies have highlighted JAK abnormality in inducing over-activation of the JAK/STAT pathway, and the cytoplasmic JAK tyrosine kinases may also have a nuclear role. A couple of anti-JAK therapeutics have been developed, which effectively harness lymphoid cancer cells. Here we discuss mutations and fusions leading to JAK deregulation, how upstream nodes drive JAK expression, how classical JAK/STAT pathways are represented in lymphoid malignancies, and the noncanonical and nuclear roles of JAKs. We also summarize JAK inhibition therapeutics applied alone or synergized with other drugs in treating lymphoid malignancies.
Introduction
Lymphoid cancers are lethal malignancies, which include lymphomas, myeloma and lymphoid leukemias. The Janus kinase (JAK) family comprises four members: JAK1, JAK2, JAK3 and TYK2. Structurally, all JAKs contain a FERM domain, an SH2 domain, a pseudokinase domain and a catalytic kinase domain. The JAK tyrosine kinases are mainly located in the cytoplasm and transmit signals from cytokines and their γ-chain receptors to signal transducers and activators of transcription (STAT); the phosphorylated, dimerized and activated STAT then binds to chromatin and trans-regulates gene expression (Figure 1). There are seven members in the mammalian STAT family: STAT1, STAT2, STAT3, STAT4, STAT5A, STAT5B and STAT6 [1]. The JAK/STAT pathway is evolutionarily conserved and directly affects developmental hematopoiesis and oncogenic proliferation and migration. JAK deregulation, arising either from mutations and translocations of JAK itself or from upstream aberrations of other nodes, augments disease pathogenesis, promotes tumor cell survival and drives out-of-control cell cycling via classical cytoplasmic JAK signaling or the noncanonical nuclear JAK pathway, both of which rewrite the epigenome and prompt the expression of oncogenes. In this article, we review activating mutations and fusions of JAKs that enhance JAK/STAT phosphorylation and lead to overexpression of STAT target oncogenes in a number of lymphoid cancerous contexts, canonical JAK/STAT signaling, and the nuclear role of JAKs that non-canonically bind to RNA polymerase II and phosphorylate histones [2] or chromatin modifiers [3,4]. We also summarize the effectiveness of JAK-targeting monotherapy and combinational therapy in treating lymphoid cancers, which induces programmed cell death and cell cycle arrest [5].
Furthermore, some studies have well described alterations affecting one to multiple cell fate-related nodes of the JAK/STAT pathway, including in Hodgkin-Reed-Sternberg (HRS)-like "cells of NK phenotype" [29], primary cutaneous γδ T cell lymphoma (PCGDTL) [30], EITL [31], post-transplant lymphoproliferative disorder (LPD) [32] and CTCL [32], some of which led to upregulated JAK phosphorylation and activation. All the JAK mutations mentioned above are summarized in Table 1.
Additionally, a three-way t(9;13;16)(p24;q34;p11) chromosome translocation was detected in a cutaneous CD4-positive T-cell lymphoma case, in which JAK2 was fused to a novel gene, ATXN2L. This fusion product contained the full ATXN2L protein and the catalytic domain of the JAK2 kinase, leading to constitutive activation of the JAK2/STAT signaling pathway, similar to the TEL-JAK2 chimeric protein [41]. In one case of classical Hodgkin lymphoma (cHL), the t(4;9)(q21;p24) translocation was observed, which resulted in a new oncogenic and enzymatically activated SEC31A-JAK2 fusion protein; the fused protein was sensitive to JAK inhibitors [42]. Interestingly, by genetic profiling of breast implant-associated anaplastic large cell lymphoma (BIA-ALCL), JAK2 was found to fuse with its downstream node STAT3, the first fusion reported in BIA-ALCL [43]. Utilizing whole-transcriptome sequencing in CD30+ LPD, a fusion involving NPM1 (5q35) and TYK2 (19p13) was observed. The fusion encoded an NPM1-TYK2 chimeric protein containing the oligomerization domain of NPM1 and an intact catalytic domain of TYK2. The NPM1-TYK2 fusion was found in 2 of 47 (4%) primary cases and functionally evoked activation of TYK2 and STAT1/3/5 [44]. A recurrent chimera combining the transcription factor NFkB2 and TYK2 was also discovered in WT JAK1/STAT3 ALK(-) ALCL [10]. Moreover, JAK chimeric aberrations were also identified in BCR-ABL1-like pediatric BCP-ALL [14], CTCL [45] and pediatric cHL [46].
Upstream Drivers for JAK Activation
This section describes how JAKs are deregulated by kinases/phosphatases and non-cytokine stimuli, and how they are trans-modulated by other factors. As members of the class I nonreceptor protein tyrosine phosphatase family, PTPN proteins are ubiquitously expressed, with high levels in immune cells [47]. In cHL, splice variants of PTPN1, which lack one or more exon sequences and are catalytically inactive, augmented downstream JAK/STAT signaling [48,49]. As a tumor suppressor capable of inhibiting the JAK/STAT pathway, PTPN2 suppressed T cell proliferation; therefore, the bi-allelically inactivated PTPN2 identified in 2 out of 39 cases of PTCL led to JAK/STAT activation [50]. Similarly, the PTPN6 loss-of-function N225K and A550V mutants exhibited reduced tyrosine phosphatase activity and caused deregulation of the JAK3/STAT3 pathway in diffuse large B cell lymphoma (DLBCL) [51]. Moreover, aberrant expression and activation of the PIM serine/threonine kinases appeared in several cancerous contexts, including primary mediastinal large B-cell lymphoma and cHL, promoting cancer cell survival and immune surveillance escape partly via modulating JAK/STAT activity [52,53]. Abnormal suppression of SHP1/2 and SOCS-1 in multiple myeloma (MM) plasma cells significantly correlated with the sustained activation of the JAK/STAT3 pathway [54]. A double kinase fusion, ITK-SYK, was identified in PTCL, which drove cellular transformation and progression of this malignancy. Additionally, through microarray data analysis, JAK3/STAT5 activation was discovered as a downstream effect of ITK-SYK aberrance, and pharmacological inhibition of JAK3 abrogated STAT5 phosphorylation, suppressed cell survival and induced G1/S phase arrest [5].
Several non-cytokine upstream stimuli have been reported to directly affect JAK/STAT signaling. By exploiting the IL-10/JAK pathway, the human T-cell leukemia virus type 1 (HTLV-1) viral protein HBZ induced increased IL-10 levels, suppressed the host immune response and therefore enhanced HTLV-1 proliferation in infected T leukemia cells [55]. In cHL, lymphotoxin-α was characterized as one of the factors that promote JAK2/STAT6 activation, as dissected by chromatography coupled with mass spectrometry [56]. In MM cells, the hypoxia-dependent erythropoietin (EPO) receptor was shown to be upstream of the JAK signaling pathway: JAK2 could be phosphorylated by recombinant EPO in kinase assays, and EPO exposure intriguingly reduced myeloma cell survival [57].
Trans-mediation of JAK family proteins has also been reported in recent years. In high-grade B-cell lymphoma, BCL6 was characterized as a transcription factor that directly binds to the JAK2 promoter, as evidenced by ChIP-seq [58]. In DLBCL and follicular lymphoma (FL), the histone methyltransferase KMT2D has been shown to be a bona fide tumor suppressor and one of the most frequently mutated genes; KMT2D directly mediates histone H3K4 methylation and thereby perturbs the expression of a set of genes, including JAK/STAT components [59]. miR-155, associated with poor prognosis, has been implicated in the progression of CTCL. This microRNA simultaneously modulated multiple survival-associated pathways, including JAK/STAT. Cobomarsen, a locked nucleic-acid-modified oligonucleotide inhibitor of miR-155, effectively suppressed these survival cascades [60]. The JAK signaling pathway can also be driven by MALT1 [61], MYD88 [62], HSP90 [63] and SOD [64] via as yet undescribed mechanisms.
Classical JAK/STAT Pathway
The cytokine/JAK/STAT pathway starts when a cytokine binds to its cognate receptor and induces dimerization of the receptor and phosphorylation of its intracellular domain. These receptors contain a common γ chain and a unique α chain; specifically, the IL-2 and IL-15 receptors share an additional IL-2/IL-15Rβ subunit [1]. Receptor activation further causes JAK protein phosphorylation, creating docking sites for STAT phosphorylation and dimerization. The dimerized STAT then transfers to the nucleus and trans-regulates gene expression via binding to DNA consensus sequences [65].
Newly Identified Nuclear JAK Signaling
In addition to the traditional JAK/STAT signaling cascade, non-STAT phosphorylation and a nuclear role for JAKs have been proposed, which relate strongly to the pathogenesis and progression of lymphomas. In primary mediastinal B cell lymphoma (PMBL) and cHL, JAK2-mediated H3Y41 phosphorylation cooperated with JMJD2C-modulated H3K9 demethylation, thereby activating the MYC oncogene, counteracting heterochromatin formation and remodeling the epigenome [2] (Figure 2A). The H3Y41 locus may also be phosphorylated by JAK1, thus regulating nearly 3000 proliferation- and survival-associated genes in activated B cell-like diffuse large B cell lymphoma (ABC-DLBCL), including IRF4, MYD88 and MYC [93] (Figure 2A). Nuclear JAK3 has also been observed in CTCL cells, where it interacted with the catalytic subunit of RNA polymerase II and phosphorylated histone H3 on its tyrosine residue [94] (Figure 2B). Epigenetic phosphorylation by JAK family members occurs on histone modifiers as well. We have shown that in NKTCL, JAK3 transferred to the nucleus and phosphorylated the PRC2 methyltransferase EZH2 at Y244, switching EZH2 from an epigenetic silencer to a transcriptional activator (Figure 2B); the downstream activated genes were related to stemness, invasiveness, DNA replication, cell cycle, oncogenesis and proliferation [3]. Similarly, JAK2 site-specifically phosphorylated EZH2 at Y641, allowing EZH2 to avoid β-TRCP-mediated proteasomal degradation [4] (Figure 2B). Apart from JAK-catalyzed phosphorylation, JAK3 and SUZ12 mutations acted in concert to drive T-cell transformation and T-ALL development [95].
Monotherapy
The most widely known JAK inhibitor tested in lymphoma trials is Ruxolitinib. This potent compound selectively inhibits JAK1 and JAK2 and is administered orally. Ruxolitinib was approved for the treatment of myelofibrosis (MF) by the US Food and Drug Administration (FDA) in 2011 and by the European Medicines Agency (EMA) in 2012, followed by approval for the treatment of hydroxyurea (HU)-resistant or -intolerant polycythemia vera (PV) in 2014 [96]. The drug is not only specific for the mutated form of JAK2 but also inhibits wild-type JAK2 [97]. In cHL, Ruxolitinib has been seen to induce anti-proliferative effects and programmed cell death in vitro, and it significantly inhibited tumor progression and improved survival in vivo [98]. The effects of Ruxolitinib in cHL have also been validated in clinical trials, with a disease control rate of 54% (7/13) and a median response duration of 5.6 months [99], or an overall response rate of 9.4% (3/32) after six cycles of dosing in relapsed/refractory cases [100]. In MM, Ruxolitinib treatment decreased the expression of genes driving disease progression, including JAK2, TYK2, IL-6 and IL-18, and induced autophagosome accumulation [101]. In a phase I clinical trial, Ruxolitinib was able to overcome lenalidomide and steroid resistance in relapsed/refractory MM patients, with a clinical benefit rate of 46% and an overall response rate of 38% [102]. Hypersensitivity to Ruxolitinib was noted in one patient with a CSF3R T618I mutation, in whom there were decreased white cell and neutrophil counts as well as a normalization of the platelet count [103]. The effectiveness of Ruxolitinib was also seen in primary cutaneous CD8+ aggressive epidermotropic cytotoxic T-cell lymphoma [104], BCP-ALL [14] and ALCL [105], in which the JAK/STAT pathway plays a vital role. However, whether Ruxolitinib is effective in treating PMBL remains controversial [98,99]. This medication has entered phase I/II/III clinical trials for the treatment of lymphoma, lymphoblastic leukemia or MM, alone or together with other agents (NCT01877005, NCT01965119, NCT02164500, NCT02974647, NCT03117751, NCT03041636, NCT02723994, NCT03613428, NCT01712659, NCT03878524, NCT01914484, NCT01620216, NCT00674479, NCT00639002 and NCT03773107). The immunosuppressive side effects of Ruxolitinib have been reviewed extensively before [97].
Tofacitinib, an oral small-molecule compound, inhibits all four JAKs but preferentially inhibits JAK1 and JAK3 [106]. In EBV+ T and NK lymphoma cell lines and patient samples that displayed JAK3/STAT5 activation, Tofacitinib treatment effectively reduced p-STAT5 levels, suppressed proliferation, induced G1 cell cycle arrest and decreased expression of the EBV viro-proteins LMP1 and EBNA1 [107]. In CTCL cells, Tofacitinib inhibited the level of aberrantly expressed anti-apoptotic miR-21 by blocking JAK3/STAT5 signaling, and STAT5 could directly bind to the miR-21 promoter [108]. This drug reversed the majority of pro-survival signals modulated by the JAK-STAT cascade in MM [109]. In PTCL, as mentioned above, the JAK3/STAT5 signaling program was identified to be downstream of ITK/SYK via Signal Net and cluster analyses of microarray data. The JAK3-selective inhibitor Tofacitinib abrogated the phosphorylation of STAT5, suppressed cell growth, induced cell apoptosis and arrested the cell cycle at the G1/S phase [5]. As JAK3-activating mutations are frequent in NKTCL pathogenesis, the pan-JAK inhibitor Tofacitinib efficiently reduced phosphorylated STAT5 and cell viability in JAK3-mutant and wild-type NKTCL cell lines and mouse xenografts [19,24]. However, in one case of relapsed T-ALL with two JAK3-activating mutations, Tofacitinib failed to induce a positive clinical response following the failure of salvage chemotherapy, indicating that the presence of activating JAK3 mutations does not necessarily guarantee sensitivity to Tofacitinib treatment [110].
Moreover, several new JAK-targeting compounds or derivatives, as well as inhibitors of JAK upstream nodes, have been reported in recent years. Here we summarize these inhibitors by type of malignancy. In DLBCL, a natural osalmid derivative, DCZ0858, blocked JAK2/STAT3 signaling and inhibited B lymphoma cell survival in a concentration- and time-dependent manner while causing no significant toxicity to normal B cells [111]. Additionally, upstream IRAK4 inhibition by the highly selective novel small-molecule inhibitors ND-2158 and ND-2110 impeded the survival of DLBCL cells by downregulating survival signals, including IL6/IL10/JAK/STAT3 [112]. In CTCL, another lethal, skin-attacking lymphoma, a retinoic acid derivative, ECPIRM, induced cell apoptosis and G0/G1 phase arrest via inhibiting the JAK/STAT rather than the RAR/RXR pathway and exhibited little cytotoxicity in normal lymphoid counterparts [113]. Besides, a vitamin A derivative, 9-cis-RA, induced CTCL cellular apoptosis dose- and time-dependently via decreasing JAK1/STAT3/STAT5 phosphorylation and Bcl-xL and cyclin D1 levels [114]. A novel taspine derivative, TPD7, was able to bind to the IL-2 receptor in CTCL and therefore suppressed the downstream cascades, including JAK/STAT and PI3K/AKT/mTOR [115]. Additionally, another compound, ONC201, exerted time-dependent cell survival inhibition in CTCL cell lines and patient-derived primary CD4+ malignant T cells, and the JAK/STAT pathway was downregulated with ONC201 treatment [116]. These derivatives and inhibitors demonstrated effectiveness and selectivity in harnessing JAK/STAT in order to treat CTCL. In NKTCL, frequent STAT3/5B-activating mutations were detected in primary patient samples and cell lines, and JAK1/2/3 inhibitors potently suppressed cellular proliferation, inhibited tumor growth and induced apoptosis via abrogation of the JAK/STAT program [117,118]. Moreover, NKTCL is known for EBV infection, which is also one of the criteria for NKTCL diagnosis, and LMP1 is a viral oncoprotein generated by EBV. In NKTCL, a constructed human anti-LMP1 antibody successfully inhibited cell proliferation, induced apoptosis and activated antibody-dependent cell-mediated cytotoxicity and complement-dependent cytotoxicity, at least partly via inhibiting JAK3/STAT3 [119]. Even classic cytotoxic agents exhibit anti-JAK/STAT properties: Doxorubicin inhibited c-myc and PIM1 expression by repressing JAK/STAT3 and promoted NKTCL cell death [120]. In MM, compounds including Icariin, 3-formylchromone, TM-233, Auranofin, AZD1480, thalidomide analogs and tetracyclic pyridone 6 inhibited upstream JAK1/2, thereby blocking constitutive STAT3 phosphorylation and its nuclear translocation, downregulating downstream STAT3 target genes, such as Bcl-2, Bcl-xl, survivin, COX-2, VEGF, Mcl-1, Cyclin D2 and MMP-9, and inducing programmed cell death [121][122][123][124][125][126][127]. Similarly, two novel and highly selective JAK inhibitors, INCB20 and INCB16562, effectively suppressed the IL-6-dependent growth of MM cell lines and primary bone marrow-derived plasma cells [128,129]. In addition, several natural product extracts blocked JAK/STAT as well and exerted anti-myeloma effects: Leelamine, from pine bark, attenuated phosphorylation of the upstream JAK1/JAK2/Src macromolecules and downstream STAT3, hence evoking myeloma cell cycle arrest and apoptosis [130].
A Scutellaria radix component, Baicalein, suppressed myeloma cell survival and proliferation by blocking IκB-α degradation, followed by downregulation of IL-6/JAK/STAT3 signaling and XIAP gene expression [131]. These findings demonstrate the possibility of inhibiting myeloma cell survival, proliferation and invasiveness via targeting JAK/STAT using synthesized compounds and natural extracts. Moreover, in Waldenström macroglobulinemia (WM), the pan-FGF trap molecule NSC12 significantly inhibited cellular growth and provoked apoptosis by halting the JAK/STAT3, MAPK and PI3K-AKT pathways [132]. All the JAK-based monotherapies are summarized in Table 2.
Combinational Therapy
The most heavily studied JAK-related dual inhibitor is Cerdulatinib. This orally available compound demonstrates activity against JAK1/3 and SYK with limited inhibition of JAK2. Cerdulatinib did not inhibit phorbol-mediated signaling or activation in normal B and T cells, or T-cell receptor-mediated signaling in T cells, showing selectivity and safety [133]. This inhibitor exerted potent antitumor activities in a subset of B-cell lymphomas, including ABC-DLBCL, germinal center diffuse large B cell lymphoma (GC-DLBCL), mantle cell lymphoma (MCL), FL and small lymphocytic lymphoma (SLL) [133][134][135]. In CLL, the dual JAK/SYK inhibitor Cerdulatinib was a promising therapeutic agent that overcame the support of the microenvironment [136] and targeted critical survival pathways, used either alone or combined with Venetoclax [137]. This compound also displayed efficacy in ATLL [138]. The activities of Cerdulatinib against lymphoid tumors have been evaluated in phase I/II clinical trials (NCT01994382 and NCT04757259). Another notable JAK-associated dual inhibitor is SB1518, which co-targets JAK2 and FLT3. This compound was selected as a development candidate and progressed into clinical trials for lymphomas [139]. SB1518 demonstrated safety and efficacy in various types of lymphomas, including refractory cases, and a phase I clinical trial demonstrated that escalating doses of SB1518 led to significant tumor reductions of 4-46% among enrolled patients with relapsed/refractory lymphomas, with well-tolerated toxicities [140,141] (NCT01263899 and NCT00741871).
The most widely known JAK inhibitor, Ruxolitinib, as mentioned above, has been applied in synergy with several different compounds. In ABC-DLBCL, JAK1/STAT3 was activated by autocrine IL-6/10 signaling, and Ruxolitinib synergized well with the type I IFN inducer lenalidomide in vitro and in vivo [142]. In MM, both JAK1 and JAK2 were overexpressed in a proportion of patients, and Ruxolitinib treatment in combination with Bortezomib, Itacitinib or Daratumumab inhibited JAK/STAT3 phosphorylation, upregulated CD38 expression, inhibited in vitro and in vivo myeloma cell growth and induced cell apoptosis and sub-G0 arrest [73,143,144]. In NKTCL, Ruxolitinib and the CDK4/6 inhibitor LEE011 demonstrated synergistic growth-inhibitory effects [145]. Ruxolitinib and the Bcl-2/Bcl-xl inhibitor Navitoclax synergized well with each other, augmenting the expression of Bik, Puma and Bax in cHL cells [146] and lowering tumor burden and prolonging survival in an ATLL mouse model [147]. In CTCL cell lines, Ruxolitinib and Resminostat (HDAC inhibition) together exhibited substantial anti-cancer effects [148]. In relapsed/refractory T-ALL, Ruxolitinib and Venetoclax treatment reduced cell survival and proliferation in vitro [149].
The combination of JAK inhibitors with PI3K inhibitors showed significance in a few lymphoid malignancies. In relapsed/refractory B cell lymphoma, the JAK1 inhibitor Itacitinib plus the PI3Kδ inhibitor INCB040093 demonstrated efficacy and few toxicities, presenting a promising treatment option [150]. In MM, the combination of the JAK2 inhibitor TG101209 and the PI3K inhibitor LY294002 displayed synergistic cytotoxicity against myeloma cells [151]. In PI3K inhibitor-resistant B-cell and T-cell lymphoma cell lines, the addition of the JAK inhibitor BSK805 effectively circumvented acquired resistance to PI3K inhibition, and simultaneous inhibition of these two pathways produced combined effects [152].
Successful combinations were also observed for inhibitors against JAK and BTK, a major target in B-cell malignancies [153]. The bromodomain and extra-terminal (BET) inhibitor OTX015 targeted different pathways, including JAK/STAT, in mature B-cell lymphoid cancer cell lines, and it presented in vitro synergism with a BTK inhibitor [154]. A JAK/STAT inhibitor combined with the BTK inhibitor Ibrutinib bypassed survival stimuli from bone marrow mesenchymal stromal cells to induce cell death in CLL [155] and induced IRF4 levels to synergistically kill ABC-DLBCL cells [93].
A couple of studies have evaluated combinations between JAK inhibitors and inhibitors of the anti-apoptotic BCL macromolecules. Combined inhibition of JAK and BCL2 demonstrated strong potentiation of cytotoxicity in CTCL cells, driven by intrinsic and extrinsic apoptosis pathways [156]. In Burkitt lymphoma (BL), BCL6 deficiency induced JAK2 expression and STAT3 phosphorylation, and a JAK2 inhibitor, Lestaurtinib, repressed the survival of BCL6-deficient cells and tumor xenografts, demonstrating the significance of co-suppressing BCL6 and JAK2, which was considered synthetic lethality [58]. In cHL, Decitabine inhibited cell growth but concurrently upregulated pro-survival signals, such as MEK/ERK, JAK/STAT and NF-κB, providing a rationale for combining Decitabine with the BCL/BCL2L1 inhibitor ABT263, the JAK-STAT inhibitors Fedratinib and SH-4-54, the AKT inhibitor KP372-1, the NF-κB inhibitor QNZ, or the BET family protein inhibitor JQ1 [157].
Investigators have also combined JAK inhibitors with conventional therapies in order to improve clinical outcomes. In MCL, the anti-JAK/STAT3 agent Degrasyn was considered a useful therapy when administered together with Bortezomib [158]. In MM, the selective JAK1 inhibitor INCB052793 in combination with carfilzomib, bortezomib, dexamethasone or lenalidomide effectively reduced tumor volume in tumor-bearing mice [159]; another novel, orally available JAK1/2 inhibitor, CYT387, was able to prevent IL-6-induced STAT3 phosphorylation and synergized with the traditional therapies Melphalan and Bortezomib in killing myeloma cells [160]. JAK inhibitors combined with the cytotoxic anti-folic-acid agent methotrexate significantly suppressed lymphoma cell growth and prolonged the survival of tumor xenografts, resulting in better clinical outcomes [161,162]. In CML, targeting the JAK/STAT3 cascade with a JAK inhibitor in combination with a classical BCR-ABL inhibitor promoted cell death and eliminated minimal residual disease located in the bone marrow, representing a hopeful therapeutic strategy [163,164].
In addition, as JAK/STAT3 mutations promoted STAT3-based transcription activation and directly regulated NF-κB and CD30 levels in NIK+/ALK- ALCL, combined NIK and JAK inhibitor therapy could be applied to benefit patients [165]. JAK inhibitor AZD1480 treatment potently blocked STAT phosphorylation but yielded no anti-proliferative effects in cHL, as it led to upregulation of ERK1/2 phosphorylation; therefore, inhibiting ERK activity with MEK inhibitors alongside JAK inhibition resulted in enhanced cytotoxicity [166]. Histone deacetylase (HDAC) inhibitors represent an encouraging class of antitumor therapies, and these inhibitors induce minimal toxicity in normal cells [167]. The orally administered HDAC6 inhibitor Citarinostat was used together with the JAK/STAT3 inhibitor Momelotinib, resulting in reduced mitochondrial membrane potential, decreased Bcl-2 and Bcl-xl, and activated caspase 3/9, indicating extrinsic apoptosis [167]. In Sézary syndrome, an aggressive and diffuse form of CTCL, the HDAC inhibitor Romidepsin showed remarkable but transient activity, and the addition of a JAK inhibitor led to markedly increased therapeutic responses [168]. In LPD, constitutive JAK/STAT3 activity significantly contributed to disease progression, and combinations including JAK, HSP90 and mTOR inhibitors yielded satisfactory effects in repressing cell viability [169]. All the JAK-based combinational therapies are summarized in Table 3, including: Antcin H with methotrexate in BCL, inhibiting JAK and folic acid metabolism [161]; csDMARDs with methotrexate in NSHL and AML, inhibiting JAK and folic acid metabolism [162]; Nilotinib with INC424 in CML, inhibiting JAK and Bcr-Abl [163]; a NIK inhibitor with a JAK inhibitor in ALCL, inhibiting JAK and NIK [165]; AZD1480 with UO126/PD98059 in HL, inhibiting JAK and MEK [166]; Citarinostat with Momelotinib in lymphoid malignancies, inhibiting JAK/STAT3 and HDAC6 [167]; Romidepsin with mechlorethamine in CTCL, inhibiting JAK and HDAC [168]; and INK128/Temsirolimus/Ruxolitinib with Luminespib in LPD, inhibiting JAK/STAT3, HSP90 and mTOR [169].
Conclusions and Future Directions
Accumulating evidence in this review demonstrates how JAKs are aberrantly expressed in lymphoid cancerous contexts and how JAKs connect with upstream and downstream signaling. JAK abnormalities, either mutations or translocations, were found in some but not all cases across a variety of lymphoid cancers. These abnormalities augment the signals of the cytokine/JAK/STAT pathways, but do not necessarily support lymphoid tumor survival. In a majority of contexts, JAKs signal through STAT-based activation and transcriptional regulation, whereas in a few contexts, the tyrosine kinase JAKs may phosphorylate histone H3 or EZH2 and reprogram transcription profiles [3,4,93,94]. These findings underscore the importance of the nuclear role of JAKs.
In the past decade, a number of specific small-molecule JAK inhibitors, such as Ruxolitinib and Tofacitinib, have been developed and utilized to target JAK abnormalities in lymphoid malignancies. Ruxolitinib has entered more than 10 clinical trials for lymphoid disease treatment. Several natural product derivatives and traditional medications have also been reported to block JAK/STAT signaling and impede cancer cell survival [111,124]. Combinational JAK inhibition, either through a dual inhibitor or through several agents, exhibits better cell-killing effects than monotherapy. These results demonstrate an indispensable role of JAK targeting in treating lymphoid cancers, and future studies are needed to compare the effects of these JAK inhibition therapies with conventional therapeutics. | 2021-10-16T15:07:12.281Z | 2021-10-01T00:00:00.000 | {
"year": 2021,
"sha1": "d9db38ee369a585173006d803d94612e09981034",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/13/20/5147/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8d539227c6a01d840a42b27ed60ba55c958f335c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
222161040 | pes2o/s2orc | v3-fos-license | Quality improvement for cancer multidisciplinary teams: lessons learned from the Anglian Germ Cell Cancer Collaborative Group
Summary Shamash and colleagues describe how their supra-regional germ cell tumour multidisciplinary team achieved standardisation of treatment and improved survival. We discuss some of the insights the study provides into prioritising complex patients, streamlining processes, the use of telemedicine, and the centrality of good data collection to continuous quality improvement.
the suggestions set out in the Gore report 1 years ahead of publication.
Taking each of these individually: Focussing on complex cases The greatest benefit of MDT working is seen in complex cases, e.g. unusual subtype of disease, failure of previous treatment, significant comorbidities, and social or psychological problems. 2 These patients often do not fit guidelines, are not eligible for clinical trials, and can be challenging to engage in healthcare services. Shamash et al. 2 highlight that patients with learning difficulties or mental health problems, and those with late relapses, each present problems that are less commonly addressed and require tailored, individualised treatment plans. These findings are in line with the recent study on what constitutes a complex case for MDT discussion, mirroring the indicators of complexity found across a range of tumour types. 3 Although such patients represent a small portion of cases, a considerable amount of additional support is needed before and after diagnosis and treatment. 2 Shamash and colleagues 2 set out criteria for cases that may not need full discussion in the MDT meeting. It may be desirable to go further and identify cases that are truly 'complex' and those that are 'simple'. Recently, Soukup and colleagues 3 published work on the development and validation of a tool for stratifying cases by complexity, which might allow teams to streamline their caseload in a scientific manner. Further research is needed to assess its impact on patient care and the efficiency of MDT processes.
Information on patients' comorbidities and on the psychological and social factors that may impact care is persistently poorly represented in MDT meetings. 4 Such information, as well as that which focusses on the disease in question, is necessary for comprehensive clinical management planning. 5 These findings support the conclusion of Shamash and colleagues 2 that patients with complicating features require holistic discussion in order to develop tailored treatment plans.
Using chair's action to facilitate urgent treatment The time between meetings can represent a significant delay for patients with rapidly progressing disease waiting for MDT review and recommendations. 2 In such cases, the MDT chair is well placed to endorse management proposals of clinicians outwith the MDT meeting in order to avoid delays. 2 Such cases should still be registered with the MDT and could be reviewed post hoc. The responsiveness of an MDT to clinical or organisational pressures is an area ripe for improvement.
The use of videoconferencing to improve collaborative decision-making Videoconferencing has been controversial in MDT meetings, and Shamash and colleagues 2 discuss some of its advantages and challenges. Regular SMDT meetings are not feasible without some form of remote contact. 2 Technology failure and differences in communication styles can present challenges to the quality of MDT decision-making. 6 Perhaps a lasting legacy of COVID-19 will be the dramatic shift towards telemedicine, replacing many face-to-face interactions. Interestingly, Shamash and colleagues 2 note the benefits of a yearly meeting at which members of the SMDT can interact and discuss matters of importance. Many MDTs now manage to operate remotely via video link. It may be desirable to supplement this with periodic face-to-face interactions that permit more nuanced communication regarding performance, operational policy, challenges, and future directions.
Data collection and audit
The careful, planned collection of clinical and process data is crucial for assessing complex areas of healthcare, such as care pathways and organisational changes. 2 A recent NHS England and NHS Improvement report 7 has highlighted that data collection and regular audit must accompany MDT transformation. As Shamash and colleagues 2 showed, the collection and analysis of such data can provide a resource to benchmark processes and outcomes, thereby driving standardisation and convergence towards best practice. Well-designed data collection supports quality improvement and clinical research, driving the development of new and better standards of care. Ultimately, this will provide high-quality information to patients and their doctors, enabling shared decision-making of the highest quality.
ADDITIONAL INFORMATION
Ethics approval and consent to participate Ethical approval is not applicable for this editorial piece.
Data availability Not applicable. Note This work is published under the standard license to publish agreement. After 12 months the work will become freely available and the license terms will switch to a Creative Commons Attribution 4.0 International (CC BY 4.0).
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Tayana Soukup 1 , Nick Sevdalis 1 , James S. A. Green 1,2 and Benjamin W. Lamb 3 | 2020-10-06T13:33:23.210Z | 2020-09-29T00:00:00.000 | {
"year": 2020,
"sha1": "a4b4b4536418c6840bb4fe698abc6300ba72adcb",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41416-020-01080-4.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "21a762011a35585ac034775476f6118259e776be",
"s2fieldsofstudy": [
"Medicine",
"Business"
],
"extfieldsofstudy": [
"Medicine"
]
} |
27411277 | pes2o/s2orc | v3-fos-license | STUDY OF THE PLACENTAL ATTACHMENT OF FUNICULUS UMBILICALIS IN NORMAL AND PRE-ECLAMPTIC PREGNANCIES AND ITS EFFECTS ON BIRTH WEIGHT
Introduction: Abnormalities in the insertion of umbilical cord is associated with a number of complications in pregnancy and these complications may adversely affect the fetus. The aim of this study was to evaluate the variations in the attachment of umbilical cord in normal and pre-eclamptic pregnancies and to assess the effects of variable cord insertions on fetal birth weight. Materials and Methods: Seventy placentae each of normotensive and pre-eclamptic pregnancies were studied (n=140). After delivery, weight of the baby was recorded by using weighing machine and the attachment of umbilical cord on placenta was observed. Results: In the present study, commonest site of insertion of umbilical cord was central (60%) in normal pregnancies, whereas in pre-eclamptic pregnancies, a common site of insertions of umbilical cord were central (37.14%) and/or eccentric (34.28%). Marginal cord insertions were found 2.11 times more in pre-eclamptic pregnancies as compared to normal pregnancies. A single case of velamentous insertion was found in the preeclamptic pregnancies. We found that 65.52% of placentae with abnormal cord insertions were associated with low fetal birth weight and the association between cord insertion and fetal birth weight was found statistically highly significant. Discussion: Abnormal cord insertions are significantly associated with pre-eclampsia. Mean fetal birth weight decreases as the site of cord insertion moves towards the periphery. Conclusively, abnormalities in the site of insertion of umbilical cord have an adverse effect on fetal health. Therefore, early detection of abnormal cord insertion may provide sufficient information to take additional care in such conditions.
INTRODUCTION
The umbilical cord is also referred to as Funiculus umbilicalis or birth cord. It is a flexible structure that connects the developing embryo to the fetal surface of the placenta [1]. The umbilical cord delivers oxygen and nutrients to the developing fetus throughout pregnancy. Thus, the growth of the fetus is highly dependent on the development of the umbilical cord [2]. Normally, the umbilical cord is inserted at the center or near the center (eccentric) of the placenta. Other types of attachments of umbilical cords are marginal, velamentous and furcated [3]. In marginal cord insertion, the umbilical cord is inserted within 2 cm from the placental edge [4]. In velamentous cord insertion, the umbilical cord is inserted into the chorio-amniotic membranes rather than onto the placental mass [5]. In furcated insertion, the umbilical cord branches before its insertion on the fetal surface of the placenta [6]. Variations in the site of insertion of umbilical cords are explained by two different theories. The first is the "placental migration theory", or "trophotropism", in which the placenta migrates towards the richly vascularised area with advancing gestation to achieve better perfusion [7]. The other is the "blastocyst polarity theory", which hypothesizes that abnormal cord insertion results from malpositioning of the blastocyst during implantation [8].
Abnormal cord insertion is associated with poor obstetric outcomes. Increased rates of fetal malformation, low birth weight, preterm labor, fetal growth restriction, vasa previa, low APGAR (appearance, pulse, grimace, activity and respiratory rate) scores and intrapartum complications have been noted with velamentous cord insertions [7,9,10]. In velamentous cord insertion, the umbilical vessels are inserted into the membranes; therefore, these vessels lack the protection of Wharton's jelly and are prone to rupture and/or compression, which results in acute cessation of umbilical blood flow. Thus, the risk of perinatal death is increased in pregnancies with velamentous cord insertions [11].
Various studies suggest that compression of umbilical vessels reduces cardiac output and increases the risk of pulmonary complications after birth [12,13]. Marginal cord insertion has also been associated with fetal growth restriction and preterm delivery [9,14]. Because of these poor obstetric outcomes, evaluation of the attachment of the umbilical cord deserves attention right from the first trimester. Sonographic visualization of the site of cord insertion becomes more difficult with advancing gestation; therefore, it should be evaluated at 15-20 weeks of gestation [15,16].
The purpose of this study was to observe the variations in the attachment of umbilical cord in normal and pre-eclamptic pregnancies and to determine whether the umbilical cord insertion site could be linked to fetal birth weight.
MATERIALS AND METHODS
The present study was an observational comparative study, which was carried out in the Department of Anatomy, Gandhi Medical College, Bhopal (M.P.). A total of 140 placentae with umbilical cord were collected from pregnant women delivered in Sultania Zanana Hospital, associated to G.M.C. Bhopal, after permission from the institutional ethics committee. All mothers were properly informed about the study and their written consent was taken.
Women were diagnosed with pre-eclampsia if they had systolic BP > 140 mmHg and diastolic BP > 90 mmHg measured on two or more occasions, at least 4 hrs apart, after the 20th week of gestation, together with proteinuria. Proteinuria was considered present when there was a urine dipstick value of at least 1+ (>30 mg/dl) on two separate occasions at least 6 hours apart [17]. On this basis, subjects were divided into two groups. Group I consists of placentae obtained from normal pregnant women (n=70) with gestational age 37-40 weeks. Group II consists of placentae obtained from pre-eclamptic women (n=70) of similar gestational age. Patients with essential hypertension, diabetes mellitus, anemia, renal disorders and other illnesses associated with pregnancy were excluded from this study.
The mothers and their neonates identified for this study were given code numbers and studied at the hospital. After delivery, fetal birth weight was recorded. The placentae were collected soon after their expulsion and washed in running tap water to clear all blood. The distance from the placental margin to the site of attachment of the umbilical cord was measured. The attachment of the umbilical cord on the fetal surface of the placenta was categorized into central, eccentric, marginal and velamentous insertion. Central cord insertion includes cords inserted into the center of the placenta, whereas cords inserted near the center were classified as eccentric. Both central and eccentric cord insertions were considered normal cord insertion [Fig. 1A, B]. Marginal cord insertion includes cords inserted within 2 cm from the placental margin, whereas velamentous cord insertion includes cords inserted into the membranes rather than the placental mass. Both marginal and velamentous cord insertions were considered abnormal cord insertion [Fig. 2A, B].
Statistical analysis of data was performed using the Statistical Package for the Social Sciences (SPSS) version 15.0 (Chicago, IL). Values of continuous variables are presented as mean ± standard deviation. Statistical significance was analyzed using the Chi-square test for categorical data. Differences between group parameters were considered significant if p < 0.05.
RESULTS
In the present study, central cord insertion was found in 60% and eccentric cord insertion in 27.14% of placentae of normal pregnancies. In pre-eclamptic pregnancies, central and eccentric cord insertions were found in 37.14% and 34.28% of placentae, respectively. Marginal cord insertion was found in 12.86% and 27.14% of placentae of normal and pre-eclamptic pregnancies, respectively. A single case of velamentous insertion was found in the pre-eclamptic pregnancies. Thus, the commonest site of insertion of the umbilical cord in normal pregnancy was central, whereas in pre-eclamptic pregnancies the commonest sites were central and/or eccentric. We observed that marginal cord insertions were 2.11 times more frequent in pre-eclamptic pregnancies than in normal pregnancies. Statistically, the differences in the attachment of the umbilical cord between the two groups were significant [Table 1].
In our study, mean fetal birth weight was 2649.07 ± 260.52 grams and 2530.95 ± 215.49 grams in placentae with central and eccentric cord insertion, respectively, whereas in placentae with marginal and velamentous cord insertion it was 2296.66 ± 273.77 grams and 2150 grams, respectively [Fig. 3]. We also observed that 65.52% of placentae with abnormal cord insertions were associated with low fetal birth weight (birth weight less than 2500 grams), while 72.07% of placentae with normal cord insertions were associated with fetal birth weight more than 2500 grams. The relation between umbilical cord insertion on the placenta and fetal birth weight was statistically highly significant [Table 2]: Chi-square (x2) = 14.973, df = 1, p = 0.0001. (*Normal cord insertion includes central and eccentric cord insertions; **abnormal cord insertion includes marginal and velamentous cord insertions.)
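To make the Table 2 association explicit, the following sketch reconstructs the 2x2 contingency table from the reported percentages (19/29 abnormal insertions with low birth weight gives 65.52%; 80/111 normal insertions with birth weight > 2500 g gives 72.07%). These back-calculated counts are our assumption rather than source data, and the resulting statistic (about 14.2 without continuity correction) is close to, though not identical to, the published 14.973, a discrepancy plausibly due to rounding in the reported percentages.

```python
# Hedged reconstruction of Table 2 (counts back-calculated from percentages).
from scipy.stats import chi2_contingency

#        low BW (<2500 g)  BW > 2500 g
table = [[19, 10],   # abnormal insertion (marginal + velamentous), n = 29
         [31, 80]]   # normal insertion (central + eccentric),      n = 111

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(chi2, dof, p)  # approx. 14.2, df = 1, p < 0.001
```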
DISCUSSION
Udainia A et al. reported that the commonest site of insertion of the umbilical cord is eccentric in both normal and pre-eclamptic pregnancies [18]. In contrast, the present study showed that the commonest site of insertion of the umbilical cord was central in normal pregnancies, whereas in pre-eclamptic pregnancies central and eccentric insertions were found in almost equal proportions. In our study, the attachment of the umbilical cord was found to be normal (central/eccentric) in 87.14% of placentae of uncomplicated pregnancies. This finding is consistent with those reported by earlier observers [19][20][21][22], in that central and/or eccentric cord insertions were the commonest type of attachment of the umbilical cord on the placenta in uncomplicated pregnancies. Therefore, these types of cord attachments were considered normal cord insertions.
Previous caesarean delivery and maternal medical conditions (i.e., maternal asthma, gestational diabetes, chronic hypertension) are associated with an increased risk of abnormal cord insertion [11]. Udainia A et al. found that as the severity of pregnancy-induced hypertension increases, insertion of the umbilical cord shifts from marginal towards velamentous in nature [18]. Benirschke K found that the incidence of marginal cord insertion was 7.9% in singletons and 24.33% in twins [23], whereas Ebbing C et al. found an incidence of 6.3% in singletons and 10.9% in twins [11]; all subjects in our study were singletons. In this study, the incidence of marginal cord insertion was 12.86% in placentae of normal singleton pregnancies [Table 1 & 3], which is higher than in the above-mentioned studies but lower than in the studies done by Di Salvo et al. [22] and Lakshmidevi CK et al. Rath G et al. reported that marginal cord insertion is associated with hypertensive pregnancies [24].
In the present study, we found that the prevalence of marginal cord insertions was 2.11 times higher in pre-eclamptic pregnancies than in normal pregnancies. This finding is in concurrence with the findings of Udainia A et al. [18] and Pretorius DH et al. [15], who observed a similar increase in marginal cord insertion in pre-eclamptic pregnancies. Marginal cord insertion in a previous pregnancy increases the risk of velamentous cord insertion in the subsequent pregnancy and vice versa [11]. In the present study, the incidence of velamentous cord insertion was 1.43% in placentae of pre-eclamptic pregnancies. This finding is in line with the study of Udainia A et al. [18], whereas it differs from the observation of Monie IW, who reported a much higher frequency (15.3%) of velamentous cord insertions in pregnancy-induced hypertension [25].
Abnormalities in the attachment of the umbilical cord on the placenta have been associated with a number of complications in pregnancy, e.g. vasa previa and preterm labor [14,26]. Previous studies show an association between abnormal cord insertion and fetal malformations [7,25,27]. Fetal malformations associated with abnormal cord insertion include esophageal atresia, spina bifida, trisomy 21 and congenital heart defects, e.g. ventricular septal defect [7]. An abnormal cord insertion has also been implicated in the induction of hypertension and intrauterine growth restriction (IUGR) [11,15,28].
In the present study, we found that the mean fetal birth weight decreases as the attachment of the umbilical cord on the placenta shifts from central to the periphery [Fig. 3]. This finding is in concurrence with the study done by Udainia A et al. [18]. We found that abnormal cord insertion was significantly associated with low fetal birth weight, consistent with the findings of earlier observers [18,24]. Vessel density is lower in placentae with abnormal cord insertion compared with normal cord insertion, and the fetal stem vessels may be longer in abnormal cord insertion, which would increase vascular resistance [29,30]. Therefore, abnormal cord insertion hampers nutrient transfer to the fetus and may induce fetal growth restriction. Since pregnancies complicated by abnormal cord insertions are at great risk of adverse perinatal outcomes, various investigators have suggested that systematic identification of abnormal cord insertion is an extremely important part of the prenatal sonographic evaluation [15,21,22].
CONCLUSION
Abnormalities in the development and site of insertion of the umbilical cord have the potential to affect fetal health and well-being. Abnormal insertions of the umbilical cord are significantly associated with pre-eclampsia, and these anomalous cord insertions are also significantly associated with low fetal birth weight. Mean fetal birth weight decreases as the site of insertion of the umbilical cord moves towards the periphery. Abnormal cord insertion increases the risk of intrapartum death at term, due to rupture of the unprotected umbilical vessels during labor. Therefore, prenatal sonographic detection of abnormal cord insertion might offer enough information to justify an increased focus throughout gestation and prompt additional care during and after labor.
Fig. 1: Placenta with normal umbilical cord (UC) insertion, taken from a normal pregnancy. A. Central cord insertion, in which the umbilical cord (UC) is inserted in the center (Cr) of the placenta. B. Eccentric cord insertion, in which the umbilical cord (UC) is inserted near the center (Cr) of the placenta.
Fig. 2: Placenta with abnormal umbilical cord (UC) insertion, taken from a pre-eclamptic pregnancy. A. Marginal cord insertion, in which the umbilical cord (UC) is inserted within 2 cm from the placental edge. B. Velamentous cord insertion, in which the umbilical cord (UC) is inserted in the membrane (arrow).
Fig. 3: Comparison of mean fetal birth weight in various cord insertions.
Table 1 :
Distribution of the insertion of umbilical cord in normal and pre-eclamptic pregnancies.
Table 2 :
Relation between umbilical cord insertion and fetal birth weight.
Table 3 :
Comparison of distribution of umbilical cord insertion with previous studies in uncomplicated pregnancies. | 2017-08-27T11:30:09.544Z | 2017-02-28T00:00:00.000 | {
"year": 2017,
"sha1": "7cf11fb4d2e42da85408daf881cdf5aa64861bb6",
"oa_license": "CCBYSA",
"oa_url": "http://www.ijmhr.org/ijar.5.1/IJAR.2017.107.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "7cf11fb4d2e42da85408daf881cdf5aa64861bb6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
266996674 | pes2o/s2orc | v3-fos-license | Prophylactic pectoralis major flap to compensate for increased risk of pharyngocutaneous fistula in laryngectomy patients with low skeletal muscle mass (PECTORALIS): study protocol for a randomized controlled trial
Background Total laryngectomy (TL) is a surgical procedure commonly performed on patients with advanced laryngeal or hypopharyngeal carcinoma. One of the most common postoperative complications following TL is the development of a pharyngocutaneous fistula (PCF), characterized by a communication between the neopharynx and the skin. PCF can lead to extended hospital stays, delayed oral feeding, and compromised quality of life. The use of a myofascial pectoralis major flap (PMMF) as an onlay technique during pharyngeal closure has shown potential in reducing PCF rates in patients at high risk of developing PCF, such as patients undergoing TL after chemoradiation and patients with low skeletal muscle mass (SMM). Its impact on various functional outcomes, such as shoulder and neck function, swallowing function, and voice quality, remains less explored. This study aims to investigate the effectiveness of PMMF in reducing PCF rates in patients with low SMM and its potential consequences on patient well-being. Methods This multicenter study adopts a randomized clinical trial (RCT) design and is funded by the Dutch Cancer Society. Patients eligible for TL, aged ≥ 18 years, mentally competent, and proficient in Dutch, will be enrolled. One hundred and twenty-eight patients with low SMM will be centrally randomized to receive TL with or without PMMF, while those without low SMM will undergo standard TL. Primary outcome measurement involves assessing PCF rates within 30 days post-TL. Secondary objectives include evaluating quality of life, shoulder and neck function, swallowing function, and voice quality using standardized questionnaires and functional tests. Data will be collected through electronic patient records. Discussion This study's significance lies in its exploration of the potential benefits of using PMMF as an onlay technique during pharyngeal closure to reduce PCF rates in TL patients with low SMM. By assessing various functional outcomes, the study aims to provide a comprehensive understanding of the impact of PMMF deployment. The anticipated results will contribute valuable insights into optimizing surgical techniques to enhance patient outcomes and inform future treatment strategies for TL patients. Trial registration NL8605, registered on 11-05-2020; International Clinical Trials Registry Platform (ICTRP).
Introduction
Total laryngectomy (TL) is performed routinely in patients with primary advanced laryngeal or hypopharyngeal carcinoma with invasion of the thyroid or cricoid cartilage and/or extralaryngeal soft tissue. TL is also indicated in patients with residual or recurrent disease after treatment with chemoradiation or radiotherapy alone, and in patients with a dysfunctional larynx due to post-treatment sequelae. During TL, the separation of the swallowing and breathing pathways is established by forming both a neopharynx and a tracheostoma. A pharyngocutaneous fistula (PCF) is one of the most common postoperative complications after TL and is defined as a saliva-leaking communication between the neopharynx and the skin (see Fig. 1). PCF mostly develops between the mucosal line of the neopharynx and the surgical skin incision or, less frequently, around the tracheostoma [1,2]. Incidence rates vary between 6% and 58% in the literature [3]. In a nationwide Dutch study, an overall incidence rate of 26% was found in 324 patients undergoing TL [4].
PCF is associated with severe consequences such as prolonged hospital stay and delay or interruption of the start of oral feeding and voice rehabilitation, leading to a long healing course significantly impacting the patient's quality of life [5][6][7].In addition, PCF may cause complications such as carotid artery rupture or delay of the needed adjuvant treatment, potentially jeopardizing optimal oncologic treatment [4][5][6][7][8].PCF has even been associated with an increased risk of distant metastases after TL salvage [9].
Conservative treatment of PCF usually consists of local wound treatment and antibiotics, and the patient is fed by a nasogastric tube or parenteral nutrition.However, due to the breakdown of the mucosal suture and therefore the constant flow of saliva into surrounding soft tissues, wound healing is often impaired.Surgical closure of PCF after failure of the conservative treatment is indicated in 37-58% of the patients [2,7,10].In summary, preventing PCF holds the potential to minimize the influence of the negative outcomes on the patient's quality of life, help to avoid additional surgeries and their associated morbidity and reduce the risk of life-threatening complications.
One of the surgical strategies to minimize PCF development following TL is the transfer of a myofascial pectoralis major flap (PMMF) to the neck as onlay for reinforcement of the pharyngeal closure (see Fig. 1) [11]. It has been shown that a prophylactic PMMF reduces the risk of PCF in TL patients significantly [12][13][14], or that PCFs were smaller and less likely to require surgical repair.
Several risk factors for PCF have been described in the literature, such as prior chemoradiotherapy, the extent of the pharyngectomy, neck dissection, pre-treatment tracheostomy, preoperative albumin and low BMI [3,4,17,18]. A nationwide Dutch study showed a broad range of PCF incidence between the centers of the Dutch Head and Neck Society (NWHHT), which could not be fully explained by the prediction model developed with the risk factors known at that time. More recently, a preoperatively, radiologically assessed low skeletal muscle mass (SMM) was also found to be an independent risk factor for PCF development [18,19].
Therefore, in this randomized controlled trial (RCT), our primary aim is to investigate if the use of PMMF as onlay on the pharyngeal closure for reinforcement will reduce the PCF rate in TL patients with a high risk for PCF because of low SMM.
Primary objective
To determine and compare, among patients with low SMM, the PCF rate in those with PMMF as onlay for reinforcement with the PCF rate in those without PMMF. The PCF rate will also be evaluated in patients without low SMM and in patients who unexpectedly needed the PMMC for reconstruction of the pharynx.
Secondary objective(s)
Secondary outcome measurements will only be scored in the group with low SMM.In this group, the following outcomes are compared between the group with and without PMMF using questionnaires and function tests.
• Quality of life.
• Shoulder and neck function.
• Swallowing function and dysphagia complaints.
• Voice quality and its psychological consequences.
• Patient's perspective.
• The healthcare related costs.
Study design and population
This multicenter PECTORALIS-study is designed as a randomized clinical trial (RCT) and funded by the Dutch Cancer Society (KWF) (NL72319.041.20).
Patients who are planned for TL, will be included in this study when they: (1) have a minimum age of 18 years, (2) are mentally competent and (3) have sufficient knowledge of the Dutch language to be able to give informed consent.Patients will be enrolled by their head and neck surgical oncologist and/or by a researcher after consultation in one of the participating tertiary referral centers of the NWHHT or three Belgian (Dutch speaking) centers.Patients will be excluded for this study when they: (1) will be treated with chemoradiotherapy (with cisplatinum/ carboplatin) for a previously diagnosed head and neck carcinoma (HNC), (2) will undergo TL with reconstruction of the pharynx with myocutaneous pectoralis major (PMMC), gastric pull up or jejunal flap, (3) have major CT-or MRI-scan artefacts impeding accurate muscle tissue identification, and (4) have an interval between TL and imaging longer than 2 months.
When a patient is eligible for participation in this study, SMM will be measured using routinely performed (FDG-PET/)CT-or MRI scan of the head and neck as described below (see Fig. 2).
After informed consent, patients with low SMM will be centrally randomized between prophylactic PMMF at the time of TL or not.A stratified permuted-block procedure randomizes patients to the groups on a 1:1 ratio.Strata include treating center and concomitant neck dissection.Both primary and secondary outcome measurements as described below will be evaluated in the group with low SMM.
Patients without low SMM will undergo the TL as regularly scheduled, will not be randomized and only the primary outcome measurement will be evaluated.
Patients definitively scheduled for TL with reconstruction of the pharynx using the PMMC meet the exclusion criteria and thus will not be recruited for the study.If in an included patient, regardless of SMM and possible randomization, it is unexpectedly decided peroperatively that a PMMC is required for reconstruction of the pharynx, these patients will be followed over time.The primary outcome measurement will still be evaluated.
In conclusion, the primary outcome measure is thus evaluated in the following groups: • Patients with a low SMM who will undergo a TL with PMMF.• Patients with a low SMM who will undergo a TL without PMMF.• Patients without a low SMM who will undergo TL as regularly scheduled.
• Patients who unexpectedly need a PMMC for reconstruction of the pharynx during the TL, regardless of their SMM.
Measurement of SMM
The cross-sectional area (CSA) of the paravertebral muscles and both sternocleidomastoid muscles at the level of the third cervical vertebra (C3) will be measured using (FDG-PET/)CT or MRI. When possible, CT is preferred over MRI because setting the radiodensity range to -29 to +150 Hounsfield units (HU), which is specific for muscle tissue, aids accurate delineation of the CSA [30,31]. If MRI imaging is used, SMM will be delineated manually, excluding fatty mass. If FDG-PET/CT is available, SMM will also be measured directly at the level of the third lumbar vertebra (L3). The single axial slice at level C3 that shows both the transverse processes and the entire vertebral arch, identified by scrolling from cranial to caudal, will be selected. Segmentation of SMM will be performed using the software package SliceOmatic (Tomovision, Canada). CSA at the level of C3 will be converted to the CSA at L3 using the formula previously described by Swartz et al. [30]. The CSA at L3 will then be corrected for height, yielding the lumbar skeletal muscle index (LSMI). An LSMI of ≤ 43.2 cm2/m2 will be considered low SMM.
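To make the measurement pipeline concrete, the sketch below computes an LSMI and applies the study's cut-off. The C3-to-L3 regression coefficients are reproduced from the commonly cited formula of Swartz et al. [30] (with sex coded 1 = female, 2 = male); they are an assumption of this sketch, included only for illustration, and should be verified against the original publication before any real use.

```python
# Hedged sketch: classify low skeletal muscle mass (SMM) from a C3-level CSA.
# Coefficients follow the commonly cited Swartz et al. [30] formula and are an
# assumption of this sketch; verify them against the original paper.

def csa_l3_from_c3(csa_c3_cm2, sex, age_years, weight_kg):
    """Predict muscle CSA at L3 (cm^2) from the CSA at C3 (cm^2)."""
    return (27.304 + 1.363 * csa_c3_cm2 - 0.671 * age_years
            + 0.640 * weight_kg + 26.442 * sex)  # sex: 1 = female, 2 = male

def lumbar_smi(csa_l3_cm2, height_m):
    """Lumbar skeletal muscle index (LSMI, cm^2/m^2): CSA corrected for height."""
    return csa_l3_cm2 / height_m ** 2

def has_low_smm(csa_c3_cm2, sex, age_years, weight_kg, height_m, cutoff=43.2):
    """Protocol definition: LSMI <= 43.2 cm^2/m^2 counts as low SMM."""
    lsmi = lumbar_smi(csa_l3_from_c3(csa_c3_cm2, sex, age_years, weight_kg),
                      height_m)
    return lsmi <= cutoff
```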
Intervention
First the neopharynx will be closed. The PMMF will be harvested by elevating the muscle off the chest as for the myocutaneous pectoralis major (PMMC) flap, but without the skin and subcutaneous fat of the donor site (see Fig. 1). The muscle and its fascia will then be tunneled into the neck and sutured to different structures around the neopharynx. In this manner, the PMMF is used as a muscular vascularized flap and additional layer to cover the delicate closure of the neopharynx [11,32].
Outcome measurements Primary outcome measurement
As mentioned above, the PCF rate following TL will be scored in patients with low SMM undergoing TL with or without PMMF, in patients without low SMM (undergoing TL as regularly scheduled), and in patients who unexpectedly need a PMMC for reconstruction of the pharynx during TL, regardless of their SMM. PCF is defined as a clinical fistula requiring any form of conservative or surgical treatment occurring within 30 days after TL. To also assess the prevention of possible PCF development, the results of the swallow X-ray and their potential impact on the patient's oral intake are taken into account. This approach aims to obtain the most comprehensive evaluation of PCF incidence.
Secondary outcome measurements
In low SMM patients shoulder and neck function, swallowing function, and voice quality with their consequences on quality of life (QoL) will be investigated by questionnaires before and 6 months after TL.
Shoulder and neck function tests will also be performed before and 6 months after TL, depending on feasibility in the participating center. In addition, this latter group of patients will be recruited 3 months after TL for a voice recording and a videofluoroscopy (VFSS). Performance of these side studies will depend on the available logistics of the participating center.
Shoulder and neck function tests
Active range of motion (AROM) of the shoulders and neck will be measured in the patient group with low skeletal muscle mass before and 6 months after TL according to a standardized protocol. Flexion, abduction, rotation and extension of the shoulder and neck, and forward flexion and abduction of the shoulder, will be examined using a goniometer. The mean of two sequential measurements will be used for further analysis [43].
Patients' experienced need for neck and shoulder rehabilitation
Qualitative research will be performed by semi-structured interviews to gain insight into the patients' experiences with, and views on, the treatment and its morbidity, such as the effects on shoulder and neck function, in relation to the information and therapy provided. Data will be analyzed with a thematic analysis approach [44]. This part of the study will be performed and written according to the Standards for Reporting Qualitative Research (SRQR) [45]. Participants will be recruited until saturation is achieved, which is when no new information is identified from the last two interviews; this is expected to occur between six and twelve interviews [46,47].
The semi-structured interviews will be conducted using pre-defined topic guides.This topic guide is open to changes when interviews identify new information.All participants will be asked about possible shoulder and neck function problems, how this is handled by the patient and whether rehabilitation was required.
Swallowing function
The swallowing function of the TL patients with low SMM will be assessed by videofluoroscopy (VFSS). Patients will be offered thin liquid (thinned Micropaque), thick liquid (pure Micropaque) and a firm consistency (toast in Micropaque) in 3 steps. Each step will be performed twice.
Voice quality
The quality of the voice of patients with low SMM will be measured by the performance for voice recording and the associated Acoustic Voice Quality Index (AVQI) [48,49].AVQI is a multi-parameter model in which the outcomes of six acoustic parameters are measured and combined into one objective measure of the voice quality.
Other parameters
Patients' demographic, staging, treatment and outcome data will be collected using electronic patient records. To allow for comparison with the recent Dutch Head and Neck Society audit, the same characteristics and potential predictive factors will be evaluated [4]. The following parameters will be added: peroperative data (i.e., type of closure of the neopharynx), comorbidity scores (ACE-27 and Charlson Comorbidity Index), American Society of Anesthesiologists physical status (ASA score), WHO performance status and preoperative laboratory results, which will be analyzed from routine blood tests. General postoperative complications (other than PCF) are graded according to the Clavien-Dindo classification of surgical complications [50]. Severe complications are defined as Clavien-Dindo grade 3A or higher [41][42][43][44].
Cost-effectiveness analysis
A detailed analysis of cost and effect differences between patients having a PMMF and those receiving standard of care (no PMMF) will be performed from a healthcare perspective. All healthcare consumption for every individual patient will be collected from electronic patient files. Subsequently, units of healthcare consumption will be linked to the respective Dutch unit costs according to the available lists of the Dutch Health Care Institute. The economic evaluation will take place both via a trial-based approach and via decision-analytic modeling to extrapolate outcomes. Uncertainty of outcomes will be depicted by both deterministic and probabilistic sensitivity analyses.
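As a toy illustration of such a trial-based comparison, the sketch below computes an incremental cost-effectiveness ratio (ICER) with a nonparametric bootstrap as a simple probabilistic sensitivity analysis. All numbers are simulated placeholders, not trial data, and the effect measure (QALYs) is our assumption for the example.

```python
# Toy sketch: trial-based ICER with bootstrap uncertainty (simulated data only).
import numpy as np

rng = np.random.default_rng(1)
n = 61  # patients per arm, matching the planned sample size
cost_pmmf = rng.normal(21000, 4000, n)  # hypothetical costs, PMMF arm
cost_soc = rng.normal(19000, 6000, n)   # hypothetical costs, standard of care
eff_pmmf = rng.normal(0.72, 0.10, n)    # hypothetical effects (e.g., QALYs)
eff_soc = rng.normal(0.68, 0.12, n)

def icer(c1, c0, e1, e0):
    """Incremental cost-effectiveness ratio: delta cost / delta effect."""
    return (c1.mean() - c0.mean()) / (e1.mean() - e0.mean())

# Bootstrap: resample patients within each arm, keeping cost/effect pairs intact.
boot = []
for _ in range(2000):
    i1, i0 = rng.integers(0, n, n), rng.integers(0, n, n)
    boot.append(icer(cost_pmmf[i1], cost_soc[i0], eff_pmmf[i1], eff_soc[i0]))

print(icer(cost_pmmf, cost_soc, eff_pmmf, eff_soc))
print(np.percentile(boot, [2.5, 97.5]))  # crude uncertainty interval
```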
Power calculation
Data extracted from the meta-analyses of Paleri et al. [13] and Sayles et al. [12] revealed that the PCF rate for patients with PMMC or PMMF for reinforcement is reduced from 47/156 (0.30) to 11/114 (0.10), giving a relative risk of 0.32. After exclusion of the patients who received a reconstruction of the pharynx from the database of Bril et al. [18], the PCF rate in patients with low SMM was 31.0%. Assuming that the same relative risk as in the meta-analyses is applicable, this leads to our hypothesis that a prophylactic PMMF can reduce the PCF rate from 31.0% to 9.9%.
To show that the use of PMMF can reduce the fistula rate for TL patients with low SMM, 61 patients per arm are needed (two-sided alpha 0.05 and power 85%). With an expected drop-out of 5%, a total of 128 patients with low SMM are needed. This power calculation was performed with the program PASS (two-sided Z-test with pooled variance). Since approximately 46% of TL patients have low SMM, a total of about 276 TL patients are required to include 128 patients with low skeletal muscle mass.
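For transparency, the calculation can be roughly reproduced with the standard normal approximation for two independent proportions, as sketched below; PASS's pooled-variance Z-test and its rounding conventions may differ by a patient or two from this approximation.

```python
# Hedged re-derivation of the sample size (normal approximation for two
# independent proportions); PASS's pooled-variance Z-test may differ slightly.
from scipy.stats import norm

p1, p2 = 0.310, 0.099            # expected PCF rate without / with PMMF
alpha, power = 0.05, 0.85
z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)

n_per_arm = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2
print(n_per_arm)                 # about 61.1 -> 61 patients per arm

n_total = round(2 * 61 * 1.05)   # inflate for 5% drop-out -> 128
print(n_total, n_total / 0.46)   # screening at ~46% prevalence gives ~278,
                                 # reported as "about 276" in the protocol
```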
Statistical analysis
Our primary hypothesis is that the use of PMMF as onlay for reinforcement can reduce the PCF rate in patients with low SMM after TL from 31.0% to 9.9%. To test this hypothesis, we will compare the incidence of fistula formation in patients with low SMM between the group with PMMF (intervention arm) and the group without PMMF (control arm) by the Chi-squared test or, when needed (N < 5), Fisher's exact test. To demonstrate the association between SMM and fistula formation, the incidence of fistula formation in the control arm (low SMM without PMMF) will be compared with the incidence of fistula formation in the (non-randomized) group with normal SMM. The relative risk will be calculated with an associated 95% confidence interval. Modified Poisson regression models will be used to correct for potential confounders, such as prior radiotherapy, type of closure of the neopharynx, etc.
Results of our other outcomes will be presented as mean scores with standard deviation for continuous variables or as median with interquartile range for ordinal or non-normally distributed continuous data. Differences between groups with or without PMMF will be tested by independent t-tests for normally distributed continuous data, and by Mann-Whitney U tests for ordinal and non-normally distributed continuous data. Differences over time within the groups with or without PMMF will be tested by paired t-tests for normally distributed continuous data, and by Wilcoxon signed-rank tests for ordinal and non-normally distributed continuous data.
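As an illustration of the planned modified Poisson approach (a Poisson GLM with a robust sandwich covariance, cf. Zou 2004, which yields adjusted relative risks for a binary outcome), the sketch below fits such a model on simulated data; the data frame and its column names are hypothetical placeholders, not protocol-defined variables.

```python
# Hedged sketch of a modified Poisson regression for adjusted relative risks.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "pmmf": rng.integers(0, 2, 128),      # 1 = PMMF arm (hypothetical coding)
    "prior_rt": rng.integers(0, 2, 128),  # 1 = prior radiotherapy
})
df["pcf"] = rng.binomial(1, np.where(df["pmmf"] == 1, 0.10, 0.31))  # simulated

fit = smf.glm("pcf ~ pmmf + prior_rt", data=df,
              family=sm.families.Poisson()).fit(cov_type="HC1")

print(np.exp(fit.params))      # adjusted relative risks
print(np.exp(fit.conf_int()))  # 95% confidence intervals
```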
Analyses of semi-structured interviews
Semi-structured interviews will be analyzed by two researchers using thematic descriptive analysis [44]. This thematic analysis is an independent qualitative descriptive approach to identify, analyze and report patterns (themes) within the data. Data analysis will be performed by two researchers independently and compared after the third and the last interview, when saturation is reached. During analysis we will search for common threads that extend across the interviews. This will provide a detailed and nuanced account of the data by breaking the interview texts into relatively small units. Practically, the semi-structured interviews will be transcribed verbatim, anonymized and thoroughly read several times. Thereafter, initial codes will be generated, followed by the search for themes, reviewing these themes and finally defining and naming the themes. These themes will be reported and supported by compelling extract examples relating back to the analysis to answer the research question. Quotes from the interviews will be used to support the themes. All quotes provided in the article will be translated into English.
Discussion
Skeletal muscle mass (SMM) has emerged as a critical predictive factor for various adverse outcomes following medical interventions. For instance, in patients with HNC undergoing treatment, a low SMM has been identified as a significant risk factor for adverse events, such as PCF development subsequent to TL. Given the undesirable nature of PCF, proactive identification of individuals at risk becomes imperative. Notably, patients previously subjected to CRT for HNC have an elevated risk of PCF development and generally routinely receive PMMF reinforcement during TL. Hence, the aim of this trial is to assess whether utilizing PMMF as an onlay technique for pharyngeal closure reinforcement can effectively reduce PCF incidence among high-risk TL patients with low SMM.
Numerous techniques are available for evaluating body composition and SMM. These methodologies encompass dual-energy X-ray absorptiometry (DEXA) scans, bioelectrical impedance analysis (BIA), and imaging modalities like CT and MRI. Among these, the measurement of CSA at the level of L3 on CT scans has gained prominence due to its strong correlation with total skeletal muscle volume. To account for individual height variations, CSA is normalized using squared height, resulting in the calculation of the skeletal muscle index (SMI; cm²/m²). Recognizing the limited availability of abdominal CT scans in HNC patients, a novel approach for SMM assessment utilizing a single CT slice at the level of C3 was introduced by Swartz et al. [30]. This method exhibits robust correlations with L3 CSA measurements, further enhanced by a multivariate formula that predicts L3 CSA based on C3 CSA, gender, age, and weight. The method is validated [51], with very good interobserver and intraobserver agreement [52,53]. CSA can be measured at the level of C2, C3 or C4, and all levels showed a very strong and significant correlation with the SMI at the level of L3 [54]. However, the most effective discriminator for sarcopenia remained the level of C3 for both males and females [54], in some cases dependent on the type of HNC [55]. Measurement of CSA can be performed on CT and MRI interchangeably [52,56]. The existing methodologies enable straightforward SMM assessments using routine CT or MRI scans during HNC diagnosis and treatment evaluation. Potential influences of variables on SMM measurements, like contrast usage and slice thickness in CT scans [53,57], have been explored or are currently being investigated (to be published). The clinical relevance of small detected differences in CSA measurements will also be assessed in this research.
This study excludes patients undergoing pharyngeal reconstruction with PMMC, gastric pull-up or jejunal interponate. Patients who undergo TL with gastric pull-up reconstruction or jejunal interponate frequently undergo omentum overlay as well, which functions similarly to a PMMF. This would introduce a potential bias into the study results, and therefore these patients will be excluded.
An inherent challenge of this study pertains to defining the primary outcome measurement, the PCF. The study's PCF definition entails a clinically diagnosed communication between the neopharynx and the outside of the skin within 30 days after TL. In general, many centers perform a protocol-mandated barium swallow X-ray 7 or 10 days postoperatively. In cases where contrast leakage is detected during the swallow X-ray, a one-week delay in initiating oral intake is implemented to mitigate the potential formation of a fistula. Nevertheless, the routine performance of a swallow X-ray varies across the participating centers in this study, complicating the comparison of PCF incidence rates. To address this challenge, a questionnaire survey was conducted to assess variations in protocols related to the prevention, diagnosis or definition, and treatment of PCF among different centers within the NWHHT (to be published). Based on these results, we will collect all data on the diagnosis and development of PCF and affecting factors. This encompasses whether a clinical PCF developed within 30 days post TL, whether a swallow X-ray was conducted according to protocol or due to other considerations, whether methylene blue is used or drain fluid is tested for amylase for diagnosis of PCF, and the timing of oral intake initiation. By adopting this approach, we aim to score our primary outcome measure as completely as possible.
The secondary outcome measures encompass the impact of PMMF deployment on a range of factors, including QoL, shoulder and neck function, swallowing function, and voice quality. The harvest of the PMMF might influence shoulder and neck function, as the pectoralis major (PM) muscle contributes mainly to movement of the shoulder [58,59]. Neck and shoulder morbidity does not seem to be increased by PMMF when patients have already undergone a neck dissection [60]. In our study, in addition to specific questionnaires, we will also perform function tests by measuring the AROM of the shoulder and neck before and after surgery. This will allow data to be compared both within patients (before and 6 months after TL) and between patients (patients with low SMM and TL with and without the PMMF), thus providing the fullest possible representation of the effects of the PMMF on these functions.
Function tests of the patient's swallowing function and voice quality will also be performed (depending on the capacity of the participating center). The effect of the PMMF on swallowing function and voice quality is not yet fully understood. In particular, some studies describe a possible effect on swallowing function because of the bulkiness of a myocutaneous PM flap [61]. However, that flap contains both skin and subcutaneous fat, which significantly increases its thickness compared to the myofascial PM flap as used in this study. Possible effects of PMMF on voice quality have not been explored extensively yet. Jacobi et al. assessed surgical parameters correlating with voice quality [62]. The standard TL was compared to TL with PMMF for reinforcement (n = 10). Speech and voice measures were comparable in both groups. This means that an impact of the reconstruction with the PMMF on voice quality is not expected, but cannot be completely ruled out. Therefore, in addition to administering questionnaires on these functions, we also perform function tests.
In conclusion, this study endeavors to shed light on the efficacy of PMMF deployment as an onlay technique for reducing PCF rates among high-risk TL patients with low SMM.Also potential side-effects, e.g.shoulder morbidity, dysphagia and decreased voice quality, will be investigated to allow weighing the advantages and disadvantages of the use of the PMMF as onlay technique in the management of TL patients.With this study we hope to be able to answer the question whether patients with low SMM, and therefore a high risk of PCF development, should receive PMMF during TL as standard in the future.
Fig. 2
Fig. 2 Timeline of including patients | 2024-01-17T05:07:10.184Z | 2024-01-15T00:00:00.000 | {
"year": 2024,
"sha1": "b6a72f2306d928d917e1a643f10ecd49c94bc9c2",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "b6a72f2306d928d917e1a643f10ecd49c94bc9c2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
256965365 | pes2o/s2orc | v3-fos-license | Inertial projected gradient method for large-scale topology optimization
We present an inertial projected gradient method for solving large-scale topology optimization problems. We consider the compliance minimization problem, the heat conduction problem and the compliant mechanism problem of continua. We use the projected gradient method to efficiently treat the linear constraints of these problems. Also, inertial techniques are used to accelerate the convergence of the method. We consider an adaptive step size policy to further reduce the computational cost. The proposed method has a global convergence property. By numerical experiments, we show that the proposed method converges fast to a point satisfying the first-order optimality condition with high accuracy compared with the existing methods. The proposed method has a low computational cost per iteration, and is thus effective in a large-scale problem.
Introduction
Topology optimization is a method to obtain an optimal structural design depending on the objective by mathematical programming. The extensive study of topology optimization dates back to the seminal work by Bendsøe and Kikuchi [6] in 1988. Since then, a wide range of applications have been suggested in fluid, heat, electromagnetic, acoustic, and aerospace engineering [7,13]. A topology optimization problem of continua is formulated as an infinite-dimensional optimization problem. We can discretize the problem by the finite element method and obtain a conventional finite-dimensional optimization problem [7,9]. The discretized problem is a large-scale nonconvex optimization problem with some constraints. Moreover, it requires the finite element analysis (FEA), which amounts to the solution of a linear equation, for calculating the objective function value and the gradient of the objective function at each iteration. This property makes the computational cost of topology optimization even larger.
There are various types of approaches to reduce the computational cost of topology optimization. In topology optimization, most of the computational cost is spent on FEA. Accordingly, there are many studies on reducing the computational cost of FEA [11, 34-36, 38, 39].
In this paper, we attempt to reduce the computational cost by reducing the number of iterations using an efficient and faster optimization algorithm. In a large-scale topology optimization problem, common nonlinear optimization algorithms such as the interior-point method and the sequential quadratic programming are often impractical because of the huge iteration cost (computational cost per iteration). Therefore, algorithms designed specifically for structural (topology) optimization such as the optimality criteria method [6] and the method of moving asymptotes [31,32] are commonly used. See [27] for a comparative study on the optimization algorithms for topology optimization. Some studies on faster optimization algorithms for topology optimization are found in [20,28].
Recently, first-order optimization methods, which only require the first-order derivatives of the objective (and constraint) functions, have been attracting much attention in the machine learning community. First-order methods are suited for large-scale problems because of their low iteration cost in time and memory storage. Second-order methods such as the Newton method and the interior-point method require the (approximate) second-order derivative and the solution of linear equations; their iteration cost grows drastically as the problem size increases, and thus second-order methods are impractical for a large-scale problem. Although the convergence of first-order methods is basically slower than that of second-order methods, there are studies on accelerating the convergence of first-order methods. For an unconstrained convex optimization problem where the objective function f has a Lipschitz continuous gradient (i.e., the objective function is L-smooth), Nesterov's accelerated gradient method [23] achieves the convergence rate f(x_k) − f(x^*) ≤ O(1/k^2), while the steepest descent method converges with rate O(1/k), where k is the iteration counter and x^* is the optimal solution. The above convergence rate is often equivalently described by the iteration complexity: O(1/ε^{1/2}) iterations to achieve f(x_k) − f(x^*) ≤ ε. Beck and Teboulle [4] combined Nesterov's acceleration technique with the proximal gradient method for convex optimization problems, which is a generalization of the projected gradient method, to treat simple nondifferentiable functions and constraints.
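For concreteness, one standard constant-step form of this accelerated scheme (the FISTA-style statement from the general literature, written here as a sketch rather than quoted from [23] or [4]) reads:

```latex
% One common form of Nesterov's accelerated gradient method for an
% L-smooth convex objective f (momentum sequence as in FISTA):
\begin{aligned}
  x_{k}   &= y_{k} - \frac{1}{L}\,\nabla f(y_{k}),\\
  t_{k+1} &= \frac{1 + \sqrt{1 + 4 t_{k}^{2}}}{2}, \qquad t_{1} = 1,\\
  y_{k+1} &= x_{k} + \frac{t_{k} - 1}{t_{k+1}}\,\bigl(x_{k} - x_{k-1}\bigr).
\end{aligned}
```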
The accelerated gradient method has also been extended to optimization problems with a nonconvex objective function with Lipschitz continuous gradient. In unconstrained nonconvex optimization, the optimality measure f(x_k) − f(x*) used in convex optimization is not appropriate since there may exist multiple local minima. Therefore, the number of iterations k required to achieve ‖∇f(x_k)‖ ≤ ε (the iteration complexity) is considered instead. The steepest descent method has O(1/ε^2) iteration complexity. The accelerated gradient methods of [15,18] have the same order of iteration complexity as the steepest descent method in the nonconvex case, but attain the accelerated convergence rate of Nesterov's accelerated gradient method in the convex case. Although they do not have theoretically improved convergence rates in nonconvex optimization, their empirical performance is expected to be better than that of first-order methods without acceleration, since the iteration complexity is a worst-case bound over all L-smooth functions f. With the additional assumption of Lipschitz continuity of the Hessian of the objective function, Carmon et al. [10] achieved an improved iteration complexity of O(ε^{−7/4} log(1/ε)) and Li and Lin [19] achieved O(ε^{−7/4}); the former method requires a more complicated update scheme, and the methods of [10,19] treat only unconstrained problems. Accelerated gradient methods for nonconvex optimization are still developing. See [3,12,22] for more details on first-order methods and their acceleration.
Although the accelerated proximal gradient method has been applied to optimization problems in computational plasticity [17,29,30], there are very few applications to topology optimization. Li and Zhang [21] applied the accelerated mirror descent method to a robust topology optimization problem under stochastic load uncertainty. They used stochastic optimization techniques to efficiently obtain a robust design. However, as they applied a convex optimization algorithm to nonconvex optimization problems, the convergence of the method is not guaranteed.
In [24], the authors applied the accelerated projected gradient method based on [15] to the compliance minimization problem in topology optimization. Although that method is called the "accelerated" projected gradient method, its convergence rate is not theoretically improved over the classical projected gradient method for a nonconvex objective function. Moreover, to guarantee convergence, the method requires additional FEAs at each iteration. Therefore, in this paper, we adopt an inertial projected gradient method based on iPiano [25] instead. The main advantages of this algorithm are that it contains no auxiliary variables and that it requires a smaller number of FEAs than [15] to guarantee global convergence. It has an inertial term in its update formula to accelerate convergence. Although the theoretical convergence rate is the same as that of the projected gradient method (and that of the method in [15]), its practical performance is expected to be better than that of the projected gradient method. We consider an adaptive step size policy to further reduce the computational cost. The proposed method is easy to implement and is guaranteed to converge to a stationary point satisfying the first-order optimality condition. This convergence guarantee is important for properly stopping the algorithm and obtaining a high-quality solution. We also extend the results to the heat conduction problem and the compliant mechanism problem. We show that the projection onto the feasible set can be easily calculated for each of the equality and inequality constraints on the structural volume, and thus the inertial projected gradient method can be efficiently applied to the topology optimization problems considered in this paper.
In numerical experiments, we consider the compliance minimization problem, the heat conduction problem and the compliant mechanism problem, and compare the proposed method with the optimality criteria method [6], the method of moving asymptotes (MMA) [31][32][33], and MATLAB fmincon (interior-point method and sequential quadratic programming). We show that the proposed method has a low iteration cost and converges fast. Moreover, the solution obtained by the proposed method satisfies the first-order optimality condition with higher accuracy than those obtained by the existing methods.
This paper is organized as follows: In Sect. 2, we provide the fundamentals of topology optimization and the problem formulation. In Sect. 3, we briefly discuss the projected gradient method and the projection onto the feasible set of topology optimization problems. Then, we explain the proposed method, the inertial projected gradient method and its step size policy. In Sect. 4, we show the results of numerical experiments. Finally, some concluding remarks are provided in Sect. 5.
All norms ‖·‖ in this paper are the Euclidean norm of a vector. The inner product is denoted by ⟨·, ·⟩. We use 0 and 1 to denote the vectors with all components equal to 0 and 1, respectively. Moreover, max{0, ·} and min{1, ·} are componentwise operations acting on a vector.
Problem formulation
We consider three topology optimization problems with simple linear constraints: the compliance minimization problem, the heat conduction problem and the compliant mechanism problem. The problem setting in this paper is based on [1,7].
Consider a topology optimization problem whose design domain is discretized by the conventional finite element method. An example of the discretization of the design domain is shown in Fig. 1. For simplicity, we divide the design domain into n identical square finite elements with unit volume. The design variable of the optimization problem is the density vector x ∈ R^n, whose eth component x_e denotes the density of the eth finite element. Each density x_e takes a value in [0, 1]: when x_e = 0, element e is regarded as void, and when x_e = 1, element e is regarded as material. Thus x corresponds to a design of the structure. We use the SIMP (solid isotropic material with penalization) method [5] to penalize intermediate values in (0, 1).
Compliance minimization problem
Consider the compliance minimization problem shown in Fig. 2. The top figure in Fig. 2 describes an example of the problem setting and the bottom figure describes the optimal solution of the discretized problem with uniform square finite elements as in Fig. 1. The aim is to maximize the stiffness of the structure when the external force is applied. In the SIMP method, the global stiffness matrix can be defined as

K(x̃) = Σ_{e=1}^{n} (E_min + x̃_e^p (E_0 − E_min)) K_e,    (1)

where p > 1 is the penalty parameter, E_0 ≫ E_min > 0 are constants, and K_e is the local stiffness matrix, which is a constant symmetric matrix. In addition, we use the density filter [9] to prevent mesh dependency (refining the mesh would otherwise lead to a different optimal structural design, not a refined structural design). The density filter is a linear operator acting on the density vector x; using a constant matrix H ∈ R^{n×n}, the filtered density vector can be written as x̃ = Hx.
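For concreteness, the SIMP interpolation and the density filter can be realized in a few lines of MATLAB. The sketch below follows the conventions of the public 88-line code [1]; the names xPhys, KE, iK, jK, Hs, nelx and nely are taken from that code and are our assumptions, not notation from this paper.

```matlab
% Minimal sketch of SIMP and the density filter, assuming the 88-line-code
% conventions [1]; the filter matrix H is stored with its row sums Hs.
xPhys = (H*x(:))./Hs;                      % filtered density, xtilde = Hx
E = Emin + xPhys.^p*(E0 - Emin);           % SIMP interpolation, Eq. (1)
sK = reshape(KE(:)*E', 64*nelx*nely, 1);   % scale the 8x8 local matrix KE
K = sparse(iK, jK, sK); K = (K + K')/2;    % assemble global stiffness K(xtilde)
```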
The compliance minimization problem is defined as follows:

Minimize_{x∈R^n, x̃∈R^n, u∈R^m}  p^T u
subject to  K(x̃)u = p,  1^T x̃ = V_0,  0 ≤ x ≤ 1,  x̃ = Hx.    (2)

Here, p ∈ R^m is the constant load vector, u ∈ R^m is the global nodal displacement vector, m is the number of degrees of freedom of the nodal displacements, and V_0 ∈ (0, n) is the upper limit of the structural volume. Problem (2) can be rewritten as the following optimization problem with a nonconvex objective function and linear constraints:

Minimize_{x∈R^n}  f(x) = p^T K(Hx)^{−1} p
subject to  1^T Hx = V_0,  0 ≤ x ≤ 1.    (3)

Note that, in practice, we do not calculate the inverse of the global stiffness matrix; rather, we solve the equilibrium equation K(x̃)u = p (corresponding to FEA) at each iteration. Subsequently, we use the solution u of the FEA to calculate the objective function p^T u. The gradient of the objective function is calculated by substituting u into the following formula:

∂f/∂x̃_e = −p x̃_e^{p−1} (E_0 − E_min) u_e^T K_e u_e ≤ 0,    (4)

where u_e is the displacement vector of element e and the inequality holds because K_e is positive semidefinite. Hence every component of the gradient is non-positive, so the volume constraint is active at an optimal solution. This is why we consider an equality volume constraint in (2).
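One FEA then yields both the objective and its gradient. A hedged sketch in the same 88-line-code conventions (U, F, freedofs and edofMat are names from that code; boundary-condition setup is omitted):

```matlab
% Objective and gradient from a single FEA; a sketch, not the code of [24].
U(freedofs) = K(freedofs,freedofs)\F(freedofs);   % FEA: solve K(xtilde)u = p
ce = sum((U(edofMat)*KE).*U(edofMat), 2);         % element energies u_e'*K_e*u_e
f  = sum((Emin + xPhys.^p*(E0 - Emin)).*ce);      % compliance p'u
dfdxt = -p*(E0 - Emin)*xPhys.^(p-1).*ce;          % Eq. (4), w.r.t. xtilde
dfdx  = H'*(dfdxt./Hs);                           % chain rule through the filter
```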
Heat conduction problem
The heat conduction problem aims to maximize the heat conduction from the entire design domain under uniformly distributed heating to the designated region where the temperature is constant T (lower than that of the entire design domain) as shown in Fig. 3. It can be formulated in the same way as the compliance minimization problem (2). The vector u, the stationary solution of the discretized heat equation, consists of the temperature of each node. Note that the steady state heat equation and the equilibrium equation of linear elasticity are both described by the Poisson equation.
We put p = c_T 1 with a scaling parameter c_T. Then problem (2) corresponds to the minimization of the average temperature of the design domain, i.e., the problem of finding the optimal shape of the heat conductor that minimizes the average temperature of the design domain (Fig. 3 shows the problem setting and its optimal design). See [7] for details.
Compliant mechanism problem
A compliant mechanism transmits force and motion through elastic body deformation. In Fig. 4, we aim to design a compliant mechanism that maximizes the displacement in the direction of the vector u_out when the force is applied in the direction of the vector p. Springs with stiffnesses k_in and k_out are added at the points where p is applied and u_out is measured, respectively. The compliant mechanism problem is defined as follows [7]:

Minimize_{x∈R^n, x̃∈R^n, u∈R^m}  q^T u
subject to  K(x̃)u = p,  1^T x̃ ≤ V_0,  0 ≤ x ≤ 1,  x̃ = Hx.    (5)

The main difference from the compliance minimization problem is the objective function: the coefficient vector q in the objective function is different from the right-hand side p of the equilibrium equation. The gradient of the objective function is calculated by

∂(q^T u)/∂x̃_e = −p x̃_e^{p−1} (E_0 − E_min) ū_e^T K_e u_e,    (6)

where ū is the solution of the so-called adjoint equation K(x̃)ū = q. This equation can be solved efficiently using the Cholesky decomposition, because the coefficient matrix is the same as that of the equilibrium equation. Note that a component of the gradient is not necessarily non-positive in this case, and thus the volume constraint is imposed as an inequality constraint. Problems (2) and (5) can be written as

Minimize_{x∈R^n}  f(x)  subject to  x ∈ S,    (7)

where S is the feasible set defined by

S = {x ∈ R^n : v^T x = V_0, 0 ≤ x ≤ 1}    (8)

for the equality volume constraint, or

S = {x ∈ R^n : v^T x ≤ V_0, 0 ≤ x ≤ 1}    (9)

for the inequality volume constraint. Although we have v = H^T 1 in this paper, the coefficients of the volume constraint are in general not necessarily equal to 1 (as the volumes of the elements may differ from each other). In the next section, we present algorithms to solve problem (7).
Projected gradient method
The projected gradient method [16] is a classical optimization algorithm for an optimization problem of the form (7) with a smooth objective function f : R^n → R and a closed convex feasible set S ⊂ R^n. It is a special case of the proximal gradient method, which has been attracting much attention in recent years [3,4,22]. The projected gradient method finds a solution by repeating the following update starting from an initial point x_0 ∈ S:

x_{k+1} = Π_S(x_k − α_k ∇f(x_k)).    (10)

Here, α_k > 0 is the step size, and Π_S(w) ∈ S is the projection of a given vector w ∈ R^n onto S, defined by

Π_S(w) = argmin_{x∈S} ‖x − w‖.    (11)

That is, Π_S(w) is the closest point in S to w. The projected gradient method coincides with the steepest descent method when S = R^n.
Projection onto the feasible set
To use the projected gradient method effectively, the projection onto the feasible set must be easy to calculate. We present an easy way to calculate the projection in our problems, for both the equality volume constraint case (8) and the inequality volume constraint case (9). The projection algorithm for our problems is similar to that for the projection onto the probability simplex (see, e.g., [26]).
Case of equality volume constraint
Consider S in (8). The projection Π_S(w) is equal to the unique optimal solution of the following problem:

Minimize_{x∈R^n} (1/2)‖x − w‖²  subject to  v^T x = V_0,  0 ≤ x ≤ 1.    (12)

This is a convex optimization problem, and the following KKT (Karush–Kuhn–Tucker) conditions are necessary and sufficient for optimality:

x − w + μv − λ + ν = 0,  v^T x = V_0,  0 ≤ x ≤ 1,
λ ≥ 0,  ν ≥ 0,  λ_e x_e = 0,  ν_e(x_e − 1) = 0  (e = 1, ..., n),    (13)

where λ, ν ∈ R^n and μ ∈ R are the Lagrange multipliers. We find the unique point x that satisfies (13) for a given w, and it is the projection Π_S(w). By the first equality in (13) together with the complementarity conditions, we obtain

x = min{1, max{0, w − μv}}.    (14)

Then, for a given w, x is a function depending only on μ:

x(μ; w) = min{1, max{0, w − μv}}.    (15)

Therefore, we need to find μ* such that x(μ*; w) satisfies the remaining condition in (13), namely the volume constraint v^T x(μ*; w) = V_0. As v^T x(μ; w) is a monotonically decreasing piecewise linear function of μ and 0 < V_0 < n, μ* exists in the interval

[min_e (w_e − 1)/v_e, max_e w_e/v_e].    (16)

In practice, all we need to do is find the solution by, e.g., the bisection method (in the numerical experiments, we use the MATLAB fzero function). The projection is then calculated by Π_S(w) = x(μ*; w).
Case of inequality volume constraint
In the case that the volume constraint is an inequality constraint, the projection can be calculated in a manner similar to Sect. 3.2.1. The KKT conditions are the same as (13) except that the volume constraint becomes v^T x ≤ V_0 with μ ≥ 0 and the complementarity condition

μ(v^T x − V_0) = 0.    (17)

If v^T x(0; w) ≤ V_0, then μ* = 0 satisfies (17), and hence

μ* = 0  if the volume constraint is inactive; otherwise μ* > 0 solves v^T x(μ*; w) = V_0.    (18)

Then the projection is written as

Π_S(w) = x(μ*; w) = min{1, max{0, w − μ*v}}.    (19)

The constraints of the topology optimization problems in this paper are expressed as the intersection of box constraints (a ≤ x ≤ b) and a single linear constraint. Therefore, we only need to find the scalar Lagrange multiplier μ, which can be computed efficiently by, e.g., the bisection method. Note that, in a problem with general linear constraints, the calculation of the projection becomes a convex quadratic program, which is computationally expensive in a large-scale problem.
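A minimal MATLAB sketch of this projection, covering both the equality case (8) and the inequality case (9), may look as follows; projectS, its bracketing interval and the plain bisection loop are our illustration (the numerical experiments in this paper use fzero instead), and v > 0 componentwise is assumed.

```matlab
% Projection onto S by bisection on the scalar multiplier mu; a sketch.
function xp = projectS(w, v, V0, ineq)
  clip = @(mu) min(1, max(0, w - mu*v));   % x(mu; w) of Eq. (15)
  g = @(mu) v'*clip(mu) - V0;              % monotonically decreasing in mu
  if ineq && g(0) <= 0                     % inequality case, constraint inactive:
    xp = clip(0); return;                  % KKT gives mu* = 0, Eq. (18)
  end
  lo = min((w - 1)./v);                    % g(lo) >= 0 (all components at 1)
  hi = max(w./v);                          % g(hi) <= 0 (all components at 0)
  for it = 1:100                           % bisection for v'*x(mu; w) = V0
    mid = (lo + hi)/2;
    if g(mid) > 0, lo = mid; else, hi = mid; end
  end
  xp = clip((lo + hi)/2);
end
```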
Inertial projected gradient method
The projected gradient method has a low iteration cost and is suited for a large-scale optimization problem such as a topology optimization problem. However, the convergence of the projected gradient method is not very fast. Recently, the acceleration techniques of the projected gradient method and, more generally, the proximal gradient method have been attracting much attention.
There are several different kinds of acceleration techniques for the projected gradient method for nonconvex optimization. We adopt iPiano (inertial proximal algorithm for nonconvex optimization) [25] to solve topology optimization problems. Its simple update scheme is suited for topology optimization. It does not require additional evaluations of the objective value. Most accelerated projected gradient methods [15,18] require evaluations of the objective value more than once at each iteration to update the design variable or to guarantee the convergence. Moreover, some methods [37] require FEA at an infeasible point where the global stiffness matrix may become singular. Although iPiano does not have a faster convergence rate than the projected gradient method, in the numerical examples we show that it is practically faster than the projected gradient method for topology optimization problems.
In iPiano, the design variable is updated as follows:

x_{k+1} = Π_S(x_k − α_k ∇f(x_k) + β_k(x_k − x_{k−1})),    (20)

where α_k > 0 and β_k ≥ 0 are step size parameters discussed in Sect. 3.4. The term β_k(x_k − x_{k−1}) in (20) is a so-called inertial (or momentum) term, which accelerates convergence. When β_k ≡ 0, (20) coincides with the classical projected gradient method (10).
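In code, one iteration of (20) is a single projected step. A sketch, reusing the projectS routine above; gradf is a hypothetical helper returning ∇f(x) via one FEA:

```matlab
% One iPiano iteration, Eq. (20); alpha and beta are chosen in Sect. 3.4.
x_new = projectS(x - alpha*gradf(x) + beta*(x - x_old), v, V0, ineq);
x_old = x;  x = x_new;
```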
Step size policy
To achieve fast and guaranteed convergence to a stationary point, the choice of the step size parameters is crucial. One choice is to use constant step size parameters. To choose constant step size parameters of a first-order method, the Lipschitz constant of the gradient of the objective function is often used as a guideline. For a differentiable function f : R^n → R and a set D ⊂ R^n, suppose there exists L > 0 such that

‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖  for all x, y ∈ D.    (21)

If such an L exists, we say that f is L-smooth over D. Many first-order methods, including the proposed method, assume the L-smoothness of the objective function. In the topology optimization problems in Sect. 2, the objective functions are L-smooth over S, since they are rational functions and twice continuously differentiable on [0, 1]^n. See [3] for more details on L-smoothness. In [25], the following condition for constant step size parameters of iPiano (20) to guarantee convergence is introduced:

β_k ≡ β ∈ [0, 1),  α_k ≡ α < 2(1 − β)/L.    (22)

Although this constant step size policy is simple, it has two drawbacks. One is that it requires a good estimate of L, the Lipschitz constant of the gradient of the objective function; this estimation is difficult in topology optimization. The other is that a constant step size cannot benefit from a smaller local value of L. The Lipschitz constant over D′ ⊂ D can be much smaller than the Lipschitz constant over D, and the points generated by an algorithm may be restricted to a smaller subset of D as the iterations progress. In this case, the step sizes acceptable for the convergence guarantee become larger than (22) as the iterations progress. For faster convergence, it is better to adjust the step size parameters at each iteration.
In the case that the step size parameters change at each iteration, they must satisfy the step size conditions (23) of [25] to guarantee convergence, in which b = (a_1 + L_k/2)/(a_2 + L_k/2) and a_1 ≥ a_2 > 0 are constant parameters, and L_k is a parameter satisfying the following descent condition:

f(x_{k+1}) ≤ f(x_k) + ⟨∇f(x_k), x_{k+1} − x_k⟩ + (L_k/2)‖x_{k+1} − x_k‖².    (24)

Note that if L_k ≥ L, then (24) is always satisfied (see, e.g., [3] for the proof). However, too large an L_k leads to a small step size and hence slow convergence, while too small an L_k leads to a large step size and numerical instability or even divergence. Thus, we need to choose L_k appropriately for fast and stable convergence.
Backtracking is a popular way to choose the step size parameter L k in a first-order method (see e.g. [3]). In the backtracking procedure, we start with a sufficiently small initial value s for L k and repeat multiplying η > 1 until the descent condition (24) is satisfied, i.e. we set L k = sη l where l is the smallest nonnegative integer such that L k = sη l satisfies (24). However, the backtracking procedure requires evaluations of the objective value many times to check if the descent condition (24) is satisfied (Note that if we change the value of L k , the objective value f (x k+1 ) of the left-hand side of (24) changes). This means we need to perform FEA many times to decide the step size parameters, which is computationally expensive.
Therefore, we estimate the initial value for the backtracking procedure by

L_k = max{ L_min, ‖∇f(x_k) − ∇f(x_{k−1})‖ / ‖x_k − x_{k−1}‖ },    (25)

where L_min is a small positive constant introduced to avoid numerical instability, and we choose a sufficiently large L_0 for the first iteration of the inertial projected gradient method. If L_k in (25) does not satisfy (24), then we update L_k ← ηL_k in the same way as in the conventional backtracking procedure. The estimate (25) is motivated by the definition of L in (21). By definition, L_k in (25) is no smaller than L_min and no greater than L. Although L_k ≥ L is a sufficient condition for (24), in the numerical examples L_k in (25) satisfies the descent condition (24) in most cases, and no additional FEAs are needed. With this step size policy, we can automatically adjust the step size regardless of the problem setting (e.g., the design domain, the boundary conditions and the number of finite elements).
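A sketch of this policy in MATLAB; the helper stepsizes, which stands for the step size rule of condition (23), and the variable names are our assumptions:

```matlab
% Step size policy for k >= 1 (L0 is used at k = 0): initialize L_k by
% Eq. (25), then backtrack until the descent condition (24) holds. Note
% that each evaluation of f(x_try) costs one FEA.
Lk = max(Lmin, norm(g - g_old)/norm(x - x_old));
while true
  [alpha, beta] = stepsizes(Lk, a1, a2);        % encodes condition (23) of [25]
  x_try = projectS(x - alpha*g + beta*(x - x_old), v, V0, ineq);
  d = x_try - x;
  if f(x_try) <= fx + g'*d + (Lk/2)*(d'*d)      % descent condition (24)
    break;                                      % usually holds at once, so
  end                                           % almost no extra FEA is needed
  Lk = eta*Lk;                                  % conventional backtracking
end
```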
Remark
The step size (25) is related to the Barzilai–Borwein step sizes [2]. For simplicity of notation, we set s_{k−1} = x_k − x_{k−1} and y_{k−1} = ∇f(x_k) − ∇f(x_{k−1}). Then

⟨s_{k−1}, y_{k−1}⟩/‖s_{k−1}‖² ≤ ‖y_{k−1}‖/‖s_{k−1}‖ ≤ ‖y_{k−1}‖²/⟨s_{k−1}, y_{k−1}⟩    (26)

immediately follows from the Cauchy–Schwarz inequality (when ⟨s_{k−1}, y_{k−1}⟩ > 0). The right-hand side and the left-hand side of (26) are the inverses of the Barzilai–Borwein step sizes. The Barzilai–Borwein step sizes are derived from an approximation to the secant equation underlying the quasi-Newton method, and converge fast for convex quadratic programming. We could use the inverse of one of the Barzilai–Borwein step sizes instead of ‖y_{k−1}‖/‖s_{k−1}‖ in (25). However, a large step size leads to many FEAs, and a small step size leads to slow convergence; thus we use (25).
Based on all of the above discussion, the inertial projected gradient method for topology optimization is summarized in Algorithm 1. The stopping criteria are discussed in the next section. Apart from the FEA, each iteration of Algorithm 1 consists only of vector additions, scalar multiplications and projections, and is thus computationally cheap even if the problem size is very large.
Numerical examples
We conduct the numerical experiments on three examples: the compliance minimization problem, the heat conduction problem and the compliant mechanism problem. We compare the proposed method with popular optimization algorithms for topology optimization: the optimality criteria method (OC) [1,6] and the globally convergent version of the method of moving asymptotes (GCMMA) [32,33]. As GCMMA is designed for inequality-constrained optimization problems, for the numerical experiments on GCMMA we change the volume constraints in the compliance minimization problems and the heat conduction problems to the inequality volume constraint v^T x ≤ V_0. We also make comparisons with general nonlinear optimization algorithms: the interior-point method (IPM) and sequential quadratic programming (SQP) of MATLAB fmincon. We use the limited-memory BFGS (L-BFGS) formula for the Hessian approximation in IPM and SQP. The L-BFGS formula has a low iteration cost and is better suited to a large-scale problem than the BFGS formula or the exact Hessian.
The experiments were run on an iMac (Intel(R) Core i9, 3.6 GHz CPU, 128 GB RAM) with MATLAB R2020b. The MATLAB code of topology optimization is based on [1,7,14]. The following values are common to all the experiments: E_0 = 1, E_min = 10^−3 and p = 3. The Poisson ratio in the local stiffness matrix K_e (e = 1, ..., n) is 0.3. The filter radius used for the density filter is 0.05 times the number of elements in the horizontal direction. The initial point of each algorithm is x_0 = (V_0/n)1. The parameters of the proposed method are as follows: L_0 = 10, L_min = 10^−3, η = 1.5, a_1 = 0.1 and a_2 = 10^−6.
Optimality measure and stopping criterion
The proposed method aims to find a stationary point of problem (7), i.e., a point satisfying the first-order optimality condition (see, e.g., [8]):

⟨∇f(x*), x − x*⟩ ≥ 0  for all x ∈ S.    (27)

For a given differentiable function f : R^n → R, a convex set S ⊂ R^n and α > 0, define the gradient mapping G_α : R^n → R^n by

G_α(x) = (1/α)(x − Π_S(x − α∇f(x))).    (28)

As a first-order optimality measure, we use the Euclidean norm of the gradient mapping, ‖G_α(x)‖. One easily sees that G_α(x) coincides with the gradient ∇f(x) when S = R^n; thus the gradient mapping is a generalization of the gradient. Moreover, G_α(x) is a continuous function of x, and G_α(x) = 0 if and only if x is a stationary point of a problem of the form (7) (see [3] for the proof). Thus, we can use ‖G_α(x)‖ as a first-order optimality measure at x. We set α = 1 for simplicity. Note that G_1(x_k) corresponds to the proximal residual defined in [25]. This optimality measure can be used for any algorithm. We calculate ‖G_1(x_k)‖ at each iteration of each algorithm, independently of the update of the design variables x_k, so that we can equally measure the first-order optimality of each point generated by each algorithm. We use ‖G_1(x_k)‖ < ε as the stopping criterion for a sufficiently small ε > 0.
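In code, this optimality measure costs one extra projection per iteration; a sketch reusing projectS from Sect. 3.2, with g the gradient from the most recent FEA:

```matlab
% Gradient mapping with alpha = 1: G_1(x) = x - Pi_S(x - grad f(x)).
G1 = x - projectS(x - g, v, V0, ineq);
stop = norm(G1) < 1e-3;        % stopping criterion used in Sect. 4
```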
The reason why we adopt the gradient mapping for comparison is that other choices are inaccurate or unable to compare the optimality of points generated by different algorithms on an equal footing. As the projected gradient method and OC do not calculate the Lagrange multipliers at each iteration, it is difficult to adopt the KKT residual norm, which is used in GCMMA and MATLAB fmincon. Also, the change of the objective function value, f(x_{k+1}) − f(x_k), or of the design variable, ‖x_{k+1} − x_k‖, can be strongly influenced by the step size: if we choose an arbitrarily small step size, these values become arbitrarily small, and the algorithm terminates after very few iterations even though the current point is not optimal.
Compliance minimization problem
We consider the compliance minimization problem of the MBB beam shown in Fig. 2. Note that, by utilizing the symmetry, we consider only the right half of the entire design domain. The upper limit of the volume is V 0 = 0.5n. The magnitude of the external force is 1.
Effectiveness of acceleration and step size policy
We compare the proposed method with the original (non-inertial) projected gradient method (PG) to show the effectiveness of the acceleration by the inertial term. Also, to show the effectiveness of the proposed step size policy, we compare it with the constant step size policy (22).
The objective function value and the norm of the gradient mapping over 500 iterations for n = 2700 are shown in Figs. 5 and 6, respectively. Note that Fig. 5 omits the iterations after 100 because the subsequent changes are small. We also omit figures of the obtained solutions, as there is little visible difference.
Although the proposed method does not have a theoretically improved convergence rate, both the objective function value and the norm of the gradient mapping decrease faster than for PG, as shown in Figs. 5 and 6. Moreover, when we use a constant step size parameter L_k = 10 (∀k) or L_k = 0.5 (∀k), the convergence becomes slow. In particular, the result for L_k = 0.5 shows that too large a step size leads to numerical instability (large objective values in the first few iterations). In contrast, the step size parameter L_k of the proposed method changes drastically in the first few iterations, as shown in Fig. 7. This shows that the proposed step size policy effectively adjusts the step size for faster convergence.
Comparison with existing methods
We compare the proposed method with OC, GCMMA, IPM and SQP. We use the same stopping criterion ‖G_1(x_k)‖ < 10^−3 for all the algorithms. The maximum number of iterations is 2000. The total computational time and the computational time per iteration (in seconds) versus the number of finite elements n are shown in Figs. 8 and 9, respectively. Note that the graphs of the proposed method and OC overlap in Fig. 9. The computational time of SQP is shown only for small values of n because it increases rapidly as n becomes large. To see how the objective value and the optimality measure decrease, we show these values for the proposed method, OC, GCMMA and IPM over 2000 iterations for n = 10,800 in Figs. 10 and 11. Note that the proposed method satisfies the stopping criterion well before 2000 iterations. Table 1 lists the detailed results: the number of iterations "iter.", the number of FEAs, the total computational time t, the computational time per iteration t_it, and the objective value f(x) and the Euclidean norm of the gradient mapping ‖G_1(x)‖ at the last iteration. Note that IPM automatically stops before 2000 iterations for n = 10,800, 19,200 and 30,000, although the obtained solutions do not satisfy the stopping criterion; this is because MATLAB fmincon stops automatically if the step size becomes too small.
From Figs. 8 and 9, we see that the computational cost per iteration of IPM and SQP increases drastically as the number of finite elements n increases. SQP has a particularly high iteration cost, as it solves a quadratic programming problem at each iteration. Therefore, these nonlinear programming solvers are impractical for a large-scale optimization problem; we omit IPM and SQP in the numerical experiments hereafter, as they are particularly slow. GCMMA also has a high iteration cost compared with the proposed method and OC, as it solves a convex subproblem at each iteration. The proposed method and OC consist of vector additions, scalar multiplications and the bisection method (the solution of a single-variable equation), and hence have low iteration costs. The number of FEAs of the proposed method in Table 1 shows that the proposed step size policy effectively estimates an appropriate step size for stable convergence, because almost no additional FEAs are needed. As shown in Table 1, OC and GCMMA do not stop before the 2000th iteration. In fact, Fig. 11 shows that the optimality measures of OC and GCMMA do not decrease sufficiently. Note that OC is a heuristic algorithm and its convergence to a stationary point is not guaranteed. In contrast, the proposed method stops after fewer iterations than the other algorithms, and hence has a shorter total computational time, as observed in Fig. 8. Figure 10 shows that the objective function value of the proposed method also decreases faster than those of the other algorithms. Moreover, the solution of the proposed method satisfies the optimality condition with higher accuracy, as shown in Fig. 11, which means that it is a more reliable optimal solution. In Fig. 12, the obtained solutions differ slightly (compare the angles of the right inclined bars). There is no guarantee that the solution obtained by OC is a local optimum, since OC is heuristic (its objective function value is larger, as shown in Table 1). In contrast, the solution obtained by the proposed method can be regarded as at least a stationary point.
Large-scale problems
To show the effectiveness of the proposed method for large-scale problems, we make a comparison with OC and GCMMA. To see the practical performance, we use different stopping criteria, each of which is commonly used for the respective algorithm. The stopping criteria of the proposed method and OC are ‖G_1(x_k)‖ < 10^−3 [14,27] and ‖x_{k+1} − x_k‖ < 10^−3 [25], respectively. GCMMA stops when the Euclidean norm of the KKT residual [32,33] is less than 10^−3. The maximum number of iterations is 3000. We show the number of iterations until the stopping criteria are satisfied for small-scale problems in Fig. 13. OC satisfies the stopping criterion only for small values of n, as shown in Fig. 13. As OC does not satisfy the stopping criterion within the maximum number of iterations, the computational time of OC becomes huge when n gets large. Therefore, we omit OC for large-scale problems. A convergence guarantee is important to properly stop the algorithm and obtain a high-quality solution.
The total computational time and the optimality measure ‖G_1(x_k)‖ of the obtained solutions for large-scale problems are shown in Figs. 14 and 15, respectively. We also add the results of the proposed method with the stopping criterion ‖G_1(x_k)‖ < 10^−2. Figure 14 shows that the proposed method and GCMMA converge in a moderate amount of time for large-scale problems. However, the solutions of GCMMA satisfy the optimality condition only with low accuracy compared with those of the proposed method with ‖G_1(x_k)‖ < 10^−3, as shown in Fig. 15. The proposed method with the stopping criterion ‖G_1(x_k)‖ < 10^−2 obtains solutions satisfying the optimality condition with similar or higher accuracy than those of GCMMA in a much shorter computational time. This shows the effectiveness of the proposed method for obtaining an optimal solution of moderate accuracy in a large-scale problem.
Heat conduction problem
In this section, we consider the heat conduction problem shown in Fig. 3. We use the following parameters: V 0 = 0.4n and p = (10/n)1.
We compare the proposed method with OC and GCMMA. The stopping criterion of the algorithms is ‖G_1(x_k)‖ < 10^−3 and the maximum number of iterations is 2000. The total computational time versus the number of finite elements n is shown in Fig. 16; the objective value and the optimality measure are shown in Figs. 17 and 18, respectively. Note that the proposed method satisfies the stopping criterion before 2000 iterations. The designs obtained by the three algorithms are shown in Fig. 19. Table 2 lists the detailed results in the same manner as Table 1 for the compliance minimization problem. Figure 16 and Table 2 show a trend similar to the compliance minimization problem: the proposed method has a low iteration cost, converges faster, and satisfies the optimality condition with higher accuracy than the other methods. Figure 19 shows that the designs obtained by the three algorithms differ from each other. This suggests that the heat conduction problem has more local optimal solutions than the compliance minimization problem.
Compliant mechanism problem
In this section, we consider the compliant mechanism problem shown in Fig. 4. Note that, by utilizing the symmetry, we consider only the lower half of the entire design domain. We use the following parameters: V_0 = 0.3n and k_in = k_out = 0.01. The magnitude of the external force is 1. We compare the proposed method with OC and GCMMA. The stopping criteria of the algorithms are ‖G_1(x_k)‖ < 10^−3 and f(x_k) < −0.1; the latter criterion is added to obtain a meaningful solution. The direction of the vector u_out in Fig. 4 is the negative direction of the nodal displacement in the global coordinate system; therefore, we seek a solution with a negative objective value. The maximum number of iterations is 2000. The total computational time versus the number of finite elements n is shown in Fig. 20. The objective value and the optimality measure over 2000 iterations for n = 9800 are shown in Figs. 21 and 22, respectively. The designs obtained by the three algorithms are shown in Fig. 23. Table 3 lists the detailed results in the same manner as Table 1 for the compliance minimization problem. Figure 20 and Table 3 show a trend similar to the compliance minimization problem: the proposed method has a low iteration cost, converges faster, and satisfies the optimality condition with higher accuracy than the other methods. However, as shown in Fig. 21, the decrease of the objective value of the proposed method slows down in the region where the sign of the objective value changes (i.e., where the direction of the displacement of the output node changes). A typical design in that region is shown in Fig. 24. In that region, the norm of the gradient of the objective function is small. GCMMA also slows down in that region: it required 25 evaluations of the objective value in the first 5 iterations. Figure 23 shows that the designs obtained by the three algorithms are similar to each other.
Concluding remarks
In this paper, we have considered topology optimization problems whose feasible set is described by a single linear equality or inequality constraint with a box constraint. For such problems, we have shown that the projection onto the feasible set can be efficiently computed, and hence the projected gradient method can be applied effectively. We have proposed to use an inertial version of the projected gradient method by Ochs et al. [25] to accelerate the convergence, and we have also considered an adaptive step size policy to further reduce the computational cost. The proposed method is easy to implement. Moreover, the proposed method has the global convergence property.
In numerical examples, we have shown that the iteration cost of the proposed method is as low as that of the optimality criteria method. It has been demonstrated that the conventional algorithms used for topology optimization (the optimality criteria method and the method of moving asymptotes) satisfy the first-order optimality condition only with low accuracy. In contrast, the proposed method converges fast to a point satisfying the first-order optimality condition with higher accuracy. The proposed method is also effective for large-scale problems. We have shown that, for a topology optimization problem with simple linear constraints such as the compliance minimization problem, it is more efficient to use the proposed method than a general-purpose nonlinear programming solver such as the interior-point method or the method of moving asymptotes, because the proposed method takes advantage of the simple problem structure.
We have dealt with topology optimization problems with only linear constraints. To deal with large-scale optimization problems with nonlinear constraints, other first-order algorithms are to be considered. Large-scale optimization is rapidly growing, especially in the machine learning and data science communities. There may be efficient large-scale optimization techniques that can be useful for developing new topology optimization algorithms. | 2023-02-18T16:12:50.822Z | 2023-02-16T00:00:00.000
"year": 2023,
"sha1": "20e56a580f4c178dd290679d1f255cbdba2b27b8",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13160-023-00563-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "e30cba9842a7e0cbd0c71ff4c882f909b7a40bf0",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
2632712 | pes2o/s2orc | v3-fos-license | Amplification of Galactic Magnetic Fields by the Cosmic-Ray Driven Dynamo
We present the first numerical model of the magnetohydrodynamical cosmic-ray (CR) driven dynamo of the type proposed by Parker (1992). The driving force of the amplification process comes from CRs injected into the galactic disk in randomly distributed spherical regions representing supernova remnants. The underlying disk is differentially rotating. An explicit resistivity is responsible for the dissipation of the small-scale magnetic field component. We obtain amplification of the large-scale magnetic on a timescale 250 Myr.
Introduction
In 1992 Parker discussed the possibility of a new kind of galactic dynamo driven by galactic CRs accelerated in supernova remnants. This dynamo relies on a network of interacting effects: the buoyancy force of CRs, the Coriolis force, differential rotation and magnetic reconnection. Parker estimated that such a dynamo is able to amplify the large-scale magnetic field on timescales of the order of 10^8 yr.
It is the aim of our contribution to show that Parker's CR driven dynamo indeed acts efficiently on timescales comparable with the disk rotation time. In the next two Sections we describe the physical elements of the model and the system of equations used in numerical simulations. Section 4 presents the numerical setup, Sections 5 and 6 inform the reader about the results on the structure of the interstellar medium including CRs and magnetic fields, the strength of the amplified magnetic field and the spatial structure of the mean magnetic field. We summarize our results very briefly in Section 7.
Elements of the model
We performed computations with the aid of the Zeus-3D MHD code (Stone & Norman 1992a,b), which we extended with the following features: (1) the CR component, a relativistic gas described by the diffusion-advection transport equation (see Hanasz & Lesch 2003b for the details of the numerical algorithm). Following Jokipii (1999), we presume that CRs diffuse anisotropically along magnetic field lines.
(2) Localized sources of CRs: supernova remnants, exploding randomly in the disk volume (see Hanasz & Lesch 2000). (3) Resistivity of the ISM (see Hanasz et al. 2002, Hanasz & Lesch 2003a) responsible for the onset of fast magnetic reconnection (in this paper we apply the uniform resistivity). (4) Shearing boundary conditions and tidal forces, following the prescription by Hawley, Gammie & Balbus (1995), aimed to model differentially rotating disks in the local approximation. (5) Realistic vertical disk gravity following the model of ISM in the Milky Way by Ferriere (1998).
The system of equations
We apply the standard set of resistive MHD equations, consisting of the continuity, momentum and induction equations written in the local shearing-box approximation, where q = −d ln Ω/d ln R is the shearing parameter (R is the distance to the galactic center), g_z is the vertical gravitational acceleration, η is the resistivity, γ is the adiabatic index of the thermal gas, the gradient of the CR pressure ∇p_cr is included in the equation of motion (see Hanasz & Lesch 2003b), and the other symbols have their usual meaning. The uniform resistivity is included only in the induction equation (see Hanasz et al. 2002). The adopted value η = 1 exceeds the numerical resistivity for the grid resolution defined in the next section (see Kowal et al. 2003). The thermal gas component is currently treated as an adiabatic medium.
The transport of the CR component is described by the diffusion-advection equation

∂e_cr/∂t + ∇·(e_cr v) = ∇·(K̂ ∇e_cr) − p_cr(∇·v) + Q_SN,

where K̂ is the diffusion tensor, Q_SN represents the source term for the CR energy density (the rate of production of CRs injected locally in SN remnants), and p_cr = (γ_cr − 1)e_cr with γ_cr = 14/9.
The adiabatic index of the CR gas, γ_cr, and the formula for the diffusion tensor are adopted following the argumentation of Ryu et al. (2003).
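To illustrate the structure of this transport equation, a schematic one-dimensional explicit update is sketched below; this is only an illustration with hypothetical variable names, not the Zeus-3D algorithm of Hanasz & Lesch (2003b), and it ignores the anisotropy of the diffusion tensor.

```matlab
% Schematic 1D explicit step for the CR diffusion-advection equation.
% ecr, v: row vectors on a uniform grid of spacing dx; K: along-field
% diffusion coefficient; Qsn: SN source term; gcr = 14/9.
lap  = ([ecr(2:end) ecr(end)] - 2*ecr + [ecr(1) ecr(1:end-1)])/dx^2;
adv  = -gradient(ecr.*v, dx);                 % advection term, -d(ecr*v)/dx
comp = -((gcr - 1)*ecr).*gradient(v, dx);     % adiabatic term, -pcr*dv/dx
ecr  = ecr + dt*(adv + K*lap + comp + Qsn);   % explicit Euler update
```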
Numerical simulations
We performed numerical simulations in a 3D Cartesian domain of 500 pc × 1000 pc × 1200 pc, extending symmetrically around the galactic midplane from z = −600 pc up to z = 600 pc, with a resolution of 50 × 100 × 120 grid zones in the directions x, y and z, corresponding locally to the cylindrical coordinates r, φ and z, respectively. The applied boundary conditions are periodic in the y direction, sheared-periodic in the x direction and outflow in the z direction. The computational volume represents a 3D region of the disk of a galaxy similar to the Milky Way.
The assumed disk rotation is represented locally by the angular velocity Ω = 0.05 Myr^−1 and by a flat rotation curve corresponding to q = 1. We apply the vertical gravity profile determined for the Solar neighborhood (see Ferriere 1998 for the formula). We assume that supernovae explode with the frequency 2 kpc^−2 Myr^−1, and that 10% of the 10^51 erg kinetic energy output from a SN is converted into CR energy. The CR energy is injected instantaneously into the ISM with a Gaussian radial profile (r_SN = 50 pc) around the explosion center. The explosion centers are located randomly, with a uniform distribution in the x and y directions and with a Gaussian distribution (scale height H = 100 pc) in the vertical direction. The applied value of the parallel CR diffusion coefficient is K_∥ = 10^4 pc^2 Myr^−1 = 3 × 10^27 cm^2 s^−1 (i.e., 10% of the realistic value) and the perpendicular one is K_⊥ = 10^3 pc^2 Myr^−1 = 3 × 10^26 cm^2 s^−1.
The initial state of the system is a magnetohydrostatic equilibrium with a horizontal, purely azimuthal magnetic field of a strength corresponding to p_mag/p_gas = 10^−8. The CR pressure in the initial state is equal to zero. The initial gas density at the galactic midplane is 3 H atoms cm^−3 and the initial isothermal sound speed is c_si = 7 km s^−1.
Structure of interstellar medium resulting from the CR-MHD simulations
In Fig. 1 we show the distribution of the CR gas together with the magnetic field, and the thermal gas density together with the gas velocity, in the computational volume at t = 2000 Myr.
One can notice in panel (a) a dominant horizontal alignment of the magnetic vectors. The CR energy density is well smoothed by the diffusive transport in the computational volume. The vertical gradient of the CR energy density is maintained by the supply of CRs around the equatorial plane of the disk in the presence of vertical gravity. In panel (c) one can notice that at the height z = 370 pc the dominant magnetic vectors are inclined with respect to the azimuthal direction, i.e., the radial magnetic field component is on average about 10% of the azimuthal one.
The CR energy density is displayed in units in which the thermal gas energy density corresponding to ρ = 1 and the sound speed c_si = 7 km s^−1 is equal to 1. We note that the CR energy density does not drop to zero at the lower and upper z boundaries due to our choice of outflow boundary conditions for the CR component. We note also that an almost constant mean vertical gradient of the CR energy density is maintained during the whole simulation.
The velocity field, together with the distribution of the gas density, is shown in panels (d), (e) and (f). The shearing pattern of the velocity can be noticed in the horizontal slice (f). The vertical slices (d) and (e) show the stratification of the gas by the vertical gravity, acting against the vertical gradients of the thermal, CR and magnetic pressures.
Amplification and structure of the mean magnetic field
Figure 2 shows how efficient the amplification of the mean magnetic field resulting from the continuous supply of CRs by supernova remnants is. First, we note the growth of the total magnetic energy by 7 orders of magnitude during the period of 2 Gyr. Starting from t ∼ 300 Myr, the growth of the magnetic energy follows a straight line on a logarithmic plot, which means that the magnetic energy grows exponentially. The e-folding time of the magnetic energy determined for the period t = 400–1500 Myr is 115 Myr. Around t = 1500 Myr the growth starts to slow down as the magnetic energy approaches equipartition with the gas energy.
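As a simple consistency check (our inference from the quoted numbers, not a statement taken from the original figures), the e-folding time of the field strength follows from that of the energy:

```latex
% Since E_B \propto B^2, the field e-folds twice as slowly as the energy:
\tau_B = 2\,\tau_{E_B} \simeq 2 \times 115~\mathrm{Myr} \simeq 230~\mathrm{Myr},
```

consistent with the growth time of about 250 Myr found for the mean magnetic field below.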
The other three curves in the left panel of Fig. 2 show the growth of energy of each magnetic field component. It is apparent that the energy of radial magnetic field component is almost an order of magnitude smaller than the energy of vertical magnetic field component which is almost one order of magnitude smaller than the energy of the azimuthal one. This indicates that the dynamics of the system is dominated by the buoyancy of CRs and that magnetic reconnection efficiently cancels the excess of the random magnetic fields.
In the right panel of Fig. 2 we show the time evolution of the normalized mean magnetic fluxes Φ_x(t)/Φ_y(t = 0) and Φ_y(t)/Φ_y(t = 0), where Φ_x(t) and Φ_y(t) are the magnetic fluxes at time t threading vertical planes perpendicular to the x and y axes, respectively, and the averaging is done over all possible planes of a given type. We find that the radial magnetic flux Φ_x starts to deviate from zero as a result of the Coriolis force and the open boundary conditions. Due to the presence of differential rotation, the azimuthal magnetic field is generated from the radial one. The azimuthal flux grows by a factor of 10 in the first 800 Myr of the system evolution, then drops suddenly, reverses, and continues to grow with the opposite sign, undergoing amplification by more than three orders of magnitude with respect to the initial value.
In order to examine the structure of the mean magnetic field, we average B_x and B_y over planes of constant z. The results are presented in Fig. 3 for t = 0 (the initial magnetic field) and then for t = 500, 1000, 1500, 2000 and 2300 Myr. We find that the mean magnetic field grows by a factor of 10 within about 500 Myr, which gives an e-folding time close to 250 Myr. We note that apparent wavelike vertical structures in B_x and B_y formed from the initial purely azimuthal, unidirectional state of B_y with B_x = 0. The evolved mean magnetic field configuration reaches a quasi-steady pattern that grows in magnitude, with apparent vertical reversals of both components of the mean magnetic field. We note also that the magnetic field at the disk midplane remains relatively weak.
A striking property of the mean magnetic field configuration is the almost ideal coincidence of the peaks of the oppositely directed radial and azimuthal field components. This feature corresponds to the picture of an α-Ω dynamo: the azimuthal mean magnetic component is generated from the radial one and vice versa.
In order to understand better what kind of dynamo operates in our model, we computed the y-component of the electromotive force, E_mf,y = v_z B_x − v_x B_z, averaged over constant-z planes, and checked that ∂B_x/∂t ≃ −∂E_mf,y/∂x holds with reasonable accuracy. However, we found that the space-averaged E_mf,y fluctuates rapidly in time, so that approximating E_mf,y by α_yy B_y (where α_yy is a component of the fluid helicity tensor) implies that α_yy oscillates rapidly in time. This property points our model toward the incoherent α-Ω dynamo described by Vishniac & Brandenburg (1997). Finally, we checked that the magneto-rotational instability (Balbus and Hawley 1991) does not seem to play a significant role in our dynamo model: due to the weakness of the initial magnetic field, the wavelength of the most unstable mode of this instability remains shorter than the cell size for the first half of the simulation time.
Conclusions
We have described the first numerical experiment in which the effect of amplification of the large scale galactic magnetic field was achieved by the (1) continuous (although intermittent in space and time) supply of CRs into the interstellar medium, (2) shearing motions due to differential rotation and (3) the presence of an explicit resistivity of the medium.
We observed in our experiment the growth of the magnetic energy by seven orders of magnitude and the growth of the magnetic flux by a factor of 1300 over 2150 Myr of the system evolution. We found that the large-scale magnetic field grows on a timescale of 250 Myr, which is close to the period of galactic rotation.
Therefore the galactic dynamo driven by CRs appears to work very efficiently, as it was suggested by Parker (1992). It is a matter of future work to verify whether the presented model is a fast dynamo, i.e. whether it works with a similar efficiency in the limit of vanishing resistivity. | 2014-10-01T00:00:00.000Z | 2004-02-27T00:00:00.000 | {
"year": 2004,
"sha1": "25fe63700eaa99454d47a3985f1e84af1634d4b7",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "Arxiv",
"pdf_hash": "25fe63700eaa99454d47a3985f1e84af1634d4b7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119009317 | pes2o/s2orc | v3-fos-license | Inflationary Baryogenesis
In this letter we explore the possibility of creating the baryon asymmetry of the universe during inflation and reheating due to the decay of a field associated with the inflaton. CP violation is attained by assuming that this field is complex with a phase that varies as the inflaton evolves. We consider chaotic and natural inflation scenarios. In the former case, the complex decaying field is the inflaton itself and, in the latter case, the phase of the complex field is the inflaton. We calculate the asymmetry produced using the Bogolyubov formalism that relates annihilation and creation operators at late time to the annihilation and creation operators at early time.
I. INTRODUCTION
Explaining the origin of the matter-antimatter asymmetry of the universe is an essential ingredient in our understanding of the history of the universe. In this article, we study the possibility of creating the baryon asymmetry of the universe by the production of particles during inflation and reheating by the decay of a complex field related to the inflaton. We consider the case where the complex decaying field is the inflaton itself as well as the case where the phase of the complex field is the inflaton, as in natural inflation. The former case is similar to chaotic inflation but with a complex inflaton. By assigning baryon number to a scalar field present during inflation and introducing a baryon number violating coupling between this field and the inflaton we find that there is a net baryon number asymmetry in the produced particles.
The notion that the inflaton plays a role in baryogenesis is not new. If the reheat temperature is above the mass of certain heavy particles, such as GUT gauge and Higgs bosons, then the latter are thermally produced and their subsequent out-of-equilibrium decays create a baryon asymmetry. [1] The production of heavy GUT gauge and Higgs bosons or squarks by the direct decay of the inflaton when the reheat temperature and/or the inflaton mass is less than that of the heavy bosons has also been considered. Once again, the out-of-equilibrium decays or annihilations of these particles gives rise to the baryon asymmetry of the universe. In all the above scenarios [2][3][4][5] CP violation enters into the couplings of the heavier bosons to lighter particles. In our scenario the baryon asymmetry is produced in the direct decay of a field associated with the inflaton. Furthermore the CP violation must manifest itself in the decay of this field. In this respect it is similar to the scenario mentioned in Ref. [6] and discussed in more detail in Ref. [7] in which the baryon asymmetry is created by the direct decay of the inflaton. However, unlike in Ref. [7], in our scenario CP violation is provided dynamically through the time dependent phase of an evolving complex inflaton or of a complex field associated with the inflaton. We explicitly calculate the asymmetry in our scenario and compare it to the baryon asymmetry of the universe. We follow the work of Ref. [8] in which the asymmetry was calculated in the context of a universe that contracts to a minimum size, bounces back and then expands. The universe was static at both initial and late times. The B-violating coupling was λR(φ * Λψ + ψ * Λ * φ), where R is the Ricci scalar, φ carried baryon number and Λ was a complex function of time, which provided the necessary CP violation. In this work, we have adapted the formalism of Ref. [8] to consider the asymmetry that might be created in a more realistic universe that is initially inflating and then enters a reheating phase followed by the standard evolution of the universe. Furthermore we have given a more realistic source of CP violation, namely, a time varying complex field.
Recently Ref. [9] appeared in which the authors discuss the generation of the baryon asymmetry during preheating in a scenario similar to the one discussed here. We discuss later the differences and similarities between our work and theirs.
As in Ref. [8], we consider a lagrangian consisting of two complex scalar fields φ and ψ, assumed to carry baryon number +1 and 0 respectively, and we assume a B-violating term (Eq. (1)) in which λ is a dimensionless constant and η is a complex field related to the inflaton. The baryon number of the φ and ψ particles is established by their interactions with other particles in the Standard Model. The Standard Model particles are not included in our lagrangian below as they do not enter into our calculations. We assume that the initial velocity of the η field and/or the shape of its potential ensures that its phase varies as the inflaton rolls down its potential.
Thus we have dynamic CP violation.
To obtain the asymmetry in our scenario we use the fact that the annihilation and creation operators for the fields φ and ψ are not the same during the inflationary phase and at late times after reheating. (To ensure that the baryon asymmetry created is not erased by sphaleron processes, we assume that the interaction in Eq. (1) also violates B−L. This may be achieved, for example, if ψ carries no lepton number.) However, the annihilation and creation operators at late times can be written as linear combinations of the annihilation and creation operators during the inflationary phase using the Bogolyubov coefficients, as in Eqs. 2 and 3, where ã^φ_k and b̃^φ_k are operators at late times and a^φ_k and b^φ_k are operators at early times in the inflationary phase. Similar expressions exist for ã^ψ_k and b̃^ψ_k. In the Heisenberg picture, if we choose the state to be the initial vacuum state, then it will remain in that state during its subsequent evolution. One can then see that the numbers of φ particles and antiparticles of momentum k, given by ⟨0|ã^φ†_k ã^φ_k|0⟩ and ⟨0|b̃^φ†_k b̃^φ_k|0⟩ respectively, are non-zero and in general unequal. The initial vacuum state, i.e., our in state, must be chosen judiciously to avoid infrared divergences. This is discussed in Section III.
The framework of this article is as follows. In Section II we present the lagrangian density for the complex scalar fields φ and ψ relevant to our calculation and obtain their equations of motion. We then write down the Fourier decomposition of φ and ψ during the inflationary phase and during reheating. General expressions for the coefficients A − D ′ in Eqs. 2 and 3 have been derived in Ref. [8]. We shall present these results without rederiving them and then present the general result for the baryon asymmetry of the universe. In Section III, we present the particular solutions for our scenario of a universe that undergoes exponential inflation (a ∼ e Ht ) followed by an inflaton-oscillation dominated phase (a ∼ t 2/3 ). We then calculate the total baryon asymmetry for this scenario in the context of chaotic and natural inflation. We conclude in the last section. In the Appendix we discuss issues related to the regularisation of infrared divergences and the necessity of an infrared cutoff to satisfy the conditions of perturbation theory.
II.
Consider a lagrangian density in which m_φ,ψ are the masses of the respective fields and ξ_φ,ψ are their couplings to the curvature. We have assumed that η is minimally coupled. V(η) includes all interactions of η other than the coupling to φ and ψ already listed above. Below we shall consider natural inflation and chaotic inflation scenarios. In the former case the inflaton is associated with the phase of η; in the latter case we assume that the complex η field itself is the inflaton field. (Thus we are really considering an extension of chaotic inflation, since the inflaton is now complex.) We shall assume a spatially flat Robertson–Walker metric, in which the equations of motion for the above fields take their standard form. We now expand φ(x) in Fourier modes with k = (2π/L)(n_x, n_y, n_z), and use a similar expression for ψ(x). The equations of motion for the Fourier coefficients φ_k and ψ_k (note that φ_k and ψ_k are operators) are Eqs. 9 and 10, and φ_k and ψ_k satisfy the canonical commutation relations, Eqs. 11 and 12. To use the results derived in Ref. [8] we shall have to solve Eqs. 9 and 10 for times during inflation and during reheating. To simplify our calculations we shall assume that the fields φ and ψ are massless and minimally coupled, i.e., m_φ,ψ = 0 and ξ_φ,ψ = 0. (Typically, a spin zero particle obtains a mass of order H during inflation if H is greater than its bare mass [10]. However, this does not occur if the mass is protected by a symmetry.) To facilitate the use of perturbation theory and to be able to define an in state, we assume that the B-violating interaction switches on at some time t_1. Let t_2 be the time when inflation ends, t_3 the time when reheating ends, and t_f the final time at which we evaluate the baryon asymmetry. We assume that B-violation vanishes after t_3. The annihilation and creation operators at t_f can be expressed as linear combinations of the annihilation and creation operators at an early time t_i before t_1 in the inflationary era. These relations have been derived perturbatively to order λ² in Ref. [8]; the H^ψ_i are defined as the H^φ_i with η² replaced by η*², and with χ^φ_k, χ^φ*_k and △^ψ replaced by χ^ψ_k, χ^ψ*_k and △^φ, respectively. Here, χ^φ_k and χ^ψ_k are complex functions (not operators) that solve Eqs. 9 and 10, with λ set to 0, respectively. △^φ_k and △^ψ_k are the retarded Green's functions for Eqs. 9 and 10, respectively; i.e., they satisfy Eqs. 9 and 10 with λ set to 0 and a delta function δ(t − t′) on the right-hand side of the equations. The subscripts on the coefficients α_k and β_k and on the functions χ_k and △_k refer to |k|. α_k and β_k are complex and satisfy the Bogolyubov normalization condition |α_k|² − |β_k|² = 1. We assume that the initial state at t_i is the vacuum state. Then, in the Heisenberg picture, the number of φ particles of momentum k at t_f is given by ⟨0|ã^φ†_k ã^φ_k|0⟩, and similarly for the antiparticles. The baryon asymmetry for particles of momentum k at t_f then follows, where we have used Eq. 26. The reader is referred to Ref. [8] for a more detailed derivation of the above results. Note that the asymmetry does not depend on α_k and β_k, implying that the asymmetry is independent of the purely gravitational production of particles in the expanding universe. (ξ = 0 does not imply a conformally invariant universe; therefore there is non-zero purely gravitational production of particles in our scenario, but it does not contribute to the asymmetry.) To obtain the net baryon number at t_f we sum over all momentum modes and take the continuum limit.
Since we ultimately wish to express the baryon asymmetry as the baryon number density to entropy density ratio, and the baryon number does not change after t_3, we write the asymmetry in terms of quantities evaluated at t_3. Working in the approximation that at t_3 the inflaton completely decays and the universe instantaneously reheats to a temperature T_3, the baryon asymmetry of the universe is the ratio n_B/s at t_3, where we have assumed that there is no dilution of the baryon asymmetry due to entropy production during the subsequent evolution of the universe. Note that because the effective coupling is a complex function of time, the baryon asymmetry is obtained at O(λ²) and not at O(λ⁴). For standard reheating, t_3 ≈ Γ⁻¹, where Γ is the dominant perturbative decay rate of the inflaton. We take Γ = (g²/8π) m_inf, corresponding to the decay of the inflaton of mass m_inf to some light fermion-antifermion pair. Furthermore, T_3 = [30 ρ(t_3)/(π² g_*)]^{1/4} [11], where ρ here refers to the inflaton energy density and ρ(t_3) ≈ ρ(t_2)[a(t_2)/a(t_3)]³. We assume that the reheat temperature is not high enough for GUT B-violating interactions to be in equilibrium and wipe out the asymmetry generated in our scenario. On the other hand, we do not restrict ourselves to reheat temperatures below 10⁸ GeV to avoid the gravitino problem [13], as we consider the possibility that the gravitino might be very light.
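For reference, the standard instantaneous-reheating bookkeeping invoked here is, in our summary (with g_* the effective number of relativistic degrees of freedom):

\[
\rho(t_3) \simeq \rho(t_2)\left[\frac{a(t_2)}{a(t_3)}\right]^{3}, \qquad
\rho = \frac{\pi^2}{30}\, g_* T^4 \;\Rightarrow\; T_3 = \left[\frac{30\,\rho(t_3)}{\pi^2 g_*}\right]^{1/4}, \qquad
s(t_3) = \frac{2\pi^2}{45}\, g_* T_3^{3},
\]

so that the ratio n_B/s is fixed once n_B(t_3) is known.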
III.
To obtain the baryon asymmetry, we need to evaluate I_2 and I_3. This requires obtaining χ^φ_k and χ^ψ_k. We shall need to perform the integrals for I_2 and I_3 only from t_1 to t_3, as B-violation vanishes before t_1 and after t_3. (The final temperature T_4 at the end of reheating is also a function of the interactions of the inflaton decay products, which we have ignored. See Ref. [12] and references therein for a discussion of thermalisation of the decay products.)

Solutions of Eqs. 9 and 10 for λ = 0 and a(t) = σt^c (c = 1, −1/3) and a(t) = σe^{Ht} have been obtained in Ref. [14]. Using them we write χ^φ_k and χ^ψ_k in terms of Hankel functions H_ν(z). The commutation relations (Eqs. 11 and 12) imply a normalisation condition on the mode functions, and therefore the constants c_{1,2} and c′_{1,2} satisfy a corresponding constraint (Eq. 39). The constants c_1 and c_2 define an initial vacuum state in the inflationary era. If one makes the choice of the de Sitter invariant vacuum state (c_1 = 0 and c_2 = 3π/4) as the in state, then it is well known that such a state suffers from an infrared divergence. One option then is to choose the constants c_1 and c_2 appropriately so as to cancel the infrared divergences, even though such states will no longer be de Sitter invariant. In Ref. [14] it is suggested that one may choose the constants c_1 and c_2 to be suitably k-dependent, with p > 0, so as to cancel the infrared divergences. (We point out in the Appendix that there is also an upper limit on p that was not mentioned in Ref. [14].) As we discuss in the Appendix, the nature of the infrared divergences is slightly different in our case. Though the above choice for the constants c_{1,2}, with appropriately chosen values of p, would make the final integral over k infrared finite, the integrands for the intermediate integrals over t become very large for small values of k, irrespective of the value of p. This leads to a problem with perturbation theory, since the latter requires that λ²|I_{2,3}|² be less than 1. (We thank D. Lyth for pointing this out to us.) This is discussed in more detail in the Appendix. Therefore we are forced to introduce a low momentum cutoff k_L to justify our use of perturbation theory. We choose k_L such that k_L/a_2 = 1/t_2. Since the low momentum cutoff automatically regulates the integral over k, we then choose c_1 = 0 and c_2 = 3π/4.
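For concreteness, the mode functions referred to above take the generic Hankel-function form below; this is our own rewriting, with normalisations and phase conventions suppressed, assuming the standard results for de Sitter and a ∝ t^{2/3} backgrounds:

\[
\chi_k \propto
\begin{cases}
c_1\, H^{(1)}_{3/2}\!\left(\dfrac{k}{aH}\right) + c_2\, H^{(2)}_{3/2}\!\left(\dfrac{k}{aH}\right), & a = \sigma e^{Ht} \ \text{(inflation)},\\[10pt]
t^{1/2}\left[c'_1\, H^{(1)}_{3/2}\!\left(\dfrac{3k\,t^{1/3}}{\sigma}\right) + c'_2\, H^{(2)}_{3/2}\!\left(\dfrac{3k\,t^{1/3}}{\sigma}\right)\right], & a = \sigma t^{2/3} \ \text{(reheating)},
\end{cases}
\]

where the arguments are the conformal wavenumbers kη of each era and the physical mode is φ_k ∝ χ_k/a^{3/2}.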
Continuity conditions for φ(x), φ̇(x) and a(t) at t_2 imply that χ(t) and d/dt(χ(t)/[a(t)]^{3/2}) are continuous at t_2, and these boundary conditions then give us c′_1 and c′_2. We have verified that the values of c′_1 and c′_2 obtained from the continuity conditions satisfy Eq. 39.
At this stage we need to specify η(t). If we write η(t) as (1/√2) σ(t) e^{iθ(t)} (where σ(t) is real), it is the time-varying phase of η that provides the CP violation necessary for creating a net baryon asymmetry.
Chaotic Inflation
We first consider the case of chaotic inflation, in which the η field represents a complex inflaton field. In the absence of any potential for θ, the equation of motion for θ is θ̈ + 3Hθ̇ + 2(σ̇/σ)θ̇ = 0; in a more realistic model, V(η) will imply a potential for θ. The equation of motion for σ is σ̈ + 3Hσ̇ + m²σ − θ̇²σ = 0 (Eq. 43). We assume that θ(t) evolves starting from an initial value of 0 at t = t_i and an initial velocity θ̇_i. We choose θ̇_i consistent with a universe dominated by the potential energy of σ.
Therefore we take θ̇_i = m/2. During inflation, σ̇² ≪ m²σ² and σ ∼ M_Pl. Hence σ̇/σ ≪ H, and we ignore the last term in the equation of motion for θ. Then θ̇(t) = θ̇_i e^{−3H(t−t_i)} (Eq. 44). We take H to be constant during inflation and corresponding to the initial energy density of the universe, with σ(t_i) = 3M_Pl. From the above, one can see that θ̇ is much less than m for most of the inflationary era, and so we ignore the last term of Eq. 43 for this era. Invoking the slow roll approximation, one may also ignore σ̈ during inflation.
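Integrating this decay law under the stated constant-H approximation (our own intermediate step) makes the freeze-out explicit:

\[
\theta(t) = \theta_i + \frac{\dot\theta_i}{3H}\left(1 - e^{-3H(t - t_i)}\right) \;\longrightarrow\; \theta_i + \frac{\dot\theta_i}{3H},
\]

so the phase shifts by only ∼ m/(6H) in total and is effectively frozen within a few e-foldings of t_i.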
During reheating the σ and θ fields are coupled and we cannot ignore the last terms of their respective equations of motion. However, to obtain θ(t) it is simpler to first rewrite η as (1/√2)(κ_1 + iκ_2). Then the problem reduces to one of two uncoupled damped harmonic oscillators with solutions κ_1 = (A_1/t) cos(mt + α) and κ_2 = (A_2/t) cos(mt + β). Here we have assumed H = 2/(3t) during reheating. θ(t) is then tan⁻¹(κ_2/κ_1). The constants A_1, A_2, α and β are determined by the values of κ_{1,2} and their time derivatives at t_2, which can be obtained from the values of θ and σ and their time derivatives at t_2. We take t_2 ≈ 2/m, as the inflaton starts oscillating when 3H ≈ m, and σ(t_2) ≈ M_Pl/6.
Eq. 44 implies that θ becomes nearly constant within a few e-foldings after t_i. If the B-violating interaction switches on subsequent to this, then θ is approximately constant between t_1 and t_2. Furthermore, since θ̇(t_2) is practically zero, there is practically no rotational motion during reheating in the absence of any potential for θ. So during reheating θ takes the values θ_2 and θ_2 + π during different phases of the oscillation of σ, where θ_2 is the value at t_2.
(θ changes discontinuously at the bottom of the potential, where σ is 0.) Since the relevant phase in I_2 and I_3 is 2θ, the above implies that the CP phase is practically the same for the interval t_1 to t_3, and hence one should expect very little asymmetry.
Natural Inflation
We now consider natural inflation, in which case σ(t) = f, where f is the scale of spontaneous symmetry breaking in the natural inflation scenario. In the presence of an explicit symmetry breaking term that gives a mass m_θ to the inflaton θ, the θ field obeys the equation of motion of a damped oscillator in its potential. We assume that θ is constant during inflation between t_1 and t_2 and is of O(1). Our results are insensitive to t_1 for t_1 earlier than about 10 e-foldings before the end of inflation. At t_2 ≈ 2/m_θ, when 3H ≈ m_θ, the θ field starts oscillating in its potential, and between t_2 and t_3 θ evolves as a damped oscillation. We then obtain the asymmetry numerically. The smaller the value of g, the longer the period of reheating contributing to the asymmetry. But for g ≤ 10⁻³, I_2 and I_3 become independent of g, and the g dependence of the BAU then enters through a(t_3) and the reheat temperature T_3. As we have mentioned before, perturbation theory requires that λ²|I_{2,3}|² be less than 1. This translates into an upper bound on λ of 10⁻¹¹. Then even for g = 10⁻³ we get insufficient asymmetry. Other values of g give even less asymmetry.
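For reference, in the quadratic approximation to the potential near its minimum and with H = 2/(3t) during reheating, this damped oscillation has the closed form (our own illustration, not an expression quoted from the text):

\[
\ddot\theta + \frac{2}{t}\,\dot\theta + m_\theta^2\,\theta = 0
\;\Rightarrow\;
\theta(t) = \frac{1}{t}\Big[A\cos(m_\theta t) + B\sin(m_\theta t)\Big],
\]

with A and B fixed by θ and θ̇ at t_2, so the amplitude of the CP-violating phase damps as 1/t through reheating.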
IV. CONCLUSION
In conclusion, we have discussed a mechanism for creating a baryon asymmetry during inflation and reheating. While the scenario illustrated above does not create sufficient asymmetry, it may easily be modified to accommodate a potential for θ which can give rise to a much larger asymmetry. A possible potential for θ for the chaotic inflation scenario is W(θ) = m_θ² σ² (1 − cos θ), which is equivalent to tilting the inflaton potential. Unlike in the analogous axion and natural inflation models, here both σ and θ would be varying with time. Hence such a potential may allow for chaotic orbits and so would have to be studied with care.
We point out here that we include both the inflationary phase and the reheating phase in our calculation. Contributions during both phases get mixed up in the evaluation of the asymmetry because of the presence of |I_2|² and |I_3|², where the time integrals in I_2 and I_3 include both the inflationary and the reheating eras. In fact, we find in the natural inflation case that, though the phase is taken to be constant during the inflationary era, the net baryon asymmetry for a fixed value of λ decreases if we do not include the inflationary era in the integrals I_2 and I_3. This indicates that one should not ignore the inflationary era when calculating the asymmetry.
While we were writing up this paper, Ref. [9] appeared. In this paper the authors discuss the generation of a baryon asymmetry during reheating in a scenario similar to ours. The two calculations have some differences, however. Our calculation is carried out in curved spacetime, while the mode functions in Ref. [9] are obtained in Minkowski space. We consider standard reheating, while they consider the more complicated preheating scenario. In both calculations the source of CP violation is a time-varying phase. The authors of Ref. [9] suggest that the CP violating potential for the baryonic fields may be induced by their direct coupling to the inflaton or through loop effects involving the baryonic fields and other fields, and then presume a form for the phase. We provide a specific scenario in which the inflaton, or a field related to the inflaton, which is coupled to the baryonic fields, is complex and its time-varying phase dynamically provides CP violation. Involving the inflaton and its phase seems to us to be a simple and very natural approach to obtaining a time dependent phase. Our calculation includes both the inflationary and the reheating eras, which, as we have pointed out, seems to be appropriate for our case.
of the Bessel functions depending on whether the universe is in the inflationary or the reheating era. The k dependence for I_2 is similar. When k is small the argument of the Bessel functions becomes small. For low k values the first term goes as k^{2p−3}, the second as k^{3−2p}, and the third is k-independent. Now perturbation theory requires that λ²|I_{2,3}|² be less than 1. However, since 2p − 3 and 3 − 2p cannot both be greater than 0, I_2 and I_3 will become very large at low k values. Thus, without a low momentum cutoff, perturbation theory breaks down at some point for any finite value of λ. We emphasise again that this is an issue related to the validity of perturbation theory and not to the infrared divergence of the integral over momentum. The latter can be regulated by the choice of constants c_1 and c_2 mentioned above, irrespective of whether or not perturbation theory is valid. | 2019-04-14T02:32:43.805Z | 2001-03-30T00:00:00.000 | {
"year": 2001,
"sha1": "e14b9629584e2fa8c13fc0d0aaffb69d655195a4",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/0103348",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9a747fb02041ecb056c69af1d2c1d7e5c3949341",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
268475473 | pes2o/s2orc | v3-fos-license | Three-dimensional copula framework for early warning of agricultural droughts using meteorological drought and vegetation health conditions
ABSTRACT This study develops an early warning system for crop yield (CY) failure based on meteorological drought and vegetation health conditions. The framework combines three drought indices – the Standardized Precipitation Evapotranspiration Index (SPEI), standardized Normalized Difference Vegetation Index (stdNDVI), and standardized CY (stdCY) values – using copulas. Datasets for five major wheat-producing cities in Turkey between 2000 and 2022 are used for analysis. Results indicate that the time periods used to calculate SPEI and NDVI are critical in determining agricultural drought and CY conditions. The critical threshold values for SPEI and NDVI, with a 10% probability of causing agricultural drought, are found to be ~0.28 and ~0.42, respectively. Using a three-dimensional copula model resulted in more precise CY simulations than a two-dimensional model. The validation efforts showed that all of the observed CYs fell within the simulated range, indicating the robustness of the methodology in capturing drought impacts on CY conditions.
Introduction
Drought events are one of the major climatic hazard types that can significantly impact agriculture and food security (Hameed et al. 2020). Agricultural drought assessment is an important tool for understanding the magnitude of droughts and identifying their risks to crop yield (CY) conditions (Rojas et al. 2011). Such assessments and early warning systems developed using different data sources can help decision makers mitigate the negative impacts of climate extremes by providing them with timely information about the magnitude of upcoming drought events (Merz et al. 2020).
One practical method of agricultural drought monitoring is to combine meteorological drought indices, vegetation indices, and CY amounts to assess the impact of meteorological droughts on CYs and analyse the existing interactions among them. In recent years, the use of multiple indices simultaneously has been seen as a superior alternative to the use of a single index in drought monitoring analysis. Different studies have shown that using multiple drought indices can provide a more robust and comprehensive understanding of drought events (Khan et al. 2020, Afshar et al. 2021, 2022). Moreover, combining indices from different sources, such as meteorological drought conditions, soil moisture anomalies, vegetation health conditions, and CY amounts, can capture different aspects of agricultural droughts and provide a better representation of them (Bayissa et al. 2018).
Three commonly used indices for agricultural drought monitoring are the Standardized Precipitation Evapotranspiration Index (SPEI) (Vicente-Serrano et al. 2010), the Normalized Difference Vegetation Index (NDVI) (Tucker 1979), and CY anomalies (Kogan 1995, Dutta et al. 2015, Sun et al. 2020, Trnka et al. 2020, Javed et al. 2021). SPEI is a meteorological drought index that uses both precipitation and evapotranspiration information to measure the severity of drought events, including under global warming (Vicente-Serrano et al. 2010). NDVI is a vegetation index that uses remote sensing observations to measure the state and health of crops (Bezdan et al. 2019). CY anomalies, in turn, are used to measure the deviation of CYs from their long-term averages and can provide realistic information about the impact of agricultural drought events on crop productivity. These indices together can be used to understand the impact of drought events on agricultural production.
Copula functions are powerful tools that have recently been used more frequently in modelling the dependency structure between multiple variables (de Melo E Silva Accioly and Chiyoshi 2004), including bivariate (Shiau 2006, Reddy and Ganguli 2012, Sraj et al. 2015) and trivariate (Zhang and Singh 2007, Wong et al. 2010) settings. Copulas are mathematical functions that are used to describe the association between variables, regardless of their marginal distributions. In the context of integrating different drought indices such as SPEI, NDVI, and CY anomalies, copulas can provide a flexible framework for quantifying the degree of association between these indices and understanding how changes in one index may impact the other indices.
Overall, probabilistic modelling of the relationships between meteorological and agricultural drought indices using copula functions has several advantages over traditional methods such as correlation analysis or linear regression analysis (Wang et al. 2021a, Kamali et al. 2022). Copula functions can account for non-linear relationships and capture the tail dependencies among multiple variables, making them particularly beneficial in modelling complex systems such as droughts. Moreover, probabilistic approaches allow the modelling framework to represent the uncertainty within the modelling (Brigode et al. 2013, Huang and Fan 2021), which is important in dealing with the modelling of variables under the influence of multiple different factors (Zampieri et al. 2017). On the other hand, regression models assume a linear relationship between the variables involved within the modelling framework, which may not be the case in real-world scenarios such as relating CY anomalies to meteorological droughts.
While probabilistic modelling of the relationships between drought indices using copulas has several advantages over traditional regression methods and correlation analysis, some challenges also need to be considered. One of the challenges of using copulas in such a modelling framework is related to their complexities. Copula functions, in comparison to regression methods, are difficult to understand, apply, and interpret, which makes them difficult to use in practice (Afshar and Yilmaz 2017). Another challenge in using copulas is related to the availability of the number of observations considered in the analysis. Copulas, particularly those functions that require calibration and fitting efforts in their functionality process, need large, global datasets (Spinoni et al. 2019, Ionita and Nagavciuc 2021), and gathering such data can be difficult in the context of relating CYs with environmental factors. Moreover, copulas are computationally intensive and have high computational costs for generating uncertainties associated with relationships among meteorological and agricultural drought indices (Hasan et al. 2019). Overall, however, despite the computational demands and other challenges associated with copulas in drought analyses, the opportunity they provide to accurately capture the tail dependency structure among drought indices using time series clustering (de Luca and Zuccolotto 2021) makes them a more robust and comprehensive choice over correlation analysis and regression models and leads them to generate improved drought predictions.
In the context of agricultural drought analysis, copulas have demonstrated their utility in modelling the interdependencies between drought indices and CYs in various countries, such as Poland, the USA, Canada, and North China (Heim 2002, Quiring and Papakryiakou 2003, Łabędzki and Bąk 2015, Liu et al. 2018). These studies have consistently shown that copulas offer an accurate representation of these intricate relationships. Likewise, several investigations in Europe and Spain have explored the connections between CY conditions and diverse environmental factors (Iglesias and Quiroga 2007, Iglesias et al. 2012), as well as climate conditions affecting drought risk (Larsen et al. 2013, Filipa Silva Ribeiro et al. 2020, Yoon et al. 2020, Bali and Singla 2022). These studies concluded that copulas serve as valuable tools for predicting CYs under varying environmental and drought conditions. However, there remains an evident gap in the literature, with no prior attempts to employ copulas in a three-dimensional framework to simultaneously address the interdependencies between meteorological drought, vegetation health conditions, and CY anomalies.
A significant novelty of this work lies in its pioneering approach to CY prediction. The approach harnesses copula functions to compute the anticipated CY under various drought scenarios through the use of conditional joint probability, denoted as P(CY = cy | SPEI = spei & NDVI = ndvi). To determine the associated probabilities of specific CY levels under varying environmental and climate conditions, this study relies on the numerical calculation of the density of conditional copula functions. This represents a departure from traditional copula applications that typically provide cumulative distribution functions (CDFs) of joint probabilities. This innovation constitutes a substantial contribution to time series prediction and holds immense promise for CY forecasting. Additionally, within the regional context, this study stands as the first of its kind in the region to predict CY as a function of meteorological drought and vegetation health conditions, highlighting its unique and distinctive nature.
Hence, the aim of this study is to develop a framework that integrates three distinct drought indices from disparate sources, namely SPEI, NDVI, and CY values, and facilitates the timely provision of early warnings regarding CY failure probabilities. In line with this objective, SPEI, NDVI anomalies, and wheat CY anomalies of Turkey's five main wheat-producing regions have been sourced and interconnected using a novel copula-based framework. Moreover, to validate the developed framework, we test the performance of the approach by comparing the simulated and observed wheat CY conditions during severe drought events that occurred in the study area throughout the study period between 2000 and 2022.
Study area
The study area covers five cities (Ankara, Eskisehir, Konya, Kayseri, and Cankiri) located in the so-called Central Anatolian (CA) region of Turkey in the centre of the country (Fig. 1). The region encompasses 11 cities covering an area of approximately 65 000 km². The region is characterized by a semi-arid climate with hot summers and cold winters. The topography of the region is dominated by rolling plains and high plateaus, with elevations ranging from 650 to 2 600 m above sea level.
The CA region of Turkey is a significant producer of wheat and other crops such as barley, corn, and sunflower, due to its fertile soil and favourable agricultural conditions, making it a key contributor to the country's agronomic economy. However, the region is prone to drought (Hesami Afshar et al. 2016, Afshar et al. 2020, Danandeh Mehr et al. 2020), which can affect CYs, especially for rainfed crops. Data collected by the State Statistical Institute (TÜİK 2019) throughout 2000-2019 show that the CA region provided around 24% of the total wheat production in Turkey, with the cities of Konya and Ankara being the largest producers, accounting for 1.9 million tons and 1.0 million tons in 2019 alone. This highlights the important role of the region in the analysis of agricultural drought, particularly in terms of wheat crop production in Turkey (Bulut 2021).
The datasets used in this study contain multiple environmental variables. Daily precipitation, temperature, and solar radiation values for the five abovementioned cities are retrieved from the European Centre for Medium-Range Weather Forecasts reanalysis (ERA5) datasets between the years 1980 and 2019 to calculate the SPEI. Moreover, remotely sensed observations of the Moderate Resolution Imaging Spectroradiometer (MODIS) Aqua and Terra satellites (MOD09GA.006 and MYD09GA.006 products; Vermote and Wolfe 2015a, 2015b) are used to calculate NDVI values over rainfed areas of the considered cities, identified using the Copernicus global land cover layers (CGLS-LC100 Collection 3) datasets (Buchhorn et al. 2020) within the Google Earth Engine environment (Gorelick et al. 2017). Finally, to evaluate the CY amounts, the winter wheat CY values of these cities are collected from TÜİK, considering only rainfed products among both rainfed and irrigated statistics.
While the datasets collected from ERA5 (Hersbach et al. 2020) that are used in the calculation of SPEI cover the years 1980 to 2019, to achieve a better understanding of drought events, the NDVI and CY datasets were collected between the years 2000 and 2019, as the early years of wheat yields (between 1980 and 2000) were not steady and showed drastic changes in the products. Moreover, the available trend in the NDVI and CY time series for the five considered cities is removed linearly to be more consistent with climatic conditions and the SPEI time series.
Drought indices
Within the scope of this study, three indices coming from different sources, namely SPEI, standardized NDVI (stdNDVI), and standardized CY (stdCY) values, are integrated to evaluate agricultural droughts over the considered study area. The SPEI uses the difference between precipitation and potential evapotranspiration (PET) amounts (referred to as water balance amounts) to represent the available and under-demand water amounts and assess drought events accordingly. The SPEI values can be calculated at different time scales (accumulated balance amounts) ranging from daily to multi-year scales. The choice of time scale for SPEI depends greatly on the aim of its utilization; while balance amounts for short periods (e.g. weekly) can be used to analyse flash droughts, the longer time scales can provide insights into seasonal dynamics and agricultural drought assessments. In this study, the SPEI has been calculated on a daily basis over a time frame that spans between specific dates. Notably, this time frame does not follow calendar months but has been tailored to align with the phenology of the wheat crop, aiming to maximize the correlation between SPEI, NDVI, and CY anomalies. For more detailed information, please refer to the final paragraph of Section 2.2.
There are multiple ways to calculate PET amounts, such as Thornthwaite (Thornthwaite 1948), the Food and Agriculture Organization of the United Nations (FAO)-56 methodology (Allen et al. 1998), or Hargreaves-Samani (Hargreaves and Samani 1985). In this study, the Penman-Monteith equation-based PET estimates from the dataset published by Singer et al. (2021) are utilized to calculate SPEI values over the regions. The other climatic data, comprising precipitation, used in the calculation of SPEI values over the regions are retrieved from ERA5 datasets. The calculation of SPEI follows the methodology proposed in the SPEI package (Beguería and Vicente-Serrano 2013) in the R environment (R Core Team 2021), with some modifications that allow the calculations to be performed daily.
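As a concrete illustration, the sketch below shows how a daily-accumulated, quantile-standardized index of this kind can be computed. It uses a rank-based (quantile matching) standardization rather than the SPEI package's parametric distribution fit, and all names and window indices are illustrative assumptions rather than the study's own code.

```python
import numpy as np
from scipy.stats import norm, rankdata

def accumulate_balance(precip, pet, start, end):
    """Sum the daily water balance (P - PET) over a tailored window.

    precip, pet : 1-D daily arrays covering one crop season;
    start, end  : index positions of the window (e.g. 28 Dec .. 23 May).
    """
    balance = np.asarray(precip) - np.asarray(pet)
    return balance[start:end + 1].sum()

def quantile_standardize(yearly_values):
    """Map yearly accumulations to a standard normal variate via their
    empirical quantiles (quantile matching), one value per year."""
    v = np.asarray(yearly_values, dtype=float)
    probs = (rankdata(v) - 0.5) / v.size   # empirical non-exceedance probability
    return norm.ppf(probs)                 # SPEI-like standardized index

# spei_like = quantile_standardize([accumulate_balance(p, e, s0, s1)
#                                   for p, e in seasons])  # one entry per year
```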
It is noteworthy that in this study, the choice of SPEI over alternative indices, such as the Standardized Soil Moisture Index (SSI), offering potentially richer insights into agricultural droughts, is influenced by practical limitations in accessing daily soil moisture data. Daily datasets derived from satellite-based soil moisture, which are commonly regarded as more reliable for global soil moisture measurements (Bulut et al. 2019), often include gaps in their time series (Afshar et al. 2022). Considering our focus on daily accumulations of drought indices and dataset constraints, in this study the SPEI is preferred to represent the meteorological drought aspects of the analysis.
The next index used in this study is stdNDVI, which considers the deviation of a certain period's NDVI values from the long-term average. The NDVI is a remotely sensed vegetation index that uses the reflectance of the red (with the electromagnetic spectrum centred at 645 nm) and near-infrared (with the electromagnetic spectrum centred at 858 nm) bands to assess the vegetation dynamics and the healthiness of vegetation cover in a given area. The NDVI values usually vary between 0 and 1, representing bare soil to dense vegetation cover conditions, respectively, in a specific region.
The stdNDVI values, on the other hand, can be calculated by subtracting the long-term average of NDVI values for a specific time and location from the raw NDVI values and then dividing the resulting deviation by the long-term standard deviation of the considered NDVI values. The stdNDVI values can provide valuable information about agricultural droughts in different locations, as this normalization process removes the native range of NDVI values and turns the series into an approximately normally distributed time series, making it easier to compare over different periods and locations.
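The corresponding computations are short; the sketch below (with illustrative variable names) applies the standard band combination and the mean/standard-deviation anomaly described above.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI from near-infrared (MODIS band 2, centred at 858 nm) and
    red (MODIS band 1, centred at 645 nm) surface reflectance."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def std_anomaly(series):
    """Standardized anomaly: deviation from the long-term mean divided
    by the long-term standard deviation (used here for stdNDVI and stdCY)."""
    s = np.asarray(series, dtype=float)
    return (s - s.mean()) / s.std(ddof=1)
```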
In this study, the selection of NDVI over other vegetation indices was guided by its widespread utility in remote sensing applications. The NDVI's simplicity and straightforward interpretation of values make it a practical indicator for assessing the condition of vegetation around the globe. While other indices may have their unique merits, NDVI's ease of calculation, robustness, and compatibility with various sensors and platforms make it a reliable option for drought analysis.
The third index considered in this study is the stdCY. The process of stdCY calculation is very similar to the calculation of stdNDVI, with the difference that the CY values, unlike NDVI or SPEI values, are obtained annually and, hence, their standardization does not include the selection of a particular time period.
The periods for which the SPEI and stdNDVI are calculated are found by considering the linear relationship between SPEI and stdNDVI, and between stdNDVI and stdCY values. For this purpose, correlation analysis has been conducted to find the window which provides the highest correlation between SPEI, stdNDVI and stdCY values over the five selected cities (Ankara, Cankırı, Eskisehir, Kayseri, and Konya) in the CA region of Turkey. The analysis has been carried out by considering different durations of the accumulated water balance (precipitation minus PET) from 1 October until 30 May (approximately equivalent to the sowing date and flowering time, respectively, of wheat crops in Turkey, mainly in CA).
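A brute-force search of this kind can be sketched as follows; the function and the min_len guard are illustrative assumptions rather than the exact procedure used in the study.

```python
import numpy as np

def best_window(daily, yearly_target, min_len=10):
    """Scan all start/end days of a seasonal window and return the window
    whose per-year averages correlate best with a yearly series.

    daily         : array of shape (n_years, n_days), e.g. NDVI or P - PET;
    yearly_target : array of shape (n_years,), e.g. stdCY.
    """
    n_days = daily.shape[1]
    best_win, best_r = None, -np.inf
    for s in range(n_days - min_len):
        for e in range(s + min_len, n_days):
            x = daily[:, s:e + 1].mean(axis=1)        # window average per year
            r = np.corrcoef(x, yearly_target)[0, 1]   # Pearson correlation
            if r > best_r:
                best_win, best_r = (s, e), r
    return best_win, best_r
```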
Copula functions
The modelling of multivariate hydrological processes is generally achieved using joint distributions that describe the present correlation among variables. Despite the potential usefulness of using joint distribution functions, the implementation of multivariate joint distribution functions in the modelling of hydrological processes has remained limited due to the challenges posed by fitting correlated variables to the same type of distribution. To address these difficulties, copulas, a concept developed by Sklar (1959), have been widely adopted in multivariate distribution analysis. The key advantage of using copulas is the ability to dissociate dependence effects from marginal distribution effects, thereby providing greater flexibility in the selection of univariate marginal distributions (Shiau 2006).
Copulas are mathematical tools used for characterizing and analysing the complex interdependence between multiple variables. These functions provide a way to represent multivariate distributions in univariate or multivariate forms. The concept of copulas involves linking two or more random variables X, Y, . . ., Z through a unique function C that is expressed via the cumulative joint distribution function F(x, y, . . ., z). Considering a situation with two random variables, Sklar's Theorem states that if F(x, y) is a two-dimensional CDF with marginal CDFs F_X(x) and F_Y(y), then there exists a copula C such that

F(x, y) = C(F_X(x), F_Y(y)). (1)

Under the assumption that the marginal distributions are continuous with probability density functions (PDFs) f_X(x) and f_Y(y), the joint PDF then becomes

f(x, y) = c(F_X(x), F_Y(y)) f_X(x) f_Y(y), (2)

where c is the density function of C, defined as

c(u, v) = ∂²C(u, v)/(∂u ∂v). (3)

Multiple copula functions are available and each is suitable for a specific kind of study, depending on the nature of the existing relationships among the considered variables. The Archimedean (Clayton, Frank, and Gumbel) and elliptical (Gaussian and T) classes of copulas are the families commonly used in hydrological applications. In this study, since SPEI, stdNDVI and stdCY values follow a normal distribution, the Gaussian copula is used to model the joint relationships among them. The appropriate form of the copula function (true values of parameters) can be determined through methods such as error minimization or maximum likelihood estimation. In this study, to determine the existing correlation between SPEI, stdNDVI, and stdCY values, the time series of the five cities are combined to create a longer time series, resulting in a more robust copula function model.
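A minimal sketch of such a fit: because the margins of a Gaussian copula are absorbed by the probability integral transform, the copula parameter matrix can be estimated from the normal scores of the pooled series. The rank-based estimator below is a common stand-in for full maximum likelihood, and the names are illustrative.

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_copula_corr(*series):
    """Estimate the Gaussian copula correlation matrix from
    pseudo-observations (ranks mapped to normal scores)."""
    z = np.column_stack([
        norm.ppf(rankdata(s) / (len(s) + 1))   # uniforms -> normal scores
        for s in series
    ])
    return np.corrcoef(z, rowvar=False)

# R = gaussian_copula_corr(spei, stdndvi, stdcy)  # 3 x 3 matrix whose
# off-diagonal entries play the role of the pairwise parameters
# reported in the Results section.
```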
Univariate and multi-variate copula models using conditional probabilities
In this study, to analyse the interactions between drought indices and develop a framework for assessing agricultural drought events, several different formulations based on conditional probability functions and their densities have been developed.
In this manner, the approach used in this study hinges on harnessing copula functions, which have the ability to calculate joint CDFs. This capacity is essential for the purpose of calculating conditional probabilities like P(X = x|Y = y), which is a fundamental aspect of the analysis conducted in the study. To facilitate clarity and comprehension of the formulations and illustrations of the numerical method used in this study, some notations are employed. These notations involve the introduction of a buffer, x ± Δx, enabling the numerical computation of the conditional probabilities. In the interest of clarity, x + Δx is referred to as xp (representing "x positive") and x − Δx as xn (representing "x negative"). These notations may vary according to the nature of the equation, whether transitioning from x to y or z, depending on the presentation of bivariate or trivariate conditional probabilities.
One of the relevant probabilities in the context of agricultural droughts is the probability of variable X being less than x given that Y is equal to y, denoted by P(X < x|Y = y). In this formulation, as an example, considering variable X to be stdCY amounts and Y to be SPEI or stdNDVI values would help agricultural decision makers make informed predictions about CYs and assess the risk of crop failure when encountering meteorological drought events or critical NDVI vegetation index values. Using the buffer notation, this conditional probability is computed numerically as

P(X < x | Y = y) ≈ [C(F_X(x), F_Y(yp)) − C(F_X(x), F_Y(yn))] / [F_Y(yp) − F_Y(yn)], (4)

which is extended to a trivariate case to consider the risk of obtaining stdCY values less than threshold values when SPEI and stdNDVI values have already reached specific thresholds:

P(X < x | Y = y, Z = z) ≈ [C(F_X(x), F_Y(yp), F_Z(zp)) − C(F_X(x), F_Y(yn), F_Z(zp)) − C(F_X(x), F_Y(yp), F_Z(zn)) + C(F_X(x), F_Y(yn), F_Z(zn))] / [C_YZ(F_Y(yp), F_Z(zp)) − C_YZ(F_Y(yn), F_Z(zp)) − C_YZ(F_Y(yp), F_Z(zn)) + C_YZ(F_Y(yn), F_Z(zn))], (5)

where C_YZ denotes the (bivariate) copula of the conditioning pair (Y, Z), yp = y + Δy, yn = y − Δy, zp = z + Δz, zn = z − Δz, and Δy and Δz are small values such as 0.005 that can be added to y or z values to satisfy the numerical derivation of the conditional PDF. It is noteworthy that in the following Equations (4)-(11), the variable X can be considered to represent stdCY values, and Y and Z, depending on the nature of the problem, can be considered to represent the SPEI and stdNDVI values.
Equations (4) and (5) are particularly important in determining the thresholds for SPEI and NDVI that trigger CY loss. They enable interested parties to pinpoint specific environmental conditions that significantly elevate the risk of CY failure associated with agricultural drought.
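Under the Gaussian copula adopted here, with all three indices standardized to approximately N(0, 1) margins, Equations (4) and (5) can be evaluated with the multivariate normal CDF. The sketch below mirrors the buffer (Δ) construction and is an illustration of the numerical scheme under these assumptions, not the study's own code.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal as mvn

def C2(u, v, rho):
    """Bivariate Gaussian copula CDF."""
    return mvn.cdf([norm.ppf(u), norm.ppf(v)],
                   mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])

def C3(u, v, w, R):
    """Trivariate Gaussian copula CDF; R is the 3 x 3 correlation matrix."""
    return mvn.cdf([norm.ppf(u), norm.ppf(v), norm.ppf(w)],
                   mean=[0.0, 0.0, 0.0], cov=R)

def p_lt_given_eq2(x, y, rho, d=0.005):
    """Equation (4): P(X < x | Y = y) via the yp/yn buffer."""
    u = norm.cdf(x)
    vp, vn = norm.cdf(y + d), norm.cdf(y - d)
    return (C2(u, vp, rho) - C2(u, vn, rho)) / (vp - vn)

def p_lt_given_eq3(x, y, z, R, d=0.005):
    """Equation (5): P(X < x | Y = y, Z = z); R ordered as (X, Y, Z)."""
    u = norm.cdf(x)
    vp, vn = norm.cdf(y + d), norm.cdf(y - d)
    wp, wn = norm.cdf(z + d), norm.cdf(z - d)
    num = (C3(u, vp, wp, R) - C3(u, vn, wp, R)
           - C3(u, vp, wn, R) + C3(u, vn, wn, R))
    rho_yz = np.asarray(R)[1, 2]
    den = (C2(vp, wp, rho_yz) - C2(vn, wp, rho_yz)
           - C2(vp, wn, rho_yz) + C2(vn, wn, rho_yz))
    return num / den
```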
In contrast, probabilities such as P(X = x|Y < y) and P(X < x|Y < y) can offer another dimension of this relationship. These conditional probabilities consider the impact of drought across a range of SPEI values, rather than focusing on a single specific value as seen in P(X < x|Y = y). The abovementioned probabilities can inform drought analysis about the likelihood of obtaining a specific yield, or a yield below a certain threshold, in the context of current drought conditions. To compute these probabilities, analogous numerical expressions (Equations (6) and (8) for two-dimensional and Equations (7) and (9) for three-dimensional cases) can be employed, where Δx, Δy, and Δz are small values such as 0.005 that can be added to x, y, or z values to satisfy the numerical derivation of the conditional PDF. These pivotal equations (Equations (6)-(9)) can aid managers and also stakeholders in quantifying the relationship between expected drought conditions and CY outcomes when precise conditions are uncertain.
Another type of conditional probability that contributes to a more precise understanding of the relationship between stdCY amounts and SPEI or NDVI, either individually or simultaneously in a trivariate scenario, is P(X = x|Y = y). This probability represents the likelihood of X being exactly equal to x given that Y is exactly equal to y. This specific conditional probability offers advantages over P(X = x|Y < y) or P(X < x|Y < y), as it directly considers the specific value of Y and its impact on X, rather than just considering that the overall trend of Y is less than a certain value. In cases involving more than two variables, a trivariate copula can be used to describe their interdependence. The full representation of the interdependence between the variables can be achieved through the calculation of pairwise correlations between all the variables. The considered probability can be calculated using Equations (10) and (11) for bivariate and trivariate cases, respectively, by double (and, in the trivariate case, triple) differencing of the copula over the buffers, where Δx, Δy, and Δz are small values such as 0.005 that can be added to x, y, or z values to satisfy the numerical derivation of the conditional PDF. Equations (10) and (11) play a pivotal role in predicting CYs and other desired time series. They can quantify the probability of achieving a specific CY under predetermined environmental conditions; and given the time frame of data collection for independent indices in this study, this predictive capacity can also serve as an effective early warning system, potentially assisting in various applications beyond agriculture.
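The densities in Equations (10) and (11) follow the same buffer logic, now differencing over x as well. The sketch below reuses C2 and C3 from the previous listing and is again an illustrative rendering, with the (X, Y, Z) ordering of the correlation matrix an assumption.

```python
import numpy as np
from scipy.stats import norm
# C2 and C3 are the Gaussian copula CDFs defined in the previous listing.

def p_eq_given_eq2(x, y, rho, d=0.005):
    """Equation (10) style: P(X = x | Y = y) by double differencing."""
    up, un = norm.cdf(x + d), norm.cdf(x - d)
    vp, vn = norm.cdf(y + d), norm.cdf(y - d)
    num = (C2(up, vp, rho) - C2(un, vp, rho)
           - C2(up, vn, rho) + C2(un, vn, rho))
    return num / (vp - vn)

def p_eq_given_eq3(x, y, z, R, d=0.005):
    """Equation (11) style: P(X = x | Y = y, Z = z) by triple differencing,
    normalised by the double difference of the (Y, Z) copula."""
    us = (norm.cdf(x + d), norm.cdf(x - d))
    vs = (norm.cdf(y + d), norm.cdf(y - d))
    ws = (norm.cdf(z + d), norm.cdf(z - d))
    num = sum((-1) ** (i + j + k) * C3(us[i], vs[j], ws[k], R)
              for i in (0, 1) for j in (0, 1) for k in (0, 1))
    rho_yz = np.asarray(R)[1, 2]
    den = sum((-1) ** (j + k) * C2(vs[j], ws[k], rho_yz)
              for j in (0, 1) for k in (0, 1))
    return num / den
```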
Validation
In this study, several different conditional probability formulations have been used to find the relationships between SPEI, stdNDVI values, and stdCY amounts. To validate these formulations, the latest considered scenario (i.e. Equations (10) and (11)) is used to simulate the seven drought events, ranging from moderate to severe drought types, that occurred over the region between the years 2000 and 2022. Using Equations (10) and (11) and conditioning stdCY values on the occurrence of SPEI and stdNDVI, during validation efforts we attempted to simulate ranges of CY values and find their associated conditional probabilities, and, finally, to compare the most probable CY values with the observed CY amount during the selected drought event. It is noteworthy here that, for easier understanding of CY conditions and relating them to SPEI and NDVI values, in the Results and discussion section the real values of NDVI and CY amounts are used instead of their standardized values to provide a better indication of their conditions.
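Putting the pieces together, the validation scan can be sketched as below: the conditional density is evaluated on a grid of candidate stdCY values and the mode is reported. The (stdCY, SPEI, stdNDVI) ordering, the grid, and the assembly of the correlation matrix from the pairwise parameters reported in the next section (0.48, 0.60, 0.74) are illustrative assumptions.

```python
import numpy as np

# Correlation matrix ordered as (stdCY, SPEI, stdNDVI), built from the
# pairwise Gaussian copula parameters reported in the Results section.
R = np.array([[1.00, 0.48, 0.60],
              [0.48, 1.00, 0.74],
              [0.60, 0.74, 1.00]])

def most_probable_stdcy(spei, stdndvi, grid=None):
    """Return the stdCY value with the highest conditional density
    given observed SPEI and stdNDVI (Equation (11) style)."""
    if grid is None:
        grid = np.linspace(-3.0, 3.0, 241)
    dens = np.array([p_eq_given_eq3(x, spei, stdndvi, R) for x in grid])
    return grid[np.argmax(dens)], dens

# e.g. mode, dens = most_probable_stdcy(-1.07, stdndvi_2014)
# where -1.07 is the observed 2014 SPEI discussed in the validation below.
```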
Results and discussion
The fitting of two- and three-dimensional Gaussian copulas to the datasets was performed to investigate the dependency structure between the drought indices (SPEI, stdNDVI, and stdCY values) and to create a framework for providing early warning of CY failure probabilities.
The results from this correlation analysis showed that a window with a duration of 23 days (between 11 May and 3 June, approximately overlapping with the period in which the NDVI time series reach their peak values over the region) for retrieving NDVI values can provide the highest correlation between the averaged stdNDVI over the desired window and stdCY values (correlation value equal to 0.6). Moreover, the results of correlation analyses between SPEI and stdNDVI values suggested that the accumulated daily water balances for the calculation of SPEI would provide a better correlation with NDVI if they were calculated between 28 December and 23 May (correlation value equal to 0.75). Hence, for linking these three components (i.e. SPEI, stdNDVI, and stdCY values) over the CA region, the SPEI time series are calculated considering accumulated water balance values between 28 December and 23 May of the following year, the NDVI values are averaged between 11 May and 3 June of each year, and CY values have been obtained in a yearly based rotation (single crop season per year).
A more in-depth exploration of correlations between SPEI, stdNDVI, and stdCY values reveals the importance of the seasonal dynamics of agricultural processes and environmental factors. For instance, the strong correlation between SPEI and stdNDVI indicates how soil moisture, as represented by SPEI, influences the vegetation dynamics captured by NDVI. This observation aligns with the region's characteristics, as the study area (CA) generally receives sufficient solar radiation during the wheat growing season and often experiences water deficit as the primary limiting factor. Furthermore, an analysis of the temporal lag between meteorological drought events and their impact on CYs can offer valuable insights into the lead time required for the effective implementation of mitigation strategies. In this context, this lead time is estimated to be approximately 1-2 months.
Since the drought indices (SPEI, stdNDVI, and stdCY) were standardized using the quantile matching method, they were expected to conform to a normal distribution. After verifying the normality of these indices by comparing their CDFs with the CDF of a normal distribution, the Gaussian copula was selected for conducting further probabilistic analysis of the interactions between SPEI, stdNDVI, and stdCY (Fig. 2).
The parameter estimates for the Gaussian copula were obtained using the maximum likelihood method, and the results showed that the copula parameters varied depending on the combination of the variables. For the SPEI and stdNDVI combination, the parameter value was estimated to be 0.74, while for SPEI and stdCY it was found to be 0.48. For the stdNDVI and stdCY combination, the parameter value was estimated to be 0.6, and for the three-variable combination of SPEI, stdNDVI, and stdCY, the copula correlation matrix was assembled from the three previously fitted pairwise copulas. These findings suggest that there is a moderate to strong dependence between the variables, which agrees with the results of the correlation analysis.
Determination of critical ranges of SPEI and NDVI values to produce stdCY values of less than −1 and −2
The threshold limits for the SPEI and the NDVI values are determined through the use of the conditional probabilities represented in Equations (4) and (5). The analysis was conducted in two stages: first through the application of a two-dimensional copula model to determine the limits for each variable (i.e. SPEI and NDVI) individually, followed by the implementation of a three-dimensional copula model to obtain the joint range for both SPEI and NDVI. Application of this analysis resulted in the estimation of the probabilities of obtaining stdCY values less than −1 (CY < 189.7 kg/da, equivalent to moderate agricultural drought) and less than −2 (CY < 142.4 kg/da, equivalent to extreme agricultural drought) for ranges of SPEI and NDVI values (Fig. 3). The critical threshold for SPEI to cause agricultural drought (CY < 189.7 kg/da) with a 10% probability was found to be ~0.28 (represented with black colour in Fig. 3(a)). Similarly, the critical thresholds for NDVI that were associated with a 10% probability of CY values below 189.7 kg/da or 142.4 kg/da were found to be approximately 0.42 or 0.33, respectively, as depicted in Fig. 3(b).
Moreover, the results associated with the three-dimensional (trivariate) copula showed that the dual impact of SPEI and NDVI values can change their critical threshold values. For example, the critical threshold value for NDVI can increase to 0.44 when the SPEI value drops below −2 (the contour with probability of 0.1 in Fig. 3(c)). Considering the probability of moderate agricultural drought occurrences for the joint occurrence of particular NDVI and SPEI values, it can be seen that for the same SPEI values (e.g. −1), when the NDVI values decrease from 0.43 to 0.30 (moving downward vertically), the probability of moderate agricultural drought increases from 0.1 to 0.7, while if the NDVI values are held constant and the SPEI values decrease from +3 to −3 (moving leftward horizontally), the probability of moderate agricultural drought occurrence increases by 0.1, which is much less than 0.6 (for the case of keeping SPEI constant; Fig. 3(c)). A very similar trend can also be observed for severe agricultural drought conditions (CY < 142.4 kg/da; Fig. 3(d)). That the change in conditional probabilities is mostly vertical rather than horizontal reveals that the dominant variable in controlling agricultural drought in the region is NDVI.
The determination of critical thresholds for SPEI and NDVI presented in this study holds significant practical implications for agricultural management and decision making. Farmers and policymakers can leverage these thresholds as valuable tools to anticipate and mitigate the impacts of agricultural drought on CYs. The identified critical threshold for SPEI, approximately 0.28, implies that when this value is surpassed, there is a 10% probability of moderate agricultural drought (CY < 189.7 kg/da). Similarly, NDVI thresholds of 0.43 and 0.30, associated with a 10% probability of CY values below 189.7 kg/da or 142.4 kg/da, respectively, highlight the sensitivity of CY to vegetation health. Understanding these critical thresholds and obtaining them for different crop growth stages enables stakeholders to implement proactive measures, such as adjusting irrigation practices or considering alternative crops, when conditions approach these limits. Moreover, the three-dimensional copula analysis reveals the interplay between SPEI and NDVI, offering a nuanced understanding of their joint impact on agricultural drought. This insight allows for more targeted interventions, as changes in NDVI values appear to have a more pronounced effect on agricultural drought occurrences.
Discussion of copula model runs using two- and three-dimensional models and their comparison
To compare the results of the two models (two- and three-dimensional copulas) in terms of expected NDVI and CY values for different levels of SPEI, the most probable values of NDVI as well as CYs are searched for using Equations (6) and (7), considering different drought scenarios (for the occurrence of SPEI to be less than 0, −0.5, −1.0, −1.5, −2.0). The results of these analyses are tabulated in Tables 1 and 2 for two- and three-dimensional cases, respectively, where the two-dimensional conditions (represented in Table 1) reflect different levels of drought scenarios, defined by SPEI values ranging from 0 to −2, and the corresponding expected NDVI and CY values with their one-standard-deviation uncertainty range. The three-dimensional conditions, on the other hand, are represented in Table 2 and reflect the impact of adding the third dimension to the analysis by integrating NDVI values with SPEI values to predict the most probable CY.
The results show that as the SPEI values decrease and approach more severe drought conditions, the most probable NDVI values decrease as well. The same trend is also visible in the CY values, with a larger decrease observed under more severe drought conditions. Moreover, the three-dimensional conditions, incorporating both SPEI and NDVI, result in more precise predictions of CY (with a narrower uncertainty range), with the CY values consistently lower than the corresponding two-dimensional conditions.
These results indicate that using a three-dimensional copula model results in lower CYs compared to a two-dimensional conditional model. Specifically, for SPEI values less than or equal to −2, the three-dimensional copula model yields approximately 170.8 kg/da, while the two-dimensional model yields 185 kg/da (the expected CY drops around 14 kg/da).
The revelation of the superiority of the three-dimensional copula model over its two-dimensional counterpart marks a significant stride in enhancing the practical applications of agricultural drought prediction. This advanced model not only refines the precision of CY predictions but also opens a gateway to considering additional variables, such as solar radiation or other limiting factors, that play crucial roles in agricultural productivity. This is especially important in the context of meeting food demands, as the multi-dimensional approach empowers stakeholders with a more comprehensive toolkit for decision making, offering the potential to integrate a broader spectrum of variables into future analyses.
Expected most probable CYs considering a wider range of drought scenarios
For a more robust estimate of expected CYs in the CA region of Turkey under different drought scenarios, a broader range of SPEI and NDVI values can be considered (Table 3). This table presents the expected CY based on SPEI thresholds (0.0 to −2.0) and NDVI thresholds (0.43 to 0.32). The expected CY values are given in the table along with their ranges of uncertainty, covering a one-standard-deviation boundary around the most expected CY values.
The table indicates that as the values of SPEI and NDVI decrease, the expected CY also decreases. For example, when SPEI is less than 0 and NDVI is less than 0.43, the expected CY is 213 kg/da with a range of uncertainty of 172-254 kg/da. On the other hand, when SPEI is less than −2 and NDVI is less than 0.32, the expected CY is 166 kg/da with a range of uncertainty of 130-202 kg/da, which is the lowest expected CY among all the combinations. It can also be observed that the range of uncertainty becomes narrower as the values of both SPEI and NDVI decrease, implying that the expected CY is more certain when both indices have lower values.
Additionally, it is worth noting here that the expected CY values remain the same for certain combinations of SPEI and NDVI values; for instance, when NDVI is less than 0.32 and SPEI is less than −1.5 or −2, the expected CY is 166 kg/da with a range of uncertainty of 130-202 kg/da, implying that the expected CY is not affected by small changes in the SPEI values within a certain range.
The comparison of two- and three-dimensional copula runs is also made using a continuous range of CY values, represented in Fig. 4, where the probability of CYs for SPEI being less than −1 and −2 and NDVI being less than 0.37 and 0.32 is explored using two-dimensional copulas, and combinations of these conditions in three-dimensional copulas.
The cumulative probabilities represented in the bottom panel of Fig. 4(b) show that when the expected CY is found to be below 189.7 kg/da (stdCY < −1), the cumulative joint probability computed using Equation (9) ranges from 0.5 to 0.6 with a mean of 0.55 for NDVI values less than 0.37, and from 0.71 to 0.74 with a mean of 0.72 for NDVI values less than 0.32. On the other hand, when the expected CY is found to be below 142.4 kg/da (stdCY < −2), the cumulative probability drops to a mean range of 0.15 to 0.27, with interval values of 0.12 to 0.18 and 0.25 to 0.29 for NDVI values of less than 0.37 and 0.32, respectively. The cumulative probabilistic results for obtaining an estimate of CYs are also tabulated in Table 4 to present a better understanding of the above discussion, considering changes in both SPEI (<−1 and <−2) and NDVI (<0.37 and <0.32) together using the three-dimensional model.
Overall, the results represented in Fig. 4(a) and (b) show outcomes consistent with the results presented in Tables 1 and 3, as well as Table 4, confirming that the expected CY values decrease with the utilization of three-dimensional copulas. These results highlight the complex relationship between meteorological and agricultural droughts and the necessity of adding more dimensions (such as a soil moisture drought index or evapotranspiration amounts measured over the regions) to the copula simulations to give more robust results.
Validation -probabilistic analysis of CYs for different ranges of drought events and comparisons with the observed records
The proposed methodology has been validated through a comparison of simulated values with observed CY values during drought years. To this aim, Equations (10) and (11) are utilized to calculate the corresponding probabilities for a range of CY values covering the observed values of the SPEI and NDVI indices. The results of this comparison, presented in Table 5 and Fig. 5, provide compelling evidence of the accuracy and effectiveness of the proposed methodology. The comparison of observed CY data with the developed framework's outputs for various drought years provides valuable insights into the model's performance and its implications (Table 5). In the year 2007, for instance, where the observed SPEI was −1.01 and NDVI was 0.38, the most probable CY was estimated at 194.5 kg/da. However, the observed CY was lower, at 174.9 kg/da, indicating a potential limitation in the model's ability to capture the true impact of drought conditions. These discrepancies highlight the need for a nuanced understanding of local factors influencing CYs during drought events. The results underscore the importance of considering additional variables, such as detailed crop type maps and higher-resolution weather data, to enhance the model's accuracy.
The validation of the proposed methodology is also presented in Fig. 5, where the CYs for the drought years of 2014 and 2021 are compared with observed CYs using both two- and three-dimensional copula model approaches. Focusing on the moderate drought year 2014, as illustrated in the top panel of Fig. 5, the observed SPEI was −1.07 and NDVI was 0.4. The comparison of the observed CY of 210.9 kg/da with the most probable simulated CY, estimated at 208.7 kg/da, indicates a close alignment, showcasing the model's capacity to accurately predict CYs during drought conditions. The consistent agreement between the observed and the most probable CYs during the 2021 drought year further supports the robustness of the model. These results highlight the potential of the proposed methodology as a reliable tool for early warning systems and decision-making processes in agricultural drought management, providing farmers and policymakers with valuable insights for more informed and timely actions.
The findings of this study are consistent with previous studies (such as Huang et al. 2014 and Nagy et al. 2021) which also found that the best time for reliable estimation of NDVI values is around the flowering time of crops. The research of Serinaldi et al. (2009) highlights the necessity of using higher-order dimensional copulas to correctly model the stochastic structure of variables, which is also reflected in the current study's use of a three-dimensional copula for generating more robust estimations of expected CYs. The results of this study confirm the outcomes of Li et al. (2021), which used a vine copula to link SPEI and NDVI and found results similar to the current study in terms of correlation analysis. These findings highlight the potential utility of copulas and demonstrate the importance of considering multiple factors in the better development of copula models for agricultural drought management efforts. Ribeiro et al.'s (2019) research on wheat CYs, similar to the present study, identified SPEI and the vegetation condition index (derived from NDVI) as the dominant indicators within a two-dimensional copula framework. However, the current study takes this a step further by utilizing a three-dimensional copula to generate more robust estimations of expected CYs. This approach is particularly useful for developing early warning systems for agricultural drought management and forecasting the probabilities of different ranges of CY conditions based on observed SPEI and NDVI conditions a few months before harvest time. This differs from previous studies such as Fang et al. (2019) and Li et al. (2021), which used probability values, as the current study employs the density of conditional copulas and considers various conditions of SPEI and NDVI values to generate expected CY amounts. Overall, the present study contributes to the ongoing effort to enhance agricultural drought forecasting methodologies for improved food security and sustainable agriculture.
Although the results of the current study present valuable insights into the early detection of agricultural drought using a combination of SPEI, NDVI, and CY data, several limitations should be noted. Firstly, the study did not include a detailed crop type map, which is a crucial factor in accurately assessing the impact of drought on CY. The land cover classification used for determining rainfed areas may not necessarily be representative of winter wheat crops, leading to potential inaccuracies in the analysis. Secondly, the weather data used in this study had a resolution of 0.25°, which may not be sufficient for such an analysis. A denser station dataset with higher resolution could provide more accurate and reliable results. Finally, the reliability of CY values is crucial for the development of an effective early warning system. For example, in this study there were instances where both SPEI and stdNDVI indicated severe drought, while the stdCY values did not reflect drought conditions. Such discrepancies add uncertainty to the analysis and increase the error rate of developed early warning systems. Thus, future research is needed to address these limitations to improve the accuracy and reliability of early detection of agricultural drought.
Concluding remarks
Agricultural drought is highly influenced by a multitude of factors, such as meteorological, hydrological, and vegetation health conditions. This research aimed to investigate the effect of various drought factors on CY, considering the temporal variability of these factors, and to develop an early warning system for CY conditions through the implementation of a conditional probabilistic approach with two- and three-dimensional analyses.
To this aim, this study utilized copula functions to combine three drought indices (SPEI, stdNDVI, and stdCY values) between the years 2000 and 2022 in Turkey and to create a framework for providing early warning and information regarding CY failure probabilities based on meteorological drought and vegetation health conditions. The main outcomes of the study suggest that timing and threshold limits play an important role in determining agricultural drought conditions and CYs. In this study, the window between 11 May and 3 June was identified as the best period for retrieving NDVI values to obtain the highest correlation with stdCY values (0.60), while the time frame between 28 December and 23 May was found to be the best for calculating SPEI with the highest correlation with stdNDVI values (0.75). Moreover, the critical threshold values of SPEI and NDVI for causing agricultural droughts with 10% probability were determined to be ~0.28 and ~0.42, respectively. Additionally, the results of this study also highlighted that the dual impact of SPEI and NDVI values can change their critical threshold values, with NDVI being the dominant variable in controlling agricultural drought.
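As a rough illustration of the conditioning step behind such estimates, the sketch below fits a trivariate Gaussian copula with empirical (normal-score) margins and reads off the conditional distribution of CY given observed SPEI and NDVI values. This is a minimal sketch in base R under stated assumptions, not the study's implementation: the data, the Gaussian copula family and all variable names are illustrative placeholders, and the study itself works with its own fitted copulas and Equations (6)-(11).

```r
# Minimal sketch: conditional expected CY given SPEI and NDVI under a
# trivariate Gaussian copula with empirical margins (placeholder data).
set.seed(1)
spei <- rnorm(23)                              # 23 years of SPEI (illustrative)
ndvi <- runif(23, 0.25, 0.55)                  # NDVI proxy (illustrative)
cy   <- 120 + 250 * ndvi + 10 * spei + rnorm(23, sd = 12)  # CY, kg/da

# Normal scores of the empirical margins
to_z  <- function(x) qnorm(rank(x) / (length(x) + 1))
z_new <- function(x, x0) qnorm(mean(x <= x0))  # score of a new observation
Z <- cbind(to_z(spei), to_z(ndvi), to_z(cy))
R <- cor(Z)                                    # copula correlation matrix

# Conditional normal-score mean/sd of CY given (SPEI, NDVI), mapped back
# to the CY scale through the empirical quantile function
cond_cy <- function(s0, n0) {
  z  <- c(z_new(spei, s0), z_new(ndvi, n0))
  A  <- R[3, 1:2] %*% solve(R[1:2, 1:2])
  mu <- drop(A %*% z)
  sd <- sqrt(drop(R[3, 3] - A %*% R[3, 1:2]))
  quantile(cy, pnorm(mu + c(-1, 0, 1) * sd))   # ~one-sd band and centre
}

cond_cy(s0 = -1.07, n0 = 0.4)                  # e.g. the 2014 drought case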
This study also found that utilizing a three-dimensional copula model results in more precise CY simulations than a two-dimensional conditional model, providing CYs with narrower uncertainty ranges. Furthermore, the validation efforts of this study, through a comparison of simulated CY values with observed CY values during drought years, showed that all of the observed CY amounts fell within the simulated expected range, demonstrating the robustness of the methodology in capturing the impact of drought conditions on CY.
Overall, the outcome of this study provides an important toolkit for decision makers to measure the impact of meteorological drought and vegetation health conditions during critical stages of crop growth, and to develop early warning systems and preventative measures during drought events so as to mitigate their effects on meeting food demands.
Figure 1. Location of the study area (Central Anatolia (CA) region of Turkey) and rainfed areas over it.
Figure 2. Comparison of the cumulative distribution function (CDF) of the normal distribution and different drought indicators of (a) Standardized Precipitation Evapotranspiration Index (SPEI); (b) standardised Normalized Difference Vegetation Index (stdNDVI); and (c) standardised Crop Yield (stdCY).
Figure 3. The probabilities of moderate and severe agricultural drought occurrences based on different ranges of SPEI and NDVI values (the probabilities are calculated using Equations (4) and (5)). CY: crop yield; S: SPEI; N: NDVI.
Figure 4. The PDF and CDF curves of two- and three-dimensional copula runs for a continuous range of CY values over different scenarios of SPEI less than −1 and −2, as well as NDVI values less than 0.37 and 0.32 (the probabilities are calculated using Equations (6)-(9)). PDF: probability density function; CDF: cumulative distribution function; CY: crop yield.
Figure 5. Comparison of observed and expected CY values under two drought conditions that occurred in the years (a) 2014 and (b) 2021. Shaded areas cover one standard deviation around the expected CY. The probabilities for the range of CY values are calculated using Equations (10) and (11). CY: crop yield; S: SPEI; N: NDVI.
Table 2. Three-dimensional scenarios (Equation (7)) for generating expected CYs considering different SPEI values and their corresponding expected NDVI values. The CY range shows the uncertainty associated with the expected CY values, covering one standard deviation around the expected CY. S: SPEI; N: NDVI; CY: crop yield.
Table 1. The expected values of the Normalized Difference Vegetation Index (NDVI).
Table 5. The expected CY values and their associated uncertainty range (covering one standard deviation around the expected CY) considering three-dimensional copula models using observed SPEI and NDVI values for the recorded droughts in the CA region. CY: crop yield. | 2024-03-17T17:14:49.696Z | 2024-03-11T00:00:00.000 | {
"year": 2024,
"sha1": "e04095e45e1b994fb7113e8c4a434e7509ef0491",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/02626667.2024.2326187?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "32f878b59f218e4ec49a009eb4ad7953323da400",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": []
} |
55204573 | pes2o/s2orc | v3-fos-license
Habitat use, relative growth and size at maturity of the purple stone crab Platyxanthus crenulatus (Decapoda: Brachyura), calculated under different models
We describe the most noteworthy changes occurring during the post-metamorphic phase in both sexes of the purple stone crab Platyxanthus crenulatus. The spatial structure of the populations by size, and early changes in colour pattern and in the relative growth of the chelae, suggest an ontogenic migration from intertidal to deeper waters. Before reaching maturity and laying eggs, females undergo a tight sequence of morpho-physiological changes over a narrow size range (44-64 mm carapace width [CW]). In contrast, males undergo two main phases related to sexual maturity. Early in their lives, they develop sperm and accelerate the relative growth of the chelae (35-45 mm CW). Morphological maturity of males comes later, when the relative growth rate reaches its maximum and decelerates (65-70 mm CW). Adult males are larger and develop conspicuously larger chelae than females. Morphometric analyses were performed with two different techniques: the traditional procedure, which describes relative growth relationships as power functions, and an alternative, smoothing spline-based model that does not depend on previous assumptions. The results of the alternative analysis were coherent with other reproductive indicators and ancillary observations, allowing a more comprehensive understanding of the relative growth. We provide supporting material containing the respective script written in R to be used freely in future studies.
INTRODUCTION
Life cycles of benthic invertebrates with pelagic larvae may suffer drastic morphological, physiological and behavioural changes at metamorphosis; however, other important changes, although less dramatic, may occur after settlement. Such changes are mostly adaptations to the sequential exploitation of different niches during growth and are often accompanied by habitat shifting and, consequently, the spatial structuring of the populations by size (Pardo et al. 2007). The efficient exploitation of ontogenetic niches generally requires different morphologies and colour patterns associated with changing diets (Jensen and Asplen 1998) and predatory risks (Palma and Steneck 2001, Reuschel and Schubart 2007).
A landmark within the post-metamorphic phase is the attainment of sexual maturity, which involves a suite of morphological, physiological and behavioural changes that must occur in harmony for effective reproduction to take place. In crabs, sexual maturity is attained progressively through a series of steps. The sequence and synchronicity of these steps are particular to each species (Fernández-Vergaz et al. 2000, Leal et al. 2008) and therefore the size at maturity may vary depending on the particular ontogenic change observed. For most purposes, it is desirable to estimate sexual maturity based on the direct observation of the size at which individuals are reproducing in nature. However, to do so it is necessary to capture a large number of mating pairs and egg-bearing females, which can be a difficult task in fully marine species, given that reproductive individuals tend to hide and/or migrate to areas of difficult access for researchers. Consequently, estimations of the size at maturity of crabs have traditionally been based on a combination of direct observations of qualitative traits related to reproduction (hereafter "reproductive indicators") and morphometric analysis (e.g. López-Greco and Rodríguez 1999, Hall et al. 2006, Sal Moyano et al. 2010). Reproductive indicators, such as the morphology of gonads, seminal receptacles and vulvae, have discrete response values (mostly binary, as mature/immature) that are readily observable, so the classification of an individual as mature usually poses no further problems.
In contrast, determining maturity based solely on morphometric traits (hereafter "morphometric indicators") may be rather problematic. The relative growth that leads to an overall mature shape occurs in a continuum during the ontogeny, thus requiring analyses that are methodologically more complicated. Moreover, the potential existence of different morphotypes of mature males (as evidenced by the growing list of species with alternative mating strategies, see Shuster 2008) complicates the analysis even more. The best method has been a long-debated topic (e.g. Gould 1966, Somerton 1980, Watters and Hobday 1998, Katsanevakis et al. 2007, Packard 2012), but the vision that relative growth follows the allometric function (in the strict sense that the relationship between the two measured quantities fits well to a power law) has prevailed in the literature. The most widely used method is that of Somerton (1980), in which the relative growth is assumed to follow a two-phase power function and size at maturity is therefore estimated as the breakpoint in a two-segment regression applied to the log-transformed data. This practice has been the object of criticism (Watters and Hobday 1998, Katsanevakis et al. 2007, Packard 2012), mainly because it requires several a priori assumptions that are not necessarily met but are rarely tested.
Platyxanthus crenulatus (Milne-Edwards 1879) belongs to Platyxanthidae, a family of large edible crabs endemic to South American coasts (Thoma et al. 2012). These are robust crabs, frequently found in crevices of intertidal and subtidal rocky bottoms of the southwestern Atlantic temperate coasts, from 23° to 44°S (Boschi 1964). Their claws show marked heterochely and laterality (i.e. dimorphism between chelae and the tendency for them to appear on a particular side of the body), traits associated with shell-breaking feeding habits (Laitano et al. 2013). Due to their abundance, large size and diet, P. crenulatus presumably has an important ecological role as a predator in benthic communities, similar to that of other stone crabs such as the Menippidae in the northwestern Atlantic Ocean (Gerhart and Bert 2008).
Here we describe post-settlement changes in male and female P. crenulatus (habitat use and colour patterns; size and sexual differences in relative growth) and estimate the size at sexual maturity on the basis of several reproductive and morphological indicators. Additionally, we have taken this opportunity to compare the performance of alternative morphometric analyses appearing in the literature, testing the consistency of their results within the framework of the other ontogenic changes described here and the scarce previous information on the species' biology.
MATERIALS AND METHODS
Samples were taken on a monthly basis in Mar del Plata (MDP, 38°02'S, 57°31'30"W) from May 2006 to April 2009, and sporadically during November 2007, April 2009 and January 2014 in San Antonio Oeste (SAO, 38°53'S, 62°07'W), Argentina (Fig. 1A). Mean water temperature ranges from 7.5°C in August (austral winter) to 20.6°C in January (austral summer) in MDP and from 4.4°C in August to 25.2°C in January in SAO (Servicio Argentino Hidrografia Naval, http://www.hidro.gov.ar/). With few exceptions, samplings were carried out during daylight. Crabs were collected from 0 (intertidal sampling) to 20 m depth (subtidal sampling) on rocky and sandy bottoms. Subtidal samplings were performed on hard bottoms by SCUBA diving and on soft bottoms with a trawl net towed by a small outboard boat; intertidal flats were sampled simply by hand, turning small boulders and inspecting caves and crevices. External qualitative traits such as colour pattern and handedness (whether the crusher claw is carried on the right or left side of the body), and those clearly related to reproduction (namely egg-bearing and vulva condition), were observed and registered for each crab. Additionally, the following body dimensions (Fig. 2) were measured with a vernier caliper with an accuracy of 0.1 mm: carapace length (CL) and width (CW); length (L), height (H), and width (W) of the propodus of the crusher (Cr) and cutter (Cu) chelae; gonopod length (GL) of males; and maximum width of the sixth abdominal segment of females (AW). Thus, a total of 11 and 13 measurements/observations of external features were recorded for each male and female, respectively. All statistical analyses (detailed below) were conducted using the R language for statistical computing (R Development Core Team 2011).
Size-frequency distribution by habitat, sex and colour pattern
After rejecting the hypothesis of differences between sites and among years, size data from all years and localities were pooled to produce larger carapace width (CW)-frequency distributions for each of the six possible combinations of sex and habitat type (intertidal, subtidal rocky bottom and subtidal soft bottom). Kernel density estimators (KDEs) were applied to each group, using the script in R provided in Langlois et al. (2012), slightly modified for our data. Briefly, KDEs are a non-parametric way to estimate the probability density function of a random variable. We then tested habitat partitioning by size and sex by comparing the respective KDEs. The statistical method used to compare frequency distributions is sensitive to differences in both the shape of the distribution and its position on the horizontal axis (in short form, the shape and the site, respectively; Langlois et al. 2012). Therefore, to test for differences in shape alone, frequencies were also analysed after standardization by median and variance (y = (x − median)/st.dev.). Histograms were made using CW classes selected to best match the shape of the probability densities generated by the KDEs. The bandwidth of the KDEs was chosen by the Sheather and Jones (1991) bandwidth selection procedure, using the 'dpik' function in the 'KernSmooth' package (Wand 2013); a sketch of this step is given below.
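A minimal sketch of this density-estimation step follows; dpik() and bkde() are the KernSmooth functions cited above, while the sample data are invented for illustration:

```r
library(KernSmooth)  # provides dpik() and bkde() (Wand 2013)

# Hypothetical carapace widths for one sex-habitat group (mm)
cw <- c(rnorm(80, mean = 25, sd = 6), rnorm(40, mean = 60, sd = 10))

h   <- dpik(cw)                  # Sheather-Jones plug-in bandwidth
kde <- bkde(cw, bandwidth = h)   # kernel density estimate of the CW frequencies
plot(kde, type = "l", xlab = "Carapace width (mm)", ylab = "Density")

# Standardization used to compare the shape of two distributions alone
cw_std  <- (cw - median(cw)) / sd(cw)
kde_std <- bkde(cw_std, bandwidth = dpik(cw_std))
```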
The colour pattern was classified on the basis of the dorsal part of the carapace, which may be either homogeneously purple or disruptive. Previous observations showed that the disruptive pattern is common among small individuals but absent in larger ones. Therefore, we studied the relationship between size and colour pattern by fitting a binomial model to the pooled data using the glm() function provided with the R base program (see the sketch below).
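A minimal sketch of this binomial fit, with simulated observations standing in for the field classifications; the size at which 50% of individuals retain the disruptive pattern follows from setting the linear predictor a + b·CW to zero:

```r
# Logistic model of colour pattern (1 = disruptive, 0 = homogeneous purple)
# as a function of carapace width; data here are simulated placeholders.
set.seed(1)
cw     <- runif(200, 5, 95)
colour <- rbinom(200, 1, plogis(4 - 0.2 * cw))

fit <- glm(colour ~ cw, family = binomial)

# CW at 50% probability: a + b*CW50 = 0  =>  CW50 = -a/b
cw50 <- -coef(fit)[1] / coef(fit)[2]
cw50
```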
Relative growth and size at maturity calculated from morphometric indicators
Morphometric data were log-transformed and simple linear least-squares regressions were fitted with CW as the independent variable, and the slope b was estimated for all log-log relationships. In order to detect sexual differences in relative growth, the slope b was compared between males and females. Following Fernández-Vergaz et al. (2000), body dimensions showing sexual differences in b were considered secondary sexual features valid for determining morphometric maturity. Claw data were analysed regardless of the crusher's laterality. Crabs were excluded from the analysis only when recent regeneration of the chelae was evident from the presence of buds.
Once the body dimensions had been selected, morphometric maturity was calculated using two alternative methods: the traditional Somerton's method (Somerton 1980) and a smoothing spline-based method (slightly modified from Watters and Hobday 1998). Briefly, Somerton's method consists of a predefined two-segment model which assumes that, when a certain body part and CW are plotted against each other on a double logarithmic scale, the points lie along two straight lines. If the two-segment model fits significantly better than the simple linear model, then it is assumed that the first line describes the relative growth of juveniles and the second the relative growth of adults. This analysis was performed using the algorithm of the MATURE program described in Somerton (1980), rewritten as a script in the R program; the core idea is sketched below.
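The base-R sketch below conveys only the core idea of the two-segment fit (a grid search over candidate breakpoints that minimizes the pooled sum of squared errors on the log-log scale); the actual MATURE algorithm additionally iterates the assignment of points near the breakpoint and tests the two-segment model against the single line, and the data here are simulated:

```r
# Two-phase allometry on the log-log scale: fit a straight line on each
# side of a candidate breakpoint and keep the split with the lowest SSE.
two_segment <- function(logCW, logY) {
  cand <- sort(unique(logCW))
  cand <- cand[cand > quantile(logCW, 0.1) & cand < quantile(logCW, 0.9)]
  sse <- sapply(cand, function(bp) {
    left <- logCW <= bp
    sum(resid(lm(logY[left] ~ logCW[left]))^2) +
      sum(resid(lm(logY[!left] ~ logCW[!left]))^2)
  })
  list(breakpoint = cand[which.min(sse)], sse = min(sse))
}

# Illustrative use with simulated two-phase allometric data
set.seed(1)
logCW <- log(runif(300, 10, 90))
logY  <- ifelse(logCW < log(60), 0.9 * logCW, 1.4 * logCW - 2) +
         rnorm(300, sd = 0.05)
exp(two_segment(logCW, logY)$breakpoint)  # breakpoint back on the CW scale
```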
The spline-based method consists of three steps. The first step is to bin the original data set (i.e. transform the original absolute sizes by replacing the original values that fall in a given small interval, technically a 'bin', with the mean value of that interval). Then the measurements of the body part whose relative growth is under study are grouped into their respective bins (hereafter 'size classes') and the median of each group is calculated, thus generating a paired data set with the mean size class as the predictor variable and the corresponding median of the body measure of interest as the response variable. The median was chosen instead of the mean to make the analysis robust to outliers in the response variable, particularly against the potential incorporation of small but well-developed claws, which are difficult to assign to a late stage of regeneration or to natural variation. The second step is to fit different spline models to these body size-specific medians and to choose the best spline based on the trade-off between goodness of fit and smoothness (here the generalized cross-validation criterion [GCV] was used). If the selected spline is significantly different from the straight line, the last step is to find the carapace size at which the second derivative of the fitted spline is maximized. The smoothing spline fitted to the morphometric data can be seen as the growth trajectory averaged over an individual's lifespan. Thus, the first derivative of the fitted function is the instantaneous coefficient of relative growth, while the second derivative is the instantaneous rate of change of that coefficient. The point at which the second derivative of the fitted spline is maximized (i.e. the maximum rate of change in relative growth) is considered the size at morphometric maturity. On the other hand, when the second derivative of the fitted spline is equal to 0, there is an inflection point in relative growth, which thus changes from an increasing to a decreasing trajectory or vice versa. Unlike the traditional two-segment linear model (or any variant of segmented models), the spline method has the advantage of not requiring a priori assumptions about the shape or the number of significant changes in relative growth rate during the ontogeny. Both Somerton's method and the spline method were performed in the R program using scripts written for this specific purpose, which are provided here as Appendix 1; a simplified version of the spline steps is sketched below.
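A simplified base-R rendering of these three steps, using simulated data and smooth.spline() (whose default smoothness selection is GCV) in place of the authors' Appendix 1 script:

```r
# 1) Bin CW and take the median of the trait in each bin (robust to
#    outliers such as partly regenerated claws); 2) fit a GCV-selected
#    smoothing spline; 3) locate the peak of its second derivative.
set.seed(1)
cw  <- runif(400, 10, 95)
crl <- cw^0.9 * (1 + 0.6 * plogis((cw - 60) / 4)) * exp(rnorm(400, sd = 0.05))

bins <- cut(cw, breaks = seq(10, 95, by = 2.5))
med  <- tapply(crl, bins, median)   # median trait value per size class
mid  <- tapply(cw, bins, mean)      # mean CW per size class
ok   <- !is.na(med)

fit  <- smooth.spline(mid[ok], med[ok])          # smoothness chosen by GCV
grid <- seq(min(mid[ok]), max(mid[ok]), length.out = 500)
d2   <- predict(fit, grid, deriv = 2)$y          # rate of change of rel. growth
grid[which.max(d2)]                              # size at morphometric maturity
```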
Size at maturity calculated from reproductive indicators
Physiological maturity was assessed from macroscopic features of the female and male gonads as follows: 1) immature ovaries, small and barely visible as a translucent filament or opaque thin tubes; 2) mature ovaries, large and conspicuous, varying from a soft, translucent or pale pink colour to a swollen, violet to red colour, with visible oocytes inside; 3) immature testes, very thin, translucent white filaments, with the vas deferens indistinguishable from the testes to the naked eye; and 4) mature testes, conspicuously thicker than immature testes and white in colour, with the vas deferens easily distinguishable from the testes, translucent white, highly convoluted and swollen, with visible granules inside; maturity was corroborated by the presence of spermatophores.
Sexual maturity from a morphological, behavioural and functional perspective was assessed in females on the basis of the vulvae, the seminal receptacles and the presence of embryos, respectively. Vulvae were classified as mature or immature on the basis of their external shape and size. The presence of sperm in the seminal receptacles was used as evidence of copulation regardless of their degree of fullness. Although mature seminal receptacles vary greatly in size depending on the amount of sperm loaded, they are always opaque white and very conspicuous, while immature seminal receptacles are distinctly smaller and translucent yellowish in colour. The relationship between each kind of maturity and the specific indicator used for its determination is summarized in Table 1.
Size at functional maturity of females was calculated simply as the mean CW of ovigerous individuals. The indicators occurring only once during the lifespan (first gonad maturity in both sexes and the acquisition of mature forms of spermathecae and vulvae in females) were analysed as binomial variables in order to calculate the size at physiological maturity (both sexes) and at morphological and behavioural maturity (females). The proportion of individuals classified as adults in each size interval was fitted to a logistic curve, and size at maturity was estimated using the equation Pi = 1/(1 + e^-(a + b·CWi)), where Pi is the proportion of mature individuals at a certain carapace size class CWi, and a and b are constants. When Pi = 0.5, then CWi = -a/b is the mean size at maturity (CW50%).
RESULTS
Size-frequency distribution by habitat, sex and colour pattern
A total of 1018 crabs (501 males, 517 females) were captured, ranging from 5.17 to 95.63 mm CW. The carapace acquired the characteristic colour that led to the vernacular name of P. crenulatus as size increased, in both sexes. Dorsally, smaller individuals had a disruptive colour pattern, making them cryptic against intertidal backgrounds, while large crabs were homogeneously purple to violet (Fig. 1B, C). All crabs were ventrally white. The frequency of animals with a disruptive pattern falls abruptly and disappears completely at approximately 40 mm CW. The size at which 50% of individuals still have a disruptive pattern is 20 mm CW (95% C.I. 18-21 mm CW). Handedness was biased to the right in both sexes (p<0.001), but left-handed males were more frequent than left-handed females (11.48% of females and 20.29% of males, p<0.001).
The overall sex ratio did not differ from 1:1 (Chi Yates = 0.088, p=0.767). There was a sexual difference in both the site and the shape (sensu Langlois et al. 2012) of the overall size-frequency distribution (KDE test for equal site and shape, p=0; Fig. 3), with males reaching larger sizes than females. The largest male captured was 95.63 mm CW, while the largest female was 81.46 mm CW.
The spatial distribution of P. crenulatus was clearly size-structured (Fig. 4). Small crabs were more frequent in the intertidal and almost absent from subtidal samples, regardless of sex, while large crabs occupied mostly crevices and caves under large stones on subtidal rocky bottoms. On the intertidal flats, they were commonly found in patches of coralline algae and among pebble rocks and broken shells accumulated at the bottom of pools. On subtidal soft bottoms the individuals were scarce and their sizes encompassed most of the range, except the largest sizes; the presence of smaller juveniles could not be detected there given the limitations of subtidal sampling.
Relative growth and size at maturity calculated from morphometric indicators
All body dimensions analysed showed sexual differences in allometric growth (Table 1). Both male and female chelae were positively allometric (b>1), except the cutter claw of females. Among the relative growth constants of the chelae, CrL of males had the smallest variance and the best fit to the simple linear model (see Table 1). Thus, if CrL differed from the linear pattern, the other dimensions of the chela must differ even more, and we therefore chose CrL to determine male morphological maturity. We also used GL of males and AW of females, since both are undoubtedly secondary sexual traits directly involved in reproduction.
CrL was most appropriately modeled by splines with 5 df and 3 df for males and females, respectively. The best spline for GL had 5 df, describing an asymptotic-like decreasing parabola; AW was better modeled by a 6 df spline describing an S-shaped curve (Fig. 5). The spline method determined a major change in the male relative growth of CrL and GL at 59 mm CW (95% C.I. 57-67 mm) and 57 mm CW (95% C.I. 55-60 mm), respectively, indicating that morphometric maturity occurs at these sizes. The main change in the growth rate of CrL and GL estimated by MATURE was at 66 mm CW (95% C.I. 64-68 mm) and 65 mm (95% C.I. 63-67 mm), respectively (Table 2). In females, the spline method detected a change in the relative growth of the sixth abdominal segment width (AW) at 46 mm CW (95% C.I. 44-48 mm), and MATURE yielded a change in growth rate at 58 mm CW (95% C.I. 57-59 mm; Table 2). Growth trajectories of male and female CrL (determined by splines) coincided during the first stages of the ontogeny. The relative growth of males' CrL rapidly increased when body size exceeded approximately 45 mm CW, initiating the conspicuous sexual dimorphism of the chelae observed among large individuals (Fig. 6). Interestingly, such divergence in the trajectories of relative growth also coincided with the maximum relative growth rate of females' crushers and an inconspicuous bump in the relative growth trajectory of males' crushers, as marked in Figure 6. Moreover, when both methods applied to the male CrL data were compared (Fig. 7), a survey of the residuals also suggested the existence of a second breakpoint. Consequently, the relative increase in male crusher length could be divided into three phases. The first two differed in the rate of relative growth, but both had positive allometry, with roughly the same variability in the relative growth of the claws. The third phase had negative allometry and much more variation than the previous two (see Fig. 7).
DISCUSSION
In considering our results it is necessary to bear in mind that the data come from two separate populations. This is particularly important for the analysis of relative growth, given that exogenous factors (mainly the annual cumulative temperature and nutrition status) control somatic growth (see Smith and Chang 2007 for a more in-depth review) and therefore differing environmental conditions might result in differences in growth at all levels. However, tests for differences between these two populations, performed prior to pooling the data, showed no significant differences. This fact is not surprising given that in this particular case the temperature regime is similar between sites (actually the annual temperature range in SAO encompasses that of MDP) and there are no reasons to think a priori that the nutritional conditions differ significantly between populations. Moreover, in decapods temperature affects mostly the duration of the intermoult period rather than the size increment per moult (Smith and Chang 2007 and references therein), and therefore differences in temperature will impact the output of age-based models rather than size-based models such as those developed in this work.
There was, however, a considerable difference between MDP and SAO in the number of ovigerous females. The strikingly low number of ovigerous females found in MDP (eight ovigerous females in three years of monthly samples) has two possible explanations. Either the ovigerous females in MDP segregate from the other individuals (something that does not happen in SAO), perhaps because they perform reproductive migrations outside the sampling area, or, more simply, there is a bias in our sampling due to differences in catchability between sites. Subtidal sampling in MDP was performed mostly within the port, on breakwaters that are built with medium to small, irregularly stacked boulders of orthoquartzite rock, thereby generating an intricate structure with many caves and hollows inaccessible to divers. On the other hand, hard bottoms in SAO are composed of large platforms of sedimentary rocks with countless cracks, hollows and crevices that provide good shelter but are shallow enough for the hidden animals to be within reach of the human arm. Although we cannot rule out the segregation of ovigerous females outside the sampling area, considering the distinctive behaviour of ovigerous female P. crenulatus, which tend to hide and remain immotile for long periods (NEF pers. obs.), we believe the most likely explanation is that the highly structured habitat in MDP prevented ovigerous females from being caught.
At the species level, Platyxanthus crenulatus has male-biased sexual size dimorphism, which is likely the rule among free-living brachyurans. The populations are spatially structured by size, suggesting the possibility of different ontogenic niches and a migration of smaller individuals to deeper areas during growth. The species is clearly associated with hard substrates, as shown by the relatively small number of individuals found on sandy bottoms, which were likely roaming among the rocky outcrops scattered in the area. Changes in colour pattern accompany the ontogenic habitat shift (see Fig. 1B, C), perhaps as an adaptation to the different colour backgrounds, diets and/or predator features that individuals would find while growing and moving to deeper waters.
In relation to relative growth, the use of a non-pre-defined model revealed morphological changes that would also be involved in the size-dependent use of habitat. Although slight in males, the first recognizable change in the relative growth rate of the chelae (shared by both sexes, see Fig. 6) may be an adaptive response to changes in feeding habits, intraspecific agonism and/or new predatory risks that might require a different claw shape and relative size. A similar relative growth pattern was described for the chelae of male stone crabs of the genus Menippe. In those crabs the first change in relative growth is also shared by both sexes, occurring at 35 mm, and is considered to be an adaptive response to habitat shifting at that size (Gerhart and Bert 2008). As observed in P. crenulatus during this study, other stone crabs (e.g. Reuschel and Schubart 2007, Manríquez et al. 2008, Krause-Nehring et al. 2010) showed an early habitat shift that was also accompanied by the loss or acquisition of cryptic colouration to fit the new environment.
The spline method showed that the relative growth rate of male gonopods decreased gradually as body size increased. Since there is a positive correlation between gonopod length and width (NEF, unpublished data), this growth pattern may be explained simply by the trade-off between total body growth and the limitations imposed by the size of the mature vulvae of females, as stated by Hartnoll (1974). In P. crenulatus the vulvae underwent an abrupt change in size and shape at a body size that coincided with the size at maturity estimated from the abdomen width by the spline method (Fig. 8). Instead of the two linear segments to which the traditional method is restricted, the spline method applied to the females' abdomen showed a rather sigmoid growth pattern (Fig. 5). The sigmoid shape of the growth trajectory is consistent with the expected compromise between the need to expand the abdomen to accommodate as many embryos as possible in order to increase fertility and the restriction imposed by the female sternum width (Hartnoll 1974). Interestingly, in species in which females have determinate growth and a terminal moult, a model with two discontinuous phases with different levels of allometry (as defined by the traditional method) seems to fit the relative growth of the female abdomen better (e.g. Sainte-Marie and Brêthes 1995, Sampedro et al. 1999). In contrast, in brachyurans with indeterminate growth such as those studied here, an S-shaped model seems to describe the relative growth best (e.g. Luppi et al. 2004, Katsanevakis et al. 2007, this work).
Our results on the attainment of sexual maturity of P. crenulatus revealed the following sequence of maturation in females. First, the relative growth of the abdomen reached its maximum (i.e. it continued growing afterwards but at a lower rate). Later, while the abdomen grew through successive moults, the ovary matured to the point that the vulva changed to its mature form in a single moult, and the abdomen became broad enough to accommodate the egg clutch. Once the female had reached sexual maturity, the vulva remained soft, independently of the moult cycle. In turn, physiological maturity precedes morphological maturity in male P. crenulatus, and both are finally followed by behavioural maturity. This pattern is shared by other brachyurans (Watters and Hobday 1998), although in some species, such as Chaceon affinis A. Milne-Edwards and Bouvier, 1894, the onset of the different maturity stages is fairly synchronized (Fernández-Vergaz et al. 2000).
This study illustrates the advantages of studying relative growth in a comprehensive manner, using methods that do not rely on previous assumptions. Such an approach may provide further information useful for generating new hypotheses on life history and mating strategies, thus leading future research on the species beyond the mere size at onset of maturity. Whereas the conspicuous changes in the morphology of female abdomens and vulvae, and of male gonopods, are undoubtedly related to the maturation process, this is not necessarily the case for the chelae. The high predominance of right-handed individuals (those carrying a crusher claw on the right side) in both sexes suggests that natural selection may influence the size and shape of the chelae to fit the durophagous habits (Laitano et al. 2013) as much as sexual selection may enhance the ability to mate. Changes in relative growth are consistent with this hypothesis. Morphometric maturity of males, calculated by the spline method applied to the crusher claw, did not differ significantly from that obtained using the gonopod length. Unexpectedly, spermatophores were present in very small males (physiological maturity at 33-37 mm CW), long before their gonopods were morphometrically mature (55-60 mm CW) and also before females developed oocytes for the first time and acquired the open mature form of the vulva that allows gonopod penetration. If small males (i.e. ones that are physiologically mature but morphologically immature) have spermatophores, one might expect them to be able to mate with females larger than themselves, at least in the absence of competition with other males, as reported in some other crabs (e.g. Wilber 1989, Sainte-Marie and Lovrich 1994, Gerhart and Bert 2008). This is an interesting idea for future testing under experimental conditions, given its implications regarding mating strategies and the socio-spatial structure of the populations.
Fig. 1. -A, Sampling localities; B, members of a mating pair of Platyxanthus crenulatus (the smaller one is the female); C, a small juvenile showing the disruptive colour pattern against the broken-shell bottom commonly found in intertidal pools.
Fig. 2. -Body dimensions of Platyxanthus crenulatus measured for morphometric analysis. CW, carapace width; CL, carapace length; CrL, length of the propodus of the crusher; CrH, height of the propodus of the crusher; AW, maximum width of the sixth abdominal segment of females; GL, gonopod length.
Fig. 3. -Sexual size dimorphism in Platyxanthus crenulatus. Comparison of kernel density estimate (KDE) probability density functions estimated for both sexes. Grey bands represent one standard error on either side of the null model of no difference between the KDEs for each sex. Significance tests (p) were based on permutation tests of the area between the two probability density functions. Significance tests on raw data (top) provide a test of differences in both site and shape of the length-frequency distributions, whereas tests on standardized data (bottom) provide a test of shape only.
Fig. 4. -Size-frequency distribution of both sexes of Platyxanthus crenulatus sampled in different habitats. For the corresponding analysis, the bandwidth of the KDE was chosen by the Sheather and Jones (1991) bandwidth selection procedure. To facilitate visual comparisons, size classes on the plots were all set to match the bandwidth for the males on the subtidal hard bottom. The rug plot just above the x-axis indicates individual observations. The two dashed vertical lines limit the confidence interval for the 50% probability of change in colour pattern; 'n' indicates sample size.
Fig. 5. -Relative growth and morphometric maturity of Platyxanthus crenulatus. Size at morphometric maturity as determined by the splines method (Watters and Hobday 1998) applied to morphometric data of selected measures of males and females. The most appropriate model was chosen according to the generalized cross-validation criterion (GCV). The peaks in the second derivative of the selected model mark changes in relative growth that correspond to morphometric maturity.
Fig. 7. -Top: the splines model and Somerton's model applied to the growth of the propodus length of the crusher chela (CrL) in males of Platyxanthus crenulatus. Middle: residuals from Somerton's model. Bottom: residuals from the splines. Note that Somerton's residuals can be divided into three well-defined phases (indicated by black arrows and the dotted line).
Fig. 8. -Graphic summary of noteworthy events of the post-settlement phase of male and female Platyxanthus crenulatus. The size of primiparous females is shown as a mean value; sizes at first gonad maturity and at the maturation of spermathecae and vulvae are CW50% values determined by the logit function fitted to the proportion of individuals having the mature version of the respective maturity indicator. Horizontal bars correspond to the 95% CI for each event.
Table 1. -Allometric growth in Platyxanthus crenulatus. Estimated parameters for allometric equations (i.e. simple linear regressions applied to log-log transformed data) of different body parts of both sexes. Slopes are the "constants of relative growth" of each body part.
Table 2. -Summary of statistics and estimated parameters of the simple linear model and the two-phase linear model (Somerton's method fitted using the MATURE program) applied to the four morphometric characters used to estimate first maturity in both sexes of Platyxanthus crenulatus. SLM, simple linear model; SSE, sum of squared errors. Somerton's immature or juvenile (Juv) and mature (Mat) are the first and second segments of the two-phase linear model, respectively. CW, carapace width; CrL, length of the propodus of the crusher; AW, maximum width of the sixth abdominal segment of females; GL, gonopod length. | 2018-12-07T08:23:13.039Z | 2014-12-30T00:00:00.000 | {
"year": 2014,
"sha1": "259a07f0cdcc13e5b304b4d6cff50cba6d91724d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3989/scimar.04108.10a",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "259a07f0cdcc13e5b304b4d6cff50cba6d91724d",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
148766649 | pes2o/s2orc | v3-fos-license
Trauma Theory: No “Separate Peace” for Ernest Hemingway's “Hard-Boiled” Characters
This paper applies trauma theory to Hemingway's post-World War I writing. His work, for example A Farewell to Arms, shows how soldiers are traumatized by their war experiences and how they suffer from such aftereffects as flashbacks, nightmares, inability to sleep and social maladjustment. Although examining Hemingway's work in terms of shell-shock is well established, this paper suggests that traumatized characters in Hemingway's work carry what the trauma theorist Cathy Caruth calls an "impossible history." It suggests that survivors of trauma experience a sudden or catastrophic event that is beyond the normal realm of human experience. Since traumatized individuals often do not process catastrophic events as they do other, normal events, they have access to them only through disturbing flashbacks and nightmares. These psychological manifestations provide snippets of the individuals' impossible history, which they never fully possess or normally store. By tracing these psychological manifestations, e.g. flashbacks and nightmares, this paper shows that traumatized survivors struggle with a traumatic history that haunts them day and night.
The term shell-shock was common among British and American soldiers during World War I and described soldiers' incapacity to fight and follow orders when they were exposed to heavy shelling. The British psychiatrist Charles Myers was the first to mention shell-shock, in his article "A Contribution to the Study of Shell Shock," which he published in Lancet in 1915. Although he explained that soldiers' presence in the front line and their exposure to shell bursts were the main reasons for shell-shock, he later called shell-shock "a singularly ill-chosen term," for many soldiers showed symptoms of shell-shock in their billets and before they reached the front line (1940, 26).
In the 1990s, the term shell-shock ("battle fatigue" in World War II and Post-Traumatic Stress Disorder in the Vietnam War), once associated with medical and military fields, piqued the interest of such critics as Cathy Caruth, Bessel van der Kolk, Onno van der Hart, Shoshana Felman and Dori Laub. Caruth regards trauma as "an impossible history" which, in its belated return as flashbacks and nightmares, disturbs survivors. Since traumatized subjects are haunted by "an impossible history," they reexperience it through disconnected images that disrupt the linearity of history. In "Bearing Witness or the Vicissitudes of Listening," Laub contends that the traumatized who survive a terrible event often have "no prior knowledge, no comprehension and no memory of what happened" (58). Thus, far from linear history that provides meaningful descriptions of historical events, traumatic history challenges the traditional modes of narration and telling. The traumatized, Laub argues, avoid any encounter with their terrible history: "That he or she profoundly fears such knowledge, shrinks away from it and is apt to close off at any moment, when facing it" (58).
Bessel van der Kolk and Onno van der Hart distinguish between narrative (ordinary) memories and traumatic memories. They maintain that survivors of trauma are afflicted with memories that are distressing and indelible. Unlike narrative memories, which can be easily remembered and restored, traumatic memories are intrusive and unexpected. They are fragmented and full of "holes" and silences, gaps that reflect the survivors' inability to articulate their traumas. The traumatized, van der Kolk and van der Hart maintain, may remember parts of their overwhelming experiences or have access to them through nightmares or flashbacks, yet they often cannot retell their experiences as fully as they remember narrative memories.
Hemingway's work, for example A Farewell to Arms, shows how combat experience may often inflict soldiers with traumatic memories.2 Trevor Dodman maintains that in FTA, Frederic Henry "suffers from the compulsion to remember and retell his traumatic past from the standpoint of a survivor both unable and perhaps unwilling to put that very past into words" (83). Frederic's narrative, Dodman claims, exhibits shell-shock symptoms from "the very first page of the novel" (85). He contradicts himself, though, when he writes that Frederic recollects "pain that registers at the 'outer' level of the body, breaking apart the perceived unity of the physical self in the presence of terrific bodily suffering" (83). But Frederic experiences his violent wounding and shows the aftereffects of his trauma, for example drinking heavily and enduring nightmares and flashbacks, in chapter 9. Michael Reynolds contends, "Despite Frederic's reticence, his behavior should let the reader see that he has been changed by his violent wounding" (119). Frederic's relation with Catherine, Reynolds maintains, shifts from being a game to being "a psychic dependence" and love (Reynolds 120).
Many soldiers found it difficult to retell their traumatic stories due to military restrictions during and after World War I. Soldiers were encouraged to show bravery and patriotism, while normal feelings such as fear and crying were considered "unmasculine" (Herman 21). Those who exhibited "unmasculine" behaviors were accused of being malingerers and court-martialed (21). Alexs Vernon argues that characters such as Frederic in A Farewell to Arms "are unmanned by the war" and try not to narrate their stories of "cowardice" to other people (44). Sarah Anderson, too, explains that Hemingway's male war heroes are governed by their gender, which prevents them from expressing their fears during and after the war. But if gender prevented soldiers from showing their fears, their trauma, then, was somehow cultural. Hemingway's war heroes, I argue, often are unable to remember or narrate what happens to them, not because they are afraid to be labeled "feminine," or because they want to show their machismo, although some may, but because they carry an "impossible history" that they cannot simply narrate. When soldiers are wounded or exposed to heavy shelling, they become helpless and cannot comprehend their terrible experiences, such as Frederic in FTA. After soldiers leave the war, for example Nick in "A Way You'll Never Be" and Frederic in FTA, they often develop PTSD.
Although Carl Eby argues that Hemingway's bouts of depression came from Agnes von Kurowsky's rejection letter and had nothing to do with PTSD, many critics, for example Roland Smith, Charles Coleman and Peter Hays, have explored Hemingway's oeuvre as a post-traumatic narrative. Roland Smith points out that Nick Adams in "A Way You'll Never Be" shows symptoms of PTSD, for example drinking too much, difficulty in sleeping and suffering intrusive flashbacks. Smith states that "In area A of the Diagnostic Criteria for Post-Traumatic Stress Disorder: two of the first criteria in establishing whether or not an individual suffers from PTSD are that the individual must have been exposed to a traumatic event which threatened death and which elicited a response of 'intense fear, helplessness, or horror'" (41). Although Nick experiences a violent event that inflicts him with serious wounds, he is traumatized by the intrusive flashbacks and disturbing nightmares that haunt him and intensify his fear. Charles Coleman maintains that Nick in "WYNB" suffers from PTSD and traumatic brain injury (1). He states that "people suffering from PTSD create various types of cerebral timelines, sometimes as simple as short vignettes of loosely connected mental pictures, some with action, speech, soundtracks, words, and associated odors" (2). Frederic's dissociative description of his wounding in FTA can be read as a cerebral timeline describing soldiers' traumatic memories.
TEXTUAL ANALYSIS
In A Farewell to Arms, the narrator Frederic is violently wounded, a terrible experience that leaves him unable to possess its memory entirely. After a mortar shell suddenly falls, rendering him helpless and numb, he provides sensory depictions of sounds, colors and images without a coherent narrative. He remembers the sound of the trench mortar: "I heard a cough, then came the chuh-chuh-chuh-chuh" (47). He also hears the blast of the trench door, sees a flash caused by the extremity of the shell and loses his breath. His traumatic experience happens in a moment and is disrupted by his feeling that his soul disintegrates from his body: "I tried to breathe but my breath would not come and I felt myself rush bodily out of myself and out and out and out and all the time bodily in the wind" (47). This description indicates that Frederic does not fully register this violent experience, and as a result he will suffer from such aftereffects as flashbacks and nightmares.
Traumatic memories do not appear at will and intrusively haunt survivors. Since they are not narrative memories and cannot be retrieved like other stored memories, they make the traumatized live through their terrible past anew. Nightmares are manifestations of an "impossible history" that haunts survivors and disturbs their sleep. In A Farewell to Arms, Frederic has no control over his traumatic memories, which bother him at night: "I know that the night is not the same as the day: that all things are different, that the things of the night cannot be explained in the day, because they do not then exist, and the night can be a dreadful time for lonely people once their loneliness has started. But with Catherine there was almost no difference in the night except that it was an even better time" (318). Although Catherine makes his night "an even better time," Frederic cannot avoid his fear of the night (318). In The Sun Also Rises, Jake Barnes is impotent due to a war wound. In his flat, he sees his old scars: "Of all the ways to be wounded. I suppose it was funny" (25). It may sound funny for Jake to be wounded in his testicles (and for people too), but his war memories are distressing and cause him to stay awake: "I lay awake thinking and my mind jumping around. Then I couldn't keep away from it" (26). In "Now I Lay Me," Nick cannot sleep at night because he is afraid his "soul would go out of my body. I had been that way for a long time, ever since I had been blown up at night and felt it go out of me and go off and then come back" (276). In "A Way You'll Never Be," Nick cannot sleep "without a light of some sort" when Paravicini asks him to sleep (309).
While the traumatized struggle with their nightmares at night, they also relive their traumatic flashbacks when they are awake. In "A Way You'll Never Be," after Nick finishes his conversation with Paravicini, he lies on a bunk and suddenly encounters a flashback that reminds him of his terrible past. Like many soldiers, Nick recollects how he was afraid of death and wounding and, to hide his fear, wore a chin strap on his mouth. Nick's flashback is dissociative and fragmented, in which the past and the present are overlapped and history is effaced: "Knowing it was all a bloody balls -- if he can't stop crying, break his nose to give him something else to think about. I'd shoot one but it's too late now. They'd all be worse. Break his nose" (310). This flashback moves from the past "was" to the present "can't" and ends with the immediate imperative "break his nose." This back-and-forth shift between the present and the past disrupts history and suggests that Nick cannot control his traumatic past.
The memory of a traumatic event is more frightful than the event itself because its belated, literal return causes individuals to reenact their violent traumas. In the same flashback, Nick involuntarily encounters the images of the yellow house, the stable and the canal. He does not know what happened to him, and the only remnants of his wounding experience are these incomprehensible images. For Nick, the yellow house "meant more than anything and every night he had it. That was what he needed but it frightened him especially when the boat lay there quietly in the willows on the canal, but the banks weren't like this river" (311-312). He feels that he has been there "a thousand times and never seen it." This feeling suggests that he carries an "impossible history" which he does not fully possess: "Now he was back here at the river, he had gone through that same town, and there was no house. Nor was the river that way. Then where did he go each night and what was the peril, and why would he wake, soaking wet, more frightened than he had ever been in a bombardment, because of a house and a long stable and a canal?" (311). The absence of the yellow house and the presence of the river, although the river is different from the one he experiences in his flashback, create an uncertainty concerning his traumatic past. He cannot identify the yellow house and the lower river, and can know his traumatic past only through intrusive, disturbing flashbacks.
In the last flashback, a new traumatic image appears, of a man with a beard pointing his gun towards Nick: "He shut his eyes, and in place of the man with the beard who looked at him over the sights of the rifle, quite calmly before squeezing off, the white flash and clublike impact, on his knees, hot-sweet choking, coughing it onto the rock while they went past him, he saw a long, yellow house with a low stable and the river much wider than it was and stiller" (314). This traumatic image does not override the main traumatic flashback of the yellow house and the lower river. It may fill in some gaps in Nick's traumatic history (and may lead to a future recovery), yet the ending does not suggest that. After he encounters the flashback of the man with the beard, Nick suddenly says, "Christ … I might as well go" (314).
Survivors of trauma prefer not to talk or think about their terrible past. In Beyond the Pleasure Principle, Sigmund Freud argues that "perhaps they [survivors of trauma] are more concerned with not thinking of it" (7). In "A Way You'll Never Be," Paravicini offers Nick grappa, causing him to remember "completely and suddenly" when he used to get "stinking in every attack" (309). These memories are frightening and make Nick change the topic. Also, when the adjutant asks Nick about his "scars," he changes the topic and talks about grasshoppers. Nick says, "These insects at one time played a very important part in my life" (312). In "Now I Lay Me," the memories of trout-fishing and grasshoppers help Nick stay awake because he is afraid to sleep at night. In "Big Two-Hearted River," although there is no mention of the war, the short story shows how Nick is psychologically disturbed.3 He feels that he leaves "everything behind, the need for thinking, the need to write, other needs. It was all back of him" (164). This trip, then, suggests that Nick wants to forget his past, probably his war memories.
In A Farewell to Arms, when Frederic is taken to the hospital in Milan, Rinaldi does not recognize Frederic's psychological disturbance and insists that Frederic recount his "heroic act." Frederic reservedly responds: "I was blown up while we were eating cheese" (55). In chapter 34, Frederic does not want to read the papers because he wants to forget the war: "The war was a long way away. Maybe there wasn't any war. There was no war here. Then I realized it was over for me" (213). In The Sun Also Rises, Brett Ashley reminds Jake of his impotence through her secret affairs with Robert Cohn, Mike Campbell and Pedro Romero: "This was Brett, that I had felt like crying about. Then I thought of her walking up the street and stepping into the car, as I had last seen her, and of course in a little while I felt like hell again" (28).
However, some trauma survivors try to understand their "impossible history" by reading about the war. In "Soldier's Home," Harold Krebs' lies about his war experience create a "feeling of nausea" and lead him to retreat to his own private world: "sleeping late in bed, getting up to walk downtown to the library to get a book, eating lunch at home, reading on the front porch until he became bored and then walking down through the town to spend the hottest hours of the day in the cool dark of the pool room" (147). He does not tell his townspeople about his fears, except in the dressing room, where he tells a soldier that he "had been badly, sickeningly frightened all the time. In this way he lost everything" (146). Krebs is interested in reading history books, which, although Milton Cohen uses this to disregard PTSD, may provide some answers to Krebs' war experience. He wants to make sense of what happened to him in the war; fighting in such major battles as Belleau Wood, Soissons, the Champagne, St. Mihiel and the Argonne is traumatic enough to inflict him with indelible memories.
If Harold Krebs goes to war books to make sense of "all the engagements he had been in," in "A Way You'll Never Be" Nick goes to the front itself to understand his unassimilated history and its intrusive return. He knows that he was wounded in the front line; nevertheless, he still does not know or possess that overwhelming experience entirely. He tries to track it down yet fails, which makes him anxious and deteriorates his mental state: "If it didn't get so damned mixed up he could follow it all right. That was why he noticed everything in such detail to keep it all straight so he would know just where he was, but suddenly it confused without reason as now" (311). Like an archeologist, he notices scattered objects, such as postcards, letters, helmets and gas masks, which belong to a recent offensive. He also observes swollen, dead bodies with coats that are open and pockets that are out. This close observation suggests that Nick is eager to know the missing pieces of his traumatic past.
CONCLUSION
This paper has examined how soldiers often find it difficult to recollect their "impossible history" as they would other, ordinary memories. Considering traumatic memories as "an impossible history" helps us see why Hemingway's combat characters are troubled by their inability to sleep and their social maladjustment. These characters also try not to remember their "impossible history," yet they are compelled to relive it as intrusive flashbacks and nightmares. Most importantly, this paper has shown that trauma lies not in the event itself, but in its immediate and belated return as distressing recollections and nightmares.
END NOTES
1. Cathy Caruth, one of the leading figures in trauma theory, published two foundational books, Trauma: Explorations in Memory (1995) and Unclaimed Experience: Trauma, Narrative and History (1996). In Trauma: Explorations in Memory, she edited articles that cover a wide range of disciplines such as history, film, literature and psychiatry. In the preface, she maintains that traumatized individuals are not abnormal, nor do they suffer from a pathological illness that can be psychoanalyzed. The traumatized, she argues, go through an overwhelming experience that they find hard to remember entirely. A traumatic experience, she argues, is "an impossible history" to which the traumatized subject has no access except through flashbacks and nightmares. Her recent book Literature in the Ashes of History compares the traumatized individual's "impossible history" with Jacques Derrida's "archive fever," a term which suggests that traumatic memory is an indelible history which paradoxically erases itself. This erasure assures the incomprehensibility of traumatic experience, which persistently and intrusively revisits trauma survivors. It is worth mentioning that many critics and historians, e.g. Ruth Leys in Trauma: A Genealogy (2000), reject Caruth's assumptions that traumatic experiences are incomprehensible and cannot be known or represented.
2. Although not a war trauma, Marc Seals regards Hemingway's loss of his early Paris manuscripts as traumatic. He bases his analysis on Caruth's definition of trauma, which she sees as a "response to an unexpected or overwhelming violent event or events that are not fully grasped as they occur, but return later in repeated flashbacks, nightmares, or other repetitive phenomena." Seals also suggests that Hemingway fictionalized his traumatic loss of the manuscripts, which might serve as recovery and "a therapeutic outlet for trauma" (62). But Seals' reading is problematic. First, Hemingway's loss of his early manuscripts cannot be equated with being exposed to shell bursts or being wounded. Second, Caruth considers trauma an unhealable wound, which argues against Seals' suggestion that Hemingway's writing about his lost early manuscripts served as a recovery.
3. In A Moveable Feast, Hemingway states that "Big Two-Hearted River" "was about coming back from the war but there was no mention of the war in it" (76). The absence of war references suggests that Nick (perhaps Hemingway), as a trauma survivor, does not want to remember the war, for it may further deteriorate his mental state.
4. Milton Cohen disregards "Soldier's Home" as a post-traumatic narrative and argues that Krebs does not suffer from PTSD. When Krebs meets a soldier in the dressing room, he "fell into the easy pose of the old soldier among other soldiers" (146). He, then, reveals his fears of the war. Cohen explains, "The problematic word here is 'pose.' If it means 'stance' or 'attitude,' it suggests that Krebs, relaxing, can now tell the truth to the fellow soldier. But if 'pose' means a false appearance, then Krebs is simply falsifying his experience once again by pretending he was badly frightened in combat" (162). While Cohen sees the first piece of evidence as a falsification, he is still ambivalent and admits that the "pose" can be interpreted both ways. Another piece of evidence Cohen uses to discredit PTSD in this short story is when Krebs expresses his interest in reading history books about the war. Cohen contends, "No one suffering from PTSD would look forward 'eagerly' to reading about the battles he was in" (163).
Forest restoration by different nucleation techniques in Urochloa grassland
The objective of this work was to evaluate the effect of brushwood, black plastic mulch, herbicide, and artificial perch on the natural regeneration of native species in Urochloa grassland. The experiment was conducted between February 2014 and February 2016 in the Dense Ombrophilous Forest, in the municipality of Morretes, in the state of Paraná, Brazil. The treatments were: herbicide, herbicide + perch, black plastic mulch, black plastic mulch + perch, brushwood + herbicide, brushwood + herbicide + perch, and a control treatment. The evaluations were carried out at 4, 8, 12, 18, and 24 months after the installation of the experiment, by counting and identifying regenerating woody species and estimating visually the percentage of herbaceous coverage. Initially, brushwood and black plastic mulch reduced the Urochloa grasses; however, this effect was lost over time due to the rapid growth of the grasses from the edges to the center of the plots. The use of perches in the treatments does not allow a significant increase of other species because of the continued inhibiting conditions for the establishment of seedlings. The herbicide is effective in removing the grasses; however, the recruitment of woody species is only satisfactory when perches are used to attract the dispersing fauna. For a successful ecological restoration of pastures, there is a need for the local elimination of Urochloa grasses.
Introduction
The conversion of forests to pasture for livestock is common in tropical landscapes, leading to unconnected habitats and serious damage to biodiversity (Sobanski & Marques, 2014). The lack of planning in the occupation of lands destined for agricultural activities, associated with inadequate management practices, causes accelerated soil degradation, resulting in unproductive areas that end up being used as pasture (Elgar et al., 2014; Guidetti et al., 2016). For this reason, in Brazil, many actions to restore degraded ecosystems have been carried out in abandoned open areas or previously grazed pastures (Guerra et al., 2020).
Many pastures are formed by grasses of the genus Urochloa, which inhibits natural regeneration (Fragoso et al., 2017; Weidlich et al., 2020). In grasslands, seedling recruitment is reduced due to factors such as: competition for water, light, and nutrients; allelopathy; absence of dispersing fauna; predation; inadequate microclimate conditions; and physical and chemical soil degradation (Maza-Villalobos et al., 2011; Sobanski & Marques, 2014).
Traditionally, forest restoration programs are executed by planting mixed stands of tree species and physically protecting the area (Chazdon & Uriarte, 2016). In this phase, cultivation treatments, such as weeding, brushing, and herbicide application, are necessary, but rarely performed (Weidlich et al., 2020). Furthermore, in grazing areas that are solely protected and receive no cultivation treatments, natural regeneration (secondary succession) is slow or nonexistent (Bechara et al., 2016).
In this context, several restoration practices, which are still poorly applied or developed, focus on a set of processes that benefit species succession (Connell & Slatyer, 1977). Nucleation, for example, seeks to induce natural regeneration from one point (the nucleus) that is different from the surrounding matrix, in order to attract seeds and/or favor their germination and development. For nucleus management, artificial perches and brushwood (Reis et al., 2014) are included in the area, while inhibitory vegetation is removed by using herbicides (Elgar et al., 2014) or black plastic mulch (Tomazi & Castellani, 2016).
The use of artificial perches has been proposed to attract seed-dispersing birds, increasing seed rain (Reis et al., 2014). However, because it is important that the seeds deposited below the perches have adequate conditions to germinate and grow, seedbed improvement practices have also been suggested, without which the dispersed seeds would be unlikely to survive (Almeida et al., 2016; Tomazi & Castellani, 2016).
Brushwood is a method of environmental complexation, which seeks to improve the quality of the seedbed and, consequently, to create an environment conducive to incoming seeds (Carpanezzi & Nicodemo, 2009). This is possible because brushwood shading and gradual decomposition improve soil aspects, such as organic matter and microorganisms (Reis et al., 2014). In this method, inert plant residues, such as materials from trees, trunks, bamboos, and forest residues, are collected to form natural regeneration nuclei (Reis et al., 2014).
Herbicides and black plastic mulch are other methods used to prepare the seedbed, doing so by removing inhibitory vegetation (Elgar et al., 2014; Weidlich et al., 2020). When done correctly and when other options are not feasible, the application of herbicides to control undesirable species is an effective and inexpensive restoration tool (Simberloff, 2014; Galindo et al., 2017). Black plastic mulch, developed as a method to control weeds, is commonly used in agriculture; in ecological restoration programs, it allows the creation of an environment favorable for the regeneration of native species by reducing inhibitory plants (Marushia & Allen, 2011).
Since all these methods are simple and cheap to carry out, they have attracted much interest, which has been increased by the promise of a lower need for cultivation treatments both during and after the implantation phase (Zahawi et al., 2013; Chazdon & Uriarte, 2016). However, ecological restoration actions must take into account each environmental scenario, in order to provide technical recommendations that are adapted to the local reality and supported by field results (Holl et al., 2017).
The objective of this work was to evaluate the effectiveness of brushwood, black plastic mulch, herbicide, and artificial perch on the natural regeneration of native species in Urochloa grassland.
Materials and Methods
The experiment was conducted between February 2014 and February 2016 at the experimental station of Embrapa Florestas, located in the municipality of Morretes, in the coastal region of the state of Paraná, Southern Brazil (25°26'56"S, 48°52'18"W), in the phytoecological region known as lowland Atlantic rainforest, a Dense Ombrophilous Forest.
The relief was flat and the soil was classified as a Cambissolo Háplico Tb distrófico according to the Brazilian soil classification system (Santos et al., 2018), i.e., a Dystric Cambisol according to FAO's World Reference Base for soils (IUSS Working Group WRB, 2015), with a moderate A horizon and clayey texture. According to Köppen's classification, the climate is Cfa, humid subtropical, reaching average temperatures close to 17°C in the cooler months and to 24°C in the hottest ones, with infrequent frosts and a trend of concentration of rainfall in the summer, but no defined dry season. The mean annual rainfall is between 2,000 and 2,200 mm, and the average annual temperature is close to 21°C.
Initially, the area was used for crops and later was converted to pasture with Urochloa humidicola (Rendle) Morrone & Zuloaga as forage for buffalo breeding, being kept in this condition for about 15 years. Afterwards, the area was first abandoned for 10 years, when soil mechanization was performed using crawler bulldozers to remove vegetation, with the consequent partial decapitation of the A horizon, and then abandoned again for 2 years. During the pasture abandonment periods, the Urochloa subquadripara (Trin.) R.D.Webster and Urochloa decumbens (Stapf) R.D.Webster grasses invaded the area, becoming dominant. At the beginning of the experiment, the predominant vegetation was Urochloa, containing small patches of spontaneous herbs. The surrounding area was predominantly rural, with farms intended for livestock and agriculture, but, at approximately 500 m or less, there were also many natural forest fragments in different stages of succession, which can act as important sources of seeds for regeneration.
The experiment was established with 28 plots of 8x5 m (40 m²), corresponding to seven treatments and four replicates, organized in a randomized complete block design, totaling 1,120 m². In all plots, except in the control treatment, first, mowing was performed with a backpack machine, leaving the residue on the soil. Then, the following treatments were applied: herbicide; herbicide + artificial perch; black plastic mulch; black plastic mulch + artificial perch; brushwood + herbicide; brushwood + herbicide + artificial perch; and a control, with no treatment application.
In the herbicide treatment, 15 days after mowing, 62.35 g ha⁻¹ of haloxyfop-P-methyl, a selective herbicide of the aryloxyphenoxypropionic acid chemical group, were applied post-emergence to the plants in the entire area of the plot at a spray volume of 100 L ha⁻¹.
In the herbicide + perch treatment, in addition to herbicide application to the soil, an artificial perch was installed at the center of the plot. The perch was made of a treated eucalyptus pole, kept 4 m above the ground. Two sticks of 1 m in length were placed in the upper portion, arranged horizontally to the ground, crosswise, and spaced 40 cm apart and from the perch apex.
For the black plastic mulch treatment, a black plastic film with 100 μm thickness was used to cover the total area of each plot for a period of 60 days.
The black plastic mulch + perch treatment was similar to the previous one, but an artificial perch was placed at the center of the plot after the plastic film was removed.
The brushwood + herbicide treatment involved arranging plant residues and then applying the post-emergent herbicide haloxyfop-P-methyl. The brushwood was formed by seven layers and reached a height of approximately 0.5 m. The layers were arranged in the following order: small wood and peach palm logs, leafless bamboo sticks, wooden slabs and leafless bamboo sticks, palm leaves, leafless bamboo sticks, bamboo sticks with leaves, and palm leaves. Initially, the brushwood treatment did not include herbicide application; however, it was necessary due to the aggressive growth of the surrounding grasses 60 days after the treatment's installation, in order to reduce the reinvasion of the plots.
The brushwood + herbicide + perch treatment was similar to the previous one, but an artificial perch was placed at the center of the plot.
The percentage of herbaceous species covering the ground was estimated visually by sequentially placing a 0.50x0.50 m quadrant over a 0.50x5 m subplot, located at the center of the 8x5 m plot, resulting in ten sampling points or quadrants. Three classes were considered: grasses (Poaceae family), other herbs, and lack of vegetation on the ground (bare soil). The percentage of grasses regrowing from the edges of the plots towards the center was calculated, being represented by quadrants (Q) from 1 to 5: Q1, mean of quadrants 1 and 2; Q2, mean of quadrants 3 and 4; Q3, mean of quadrants 5 and 6; Q4, mean of quadrants 7 and 8; and Q5, mean of quadrants 9 and 10. Q3 was at the center of the plot, and Q1 and Q5 at the edges. The present herbaceous species were identified and classified according to: their origin, as native or subspontaneous (ruderals, cosmopolitan, and exotic) (Flora do Brasil 2020); and dispersal syndromes, as zoochorous, anemochoric, or autochorous (Pijl, 1982).
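As a minimal illustration of this edge-to-center summarization, the short Python sketch below averages pairs of the ten visually estimated cover values into Q1-Q5; the cover percentages used here are made-up examples, not data from the experiment.

# Illustrative computation of the edge-to-center grass cover profile
# (Q1..Q5) from the ten 0.50 x 0.50 m quadrants of one subplot.
grass_cover = [90, 85, 60, 55, 20, 25, 50, 58, 88, 92]  # quadrants 1..10 (example values)

# Each Q is the mean of two consecutive quadrants; Q3 is the plot center,
# Q1 and Q5 the edges.
Q = [sum(grass_cover[i:i + 2]) / 2 for i in range(0, 10, 2)]
for k, q in enumerate(Q, start=1):
    print(f"Q{k}: {q:.1f}% grass cover")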
The homogeneity of variances was evaluated by Bartlett's test and, subsequently, the data were subjected to the analysis of variance, in a split-plot design. The main plots corresponded to the seven treatments, and the subplots, to the five evaluation periods (4, 8, 12, 18, and 24 months). When statistically significant, the averages of the studied variables were subjected to Tukey's test, at 5% probability.
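For readers who want to reproduce the analysis pipeline, the Python sketch below shows the two explicitly named steps, Bartlett's test and Tukey's test, on hypothetical plot data; the split-plot analysis of variance itself is omitted, and all numbers are invented rather than taken from this study.

# Minimal sketch of the variance-homogeneity and post-hoc steps.
import numpy as np
from scipy.stats import bartlett
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
treatments = ["control", "herbicide", "herbicide+perch"]  # subset for brevity
# Four replicates of woody-plant density per treatment (hypothetical values).
data = {t: rng.normal(loc=m, scale=0.5, size=4)
        for t, m in zip(treatments, [0.5, 4.0, 5.5])}

# Bartlett's test for homogeneity of variances across treatments.
stat, p = bartlett(*data.values())
print(f"Bartlett: stat={stat:.3f}, p={p:.3f}")

# Tukey's test at 5% probability for pairwise treatment comparisons.
values = np.concatenate(list(data.values()))
groups = np.repeat(treatments, 4)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))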
Results and Discussion
At the end of the trial period, there were 2,175 woody plants in all treatments, distributed into five shrub species and 26 tree species (Table 1). Of the woody species found, 11 were not identified, as they were seedlings and difficult to classify. There were few shrub and tree species in all treatments, except in the herbicide + perch treatment (Table 1), which presented a higher number of tree species (21), compared with shrub species (5). The higher number of tree species in this treatment seems to be related to the effectiveness of the herbicide in controlling grasses at the base of the perch, together with the increased seed rain from the perches, leading to the establishment of native species (Elgar et al., 2014). This result is reinforced by the fact that all trees, except Sapium glandulosum (L.) Morong and Mimosa bimucronata (DC.) Kuntze, were restricted to the projection of the perch rods. Furthermore, most trees were located under the perch rods, confirming zoochory (in this case, ornithochory) as the main dispersal mechanism for these species (Table 1).
The greatest richness in woody species found in the plots subjected to the herbicide + perch treatment indicates how tools to attract seed dispersers are important in accelerating succession. However, although they can increase seed rain, artificial perches do not guarantee seedling establishment in areas without favorable conditions, as reported in other studies (Almeida et al., 2016; Tomazi & Castellani, 2016). Therefore, the recruitment of new species in tropical pastures depends not only on seed dispersal, but also on actions that provide a higher seedbed quality (Fragoso et al., 2017). In the brushwood and black plastic mulch treatments, the absence of a seedbed favoring natural regeneration explains why the perch did not increase the number of individuals and the richness in woody species (Table 1). In the herbicide treatment without the artificial perch, both the formation of a seedbed that favored natural regeneration and the control of grasses led to a higher density of shrubs; however, the increase in the number of tree species was lower precisely due to the absence of the artificial perch.
It should be noted that about 85% of the tree species found in the plots with herbicide + perch only appeared after 18 months of evaluation, suggesting the time required to recruit trees. This is consistent with the findings of a previous study about the area's seed bank, which showed a low occurrence of shrub and tree plants (Fragoso et al., 2018). The absence of woody species is common in seed banks in abandoned pastures, as many species lack prolonged dormancy and their presence in the area is largely dependent on seedbed quality, especially since the large grass biomass in the soil hinders the incorporation of allogeneic propagules into the seed bank (Maza-Villalobos et al., 2011). Therefore, even when artificial perches were used, the natural regeneration of trees was slow and occurred well after the herbaceous and shrub layers were established, which seems to be related to the increased structural complexity in these plots (Zahawi et al., 2013). In addition, the high level of shading promoted (Galindo et al., 2017). According to the analysis of variance, there were significant interactions between all treatments and evaluation periods (4, 8, 12, 18, and 24 months), indicating that they are not independent (Table 2). The highest density of woody plants was observed in the herbicide and in the herbicide + perch treatments, especially at 18 (5.23 woody plants per square meter) and 24 (6.24 woody plants per square meter) months, respectively. The brushwood and black plastic mulch treatments with and without the perch had few woody plants, even when compared with the control, supporting the idea that there are barriers to seedling recruitment (Elgar et al., 2014). Only in the herbicide treatments with and without the perch was the density of woody species compatible with that found in areas without vegetation inhibiting natural regeneration (Marcuzzo et al., 2013; Cruz et al., 2020). In the other treatments, there were no suitable sites for seed germination and seedling establishment due to the continued presence of grasses (Galindo et al., 2017); therefore, the natural regeneration of woody species was rather slow or absent, despite nearby forest remnants (Fragoso et al., 2017).
The higher density of woody species observed in the herbicide treatments can be understood as a result of the dynamics of the colonization by non-forage plants (Maçaneiro et al., 2017). The presence of other life forms, and not only of woody plants, is important for the resumption of natural regeneration processes in these areas and can help control invasive grasses (Maza-Villalobos et al., 2011). Herbaceous and sub-shrub species start flowering and fruiting early, making up the main elements of the first stages of succession. These species are also highlighted in nucleation methods as facilitators of natural regeneration that significantly improve environmental conditions and, consequently, enable the emergence of other more demanding species (Piaia et al., 2017). Therefore, the obtained results indicate that the effective control of grasses, by the application of a selective herbicide, allows for the establishment of herbaceous plants with rapid growth and broad coverage, especially of Desmodium triflorum (L.) DC. and Sphagneticola trilobata (L.) Pruski, which were the spontaneous herbs found at the highest percentages in the herbicide plots at 12 and 18 months (Table 3). Many of the herbaceous species identified in the present study were recorded in the seed bank (Fragoso et al., 2018). The entry of light caused by the removal of grasses stimulated the germination of the seeds contained in the seed bank, which increased the predominance of species with abiotic dispersion syndromes (72%), e.g., anemochory and autochory, and contributed to their continued presence in the herbicide plots. This is attributed to the fact that colonizing herbaceous plants are highly dependent on light to germinate and may remain dormant in the soil for long periods (Maza-Villalobos et al., 2011). Other very common herbaceous plants belonged to families that are frequently found in open areas, such as abandoned pastures, which is attributed to their successful colonization of degraded environments and to the formation of a persistent seed bank (Fragoso et al., 2018). Once established, herbaceous plants efficiently prevented surrounding grasses from entering the herbicide plots, favoring the establishment of shrub species, such as Vernonanthura beyrichii (Table 1), and triggering a successional process. At 24 months, there was a reduction in the percentage of spontaneous herbs in the herbicide plots with and without the perch (Table 3), which was possibly due to the shading caused by the increased density of V. beyrichii and the greater accumulation of litter on the ground (Galindo et al., 2017). The V. beyrichii shrub presented the highest density in all treatments, including the control, representing about 70% of the woody species, among which it was one of the first to establish (Table 1). The species is described, in the Atlantic Forest, as a colonizing plant and is common in the natural regeneration of abandoned pastures (Scheer et al., 2011). Therefore, it was also present in initial natural regeneration areas near the experiment, forming groups about 2.5 m tall. Furthermore, the shrub established itself at high densities, especially in the plots with herbicide treatments, and could be responsible for maintaining the low percentage of pasture reinvasion verified in the reoccupation analysis of the plots (Figure 1) due to the shade it produced, which consequently weakened the grasses.
Considering the importance of this shrub for regeneration in pastures and the results obtained in the herbicide treatments, the selective favoring of V. beyrichii could facilitate the establishment of other species, as has been shown in studies about the facilitating effect of pioneer shrubs in areas dominated by aggressive grasses (Medeiros et al., 2014; Galindo et al., 2017). The concept of facilitating plants is included in a set of mechanisms that act on plant communities, contributing to the direction of natural succession (Connell & Slatyer, 1977). In this case, succession results, in part, from the changes in the environment caused by dominant colonizers in the initial phases (Maçaneiro et al., 2017), which is compatible with the results obtained in the present study for V. beyrichii and may be reflected in the establishment of the tree species found in the herbicide + perch plots.
Therefore, the successional dynamics promoted by the herbicide + artificial perch treatment could have allowed for a higher density and richness of woody species (Tables 1 and 2). This process is particularly useful when there are remnant forests nearby that act as seed sources, as in the study area, promoting the combined action of seed rain and a favorable seedbed for species germination and growth (Cruz et al., 2020). Other studies in areas dominated by aggressively growing grasses confirm the hypothesis that improving the seedbed by controlling inhibitory vegetation increases woody species in nucleation plots (Zahawi et al., 2013; Elgar et al., 2014).
In the brushwood treatments, with and without the perch, the subsequent herbicide application was less efficient in controlling the reinvasion of Urochloa grasses, as observed in the plot reoccupation analysis (Figure 1). As previously explained, this treatment initially did not include herbicide use, which became necessary to reduce the reinvasion of the plots caused by the aggressive growth of surrounding grasses 60 days after the installation of the treatment. The absence of inhibitory grasses generates positive results for the establishment of natural regeneration (Cruz et al., 2020), which was not the case in the present work, since pasture was a limiting factor for the success of the brushwood treatment. At first, the brushwood would kill the grasses due to the initial shading it provided, allowing the development of woody species seedlings as it began decomposing and gradually letting light reach the soil (Marcuzzo et al., 2013). However, the initial shading of the soil did not prevent the grasses from growing, as the dominant forage, U. subquadripara, spreads mainly by vegetative growth. Therefore, there was no longer a distinction between the edges (Q1 and Q5) and the center (Q3) of the plots after 12 months (Figure 1). Likewise, the subsequent application of the herbicide on the brushwood did not effectively prevent grasses from advancing, which hindered the establishment of woody species. Since herbicides are primarily absorbed by leaves, the most sensitive stage for their application on grasses is that of four to six leaves, making the control of larger plants less efficient (Pereira et al., 2018). Therefore, in abandoned pastures, in addition to brushwood (Reis et al., 2014), the herbicide should be applied at the base and around the nuclei to favor recovery through natural regeneration. It should be pointed out that the 40 m² experimental plots were inefficient, even though they were larger than those of 1-12 m² usually used in nucleation experiments in Brazil (Bechara et al., 2016). These results agree with those of another study that was carried out in an area with inhibitory vegetation, which showed that, for the recruitment of woody plants, the minimum nucleus size necessary is 100 m² (Zahawi et al., 2013). In the black plastic mulch treatments, despite the significant reduction in the biomass of inhibitory vegetation up to 8 months (Table 3), other herb species were also weakened due to the cessation of photosynthesis. This happened because, after the plastic film was removed, Urochloa grasses from the edges of the plots quickly covered the area, preventing other herbaceous species from recovering. This is supported by the analysis that showed the reoccupation of the plots by grasses (Figure 1), revealing that this phenomenon was faster in mulch treatments, both with and without the perch, with differences between the edges (Q1 and Q5) and center (Q3) of the plots only up to 8 months. In addition, since the first evaluation, the percentage of grasses at the plot edges did not differ from those of the control treatment. In another study (Tomazi & Castellani, 2016), the application of black plastic film in an abandoned pasture presented few lasting effects due to the rapid reoccupation of the dominant grasses, confirming the need for the periodic management of the surrounding areas of mulch plots.
Similar results have been observed for persistent inhibitory grasses in the ecological recovery of pastures through other nucleation techniques (Vogel et al., 2015; Gerber et al., 2017). Therefore, early-stage cultural treatments are necessary for grass control and other purposes, suggesting that, despite the initial growth of woody species, the short periods evaluated in these studies may not reflect long-term results or extra-experimental conditions. The herbicide treatment associated with the perch was the one that most facilitated the initial regeneration in the pasture. In the brushwood and black plastic mulch treatments, the seedbed did not improve and, therefore, the combined use of the artificial perch did not result in a greater establishment of natural regeneration. This shows that, after the removal of inhibiting vegetation, positive interactions are important for forest restructuring in degraded environments. Moreover, the positive effect of the artificial perch in attracting seed dispersers suggests that, besides the environmental restrictions imposed by grasses, additional factors, such as propagule availability, act synergistically in the presence of pastures (Elgar et al., 2014; Iguatemy et al., 2020).
"year": 2021,
"sha1": "98e37ea3ba072a0fc963de182e630fa5cc735e3c",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/j/pab/a/sS5Q3S4vzCDVLNbLCgtJG7s/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "cbdb3d16a07c67ec590b6a515247032d109e6cb3",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Simulation of the Effect of Keyhole Instability on Porosity during the Deep Penetration Laser Welding Process
The quality of a laser deep penetration welding joint is closely related to porosity, and keyhole stability strongly affects the formation of porosity during the laser welding process. In this paper, a three-dimensional laser welding model with gas/liquid interface evolution characteristics is constructed based on the hydrodynamic interaction between the keyhole and molten pool during the laser welding process. The established model is used to simulate the flow and heat transfer processes of the molten pool. The Volume of Fluid (VOF) method is used to study the formation and collapse of the keyhole and the formation of bubbles. It is found that bubbles form easily when the keyhole depth changes abruptly. There are three main forms of bubbles formed by keyhole instability: the front wall of the keyhole collapses backward to form a bubble; the back wall of the keyhole inclines forward to form a bubble; or the lower part of the keyhole undergoes a necking-down effect and is isolated to form a bubble. In addition, when the keyhole does not penetrate the base metal, the stability of the keyhole is high and the percentage of porosity is low.
Introduction
Laser welding technology has been widely used because of its high efficiency, high welding quality, high precision, good performance, small heat input and small deformation [1][2][3]. However, joint defects such as porosity and spatter have been extensively studied. Porosity caused by the keyhole in laser welding of Aluminum Alloy is an important defect in laser deep penetration welding [4,5].
During the laser welding process, the driving forces on the keyhole wall promote keyhole formation and maintain keyhole stability. J. Zhou [6][7][8] carried out a force analysis of the keyhole wall, which is affected by the metal vapor recoil pressure and surface tension in the normal direction of the keyhole wall. In the shear direction, it is affected by the Marangoni shear force caused by the surface tension gradient. In a study of the forces on the keyhole wall, C.S. Wu and R. Kovacevic [9,10] concluded that the force driving the formation of the keyhole is the metal vapor recoil pressure. In addition, the keyhole wall is also affected by the Marangoni shear force caused by the surface tension gradient. The main role of the former is to promote the formation of the keyhole, thus forming deep and narrow weld seams. The main effect of the latter is the formation of Marangoni flow. In a study of the electron beam deep penetration welding mechanism, by analyzing the forces inside the keyhole, D. Schauer [11,12] concluded that the keyhole wall is affected by the recoil pressure of metal vapor and by surface tension, and observed that there is a force equilibrium point on the keyhole wall. Below the equilibrium point, the metal vapor recoil pressure is greater than the surface tension; above the equilibrium point, the surface tension is greater than the metal vapor recoil pressure. Under the action of these forces, the liquid metal in the upper part of the keyhole flows slightly inward, driven by surface tension, reaches the pressure balance area, forms a metal protrusion, and destroys the original pressure balance. After that, the bulge intercepts a part of the incident high-energy beam, so that its temperature increases rapidly, and strong evaporation occurs. The keyhole wall recovers to a smooth state, and the molten pool is further deepened in this process to establish a new pressure balance. Based on a study of keyhole behavior in laser deep penetration welding, H. Wang [13] analyzed the temperature field and molten pool flow field in the laser welding process and studied the influence of keyhole morphology on weld seam formation. The flow state of the molten pool was simulated by analyzing the distribution of the keyhole gas pressure. The distribution law of the flow velocity of the liquid metal was obtained, and it was pointed out that the flow velocity decreases at the end of the molten pool. The driving forces of molten pool flow include buoyancy, surface tension, and the force between the metal vapor rushing out of the keyhole and the liquid metal.
In the numerical simulation of the high-energy beam welding process, the study of the force balance on the keyhole wall and of the metal evaporation process is the basis for modeling the keyhole morphology and the molten pool flow field. The analysis of the first two aspects provides pressure and heat flux boundary conditions for the study of the third aspect, from the perspectives of mechanical balance and energy balance.
In this paper, a numerical calculation model of keyhole shape changes and molten pool flow is established, and a transient analysis of keyhole formation and molten pool flow during laser welding is carried out. The relationship between molten pool flow, keyhole stability, and porosity defects is established. In the laser welding process, the spontaneous fluctuation and perturbation of the keyhole depth promote the formation of bubbles. The relationship between bubble formation and keyhole dynamics is studied systematically. The results show that keyhole instability is the main cause of bubble formation. Therefore, this paper is helpful for understanding the mechanism of laser deep penetration welding, analyzing the causes of porosity defects, and putting forward methods for weld seam quality control.
Materials and Methods
The 6061-T6 Aluminum Alloy used for the test is a 4 mm thick plate. The chemical composition and mechanical properties [14] are shown in Table 1. The welding sample size is 100 × 50 × 4 mm. The main welding equipment used in the test includes a laser and a welding robot, as shown in Figure 1.
The process parameters affecting the laser welding of 6061 Aluminum Alloy include laser power, welding speed, defocusing amount, and shielding gas [15]. In this paper, only the influence of laser power and welding speed on keyhole morphology and percentage of porosity is discussed. Four sets of tests were conducted: the first set used low laser power with high welding speed; the second, low laser power with low welding speed; the third, high laser power with high welding speed; and the fourth, high laser power with low welding speed. The test process parameters are shown in Table 2. The calculation domain of the model proposed in this paper is shown in Figure 2, including entrance, exit, symmetry plane, and walls. The YZ plane is the symmetry plane. The region above the XY plane is the plasma region. The plane above the plasma region is the entrance, set to a constant speed. Both sides of the plasma region along the Y direction and the side opposite the symmetry plane are the exits, set as isobaric surfaces. The region below the XY plane is the Aluminum Alloy area. The two sides of the Aluminum Alloy area along the Y direction, the side opposite the symmetry plane, and the bottom are walls. Model simplifications and assumptions:
1. The material is isotropic, and the liquid is an incompressible Newtonian fluid in laminar flow mode. In the laminar boundary layer, the fluid motion is extremely regular; as the velocity increases, the boundary layer exhibits extremely irregular turbulent flow, and as a result of the interactions that lead to the chaotic flow state, the velocity and pressure at any point in the turbulent boundary layer fluctuate. The variation of the velocity boundary layer on the plate is shown in Figure 3. In order to simplify the calculation, the liquid flow state is assumed to be laminar.
2. The liquid region is assumed to be a porous medium with isotropic permeability.
3. The effect of shielding gas on molten pool flow and keyhole fluctuation is ignored.
4. The shielding effect and absorption effect of the plasma on the laser beam are ignored.
5. The calculation area of the molten pool and keyhole is symmetrical about the weld seam.
6. The free surface of the molten pool is solved by the Volume of Fluid (VOF) equation.
The equation is as follows:

∂F/∂t + ∇·(uF) = 0

where u is the fluid velocity and F is the volume fraction. In the model established in this paper, the aluminum alloy is the first phase and the plasma is the second phase. When F = 1, the control unit is entirely composed of aluminum alloy. The specific process of the modeling is shown in Figure 4 below. The basic equations of computational fluid flow, including the mass, momentum, and energy continuity equations, are used to describe the heat transfer, mass transfer, and fluid flow in the laser welding process.
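To make the volume-fraction transport concrete, the short Python sketch below advects F through a one-dimensional grid with a first-order upwind scheme; it is only an illustration of the equation above, not the paper's three-dimensional solver, and all grid and velocity values are invented.

# 1D illustration of the VOF transport equation dF/dt + d(uF)/dx = 0.
import numpy as np

nx, dx, dt, u = 100, 1e-4, 1e-6, 0.5  # cells, cell size (m), time step (s), velocity (m/s)
F = np.zeros(nx)
F[:40] = 1.0                          # left region starts as aluminum alloy (F = 1)

for _ in range(500):
    flux = u * F                      # upwind flux, valid for u > 0
    F[1:] -= dt / dx * (flux[1:] - flux[:-1])
    F = np.clip(F, 0.0, 1.0)          # keep the fraction physically bounded

# The gas/liquid interface is commonly taken at F = 0.5.
print("interface near cell", int(np.argmin(np.abs(F - 0.5))))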
The instant at which the laser heat source begins to act on the surface of the 6061 Aluminum Alloy is defined as the initial time of the calculation.
Under the action of the laser beam, the molten pool and keyhole form on the surface of the base metal to be welded; this process mainly involves the combined action of laser beam irradiation, thermal convection, thermal radiation, and metal evaporation.
For the free interface of the keyhole in the molten pool, the recoil pressure and surface tension are the main driving forces of the molten pool flow, and they affect the fluid flow in the molten pool surface area.
In this paper, only thermal convection and thermal radiation are considered as heat losses during the welding process.
In order to accurately calculate the heating effect of the laser beam on the plate butt structure in the welding process, this paper uses the combined heat source model to simulate the energy transfer of the laser beam in the welding area. The combined heat source model is composed of a Gaussian surface heat source and a Gaussian rotator heat source.
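The paper does not give the expressions for the two component sources, so the Python sketch below uses commonly cited forms of a Gaussian surface source and a cone-like rotary Gaussian body source purely for illustration; every symbol and parameter value (P, eta, r0, H) is an assumption, not data from this study.

import numpy as np

def gaussian_surface(x, y, P=2500.0, eta=0.4, r0=1e-3):
    # Surface heat flux (W/m^2) of a Gaussian spot carrying eta*P of the power.
    r2 = x**2 + y**2
    return 2.0 * eta * P / (np.pi * r0**2) * np.exp(-2.0 * r2 / r0**2)

def rotary_gaussian_body(x, y, z, P=2500.0, eta=0.6, r0=1e-3, H=4e-3):
    # Volumetric heat density (W/m^3); the spot radius shrinks linearly from
    # r0 at the surface (z = 0) toward the keyhole tip at depth H.
    rz = r0 * max(1.0 - z / H, 1e-3)
    q0 = 6.0 * eta * P / (np.pi * r0**2 * H)  # approximate normalization
    return q0 * np.exp(-3.0 * (x**2 + y**2) / rz**2)

print(f"surface flux at beam center: {gaussian_surface(0.0, 0.0):.3e} W/m^2")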
In order to verify the accuracy of the model, the actual process parameters of the 6061 Aluminum Alloy plate butt welding are applied to the model for simulation, and the calculation results are compared with the experimental results. For model verification, this paper adopts the fusion line contour comparison method.
In order to fully verify the accuracy of the model, the simulation results under the four groups of process parameters in the welding test were verified, and the verification results are shown in Figure 5. It can be seen from the figure that the simulation results are in good agreement with the experimental results, although some molten pool and weld seam size parameters present small deviations from the experimental results. In general, the simulated weld seam cross-section profiles are basically consistent with the experimental results, and the average error of the model is less than 1%, which indicates that the model used in this paper is accurate and can be applied to the simulation analysis of the 6061 Aluminum Alloy plate butt laser welding process.
Formation of Keyhole and Molten Pool in Laser Welding Process
It can be seen from Figure 6 that, when the time was 4 ms, the base metal surface was heated and the metal melted. When the time reached 20 ms, the surface of the molten pool was depressed under the recoil pressure; the surface here was the lowest, which promoted the formation of a narrow and deep keyhole. As the keyhole depth increased, the surface tension increased, and the surface tension interacted with the recoil pressure to prevent keyhole growth. Finally, the keyhole driving forces remained balanced, which kept the keyhole in a relatively stable state. Before keyhole formation, the laser beam irradiated the base metal surface. When the keyhole was formed, the laser beam irradiated the metal vapor, and multiple reflections of the laser beam on the keyhole wall were realized through the plasma, which improved the efficiency of energy absorption. Therefore, the keyhole became deeper.
Force Analysis of Keyhole Wall in Laser Welding Process
A point on the keyhole wall is selected for force analysis. The normal pressures acting on the point include: the metal vapor recoil pressure P_g, the additional pressure caused by the surface tension of the curved liquid surface P_σ, the liquid static pressure caused by gravity P_l, the pressure caused by the centripetal force arising from the rotation of the molten pool P_c, and the laser beam impact force, which can be ignored. The shear forces in the tangential direction include the shear force caused by the tangential velocity F_s and the shear force caused by the surface tension gradient F_σ, as shown in Figure 7. In the laser welding process, the direction from the liquid phase to the gas phase is specified as the positive direction. When the keyhole reaches the dynamic balance state, the resultant force in the normal direction on the keyhole wall is zero.
where A is a coefficient related to the surrounding environment, with A = 1 at atmospheric pressure [16]; B is a coefficient associated with the integral constant, whose value is determined by referring to the saturated vapor pressure of pure Aluminum at 2740 K under standard conditions, B = 3.55 × 10¹⁰; and T(r) denotes the temperature distribution on the free surface.
In the Young-Laplace equation, r₁ and r₂ refer to the maximum and minimum curvature radii of the surface, respectively, and ½·(1/r₁ + 1/r₂) refers to the degree of curvature at any point of the surface.

P_l = ρgz (4)

Considering the rotation of the molten pool, the pressure distribution caused by the centripetal force is solved according to the laws of statics. When the centripetal force is treated as a volumetric force, the centripetal force per unit mass at radius r on the keyhole wall is rω².
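For a rough sense of scale, the Python sketch below evaluates the surface-tension (Young-Laplace), hydrostatic, and centripetal terms listed above with illustrative values for liquid aluminum; all inputs are assumptions chosen only to show orders of magnitude, not measurements from this study.

sigma = 0.9          # surface tension of liquid Al (N/m), approximate
rho = 2375.0         # liquid-Al density (kg/m^3), approximate
g = 9.81             # gravitational acceleration (m/s^2)
r1, r2 = 2e-4, 5e-4  # principal curvature radii of the keyhole wall (m)
z = 3e-3             # depth below the molten pool surface (m)
r, omega = 2e-4, 100.0  # wall radius (m) and assumed pool rotation rate (rad/s)

P_sigma = sigma * (1.0 / r1 + 1.0 / r2)  # Young-Laplace additional pressure (Pa)
P_l = rho * g * z                        # hydrostatic pressure, Eq. (4) (Pa)
a_c = r * omega**2                       # centripetal force per unit mass (m/s^2)

print(f"P_sigma ~ {P_sigma:.0f} Pa, P_l ~ {P_l:.0f} Pa, a_c ~ {a_c:.2f} m/s^2")

Consistent with the discussion that follows, the hydrostatic and centripetal contributions come out far smaller than the surface-tension term.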
With the decrease of the keyhole wall radius, namely the increase of keyhole depth, the liquid static pressure increases, the rotational centripetal force of the molten pool decreases, the metal vapor recoil pressure increases, and the additional pressure caused by surface tension increases. The metal vapor recoil pressure and additional pressure are much larger than the static pressure and centripetal force. The main forces maintaining the dynamic balance of the keyhole are the metal vapor recoil pressure and the additional pressure caused by surface tension. The metal vapor recoil pressure promotes the formation of the keyhole and maintains the stability of the keyhole morphology. The additional pressure caused by surface tension inhibits keyhole formation.
In the upper half of the keyhole, the curvature of the keyhole wall is relatively uniform, so that the additional pressure caused by surface tension is evenly distributed. In addition, the small temperature gradient on the keyhole wall also makes the distribution of the additional pressure caused by surface tension uniform. At the same time, the distribution of the metal vapor recoil pressure tends to be uniform.
In the lower part of the keyhole, the curvature of the keyhole wall is large, and the additional pressure caused by surface tension increases rapidly with the increase of the depth of the keyhole. Concomitantly, in order to maintain the stability of keyhole morphology, the metal vapor recoil pressure needs to be increased rapidly to offset the effect of surface tension.
Influence of Process Parameters on Dynamic Behaviour of Keyhole and Bubble Formation
The effects of laser power and welding speed on keyhole morphology were discussed. The keyhole depth, width, area, front wall inclination angle, and back wall inclination angle of the keyhole were measured at different times, as shown in Figure 8. For each measured value under each group of test parameters, 80 consecutive simulation results were analyzed. In the molten pool, a keyhole filled with dense plasma metal vapor can be regarded as a cavity. The filling mode of the liquid metal determines whether bubbles are formed during the laser welding process. The backfill behavior of the liquid metal mainly takes three forms. One is that the front wall of the keyhole bulges and collapses backward. Another is that the back wall of the keyhole bulges and collapses forward. The third is that a necking-down phenomenon occurs at the neck of the keyhole, and the lower part of the keyhole is separated to form an isolated bubble.
In this paper, the keyhole depth and the outlet diameter at the top of the keyhole under different process parameters are analyzed, and the following regularities are found, as shown in Figures 9 and 10. When the laser power is insufficient to form a keyhole through the base metal, the keyhole depth increases with the decrease in welding speed. When the laser power is large enough to form a keyhole through the base metal, the keyhole depth is equal to the thickness of the base metal. When the welding speed is high, the increase in the outlet diameter at the keyhole top is more significant than at low welding speed.
Comparing different laser powers, it was found that when the laser power is large, excessive heat input leads to a large fluctuation of the keyhole depth. Moreover, when the heat input is large, the effect of increasing the outlet diameter of the keyhole top is more significant than when the heat input is small.
In addition, through statistics of the keyhole depth and the outlet diameter of the keyhole top under different process parameters, it was found that bubbles form easily when the keyhole depth changes abruptly and the outlet diameter of the keyhole top fluctuates, as shown in Figure 11. Bubble formation induced by the keyhole leads to rapid instability of the keyhole and changes in the keyhole depth. Therefore, the formation of bubbles can be judged according to the changes of the keyhole during the laser welding process, which helps with the detection of porosity defects in the laser welding process.
In the laser welding process, due to the interaction between different dynamic mechanisms in the molten pool, the keyhole is unstable and the keyhole profile fluctuates over time. Inside the keyhole, the convergence of the liquid metal leads to bulging and collapse of the keyhole wall. As shown in Figure 12, according to the simulation results, the keyhole profiles at different times during the laser welding process are extracted, and the bubbles formed by keyhole collapse are counted. The porosities formed by the keyhole are closely related to these bubbles. The variation trend of bubble formation in keyholes under different process parameters was quantitatively compared. In order to evaluate the stability of a keyhole, this paper determines the maximum deviation of the keyhole depth relative to the average keyhole depth. The ratio of the maximum deviation to the keyhole depth is expressed as follows [17]:

σ = d/D

where σ is the ratio of maximum deviation to keyhole depth; d is the maximum deviation depth of the keyhole; and D is the average keyhole depth. The percentage of porosity, R_p, is defined as the sum of porosity diameters per unit of weld seam length in the welding direction [4]:

R_p = L_p/L_w
where L_w is the weld seam length and L_p is the sum of porosity diameters in the welding direction. According to Figure 11, by comparing the ratio of maximum deviation to keyhole depth, it is concluded that the keyhole is relatively stable when the laser power is low. When the laser power is high, reducing the welding speed can make the keyhole relatively stable. Although the keyhole is then relatively stable, the percentage of porosity is increased because the keyhole runs through the base metal, leading the bottom of the keyhole to contact the air and allowing air to enter. Therefore, the percentage of porosity is low when the keyhole stability is high. Figure 13 shows the influence of process parameters on the keyhole area, the front wall inclination angle of the keyhole, and the back wall inclination angle of the keyhole. Through horizontal comparison, the relationship between the keyhole area, front wall inclination angle, and back wall inclination angle of the keyhole is studied under different welding speeds, and through longitudinal comparison, it is studied under different laser powers. It was found that when the laser power is too high, the keyhole forms through the base metal, and the keyhole area increases sharply. The front wall inclination angle and the back wall inclination angle of a keyhole that penetrates the base metal are relatively stable. When the laser power is low, the front wall inclination angle and back wall inclination angle of the keyhole are also relatively stable.
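The two indicators defined above are simple to compute; the Python sketch below does so for a made-up series of keyhole depths and pore diameters (none of these numbers come from the paper).

keyhole_depths_mm = [3.8, 4.0, 3.5, 4.1, 3.9]  # sampled keyhole depths over time

D = sum(keyhole_depths_mm) / len(keyhole_depths_mm)  # average keyhole depth
d = max(abs(h - D) for h in keyhole_depths_mm)       # maximum deviation depth
sigma = d / D                                        # stability ratio sigma = d/D
print(f"stability ratio sigma = {sigma:.2%}")

pore_diameters_mm = [0.3, 0.5, 0.2]  # pore diameters along the weld seam
L_w = 100.0                          # weld seam length (mm)
R_p = sum(pore_diameters_mm) / L_w   # percentage of porosity R_p = L_p/L_w
print(f"percentage of porosity R_p = {R_p:.2%}")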
Relationship between Dynamic Behaviour of Keyhole and Bubble Formation
The tip at the bottom of the keyhole tilts backward, the liquid metal in the lower half of the back wall of the keyhole flows clockwise, and that in the upper half flows counterclockwise. These flows intersect at the middle of the molten pool behind the keyhole and form a bulge, which finally leads to the collapse of the keyhole and forms a bubble below the keyhole. If bubbles cannot escape from the molten pool due to the flow behavior of the liquid metal, porosities will eventually form [18].
In this paper, it was found that there are two main reasons for a 'bulge'. One is a sudden change in the temperature at a certain position of the keyhole wall, resulting in an abrupt change in the metal vapor recoil pressure and thus in the emergence of a bulge. The other is an abrupt change of curvature at a certain position of the keyhole wall, which leads to an abrupt change of surface tension and thus induces a bulge.
During laser welding, a bulge on the keyhole wall makes laser radiation and gas pressure at the bottom of the keyhole decrease rapidly. Simultaneously, a low-pressure zone appears near the keyhole outlet when extremely high-speed metal vapor is ejected from the outlet at the top of the keyhole. Low pressure will make the keyhole absorb the surrounding gas. As a result, ambient gases enter the keyhole.
In the laser welding molten pool, a keyhole filled with dense plasma metal vapor can be regarded as a cavity. The filling mode of the liquid metal determines whether bubbles are formed during the welding process. There are three forms of liquid metal backfill. One is that the back wall of the keyhole bulges and collapses forward. Another is that the front wall of the keyhole bulges and collapses backward. The third is that necking-down occurs in the lower part of the keyhole, and separate bubbles are formed in the lower part of the keyhole.
During the laser welding process, the tip of the keyhole's lower part inclines forward when the keyhole is stable. Before keyhole instability, the tip of the keyhole bottom inclines, as shown in Figure 14. The metal vapor pressure continues under the action of the laser beam for a period of time. At the same time, the keyhole plasma concentrates the heat source energy in the keyhole region, which increases the temperature gradient near the top outlet of the keyhole. Inside the molten pool, in the upper half of the keyhole, the Marangoni shear stress and vapor pressure force the liquid metal to flow out of the top outlet of the keyhole against gravity. This resists the collapse of liquid metal from the top of the keyhole. However, in the deep region of the keyhole, the Marangoni shear stress and metal vapor pressure are weak in the lower half. Therefore, gravity takes effect and stable liquid metal backfill is carried out [19].
The tip at the bottom of the keyhole leans forward, leading to the collapse of the front wall of the keyhole and forming a bubble, as shown in Figure 15.
In the laser welding process, the front wall of the keyhole is slightly bent backward, and a bulge is generated in the part where the local bending degree of the front wall changes slightly. The liquid metal near the bulge evaporates violently under the high-energy irradiation of the laser beam. The vaporized metal is ejected from the concave region at a large speed onto the back wall, which, impacted by the metal vapor flow, is liable to fall and collapse under the action of complex physical conditions such as gravity. As a result, the keyhole collapses and closes, eventually forming bubbles [8].
In the laser welding process, various driving forces in the molten pool interact with each other, and the keyhole is in an unstable state with intense oscillation. At time t0, the keyhole begins to shrink, the diameter of the keyhole's middle part becomes smaller, and the front and back walls of the keyhole's middle part bulge, as shown in Figure 16. The intersection of liquid metal at the neck of the keyhole makes the keyhole shrink, and the cavity at the bottom of the keyhole is blocked at the bottom of the molten pool. In the molten pool, there is liquid metal flowing downward and also liquid metal flowing upward. Their encounter leads to a bulge of the keyhole wall. The keyhole wall collapses and bubbles form at the bottom of the molten pool, which suddenly reduces the keyhole depth. The liquid metal flow in the molten pool makes the bubbles shrink and become trapped in the molten pool. At the top of the molten pool, the liquid metal flows from the center toward the sides. At the bottom of the molten pool, there are two opposite flows: one from the top to the middle, and one from the bottom to the middle. Due to these two opposite flow directions, the surface of the keyhole wall bulges, resulting in the collapse and contraction of the keyhole.
According to the stability theory of capillaries, when the height of a capillary is greater than the circumference of its cross-section, the capillary is in an unstable state, and necking-down and expansion alternate. In particular, there is a trend of shrinkage and closure at the necking-down stage. In the actual laser welding process, the keyhole is in this state. When the energy density of the laser beam is higher than a critical value, the recoil pressure generated by the metal vapor expands the necked region. At the same time, the necking-down collapse tendency of the liquid metal around the keyhole interacts with the metal vapor recoil pressure, making the keyhole vibrate back and forth in the radial direction. When the laser beam energy density at the bottom of the keyhole falls below the critical value for maintaining keyhole stability, due to the absorption of laser energy by the plasma in the keyhole or other factors, the keyhole closes at the necking-down stage to form a bubble.
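As a quick numerical reading of this criterion, the Python sketch below compares an assumed keyhole depth with the circumference of its cross-section; both values are illustrative, not measurements from this study.

import math

depth = 4.0e-3    # assumed keyhole depth (m)
radius = 0.25e-3  # assumed keyhole cross-section radius (m)

circumference = 2.0 * math.pi * radius
unstable = depth > circumference  # instability criterion stated above
print(f"circumference = {circumference * 1e3:.2f} mm -> "
      f"{'unstable' if unstable else 'stable'} keyhole")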
Conclusions
A three-dimensional thermal-fluid coupling model was established for the laser welding of a butt joint of 4 mm thick 6061 aluminum alloy plates. The driving forces acting on the keyhole during laser welding were analyzed and added to the model to simulate the keyhole dynamics under different process parameters. The conclusions drawn from the present study are given below.
(1) The combination P = 2500 W and v = 3 m/min is the critical condition for weld penetration. Bubble formation during laser deep-penetration welding is usually caused by the driving forces acting on the molten pool and the keyhole surface, and the main cause of keyhole porosity is keyhole instability. (2) During laser welding, the keyhole is destabilized by the interaction of the different driving forces, and the keyhole contour fluctuates over time. The flow of liquid metal at the keyhole wall produces a bulge, which eventually leads to the collapse of the keyhole wall.
"year": 2022,
"sha1": "0835ea55b9fc93022af41d7ee1fdc92ba57c7ca7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-4701/12/7/1200/pdf?version=1657821670",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "d34c11292825cdce7a5066de84da249dffb3f17d",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": []
} |
Pleuritic chest pain; where should we search for?
Pleuritic pain is not an unusual problem in children, and other concomitant symptoms should be considered in the diagnostic approach to a child with pleuritic chest pain. In this report we discuss chest pain in a 6-year-old child with regard to other signs and symptoms. Ultimately, we found a rare life-threatening complication of juvenile systemic lupus erythematosus (JSLE) in our patient.
Introduction
Pleuritic pain is not an unusual problem in children, and other concomitant symptoms should be considered in the diagnostic approach to a child with pleuritic chest pain. In this report we discuss chest pain in a 6-year-old child with regard to other signs and symptoms.
Case presentation
This six-year-old boy presented to the emergency department of the Children's Medical Center, Tehran, with dyspnea and pleuritic chest pain. The symptoms had begun two months earlier and had gradually worsened over the preceding two weeks.
The patient was awakened by chest pain at night and preferred to remain in an upright position.
Pleuritic chest pain has a broad differential diagnosis. The pain is exaggerated by deep breathing, coughing, and straining. Differential diagnoses of chest pain in the pediatric patient include pneumonia, pleurisy, pneumothorax, pericarditis, endocarditis, costochondritis (Tietze syndrome), herpes zoster (cutaneous), angina (familial hypercholesterolemia, anomalous coronary artery), epidemic pleurodynia, trauma and rib fracture, lesions of the dorsal root ganglia, tumors of the spinal cord, and gallbladder disease. Gastrointestinal diseases such as peptic ulcer, esophagitis (gastroesophageal reflux, infectious, pill-induced), cholecystitis, perihepatitis (Fitz-Hugh-Curtis syndrome), esophageal foreign body, and esophageal spasm are less common causes of chest pain in children.
The chronicity of the symptom suggests that a chronic systemic problem could be the main cause.
Cardiac diseases such as pericarditis, endocarditis, mitral valve prolapse, and arrhythmias are among the "must not miss" diagnoses and should be ruled out. However, chest pain is not a usual presentation of cardiac disease in childhood.
He had short stature and a cachectic appearance. Heart rate was 110/min, respiratory rate 38/min, blood pressure 110/70 mmHg, temperature 39°C, body weight 15 kg, and height 107 cm.
Both weight and height were under the 3rd percentile. These low growth indicators suggested the involvement of a chronic disease.
The medical history was notable for 25 days of hospitalization at 4 years of age because of a 3-month history of fever and poor appetite. He had received antibiotics during hospitalization and for one month after discharge. The medical records were not available, but his mother had been told that her son was treated for typhoid.
Six months after discharge from the hospital, the patient again developed fever and general asthenia; since then, according to his mother, he had continuously felt weak, had a low growth rate, and developed fever occasionally. One year before presentation, the patient contracted pneumonia and was hospitalized for treatment. He also had an intermittent fever that, according to his mother, had lasted for a few years. She also reported that he had not received any vaccinations since four years of age.
The main reason for his past hospitalization was not known. We had to check for typhoid, but the possibility of another underlying chronic febrile disease also had to be considered. He had developed dyspnea that progressed gradually. On physical examination, we therefore searched for signs and symptoms of a chronic disease and of specific organ involvement.
At admission, the patient was in apparent respiratory distress, which worsened in the supine position; he preferred to remain in a semi-sitting position. Chest x-ray and ECG were normal.
The conjunctivae were pale, and auscultation of the heart and lungs was normal. On abdominal examination, generalized tenderness interfered with the examination process. The right wrist and hip joints were tender. There was no clubbing, edema, or cyanosis.
The mother reported generalized bone pain, weight loss, nocturnal sweating, and fever during the last three months. His father had a three-year history of night fever and cough without any medical evaluation.
The pale conjunctivae, weight loss, night sweating, and fever indicated a chronic illness. Iran is an endemic area for tuberculosis, so it also had to be considered, especially given the suspicious family history. Because of the bone pain, malignancies had to be on the list of differential diagnoses. Cardiopulmonary causes had to be ruled out because of the pleuritic chest pain and orthopnea.
Echocardiography performed soon after admission revealed mild pericardial effusion.
The pericardial effusion could explain the chest pain and the respiratory symptoms. Infectious, rheumatologic, and possibly malignant causes of serositis are among the possible diagnoses and could account for the patient's other signs and symptoms.
The most important findings were the very high ESR and the low hemoglobin level. Again we searched for infectious, rheumatologic, and malignant causes. Malignancies, however, were less probable, since the patient's illness had begun four years earlier and a malignancy would have caused far more problems over that period.
Because of fever and elevated ESR, the patient was admitted to the Infectious Disease Ward. The following tests were completed and respective results obtained.
The PPD test was negative; bone marrow aspiration showed normal cellularity and was negative on Ziehl-Neelsen staining; and cultures for typhoid were negative.
A radionuclide scan showed some hyperactivity in the right hip and ankle. Wright, Coombs Wright, and Widal tests, as well as blood and urine cultures, were also negative.
These results indicated a low probability of infectious and malignant diseases, so rheumatologic diseases had to be taken into account and evaluated.
More laboratory tests were performed with suspicion of malignancy, rheumatologic disease, autoimmunity, and immunodeficiency. Antinuclear antibody (ANA) was positive [16 (negative <0.8, positive >1.2)]. The complete blood counts obtained over seven days showed no significant changes. Amylase, lipase, uric acid, cholesterol, triglycerides, calcium, phosphorus, liver function tests, lactate dehydrogenase, total protein, and albumin were normal.
The positive ANA justified further tests and examinations for rheumatologic diseases. The presence of serositis and arthritis together with a positive ANA placed rheumatologic diseases, especially SLE, first on the list of differential diagnoses. Systemic-onset juvenile rheumatoid arthritis could also be a possible cause. Further tests, especially anti-dsDNA and RF, were needed. Antinuclear antibodies (ANAs) can be positive in systemic lupus erythematosus, drug-induced lupus, juvenile arthritis, juvenile dermatomyositis, vasculitis syndromes, scleroderma, infectious mononucleosis, chronic active hepatitis, and hyperextensibility. Based on the criteria for lupus, a diagnosis of SLE was made (Table 1), and treatment with prednisolone tablets (2 mg/kg/day) and hydroxychloroquine (5 mg/kg/day) was initiated. A few days later, the patient's condition improved gradually and the fever subsided.
The relevant criteria from Table 1 are, in brief:
Neurologic disorder: seizures or psychosis, in the absence of causative drugs or metabolic derangements (e.g., uremia, ketoacidosis, electrolyte imbalance).
Immunologic disorder: an abnormal anti-DNA antibody titer, or anti-Smith antibody (antibody to Smith nuclear antigen), or positive antiphospholipid antibodies.
Antinuclear antibody: an abnormal titer in the absence of drugs recognized to be associated with drug-induced lupus syndrome.
The diagnosis of lupus is established by a combination of clinical and laboratory manifestations. The presence of 4 of the 11 criteria (here serositis, arthritis, an abnormal anti-DNA antibody titer, and antinuclear antibody), serially or simultaneously, strongly suggests the diagnosis. Patients who are suspected of having lupus but show fewer than 4 criteria should still receive proper medical treatment. A positive ANA test is not necessary for the diagnosis, although absence of ANA in lupus is very rare. Hypocomplementemia is not diagnostic, and very low levels or absence of total hemolytic complement suggest the likelihood of a complement component deficiency. Treatment for SLE should then be started.
Because of its varied presentations, lupus must be among the differential diagnoses of many problems, from fever of unknown origin to arthralgias, anemia, and nephritis. The differential diagnosis depends on the presenting manifestation and the affected organ and includes systemic-onset juvenile rheumatoid arthritis, acute poststreptococcal glomerulonephritis, acute rheumatic fever, infective endocarditis, leukemia, immune thrombocytopenic purpura, and idiopathic hemolytic anemia. Sometimes the early presentation is atypical, such as parotitis, abdominal pain, transverse myelitis, or dizziness. Lupus should also be considered in patients with multiorgan involvement, especially in the presence of hematologic or urinalysis abnormalities. Clinical manifestations of SLE include constitutional symptoms (fatigue, prolonged fever, anorexia, lymphadenopathy, weight loss) and musculoskeletal (arthralgias, arthritis), cardiovascular, pulmonary (pulmonary hemorrhage, pleuritic pain), skin, renal, hematologic, and neurologic involvement (seizures, psychosis, stroke, cerebral venous thrombosis, pseudotumor cerebri, aseptic meningitis, chorea, global cognitive deficits, mood disorders, transverse myelitis, and peripheral neuritis, including mononeuritis multiplex).
On the sixth day of treatment, our patient's condition suddenly deteriorated. He was found to be in apparent respiratory distress, with high fever, dyspnea, and an enlarged liver span.
He was then transferred to the PICU, and co-trimoxazole, ceftazidime, and a stress dose of hydrocortisone were initiated.
CBC showed decreases in WBC, hemoglobin, and platelets. The trend of CBC results in the PICU is shown in Table 2. Among the other blood tests in the PICU, the ferritin was 8654 ng/ml. Other tests, such as creatine phosphokinase, serum IgA and IgM, BUN, creatinine, blood glucose, and serum potassium, were within the normal range.
With respiratory distress as the first presenting manifestation, we evaluated our patient for pulmonary and cardiovascular involvement, which may occur in the course of SLE. He developed fever again while receiving immunosuppressive medications, and empirical antibiotics for opportunistic infections were started.
The acute deterioration of the patient's condition during treatment of SLE is suggestive of macrophage activation syndrome (MAS). The diagnosis is supported by acute leukopenia, elevated liver function tests, hepatomegaly, and a high ferritin level. This diagnosis was suggested by the clinical presentation and confirmed by bone marrow biopsy. In most cases of MAS, the bone marrow demonstrates hemophagocytosis. Urgent treatment with intravenous pulses of methylprednisolone, cyclosporine, and sometimes etanercept is generally effective. Administration of IVIG is useful in infections and in MAS, and was highly recommended in our immunosuppressed patient. Performing a chest x-ray, echocardiography, and bone marrow aspiration helps physicians decide appropriately. Keeping MAS, an occasionally fatal condition, in mind may save the patient. MAS may not show its typical manifestations at the beginning, but if it progresses, it becomes more difficult to manage.
Intravenous immunoglobulin (IVIG) 2 g/kg was administered. A chest x-ray showed possible atypical bronchopneumonia with patchy bilateral paracardiac opacities. The heart size was at the upper limit of normal.
Echocardiography revealed no pericardial effusion, no vegetation, no Libman-Sacks lesion, good systolic and diastolic function, and an ejection fraction of 50%.
A new bone marrow aspiration showed a hypocellular marrow without any specific diagnostic findings.
As no significant improvement was observed, three-day pulse therapy with methylprednisolone (30 mg/kg/day) was prescribed, and cyclosporine A was added to the regimen. Because of severe neutropenia, granulocyte colony-stimulating factor (G-CSF) was initiated on the fifth day of admission to the ICU. Packed red cells were also transfused several times.
The level of serum B-type natriuretic peptide (BNP) rises in response to abnormal ventricular wall tension, heart failure, systolic dysfunction, volume overload, and cardiomyopathy. Measurement of BNP (elevated in heart disease) can help distinguish cardiac from pulmonary causes of pulmonary edema: a BNP >500 pg/mL suggests a cardiac problem, whereas <100 pg/mL suggests lung disease. The levels of ESR, creatine phosphokinase, lactate dehydrogenase, and BNP may be elevated in acute or chronic myocarditis [1].
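The thresholds cited above amount to a simple decision rule, restated compactly below; the label for the intermediate range is our gloss, since the source does not name it:

\[ \text{suggested cause} = \begin{cases} \text{lung disease}, & \mathrm{BNP} < 100\ \mathrm{pg/mL} \\ \text{indeterminate}, & 100 \le \mathrm{BNP} \le 500\ \mathrm{pg/mL} \\ \text{cardiac disease}, & \mathrm{BNP} > 500\ \mathrm{pg/mL} \end{cases} \]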
We expected that treatment with immunosuppressives, together with antibiotics and supportive care, would cause the presenting critical condition to subside and also improve the cardiac involvement in the underlying disease process.
With medication for opportunistic infections, treatment of MAS, and supportive care, the patient's respiratory distress and general condition improved gradually. He was transferred to the Rheumatology Ward and, after 2 weeks, was discharged on an appropriate regimen for SLE.
On long-term follow-up the disease went into remission and treatment was tapered gradually. A flare-up of the disease after 1.5 years required an increase in medication, which could be reduced again after remission. After 3 years of follow-up, the disease is in remission and he is on low-dose prednisolone (5 mg daily) and hydroxychloroquine (100 mg daily).
Commentary
The patient initially came to the emergency room with mild respiratory distress, pleuritic chest pain, and other signs and symptoms indicating a chronic disease. The initial evaluation revealed a diminished growth pattern, bone pain, night sweating and fever, pleuritic dyspnea, a high ESR, and serositis, which have a long list of differential diagnoses (cardiopulmonary, infectious, rheumatologic, and malignant), but the chronic pattern of the illness made malignancies less probable. The infectious causes were ruled out by laboratory tests, and soon afterward the rheumatologic causes were taken into consideration.
MAS is a potentially fatal complication of childhood systemic inflammatory diseases [2,3]. It can be one of the causes of secondary hemophagocytic lymphohistiocytosis (HLH) [3,6], and a high ferritin level is one of the diagnostic criteria for HLH [12]. The syndrome is characterized by excessive activation of T lymphocytes and macrophages and massive production of cytokines [4]. The clinical presentation of MAS includes persistent high fever, pancytopenia, hepatosplenomegaly, hepatic dysfunction, encephalopathy, and coagulation abnormalities [2,5]. MAS can occur as a complication of rheumatic diseases, or it can be triggered by an infection or by a change in the treatment regimen [2,7,8]. A few case reports describe MAS as the first manifestation of rheumatic diseases and also of Kawasaki disease [9]. Abnormal immune reaction and regulation, leading to a loss of control over an exaggerated immune response, is one of the mechanisms suggested for MAS [9].
The diagnostic criteria for MAS complicating systemic juvenile idiopathic arthritis (s-JIA) include a decreased platelet count, elevated aspartate aminotransferase, a decreased white blood cell count, hypofibrinogenemia, central nervous system impairment, hemorrhages, hepatomegaly, and histologic evidence of macrophage hemophagocytosis in bone marrow aspirates [1].
MAS is seen most commonly in s-JIA [10] but is also diagnosed in systemic lupus erythematosus [11,13]. In our patient, MAS occurred during the treatment of SLE. MAS should always be kept in mind when the condition of a rheumatologic patient deteriorates acutely without any obvious reason; rapid initiation of treatment is critical.
"year": 2011,
"sha1": "d5c5509f3bb67cacd98a338d815574361e73feb1",
"oa_license": "CCBYNC",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "4842194fdb744442ebaf5e3db828cfd6ab6566ff",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |