An aptamer agonist of the insulin receptor acts as a positive or negative allosteric modulator, depending on its concentration

Aptamers are widely used as binders that interact with targets with high affinity or as inhibitors of the function of target molecules. However, they have also been used to modulate target protein function, which they achieve by activating the target or stabilizing its conformation. Here, we report a unique aptamer modulator of the insulin receptor (IR), IR-A62. Alone, IR-A62 acts as a biased agonist that preferentially induces Y1150 monophosphorylation of IR. However, when administered alongside insulin, IR-A62 shows variable binding cooperativity depending on the ligand concentration. At low concentrations, IR-A62 acts as a positive allosteric modulator (PAM) agonist that enhances insulin binding, but at high concentrations, it acts as a negative allosteric modulator (NAM) agonist that competes with insulin for IR. Moreover, the concentration of insulin affects the binding of IR-A62 to IR. Finally, the subcutaneous administration of IR-A62 to diabetic mice reduces blood glucose levels with a longer-lasting effect than insulin administration. These findings imply that aptamers can elicit various responses from receptors beyond those of a simple agonist or inhibitor. We expect further studies of IR-A62 to help reveal the mechanism of IR activation and greatly expand the range of therapeutic applications of aptamers.

Studying how an aptamer, a short section of RNA or DNA, affects the interaction of insulin with its membrane receptor protein offers further insights into aptamers in general. Aptamers can bind with high specificity and affinity to many target molecules and affect the activity of many proteins. Researchers in South Korea led by Sun Sik Bae at Pusan National University and Sung Ho Ryu at Pohang University of Science and Technology explored the interaction of the aptamer IR-A62 with the membrane protein that binds to and responds to insulin. Whether IR-A62 activated or inhibited insulin's interaction and effects depended on both the aptamer and insulin concentrations. In addition to increasing understanding of the insulin receptor protein, investigating this subtly variable effect could more generally refine and expand the use of aptamers in medicine.

INTRODUCTION

Aptamers are reagents that bind to a variety of targets, ranging from small molecules to cultured cells, with high affinity and specificity [1]. They are single-stranded oligonucleotides that are isolated from random oligonucleotide libraries by the in vitro selection process Systematic Evolution of Ligands by Exponential Enrichment (SELEX) [2,3]. Short oligonucleotides can fold into unique tertiary conformations, allowing aptamers to interact with their targets by specifically wrapping around or fitting into the surface structures of the target molecules [4].

The development of aptamers for clinical applications has focused on their inhibitory effects on the function of the target [5]. Recently, however, various aptamer modulators that alter the target's response to a stimulus or directly activate a target have been reported. The most common way that aptamers modulate receptor function is through receptor dimerization [6-9]. Some receptors can be activated through artificial dimerization induced by a dimeric aptamer, even though the corresponding aptamer monomer has no effect on receptor activation.
Aptamers can also function as agonists, activating target receptors and triggering downstream cellular effects that are independent of the intrinsic ligands [10,11]. The binding of an aptamer agonist to its target receptor appears to induce conformational changes similar to those induced by the interaction between the intrinsic ligand and the receptor. Moreover, some aptamers specifically recognize the conformational changes in their target receptors that are induced by the binding of intrinsic ligands [12-14]. These aptamers enhance the binding of intrinsic ligands to their target receptors and potentiate downstream signaling by stabilizing the active conformation of ligand-bound receptors. These previous findings suggest that aptamers can regulate target protein functions by inducing or stabilizing conformational changes.

In the present study, we identified a new aptamer modulator, named IR-A62, that binds to the extracellular domain of the insulin receptor (IR). IR-A62 is a biased agonist that preferentially induces Y1150 monophosphorylation of IR and selectively activates glucose uptake without inducing an increase in cellular proliferation. IR-A62 also exhibits mutual binding cooperativity with insulin with respect to its binding to IR, which varies according to the concentrations of the ligands. At low concentrations, IR-A62 and insulin act as positive allosteric modulators (PAMs), promoting the binding of the other to IR. In contrast, at high concentrations, IR-A62 and insulin act as negative allosteric modulators (NAMs), interfering with the binding of the other to IR. Given that IR forms a stable dimer, in which two monomers are linked by disulfide bonds, these results imply that IR-A62 may bind to the same site on IR as insulin [15]. To our knowledge, the ability of IR-A62 to act as both a PAM agonist and a NAM agonist is very rare among aptamers, antibodies, peptides, and small molecules. Therefore, the present findings suggest that the potential uses of aptamers as target modulators can be extended beyond their roles as simple binders, inhibitors, or agonists.

MATERIALS AND METHODS

In vitro selection of IR aptamers

We performed SELEX to identify IR-specific aptamers, as previously described [14]. Briefly, the single-stranded DNA (ssDNA) library used for SELEX consisted of a 40-mer random region flanked by 20-mer constant regions. The 40-mer random region contained 5-[N-(1-naphthylmethyl)carboxamide]-2'-deoxyuridine (Nap-dU) in place of deoxythymidine (dT) to enhance the hydrophobic interaction between the aptamer and its target. Fifty picomoles of the His-tagged recombinant extracellular domain of IR (His 28-Lys 944, R&D Systems) were incubated with 100 pmol of the ssDNA library at 37°C for 30 min in selection buffer (40 mM HEPES (pH 7.5), 102 mM NaCl, 5 mM KCl, 5 mM MgCl2, and 0.05% Tween-20). To immobilize the IR proteins, the protein and ssDNA library mixture was incubated with 20 µl of TALON Dynabeads (Invitrogen) at 37°C for 15 min. To remove unbound ssDNAs, the beads were then washed five times with 100 µl of selection buffer. One hundred seventy microliters of 2 mM NaOH solution were added to extract the ssDNA from the IR proteins, and then 160 µl of the eluate was mixed with 40 µl of 8 mM HCl for neutralization. The extracted ssDNAs were then amplified using a 5′-OH-terminal biotinylated reverse primer (IQ5 Multicolor Real-time PCR Detection System, Bio-Rad).
To immobilize the biotinylated antisense strands, the amplified DNAs were mixed with 25 µl of MyOne Streptavidin Dynabeads (Invitrogen), and 180 µl of 20 mM NaOH was added to elute the sense strands at 37°C for 5 min. After discarding the eluted sense strands, the immobilized antisense strands were washed three times with selection buffer. The beads were then incubated with 60 µl of extension reaction mix (1× KOD DNA polymerase buffer containing 500 pmol forward primer; 0.0625 U KOD DNA polymerase; 0.5 mM each of dATP, dGTP, dCTP, and Nap-modified dUTP) at 68°C for 60 min to resynthesize sense strands containing Nap-dU in place of dT. After washing three times with 180 µl of selection buffer, 180 µl of 20 mM NaOH was added to elute the sense strands. To neutralize the eluate, 175 µl of the eluate was incubated with 5 µl of 180 mM HEPES and 5 µl of 700 mM HCl. The eluted sense strands were used for the next round of selection, and after eight rounds of SELEX, the enriched ssDNA pool was sequenced.

Aptamer binding assay

The affinities of aptamers for the extracellular domains of IR (His 28-Lys 944) and the IGF-1 receptor (Gln 31-Asn 932) were measured using a filter binding assay. [γ-32P]-ATP was used to label the 5′-end of the aptamer: the reaction mix contained 1 μl of 10× T4 polynucleotide kinase buffer, 0.25 μl of 10 U/μl T4 polynucleotide kinase, 0.25 μl of [γ-32P]-ATP (3,000 Ci/mmol), and 1 pmol of aptamer, was made up to a volume of 10 μl with H2O, and was incubated at 37°C for 30 min. To remove unincorporated ATP, the mixture was loaded onto size-exclusion spin columns (MicroSpin G-50 columns, GE Healthcare). To reconstitute the aptamer structure, the mixture was heated at 95°C for 3 min and then slow-cooled to 37°C at 0.1°C/s in binding buffer (40 mM HEPES (pH 7.5), 120 mM NaCl, 5 mM KCl, 5 mM MgCl2, and 0.002% Tween-20). The aptamer was incubated with target proteins at various concentrations for 30 min at 37°C, and then the aptamer-protein mixture was incubated with 5.5 μl of Zorbax silica beads (Agilent) for 1 min with shaking to pull down the aptamer-protein complexes. The beads bound to the aptamer-protein complexes were partitioned using nitrocellulose filter plates (Millipore) and washed in binding buffer to remove unbound aptamer. The amount of 32P-labeled aptamer was measured by exposure to photographic film and quantified using an Amersham Typhoon gel and blot imaging system. The dissociation constant (Kd) of the aptamers was determined by fitting the binding data to a one-site saturation equation using SigmaPlot (Systat Software, San Jose, CA, USA).
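The one-site saturation fit can be reproduced with standard curve-fitting tools; the sketch below mirrors in Python what the paper did in SigmaPlot. The concentrations and bound-signal values are hypothetical placeholders, not data from this study.

```python
# Sketch: fitting a one-site saturation binding model, as done in
# SigmaPlot in the paper. Concentrations/signals below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def one_site(conc, bmax, kd):
    """One-site specific binding: B = Bmax * [L] / (Kd + [L])."""
    return bmax * conc / (kd + conc)

# Hypothetical protein concentrations (nM) and bound-aptamer signal
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100, 300])
bound = np.array([0.02, 0.06, 0.17, 0.36, 0.62, 0.82, 0.93, 0.97])

popt, pcov = curve_fit(one_site, conc, bound, p0=[1.0, 5.0])
bmax, kd = popt
kd_err = np.sqrt(np.diag(pcov))[1]
print(f"Bmax = {bmax:.2f}, Kd = {kd:.1f} +/- {kd_err:.1f} nM")
```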
Cell culture and adipocyte differentiation

Rat-1 cells overexpressing human IR (Rat-1/hIR) were kindly provided by Dr. Nicholas J. G. Webster of the University of California, San Diego. 3T3-L1 and MCF-7 cells were purchased from the American Type Culture Collection. High-glucose Dulbecco's modified Eagle's medium (DMEM) containing 10% (vol/vol) fetal bovine serum (FBS, Gibco) was used to maintain the MCF-7 and Rat-1/hIR cells. High-glucose DMEM containing 10% bovine serum (BS, Gibco) was used to maintain the 3T3-L1 preadipocytes prior to differentiation. All cells were incubated at 37°C in a humidified atmosphere containing 5% CO2. Before initiating differentiation, 3T3-L1 preadipocytes were cultured for 2 days after reaching confluence. DMEM containing 1 μM dexamethasone, 500 nM IBMX, 850 nM insulin, and 10% FBS was used to stimulate adipocyte differentiation; after 2 days, this medium was changed to DMEM containing 10% FBS and 850 nM insulin for an additional 2 days. Finally, this medium was replaced with DMEM containing 10% FBS alone, and the cells were incubated for 4-5 days until at least 90% of the cells exhibited lipid droplets.

Sample preparation for western blotting

To evaluate protein phosphorylation by western blotting, cells were seeded in 12-well plates. For serum starvation, the cells were incubated in medium lacking FBS for 3 h before insulin or aptamer stimulation. Aptamers and insulin were prepared in Krebs-Ringer HEPES buffer (25 mM HEPES (pH 7.4), 120 mM NaCl, 5 mM KCl, 1.2 mM MgSO4, 1.3 mM CaCl2, and 1.3 mM KH2PO4). To reconstitute their tertiary structure, aptamer samples were heated for 5 min at 95°C and then slowly cooled to room temperature. After stimulation with insulin or aptamer, the cells were washed three times with cold PBS and then lysed in lysis buffer (150 μl/well; 50 mM Tris-HCl (pH 7.4), 150 mM NaCl, 1 mM EDTA, 20 mM NaF, 10 mM β-glycerophosphate, 2 mM Na3VO4, 1 mM PMSF, 10% glycerol, 1% Triton X-100, and a protease inhibitor cocktail). The cell lysates were sonicated and centrifuged at 20,000 × g for 15 min at 4°C, and the supernatant was mixed with 5× Laemmli sample buffer. After heating at 95°C for 10 min, the proteins were separated on Bis-Tris gels and transferred to nitrocellulose membranes. The membranes were incubated in blocking buffer (PBS, 5% nonfat dried milk, and 0.1% NaN3) for 30 min at room temperature and then probed with the indicated antibodies at 4°C overnight. The membranes were then washed three times in TTBS buffer (20 mM Tris, 150 mM NaCl, 0.1% Tween-20) for 10 min each and incubated with secondary antibodies for 1 h at room temperature. After washing the membranes a further three times with TTBS buffer for 10 min each, the intensities of specific bands were analyzed using an LI-COR Odyssey infrared imaging system.

Flow cytometry

Rat-1/hIR cells were seeded in 100 mm dishes and grown to 70% confluence. To detach the cells without digesting their membrane proteins, we used PBS containing 5 mM EDTA rather than trypsin. The detached cells were then incubated in blocking buffer (PBS, 1% BSA, and 0.1% NaN3) for 30 min at 4°C on a rotating shaker at 10 rpm. The cells were then separated into equal aliquots (1 × 10⁶ cells/sample), and FITC-labeled IR-A62 or FITC-labeled insulin was diluted in blocking buffer and mixed with the cells. Unlabeled ligands (insulin or IR-A62 without FITC) were added at the same time as the FITC-labeled ligands. After incubation for 1 h at 4°C on a rotating shaker at 10 rpm, the cells were washed twice with cold PBS to remove unbound FITC-labeled ligands. They were then fixed with PBS containing 4% paraformaldehyde for 30 min at room temperature, and the binding of FITC-labeled ligands was measured by flow cytometry (BD Biosciences FACSCanto II).

2-Deoxy-D-glucose uptake

To measure 2-deoxy-D-glucose uptake, fully differentiated 3T3-L1 adipocytes were prepared in 24-well plates. Before insulin or aptamer stimulation, the 3T3-L1 adipocytes were serum-starved in DMEM without FBS for 3 h. After stimulation with insulin and/or aptamer for the described time, the cells were incubated with 2-deoxy-[14C]-glucose (0.1 µCi/ml, 500 µl/well) for 10 min and then washed three times with cold PBS containing 25 mM D-glucose.
Subsequently, 500 µl of lysis buffer (0.5 N NaOH and 1% SDS) was added to each well, 450 µl of cell lysate was mixed with 4 ml of liquid scintillation cocktail (Research Products International), and glucose uptake was measured using a liquid scintillation counter (Hidex 300 SL).

Cell proliferation assay

MCF-7 breast cancer cells were cultured at 10⁴ cells/well in 24-well plates in DMEM (low glucose (1 g/l), without phenol red and pyruvate) containing 10% FBS. After 24 h, the cells were washed twice with DMEM lacking FBS. Next, the cells were serum-starved in DMEM containing 0.5% FBS for 24 h and stimulated with insulin or IR-A62 aptamer in DMEM containing 0.5% FBS for 72 h, with the medium containing the insulin or IR-A62 aptamer being replaced every 24 h. The cells were then fixed with 4% paraformaldehyde in PBS for 30 min, and the DNA in the cells was stained using 1 µM SYTO 60 in PBS for 1 h. The relative number of cells was then determined by measuring the fluorescence of SYTO 60-stained DNA using the LI-COR Odyssey infrared imaging system.

Effect of the aptamer on the blood glucose levels of mice

To establish a model of type 1 diabetes, C57BL/6 mice were intraperitoneally injected with streptozotocin (STZ, 50 mg/kg in 0.1 M sodium citrate buffer, pH 4.5; Sigma-Aldrich) for 5 consecutive days. Their blood glucose levels were measured weekly using a blood glucose test meter (Accu-Chek Active; Roche Diagnostics) after sampling by tail-vein puncture to confirm the development of hyperglycemia. Seven weeks after treatment, the mice underwent insulin tolerance testing or aptamer tolerance testing. To determine the effects of insulin and IR-A62 on the blood glucose levels of type 1 diabetic mice, 1.5 U/kg insulin or 10 mg/kg IR-A62 was injected subcutaneously into the STZ-treated mice. To determine the effects of insulin and IR-A62 on the blood glucose levels of type 2 diabetic mice, 3 U/kg insulin or 20 mg/kg IR-A62 was injected subcutaneously into C57BLKS/J db/db and C57BLKS/J ob/ob mice. In each instance, blood glucose levels were measured at the indicated time points.

RESULTS

Identification of the IR-A62 aptamer

SELEX was performed to identify aptamers that bind to the extracellular domain of IR (His 28-Lys 944). The single-stranded DNA library used contained a 40-mer random region flanked on both sides by 20-mer constant regions that were used for PCR amplification of the library. To improve the specificity and affinity of the aptamer-protein interaction, Nap-dU was used instead of thymine bases in the 40-mer random region [16]. In this way, we obtained 41 different aptamers containing Nap-dU modifications. To evaluate the autophosphorylation of IR induced by the aptamers, Rat-1 cells overexpressing human IR (Rat-1/hIR) were stimulated with 500 nM aptamers for 1 h. We used the agonistic aptamer IR-A48 as a positive control against which to compare the efficacy of the novel aptamers [11]. Although most of the aptamers had no effect or a smaller effect than IR-A48 on IR autophosphorylation, one aptamer, IR-A62-F, significantly induced IR autophosphorylation to an extent similar to that of IR-A48. Full-length IR-A62-F is a 79-mer that contains a 39-mer variable region and two 20-mer constant regions (Fig. 1a). Furthermore, we identified a core sequence (IR-A62-T) of IR-A62-F that is essential for its agonistic activity by comparing the effects of IR-A62-F truncation variants containing 3′ or 5′ sequential deletions (data not shown). IR-A62-T consists of 25 nucleotides, of which seven are Nap-dUs, and forms a small stem-loop structure (Fig. 1b). IR-A62-T showed biased agonism, preferentially phosphorylating a specific tyrosine residue of IR, similar to IR-A48 (Fig. 1c). In contrast to insulin, which increased phosphorylation at the Y960, Y1146, Y1150, Y1151, Y1316, and Y1322 residues, IR-A62-T preferentially stimulated the phosphorylation of Y1150, which lies in the kinase domain of IR. Moreover, the reversed sequence of IR-A62-T (IR-A62-R) did not stimulate the phosphorylation of Y1150, which indicates that the agonistic effect of IR-A62-T is not caused by a nonspecific interaction with oligonucleotides.

Fig. 1 (caption, in part): b The secondary structure of IR-A62 was predicted using Mfold software. c The effect of IR-A62 on IR phosphorylation. The phosphorylation of six tyrosine residues was analyzed using site-specific antiphosphotyrosine antibodies. Rat-1/hIR cells were stimulated with 50 nM insulin for 5 min or 200 nM IR-A48, IR-A62, or IR-A62-R for 1 h. 'IR-A62-R' is the reverse sequence of IR-A62 (5′-CZGCCPAGAPCZGAGPACGACZZAC-3′).

Post-SELEX optimization of the IR-A62 aptamer

A critical limitation of the in vivo use of aptamers is their rapid degradation by serum nucleases [17]. Therefore, it is essential to improve their stability by chemically modifying the nucleotides, such as by adding a methoxy (2'-OMe) or fluoro (2'-F) group at the 2'-position of the ribose sugar. However, such chemical modifications can seriously affect the binding of the aptamer to its target. To determine whether the efficacy of IR-A62-T was affected by the incorporation of modified nucleotides, we prepared IR-A62-T variants in which each dA, dC, and dG nucleotide was substituted by the corresponding 2'-OMe derivative (mA, mC, and mG) (Supplementary Fig. 1a). The effects of the IR-A62-T variants on IR phosphorylation were then evaluated by comparison with IR-A62-T in Rat-1/hIR cells, and the results showed that 11-mG, 12-mA, 13-mG, 19-mA, 21-mC, 22-mC, and 25-mC had no effect or positive effects on the activity of IR-A62-T (Fig. 2a, Supplementary Fig. 1b). We ultimately chose the 11-mG, 13-mG, 21-mC, and 25-mC modifications to lengthen the distances between modifications, because consecutive 2'-OMe modifications significantly disturb the activity of IR-A62-T (Supplementary Fig. 1c). We also performed a similar screen of IR-A62-T variants containing the corresponding 2'-F derivatives (fA, fC, and fG) in place of each nucleotide, except at the four 2'-OMe modification sites (Supplementary Fig. 2a). The results showed that 2-fA, 6-fC, 8-fC, 12-fA, 19-fA, and 22-fC had no effect on the activity of IR-A62-T (Supplementary Fig. 2b). To evaluate the combined effects of the 2'-OMe and 2'-F modifications on IR-A62-T activity, we then tested three IR-A62-T variants containing both types of modification and found that these variants showed slightly higher activity than the original IR-A62-T (Supplementary Fig. 2c).

The placement of hydrophobic side chains at the 5-position of uracil improves the success of SELEX and increases the affinity of aptamers by adding hydrophobicity to aptamer-target interactions [16]. However, these hydrophobic sites also increase the plasma clearance of the molecules in vivo, which has a negative effect on the pharmacokinetic properties of therapeutic aptamers [18]. Therefore, to reduce the hydrophobicity of IR-A62-T, seven IR-A62-T variants were synthesized, in which each Nap-dU was replaced by 5-(N-benzylcarboxamide)-2'-deoxyuridine (Bn-dU) (Supplementary Fig. 3a).
Substitution with Bn-dU at 10-Nap, 16-Nap, and 20-Nap significantly reduced the activity of IR-A62-T (Supplementary Fig. 3b). Therefore, we ultimately selected 3-Bn, 4-Bn, 14-Bn, and 24-Bn as the most appropriate Bn-dU substitutions for IR-A62-T. The results of the testing of IR-A62-T variants with chemical modifications are summarized in Fig. 2a. The most favorable combination of substitutions was found in a derivative named IR-A62, which contained four 2'-OMe groups, six 2'-F groups, four Bn-dU side chains, and three Nap-dU side chains (Fig. 1a, Fig. 2a). The affinity (Kd) and maximal binding capacity (Bmax) of IR-A62 were slightly superior to those of unmodified IR-A62-T (Fig. 2b). Consistent with the results of the aptamer binding assay, IR-A62 was a more potent inducer of IR Y1150 phosphorylation than IR-A62-T (Fig. 2c, Supplementary Fig. 4).

Moreover, we assessed the nuclease resistance of IR-A62 using an in vitro serum stability assay, in which IR-A62-T and IR-A62 were capped with a 3′-inverted dT (3′-idT) to protect the aptamers from degradation by 3′-exonucleases in the serum and were incubated with 90% human serum at 37°C for up to 48 h. The degradation of the aptamers at various time points was then analyzed using denaturing polyacrylamide gel electrophoresis, which demonstrated that the stability of IR-A62 (serum half-life t1/2 = 24.9 h) was significantly superior to that of IR-A62-T (t1/2 = 7.4 h) (Fig. 2d). These results indicate that the combination of chemical modifications successfully improved the nuclease stability of IR-A62-T without causing any loss of agonistic activity. Therefore, all subsequent experiments were performed using IR-A62 containing these modifications.

Fig. 2 Post-SELEX optimization of IR-A62. a Summary of 2'-OMe or 2'-F substitution scans at the A, C, and G positions and Bn-dU substitution scans at the Nap-dU positions (mG: 2'-OMe G, mC: 2'-OMe C, fA: 2'-F A, fC: 2'-F C, Nap: Nap-dU, Bn: Bn-dU). The IR Y1150 phosphorylation induced by the IR-A62-T variants was compared using western blotting. The percentage values represent the Y1150 phosphorylation band intensities associated with the IR-A62-T variants compared with those associated with IR-A62-T. b The affinities of IR-A62-T and IR-A62 for the insulin receptor and the insulin-like growth factor type 1 receptor were measured using a filter binding assay. The dissociation constant (Kd) was determined by fitting the data to a one-site saturation model. Data are presented as the mean ± standard deviation of two independent replicates. c Rat-1/hIR cells were stimulated with various concentrations of IR-A62-T or IR-A62 for 1 h, and the level of IR Y1150 phosphorylation induced by the aptamers was estimated using western blotting. The relative band intensities are presented as the mean ± standard deviation of two independent replicates. d In vitro stability of IR-A62-T and IR-A62 in 90% human serum. Aptamer degradation was analyzed using denaturing polyacrylamide gel electrophoresis. Data are presented as the mean ± standard deviation of three independent replicates, and the half-life values were determined by fitting to a one-phase exponential decay model.
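The half-life estimates quoted above come from a one-phase exponential decay fit of the band intensities. A minimal sketch of such a fit follows, with illustrative time points and intact fractions rather than the measured data.

```python
# Sketch: estimating serum half-life by fitting band intensities to a
# one-phase exponential decay, as described for Fig. 2d. Data are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, half_life):
    """Fraction of intact aptamer after time t (one-phase decay from 1.0)."""
    return np.exp(-np.log(2) * t / half_life)

t_hours = np.array([0, 2, 4, 8, 16, 24, 48])
intact = np.array([1.00, 0.95, 0.90, 0.80, 0.64, 0.52, 0.26])  # fraction remaining

(t_half,), pcov = curve_fit(decay, t_hours, intact, p0=[10.0])
print(f"estimated t1/2 = {t_half:.1f} h")
```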
IR-A62 demonstrates binding cooperativity that differs in a concentration-dependent manner

To determine whether the binding site of IR-A62 is allosteric or orthosteric, we next studied the effect of IR-A62 on the binding of insulin to IR on the plasma membrane. Rat-1/hIR cells were incubated with FITC-labeled insulin (100 nM) and various concentrations of IR-A62 (3.2 nM, 16 nM, 80 nM, 400 nM, 2 µM, or 10 µM), and insulin binding was measured using flow cytometry. FITC-labeled insulin alone caused a 6.24% shift in the peak fluorescence intensity (Fig. 3a). At low IR-A62 concentrations (3.2-80 nM), coincubation of FITC-labeled insulin with IR-A62 gradually increased the peak shift, up to 64.2%. However, as the concentration of IR-A62 was increased further (400 nM-10 µM), the binding of FITC-labeled insulin gradually decreased to 2.67%, a lower level than with FITC-labeled insulin alone. To verify that the binding cooperativity between insulin and IR-A62 varies depending on concentration, we also measured the binding of FITC-labeled IR-A62 to IR in the presence of various concentrations of insulin (3.2 nM, 16 nM, 80 nM, 400 nM, 2 µM, and 10 µM). Consistent with the results of the insulin-binding assay, coincubation with low insulin concentrations (3.2-16 nM) gradually increased FITC-labeled IR-A62 binding compared with incubation with FITC-labeled IR-A62 alone (Fig. 3b). Moreover, as the concentration of insulin was increased further (80 nM-10 µM), the binding of FITC-labeled IR-A62 gradually decreased. These results imply that insulin and IR-A62 bind cooperatively in a concentration-dependent manner. At low concentrations, insulin and IR-A62 act as mutual PAMs, with the binding of one promoting the binding of the other to IR. However, at high concentrations, IR-A62 and insulin act as mutual NAMs, inhibiting each other's binding to IR.

In a previous study, we demonstrated that the enhancement of insulin binding to IR by a PAM aptamer potentiates the phosphorylation of tyrosine residues in the intracellular domain of IR [14]. As shown in Fig. 1c, insulin binding to IR leads to the autophosphorylation of tyrosine residues, and IR-A62 preferentially induces monophosphorylation of the Y1150 residue of IR. Thus, we can distinguish between insulin- and IR-A62-induced IR phosphorylation by comparing the levels of phosphorylation of Y1150 and other tyrosine residues. To determine whether the concentration-dependent cooperativity between insulin and IR-A62 affects IR autophosphorylation, we evaluated the phosphorylation of IR in the presence of 50 nM insulin and various concentrations of IR-A62 (30 nM, 100 nM, 300 nM, 1 µM, and 3 µM). The IR Y1146 phosphorylation induced by insulin increased at low IR-A62 concentrations (30-300 nM) and decreased at higher IR-A62 concentrations (1-3 µM). Because IR Y1146 phosphorylation is induced by insulin but not by IR-A62, this implies that insulin-induced IR phosphorylation can be potentiated or inhibited by concentration-dependent cooperativity with IR-A62 (Fig. 3c). Although a low level of IR Y1150 phosphorylation was induced by IR-A62 alone at low IR-A62 concentrations (30-100 nM), the level induced by coincubation with insulin and IR-A62 was significantly higher. However, as the IR-A62 concentration was increased further (300 nM-3 µM), the IR Y1150 phosphorylation induced by coincubation with insulin and IR-A62 gradually decreased to a level similar to that induced by 3 µM IR-A62 alone. These findings demonstrate that the concentration-dependent differences in the mutual cooperativity displayed by insulin and IR-A62 directly affect the autophosphorylation of IR.
IR signaling is induced by IR-A62

We have shown that IR-A62 is a biased agonist that preferentially induces Y1150 phosphorylation of IR, similar to IR-A48 (Fig. 1c). Moreover, in our previous study, we showed that IR-A48 is characterized by slower and more sustained phosphorylation kinetics of IR and downstream proteins than insulin [11]. To further investigate the signaling kinetics of IR-A62, we first compared the kinetics of the Y1150 phosphorylation of IR induced by insulin and IR-A62. In contrast to insulin, IR-A62 slowly increased the phosphorylation of IR at Y1150 over 2 h, and this phosphorylation was sustained for 8 h (Fig. 4a), which indicates that IR-A62 also induces signaling slowly but sustains it over a relatively long period. However, IR-A62 had a 4.7-fold lower EC50 (18.4 nM) for IR Y1150 phosphorylation than insulin (86.4 nM) (Fig. 4b). Furthermore, IR-A62 did not bind to the IGF-1 receptor (IGF-1R), despite the high degree of structural similarity between IR and IGF-1R (Fig. 2b). Consistent with this binding specificity, IR-A62 had no effect on the phosphorylation of IGF-1R (Fig. 4c).

To characterize the downstream signaling activated by IR-A62, we treated fully differentiated 3T3-L1 adipocytes with IR-A62 for 5 min, 1 h, or 2 h and measured the phosphorylation of IR, AKT, and extracellular signal-regulated kinase (ERK) (Fig. 4d). Stimulation with 200 nM IR-A62 for 5 min only slightly increased the phosphorylation of ERK, and the level of AKT phosphorylation induced by IR-A62 was lower than that induced by insulin, even though the level of IR Y1150 phosphorylation induced by IR-A62 was significantly higher than that induced by insulin (Fig. 4e-h). Moreover, the AKT phosphorylation induced by IR-A62 was sustained for up to 2 h, but the ERK phosphorylation was not. Taken together, these results imply that although IR-A62 induces IR Y1150 phosphorylation more potently than insulin, its effects on signaling downstream of IR are more modest than those of insulin. However, the activation of the AKT pathway by IR-A62 was sustained over a longer period than the activation induced by insulin, which is consistent with the Y1150 phosphorylation kinetics of IR.

The effects of IR-A62 on glucose uptake and cell proliferation

IR is a critical regulator of metabolic processes such as glucose uptake, fat synthesis, gluconeogenesis, and glycogenolysis [19]. Many previous studies have shown that the metabolic effects of insulin and IR mainly involve the AKT pathway rather than the MAPK pathway [20]. Because IR-A62 stimulated AKT phosphorylation in 3T3-L1 adipocytes, we next quantified the time- and dose-dependent effects of IR-A62 on 2-deoxy-glucose uptake. The timing of the effect of IR-A62 on glucose uptake was similar to the timing of its effect on IR and AKT phosphorylation (Fig. 5a): glucose uptake increased slowly over 30 min, in contrast to the rapid effect of insulin, and was sustained for up to 4 h. Moreover, in contrast to the glucose uptake following insulin stimulation, which decreased rapidly after 30 min, the glucose uptake induced by IR-A62 remained greater than half-maximal after 8 h.

Fig. 3 (caption, in part): To analyze the binding of insulin or IR-A62, the fluorescence generated by FITC was measured using flow cytometry. c IR phosphorylation resulting from costimulation with insulin and IR-A62. Rat-1/hIR cells were incubated with 50 nM insulin and various concentrations of IR-A62 for 5 min, and then IR phosphorylation was estimated using western blotting.
To compare the potency of the effects of IR-A62 and insulin on glucose uptake, we next measured 2-deoxy-glucose uptake in 3T3-L1 adipocytes after stimulation with various doses of each (Fig. 5b). The maximal glucose uptake induced by insulin or IR-A62 at concentrations >500 nM did not differ significantly. However, similar to the effects of each on the phosphorylation of AKT at S473 and T308, the glucose uptake induced by insulin was higher than that induced by IR-A62 at low concentrations (5-100 nM). Moreover, IR-A62 increased glucose uptake steeply at concentrations of 100-500 nM (Hill coefficient: 4.9), whereas insulin increased glucose uptake more gradually over the range of 5-500 nM (Hill coefficient: 1.27). Consequently, although IR-A62 has a lower EC50 for IR Y1150 phosphorylation than insulin, the EC50 of IR-A62 for glucose uptake (177.6 nM) was higher than that of insulin (36.5 nM). These results indicate that IR-A62 alone increases glucose uptake to a level comparable with the effect of insulin.

Fig. 4 Effects of the dose and duration of treatment with IR-A62 on IR Y1150 phosphorylation and downstream signaling. a IR Y1150 phosphorylation was measured following the incubation of Rat-1/hIR cells with 100 nM insulin or 100 nM IR-A62 for 1 min, 5 min, 10 min, 30 min, 1 h, 2 h, 4 h, or 8 h. The relative band intensities are presented as the mean ± standard deviation of two independent replicates. b Rat-1/hIR cells were incubated with various concentrations of insulin for 5 min or IR-A62 for 1 h. The relative band intensities are presented as the mean ± standard deviation of two independent replicates. To determine the EC50, the data were fitted to a four-parameter logistic equation. c HeLa cells were incubated with 50 nM insulin-like growth factor-1 for 10 min, 100 nM insulin for 10 min, or 1 µM IR-A62 for 1 h. IGF-1R was then immunoprecipitated to assess its phosphorylation. d Fully differentiated 3T3-L1 adipocytes were incubated with 50 nM insulin or 200 nM IR-A62 for 5 min, 1 h, or 2 h. The phosphorylation kinetics of e IR Y1150, f extracellular signal-regulated kinase (ERK) T202/Y204, g AKT S473, and h AKT T308 are presented as the mean ± standard deviation of three independent replicates.

In Fig. 3, we show that the cooperative binding of insulin and IR-A62 to IR is mutual and depends on the concentration of each. To determine whether the enhancement of insulin binding by IR-A62 potentiates glucose uptake, we measured the glucose uptake induced by IR-A62 in the absence or presence of insulin. Figure 5b shows that glucose uptake in the presence of IR-A62 began to increase at ~100 nM. Therefore, 50 nM, 100 nM, and 150 nM IR-A62 were used to stimulate 3T3-L1 adipocytes in the absence or presence of a low concentration of insulin (12.5 nM) to determine the effect of IR-A62 on insulin-induced glucose uptake (Fig. 5c). Fifty nanomolar IR-A62 alone did not induce glucose uptake, but cotreatment with insulin potentiated insulin-induced glucose uptake. In addition, the glucose uptake induced by IR-A62 alone was greater at concentrations of 100 nM or 150 nM than at 50 nM, and the glucose uptake induced by insulin together with 100 nM or 150 nM IR-A62 was greater than that induced by insulin with 50 nM IR-A62. These results indicate that IR-A62 cooperatively increases glucose uptake when coadministered with insulin.
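The EC50 and Hill coefficients quoted above are obtained from four-parameter logistic fits of the dose-response data. The sketch below shows such a fit; the dose-response values are hypothetical, not the study's measurements.

```python
# Sketch: four-parameter logistic (Hill) fit of a dose-response curve,
# of the kind used for the EC50 and Hill coefficients above. Values are
# hypothetical, not the measured glucose-uptake data.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic: response as a function of dose x."""
    return bottom + (top - bottom) / (1.0 + (ec50 / x) ** hill)

dose = np.array([5, 15, 50, 100, 150, 250, 500, 1000])  # nM
resp = np.array([0.05, 0.06, 0.08, 0.15, 0.45, 0.85, 0.98, 1.0])

popt, _ = curve_fit(four_pl, dose, resp, p0=[0.0, 1.0, 180.0, 3.0], maxfev=10000)
bottom, top, ec50, hill = popt
print(f"EC50 = {ec50:.0f} nM, Hill coefficient = {hill:.1f}")
```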
Insulin is also a growth factor: it induces the proliferation and growth of cancer cells, principally via the MAPK pathway [21]. In contrast to insulin, IR-A62 had little effect on the MAPK pathway (Fig. 4d, f). Nevertheless, we performed a cell proliferation assay in MCF-7 human breast cancer cells to determine whether IR-A62 affected cell proliferation (Fig. 5d). In this assay, insulin stimulated cell proliferation by up to 1.76-fold, but IR-A62 did not significantly change the number of cells, even at a concentration of 1 µM. Given that the glucose uptake induced by IR-A62 was maximal at 500 nM, this implies that IR-A62 is a biased agonist that selectively induces the metabolic effects of IR, similar to IR-A48.

Fig. 5 IR-A62 selectively stimulates glucose uptake but not cell proliferation. a To measure 2-deoxy-D-glucose uptake, fully differentiated 3T3-L1 adipocytes were incubated with 50 nM insulin or 200 nM IR-A62 for the indicated periods of time. b Fully differentiated 3T3-L1 adipocytes were incubated with various concentrations of insulin or IR-A62 for 30 min or 2 h, respectively. c Fully differentiated 3T3-L1 adipocytes were treated with 50 nM, 100 nM, or 150 nM IR-A62 in the absence or presence of 12.5 nM insulin for 30 min. Data are presented as the mean ± standard deviation of three biological replicates. P values were determined using one-way ANOVA followed by Tukey's multiple comparisons test. d MCF-7 breast cancer cells were incubated with various concentrations of IR-A62 or insulin for 72 h. Cell proliferation was quantified by measuring the amount of SYTO 60-stained DNA using an LI-COR Odyssey infrared imaging system. Data are presented as the mean ± standard deviation of two independent replicates.

IR-A62 reduces glycemia in diabetic mice

Our in vitro data demonstrate that IR-A62 is a biased agonist that induces glucose uptake but not cellular proliferation. Moreover, IR-A62 is stable when exposed to serum nucleases in vitro (t1/2 = 24.9 h). Therefore, to investigate the effect of IR-A62 on blood glucose in vivo, we compared the effects of subcutaneous injections of insulin or IR-A62 on the blood glucose levels of diabetic mice. We established a model of type 1 diabetes by administering STZ to C57BL/6 mice, in which basal glucose levels were maintained at ~450 mg/dl (Fig. 6a). The subcutaneous injection of either insulin or IR-A62 markedly reduced blood glucose levels within 1 h, and these levels gradually returned to baseline over the next 2 h. The kinetics and magnitudes of the effects of insulin and IR-A62 on blood glucose did not differ significantly. Next, we administered insulin or IR-A62 subcutaneously to ob/ob and db/db mice, which are well-established models of type 2 diabetes [22]. Both insulin and IR-A62 markedly reduced their blood glucose levels within 1 h (Fig. 6b, c), but the blood glucose levels of mice administered insulin returned to resting levels within the following 3 h, whereas those of mice administered IR-A62 did not. These results imply that IR-A62 lowers blood glucose to a similar extent as insulin, but the kinetics of its effect differ according to the mouse model used.

To determine whether IR-A62 increases IR and AKT phosphorylation in peripheral tissues in the same way as insulin, we administered insulin (1.5 U/kg) or IR-A62 (10 mg/kg) subcutaneously to normal C57BL/6 mice (n = 3). After 30 min, adipose tissue samples were collected, and AKT (S473) and IR (Y1150) phosphorylation were measured (Fig. 6d).
Consistent with the in vitro findings, both insulin and IR-A62 increased AKT and IR phosphorylation. This implies that the ability of IR-A62 to reduce blood glucose in vivo may be the result of the activation of IR in peripheral tissues.

Fig. 6 IR-A62 administration reduces the blood glucose levels of diabetic mice. a Streptozotocin-treated mice were subcutaneously administered vehicle (PBS), 1.5 U/kg insulin, or 10 mg/kg IR-A62. b ob/ob and c db/db mice were subcutaneously administered vehicle (PBS), 3 U/kg insulin, or 20 mg/kg IR-A62. Data are presented as the mean ± standard deviation (n = 6 mice/group). d Effect of IR-A62 on IR and AKT phosphorylation in adipose tissue. Normal mice were subcutaneously administered vehicle (PBS), 1.5 U/kg insulin, or 10 mg/kg IR-A62. The adipose tissues were collected 30 min after administration. Data are presented as the mean ± standard deviation (n = 3 mice/group).

DISCUSSION

In this study, we identified a new agonistic aptamer, IR-A62, which induces the phosphorylation of IR by binding to the extracellular domain of IR. IR-A62 preferentially stimulates Y1150 monophosphorylation in the kinase domain of IR. This is a unique property of agonistic aptamers for IR, because insulin induces the phosphorylation of all six tyrosine residues. Moreover, the effects of IR-A62 on insulin binding depend on the ligand concentration. At low concentrations, IR-A62 acts as a PAM, potentiating both insulin binding and IR phosphorylation. Conversely, at high concentrations, IR-A62 acts as a NAM, inhibiting both insulin binding and IR phosphorylation. Because IR-A62 alone acts as an agonist and activates IR, its classification as a PAM agonist or a NAM agonist depends on the concentration at which it is used. The cooperativity between IR-A62 and insulin is also mutual: insulin likewise enhances or inhibits the binding of IR-A62 to IR in a concentration-dependent manner. To our knowledge, this variable cooperativity between IR-A62 and insulin, which depends on the concentration of each, represents a phenomenon that has not been reported to date.

The biased agonism of IR-A62 is similar to that described for IR-A48, another agonistic aptamer for IR [11]. These two aptamer agonists preferentially stimulate the Y1150 monophosphorylation of IR. Moreover, their selectivity for downstream signaling to metabolic endpoints is also identical: both IR-A62 and IR-A48 stimulate AKT phosphorylation and glucose uptake but have little effect on ERK phosphorylation or cellular proliferation. However, one critical difference between IR-A62 and IR-A48 lies in their binding properties. IR-A48 is an allosteric modulator that exerts its effects independently of insulin binding, whereas IR-A62 and insulin exhibit binding cooperativity that depends on their concentrations. The effects of IR-A62 on insulin binding are somewhat similar to those of the IR-A43 aptamer [14]. IR-A43 alone cannot stimulate IR phosphorylation but acts as a mutual PAM with insulin: it enhances not only insulin binding to IR but also IR phosphorylation, downstream signaling, and insulin-stimulated glucose uptake. The binding site of IR-A43, which was identified by IR mutation studies, is an allosteric site distinct from the insulin-binding site. However, IR-A62 is not identical to IR-A43, in that it competes with insulin for binding at high concentrations. Thus, IR-A62 has complex effects that appear to combine those of IR-A48, IR-A43, and a NAM agonist.

We speculate that the unique cooperativity demonstrated by IR-A62 is dictated by the structure of IR. The activity of IR-A62 as a PAM suggests that IR-A62 binds to an allosteric site distant from the insulin-binding site of IR, whereas its NAM activity implies that the binding of IR-A62 to IR may be competitive with insulin. These two apparently contradictory conclusions can be reconciled by the fact that IR is a dimer with two insulin-binding sites. In contrast to other members of the receptor tyrosine kinase family, which exist as monomers when not binding ligands, IR always exists as a dimer that is stably linked by disulfide bonds [23].
Inactive IR, in the absence of insulin, has a symmetrical inverted V-shaped structure [24,25]. Although one IR dimer has two insulin-binding sites, the binding of one molecule of insulin to a receptor dimer can initiate dimer activation [26]. One insulin molecule forms a complex with the leucine-rich repeat 1 (L1) and α-helical C-terminal domains of IR, which alters the receptor conformation to an asymmetric inverted L-shaped structure. Moreover, at high insulin concentrations, up to four insulin molecules can bind to a receptor dimer [27]. The binding of two or more insulin molecules to the receptor dimer causes the formation of a T-shaped structure because of the translocation of both L1 domains toward the FnIII-1 domains. Therefore, one plausible model for the effects of IR-A62 is that, at low ligand concentrations, one insulin molecule and one IR-A62 molecule bind to the two insulin-binding sites of a receptor dimer. The joint binding of insulin and IR-A62 may confer structural stability on the ligand-receptor complex, which may reduce the dissociation of the bound ligands. However, as the concentration of insulin or IR-A62 increases, the proportion of receptor dimers in which two insulin or two IR-A62 molecules occupy both insulin-binding sites increases. Therefore, at a high concentration of insulin or IR-A62, each ligand competes with the other for the opposite binding site, thereby interfering with the other's binding. To confirm the veracity of this model, further structural studies of the IR-A62-IR complex are needed. We also expect that such structural studies will help further elucidate the mechanism of IR activation.
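A toy equilibrium calculation can illustrate how this dimer-based scheme produces a PAM-to-NAM switch. The sketch below is purely illustrative and is not fitted to any data in this study: it enumerates the occupancy states of a two-site receptor dimer binding insulin (A) and aptamer (B), with an assumed stabilization factor for the mixed insulin-aptamer state. Under these assumptions, insulin occupancy first rises and then falls as the aptamer concentration increases.

```python
# Toy equilibrium model of a two-site receptor dimer binding insulin (A)
# and aptamer (B). The mixed A+B state gets a cooperativity bonus alpha > 1.
# Illustrative only: all parameter values are assumptions, not fitted values.

def insulin_occupancy(a, b, alpha):
    """Average number of insulin molecules bound per dimer.

    a, b: ligand concentrations in units of their dissociation constants.
    States and statistical weights: empty (1), one A (2a), one B (2b),
    AA (a^2), BB (b^2), mixed AB (2*alpha*a*b).
    """
    z = 1 + 2*a + 2*b + a**2 + b**2 + 2*alpha*a*b   # partition function
    bound_a = 2*a + 2*a**2 + 2*alpha*a*b            # A count weighted by state
    return bound_a / z

a = 0.2        # fixed sub-saturating insulin level ([A]/KA)
alpha = 10.0   # assumed stabilization of the mixed insulin-aptamer complex
for b in [0.0, 0.1, 0.5, 1.0, 5.0, 50.0]:
    print(f"[B]/KB = {b:>5.1f} -> insulin bound per dimer = "
          f"{insulin_occupancy(a, b, alpha):.3f}")
# The printed occupancy rises above the b = 0 value at low aptamer levels
# (PAM-like) and falls toward zero at high levels (NAM-like).
```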
Consistent with the results of the in vitro glucose uptake assay, we showed that subcutaneous IR-A62 administration reduces the glycemia of diabetic mice to the same degree as insulin over 1 h. Furthermore, the serum insulin concentration may be important for the in vivo activity of IR-A62. In STZ-treated mice, the reduction in blood glucose induced by IR-A62 was gradually reversed after 1 h, whereas it lasted up to 3 h in ob/ob and db/db mice; the effects of insulin on blood glucose slowly dissipated after 1 h in all three mouse models of diabetes. This difference may be explained by the mutual binding cooperativity of IR-A62 and insulin. One of the important differences among the STZ-treated, ob/ob, and db/db mice is their serum insulin concentrations. STZ reduces the serum insulin concentration by causing the necrosis of pancreatic beta cells, whereas in ob/ob and db/db mice, obesity and insulin resistance result in significant hyperinsulinemia [28]. Given that IR-A62 and insulin show mutual binding cooperativity for IR, a plausible explanation is that the binding of IR-A62 to IR in ob/ob and db/db mice is rendered more stable than in STZ-treated mice by the positive cooperativity afforded by the high serum insulin concentration. The long-lasting blood-glucose-lowering effect of IR-A62 identified in vivo suggests that IR-A62 has potential as an addition to or a substitute for long-acting insulin in the treatment of diabetes.

For many years, aptamer research focused mainly on the binding of aptamers to their targets. However, recent studies have shown that aptamers can regulate the activities of their targets. Aptamers can potentiate the binding and activity of intrinsic ligands by recognizing specific conformations of target receptors [12-14]. Moreover, an allosteric aptamer can activate its target receptor and initiate biased signaling without the need for the intrinsic ligand [11]. The most important differences between aptamers and antibodies are that aptamers are much smaller and recognize the surface structure of the target protein [4,29]. Therefore, we predict that aptamers can bind to various sites on a target protein, thereby inducing complex conformational changes or, conversely, stabilizing the target protein in specific conformations. It remains difficult to explain the complex properties of IR-A62 because structural analyses of receptor modulation by aptamers have not been reported to date. However, we speculate that IR-A62 may induce changes in the structure of IR that differ from those induced by insulin. We believe that the present findings will suggest new directions for aptamer research and use. The discovery and further characterization of aptamers with unique properties, such as IR-A62, may expand their potential for use as target modulators with effects distinct from those of antibodies.
Polystyrene-Impregnated Glulam Resistance to Subterranean Termite Attacks in a Laboratory Test

This study aimed to enhance the resistance of tropical fast-growing tree species to subterranean termite (Coptotermes curvignathus) attacks through the manufacturing of polystyrene glued-laminated timber (glulam). Three young tropical wood species, namely manii (Maesopsis eminii), mangium (Acacia mangium), and rubber-wood (Hevea brasiliensis), were cut into laminae. After drying, the laminae were impregnated with styrene monomer, which was then polymerized using potassium peroxydisulfate as a catalyst and heat. The polystyrene-impregnated laminae were assembled into three-layered glulam using isocyanate glue and a cold press. Untreated (control) glulam and solid wood specimens were also prepared. The specimens of each wood species and wood product (solid wood, control glulam, and polystyrene glulam) were exposed to the termite in a laboratory test according to Indonesian standards. The results showed that mangium wood had better resistance to termite attack than manii and rubber-wood, which performed similarly to each other. Among the wood products, the two glulams were equal and had higher resistance to termite attack than solid wood. To enhance the termite resistance of polystyrene glulam, we suggest increasing the polystyrene polymer loading of each lamina. In our evaluation of the products' order of priority, polystyrene glulam emerged as performing best against subterranean termite attack.

Introduction

The demand for wooden products increases continuously every year. In the past decade, Indonesian log production increased by 2% each year, and currently 87% of log production comes from plantation forests [1,2]. In 2020, log production reached 61 million m³, with mangium (Acacia mangium) wood from plantation forests dominating at 52.6% [3]. Wood from plantation forests usually has a small diameter and low natural resistance, and it is especially susceptible to subterranean termite attacks. Previous studies estimated that the economic losses caused by termite attacks have reached 1 billion USD [4,5]. To overcome this issue, numerous methods have been developed for improving wood resistance and utilizing small-diameter logs, including heat treatment, impregnation, and chemical modification [6-12].

Over the last decade, research on non-toxic biocides has been conducted intensively to find environmentally friendly products. Rowell [13] stated that acetylation, a well-known, non-toxic wood-modification approach, could improve the resistance of yellow pine to Gloeophyllum trabeum brown rot decay. The durability of this acetylated wood in ground-stake testing after 10 years was also reported [14]. Meanwhile, Huaxu et al. [15] noted that citric acid-bonded rubber-wood particle board has significantly better fungal and termite resistance than urea-formaldehyde (UF)-bonded particle board. Furthermore, some studies used hydrothermal treatment to improve wood's resistance to biodeteriorating agents [16].

Regarding the utilization of wood from the plantations mentioned above, some researchers have reported that small-diameter logs made into glued-laminated timber (glulam) were not different from solid wood in terms of physical and mechanical characteristics.
Yet, compared to solid wood, the glulam had higher values for the modulus of rupture (MOR), modulus of elasticity (MOE), and hardness, but lower shear strength [17]. Moreover, polystyrene-impregnated wood had increased resistance against subterranean termites [18]. Hence, these studies suggest that wood products with high termite resistance can be developed by combining modification methods.

Among several modification methods, impregnation with polystyrene is attractive because the polystyrene may be recovered from waste plastic products [19,20]; accordingly, this treatment has the potential to reduce end-product costs [21]. Wood modification by polystyrene impregnation increased wood's resistance to subterranean termites by 80% when exposed in a field test [22]. Furthermore, after a one-year exposure in the field, polystyrene wood had a weight loss of only 20%, while the untreated wood failed totally [23]. In other research, untreated sugi (Cryptomeria japonica) wood had a weight loss of 43.8% after exposure to subterranean termites in a laboratory test, while polystyrene-impregnated wood lost only 6.8% [18].

Polystyrene glulam can be constructed from plies of polystyrene-impregnated laminae using adhesive and pressure. Hadi et al. [18] compared some physical and mechanical properties of polystyrene glulam with those of untreated glulam, reporting that they did not differ in color, shear strength, or MOR; polystyrene glulam had higher density and hardness, but lower moisture content and MOE. In other work, Hadi et al. [17] reported that polystyrene glulams of three fast-growing wood species had lower values for MOR and MOE, equal shear strength and wood failure, and higher hardness than the untreated glulam, and both glulams showed slight delamination in the hot water test. In further research, Nurhanifah et al. [24] showed that the shear strength of sengon (Falcataria moluccana) solid wood was not significantly different from that of polystyrene glulam, meaning that the impregnation of styrene into sengon wood did not impair the gluing process. Furthermore, it was reported that the polystyrene-impregnated glulam had better resistance to termite attacks than solid wood, although its performance was not significantly different from that of the untreated glulam.

Reflecting on the research listed above, since polystyrene wood and its glulam demonstrated better mechanical properties and resistance against termite attacks than untreated wood, the objective of this study was to enhance the resistance of other tropical fast-growing tree species to subterranean termite (Coptotermes curvignathus) attacks through the manufacturing of glulam made from polystyrene wood laminae. The young tropical wood species studied were manii (Maesopsis eminii), mangium (Acacia mangium), and rubber-wood (Hevea brasiliensis).

Materials

Wood for glulam manufacturing was sourced from people's plantation forests in the Bogor area, West Java, Indonesia. The log species were manii (Maesopsis eminii Engl.), mangium (Acacia mangium Willd.), and rubber-wood (Hevea brasiliensis Muell Arg.). All logs had a diameter of less than 20 cm and were cut from young stands less than 10 years old. The logs were cut into flat-sawn timber for lamina manufacturing at 1.67 cm by 6 cm by 50 cm (thickness, width, and length, respectively) and then kiln-dried to about a 12% moisture content.
The recorded MOE and MOR values of manii, mangium, and rubber-wood solid wood were 4.5 ± 0.4 GPa and 42.7 ± 3.3 MPa, 10.5 ± 1.5 GPa and 79.4 ± 8.4 MPa, and 6.1 ± 0.6 GPa and 50.5 ± 8.6 MPa, respectively. The MOE and MOR of the control glulams for manii, mangium, and rubber-wood were 6.8 ± 0.4 GPa and 61.7 ± 4.9 MPa, 12.1 ± 1.1 GPa and 102.1 ± 9.0 MPa, and 8.3 ± 0.4 GPa and 72.7 ± 4.4 MPa, respectively. Furthermore, the MOE and MOR of the polystyrene glulams of manii, mangium, and rubber-wood were 5.7 ± 0.5 GPa and 48.6 ± 3.2 MPa, 11.3 ± 0.6 GPa and 81.6 ± 7.6 MPa, and 7.5 ± 0.9 GPa and 64.9 ± 8.7 MPa, respectively [11]. The styrene monomer and the potassium peroxydisulfate used as a catalyst were bought from TokoFRP and PT. Merck Indonesia Tbk, Jakarta, Indonesia.

Glulam Manufacturing and Its Properties

The laminae and glulam manufacturing process is shown in Figure 1. Prior to glulam manufacturing, the modulus of elasticity (MOE) of each lamina was estimated using a non-destructive testing system by means of a wood-grading device, a Panter MPK-5 made by IPB University, Bogor, Indonesia [25]. The laminae were then classified according to their MOE values: laminae with higher MOE values were used for the outer layers of the three-layered glulam, while those with lower values were used for the inner layers.

Polystyrene-impregnated laminae were prepared by weighing the laminae and exposing them to a vacuum of 600 mmHg for 30 min in a tank. For the impregnation process, potassium peroxydisulfate was added as a catalyst to styrene monomer (1:100 v/v), and the solution was introduced into the tank as the vacuum was released. Afterward, a pressure of 10 kg/cm² was applied for another 30 min. After the impregnation process, each lamina was wrapped in aluminum foil and placed in an oven at 80 °C for 24 h. The foil was then removed, and each lamina was weighed to calculate the polymer loading or weight percent gain (WPG). The specimens were conditioned at room temperature for two weeks.

The three-layered glulam was manufactured using the laminae with higher MOE in the face and back layers, while the lamina with the lowest MOE was used for the core layer. The laminae were placed with the longitudinal fiber orientation along the length of the glulam. Isocyanate glue was applied as a single spread glue line at 280 g/m² [17], and the laminae were then cold-pressed at a specific pressure of 10 kg/cm² for 3 h, followed by conditioning at room temperature for two weeks in the laboratory of the Centre for Standardization of Sustainable Forest Management Instruments, Ministry of Environment and Forestry, Bogor, Indonesia. For comparison purposes, untreated (control) glulam and solid wood specimens were also prepared. Six replications of the test specimens were manufactured for each treatment combination of wood species and wood product.
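The WPG referred to above is computed from each lamina's weight before impregnation and after polymerization. A minimal sketch follows; the example weights are hypothetical.

```python
# Sketch: weight percent gain (WPG) of polystyrene loading per lamina.
# WPG = (weight after polymerization - initial weight) / initial weight x 100.
# Example weights are hypothetical.
def weight_percent_gain(w_before_g: float, w_after_g: float) -> float:
    return (w_after_g - w_before_g) / w_before_g * 100.0

print(f"WPG = {weight_percent_gain(210.0, 243.0):.1f}%")  # -> 15.7%
```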
Physical Properties' Determination
The physical properties measured were the density and moisture content, according to Japanese Agricultural Standard JAS 234-2003 [26].

Laboratory Test of Termite Attack
The Indonesian standard SNI 7207-2014 for subterranean termite attacks in laboratory tests [27] was used in this study. To begin, 200 g of sterilized sand with a moisture content 7% under its water-holding capacity was placed in a glass container, and a test wood specimen was then added to the container. The wood specimen stood almost vertically on the bottom of the container, touching its side. Two hundred healthy and active Coptotermes curvignathus Holmgren subterranean worker termites from a laboratory colony were added to each container. The test unit is shown schematically in Figure 2. The containers were left in a dark room, at 25 °C to 30 °C and 80% to 90% relative humidity, for 4 weeks.
The containers were weighed weekly, and if the moisture content of the sand had decreased by 2% or more, water was added to restore the standard moisture content. At the end of the exposure period, the wood specimens were cleaned and then placed in an oven at 100 °C to reach their oven-dry weight. The endpoints evaluated were the wood density, moisture content (MC) of the wood, termite mortality, wood weight loss, wood resistance class based on the percentage of wood weight loss, and termite feeding rate, based on the work of Hadi et al. [18]. The protection level of wood against termite attacks was rated according to Table 1. Furthermore, based on the weight loss (WL), the wood resistance class against subterranean termites could be classified by referring to SNI 7207-2014 [28], as shown in Table 2.

Prioritizing Wood Species and Wood Product
The wood species and wood products were quantified for each parameter or response, sorted, and scored numerically using the Likert scale [29], from low to high. A low score indicated low priority, and a higher score indicated higher priority. The letters a, b, and c for the wood species, and p, q, and r for the wood products, corresponded to scores of one, two, and three, respectively. All letters referred to Duncan's multi-range test result for each wood species and wood product. The total score was obtained by summing all parameter scores, including the termite mortality, feeding rate, weight loss, resistance class, and protection level. A higher total score reflected a higher priority of wood species and wood product.

Data Analysis
The data were analyzed in a completely randomized block design with two factors, wood species and wood product. The wood species, as a block factor, consisted of three levels, namely manii, mangium, and rubber-wood. The wood product factor also consisted of three levels, namely solid wood, control glulam, and polystyrene glulam. Duncan's multi-range test was carried out for further analysis when a main factor was significant at p ≤ 0.05 [30].
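A minimal sketch of the endpoint arithmetic behind the weight loss and feeding rate just defined, assuming oven-dry weights in grams, 200 termites per container, and a 4-week (28-day) exposure; the example weights are hypothetical.

```python
# Minimal sketch of the termite-test endpoints described above. Assumes the
# feeding rate is computed against the initial 200 termites; the paper's
# exact convention may differ.

def weight_loss_percent(w0_g: float, w1_g: float) -> float:
    """Weight loss WL (%) from oven-dry weights before (w0) and after (w1)."""
    return 100.0 * (w0_g - w1_g) / w0_g

def feeding_rate_ug(w0_g: float, w1_g: float,
                    n_termites: int = 200, days: int = 28) -> float:
    """Wood consumed per termite per day, in micrograms."""
    return (w0_g - w1_g) * 1e6 / (n_termites * days)

# Hypothetical specimen: 10.0 g before, 8.6 g after exposure.
print(f"WL = {weight_loss_percent(10.0, 8.6):.1f}%")               # 14.0%
print(f"rate = {feeding_rate_ug(10.0, 8.6):.0f} ug/termite/day")   # 250
```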
Physical Properties
The physical properties, namely the weight percent gain (WPG), density, and moisture content (MC), of each wood species and wood product are presented in Table 3. The summary of the variance analysis is shown in Table 4, and Duncan's multi-range test is described in Table 5. As can be seen in Tables 4 and 5, the wood species highly significantly affected the WPG of polystyrene polymer loading. Rubber-wood had the smallest WPG due to having the highest density and differed from the other wood species, while the remaining two species were largely the same. These results were in line with Hadi et al. [22], who stated that a higher-density wood species produces a lower WPG because it has a smaller void volume. The WPG (10 to 21%) in this study was much lower than that of polystyrene-impregnated Polish wood (88 to 135%) [23]. Moreover, the wood density was highly significantly affected by not only the wood species but also the wood product (Tables 4 and 5). Manii had the lowest density, followed by mangium and rubber-wood, with the three wood species significantly different from each other. The density values were in line with Martawijaya et al. [31]. In regard to wood products, solid wood had the lowest density, followed by control glulam and polystyrene glulam; the three products were significantly different from each other. Control glulam had a higher density than solid wood due to its glue line and the press treatment. The polystyrene glulam had the highest density because it had polystyrene impregnated into each lamina. Nevertheless, the WPG should be increased to achieve better physical properties.

The wood species and wood products did not affect the MC. All wood specimens had the same MC, typical of the Bogor area; as Kadir [32] stated, the MC there varies from 12% to 18%. The MC of the wood products, meanwhile, varied from 10.4% to 11.8%, a range that satisfied JAS 234-2003 [26].

Termite Test
The responses from the laboratory tests for termite resistance, including termite mortality, wood weight loss, wood resistance class, protection level, and termite feeding rate, are presented in Table 6 (weight loss, resistance class, attack degree, mortality, and feeding rate of the laboratory test). The analysis of variance outcome is shown in Table 4, Duncan's multi-range test is summarized in Table 5, and images of the wood specimens after the test are shown in Figure 3.

According to the variance analysis in Table 4, termite mortality was affected by the wood species and wood products. The multi-range test presented in Table 5 showed that termite mortality on mangium (61.1%) was the highest and differed from that on manii (7.6%) and rubber-wood (8.5%), which were almost the same. Mihara et al. [33] also noted that mangium heart-wood contained flavonoids (2,3-trans-3,4′,7,8-tetrahydroxyflavanone, teracacidin, 4′,7,8-trihydroxyflavanone, and 3,4′,7,8-tetrahydroxyflavanone) that could resist fungal (P. noxius and P. badius) attacks. These findings may indicate that those flavonoids also act as termiticides. Furthermore, this study found that the termite mortality on mangium wood was very high. Regarding classifications, Oey [34] noted that mangium belongs to termite resistance class III, while manii and rubber-wood belong to class V [35,36]; in the Indonesian standard [28] (see Table 2), class I is very resistant, while class V is very poorly resistant, to subterranean termite attack.

In terms of wood products, polystyrene glulam (38.5%) had the highest termite mortality, followed by control glulam (25.6%) and then solid wood (13.1%); the three wood products were significantly different from each other. Polystyrene glulam had the highest termite mortality because it had the highest density. This finding was in line with that of Arango et al. [37], who noted that higher wood density could confer higher resistance to subterranean termite attack. The results were also in line with those of Hadi et al. [18], who noted that polystyrene wood had much higher termite mortality than solid wood.
Referring to the variance analysis in Table 4, the wood weight loss was significantly affected by the wood species and wood products. In the multi-range test results (see Table 5), mangium (10.0%) had the lowest weight loss, which was significantly different from manii (20.4%) and rubber-wood (19.3%), the two of which were almost the same. These results were consistent with the termite mortality, which was again highest for mangium: put simply, mangium left the fewest living termites to feed on the wood, so its weight loss was the lowest. In terms of the wood products, solid wood had the highest weight loss (23.8%), followed by control glulam (14.2%) and polystyrene glulam (11.7%), with the two glulams almost the same. These glulams had a higher density than solid wood, and they also had a glue line; both factors could have decreased the wood weight loss. The weight loss of polystyrene glulam was slightly lower than that of control glulam; however, the difference was not statistically significant. This result matched that of Nurhanifah et al. [24], who reported that solid sengon (Falcataria moluccana) wood was significantly less resistant than control glulam and polystyrene glulam, with the two glulams almost the same. To achieve a much lower weight loss for polystyrene glulam, we suggest increasing the polystyrene weight gain of the laminae. Referring to Hadi et al. [23], after a one-year field exposure, four Polish woods with polymer loadings between 88 and 135% had a weight loss of 19%, while the untreated woods had 100% weight loss, i.e., totally failed. Nevertheless, the polystyrene weight gain of the laminae should be balanced against the resulting mechanical properties, as mentioned by Hadi et al. [17], who noted that the shear strength and modulus of rupture of polystyrene glulam were lower than those of solid wood.

The weight loss reflected the resistance class of the wood in the laboratory test, as described in Table 2. Referring to Table 4, the resistance class was affected by the wood species and wood products. Based on the multi-range test in Table 5, mangium wood had the highest resistance class, which differed from manii and rubber-wood, which were almost the same. This finding was in line with that for the original or untreated wood species, confirming that mangium belongs to resistance class III, while the others belong to class V. In terms of the wood product factor, solid wood, which belonged to the lowest class (average class 4.5), differed from control glulam and polystyrene glulam, with the two glulams almost the same. This pattern was in line with the wood weight losses.

The protection level indicated how much of the wood specimen remained compared to its original condition; the highest value of 10 indicated that the specimen was very resistant, while zero indicated failure, as described in Table 1. The protection level was affected by the wood species and wood products. Mangium wood, with a value of 8.3, was the most resistant, followed by rubber-wood (7.6) and manii (6.0); for this factor, the three wood species were significantly different from each other. These protection levels were in line with the weight losses found in this study.
Regarding wood products, solid wood was the most susceptible to termite attack, as indicated by the lowest protection level (5.8). The performance of solid wood was significantly different from that of control glulam (7.5) and polystyrene glulam (7.7), with the two glulams almost the same. These findings matched those of Hadi et al. [18], who noted that control glulam had a lower wood weight loss than solid wood, and Hadi et al. [22], who reported that polystyrene wood had a lower weight loss and a higher protection level than solid wood. In other words, the glulams were more resistant than the solid wood.

The wood consumption of each termite per day, or feeding rate, was highly affected by the wood species and wood products, as shown in Table 4. Referring to the multi-range test presented in Table 5, rubber-wood had the highest feeding rate (134.6 µg/termite/day), which differed from mangium (93.4 µg/termite/day) and manii (86.9 µg/termite/day), the two of which were almost the same. Rubber-wood belonged to the very poor resistance class, i.e., it was very easily attacked by termites (resistance class V, the lowest class in the Indonesian standard), and it had the highest density (0.73 g/cm³); together, these findings indicate that the termites consumed this wood very readily, with high mass feeding. For solid rubber-wood, the feeding rate in this study reached 147 ± 13 µg/termite/day, similar to the value of 129 ± 10 µg/termite/day reported by Arinana et al. [36]. Likewise, the feeding rates for manii and mangium solid wood were 131 ± 4 µg/termite/day and 100 ± 10 µg/termite/day, respectively, similar to those found by Hadi et al. [18] (145 ± 41 µg/termite/day and 122 ± 104 µg/termite/day, respectively). In other words, the feeding rates for solid wood in this work were similar to the findings of other studies. According to Table 5, the feeding rate for solid wood was the highest (125.8 µg/termite/day) and significantly different from those of control glulam (100.0 µg/termite/day) and polystyrene glulam (89.0 µg/termite/day), which also differed from one another. This finding was in line with the wood weight loss, where solid wood had the highest weight loss, followed by control glulam and polystyrene glulam.

Priority Product
According to Table 7, in terms of the wood species, mangium should be prioritized. In terms of wood products, polystyrene glulam should be prioritized, followed by control glulam and then solid wood. These priorities were decided based on the classification of the results for specific critical parameters in Table 5. A parameter that reflects positively on the treatment (e.g., termite mortality, protection level) improves the priority value, while a parameter that reflects negatively on the treatment (e.g., weight loss, resistance class, and feeding rate) decreases it. Thus, the highest weight loss values due to termite attacks lower the priority values, while the highest termite mortality values raise them. Note: Numbering came from scoring based on Table 5, with a and p as 1, b and q as 2, and c and r as 3.
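A minimal sketch of the Likert-style scoring described in the Note above; the letter assignments in the example are hypothetical, not those of Table 5.

```python
# Minimal sketch of the priority scoring: Duncan's-test letters (a/b/c for
# species, p/q/r for products) are mapped to scores 1/2/3 and summed over
# the five parameters (mortality, feeding rate, weight loss, resistance
# class, protection level). The example letters are placeholders.

SCORE = {"a": 1, "b": 2, "c": 3, "p": 1, "q": 2, "r": 3}

def total_score(letters_per_parameter: list[str]) -> int:
    """Sum the Likert scores across all parameters."""
    return sum(SCORE[x] for x in letters_per_parameter)

# Hypothetical product scored 'r' on three parameters and 'q' on two:
print(total_score(["r", "r", "q", "r", "q"]))  # -> 13
```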
Conclusions
From the discussion above, it can be concluded that mangium wood has a better resistance class (class III, moderately resistant) against subterranean termite attacks than manii and rubber-wood, which belong to the lowest class (class V, very poorly resistant) of Indonesian standard SNI 7207-2014. In terms of wood products, solid wood has the lowest resistance class (class V) against subterranean termite attacks when compared with control glulam and polystyrene glulam (both class IV, poor resistance); the two glulams were equivalent and more resistant than solid wood. To further enhance polystyrene glulam's resistance to termite attacks, the polystyrene loading of each lamina should be increased, while keeping the mechanical properties, including the modulus of elasticity, modulus of rupture, and shear strength, at acceptable levels. Of the wood products studied, polystyrene glulam should be prioritized since it demonstrated the best resistance to subterranean termite attacks.
C-C chemokine receptor type 6 modulates the biological function of osteoblastogenesis by altering the expression levels of Osterix and OPG/RANKL

deficiency in the culture treated with 1,25(OH)2D3/PGE2, while there was no effect observed in the normal culture environment. The results provide novel insights, namely that CCR6 deletion suppresses osteoblast differentiation by downregulating the expression levels of the transcription factor Osterix, and indirectly promotes osteoclast production by increasing the transcription of RANKL. This may be one of the mechanisms via which CCR6 deletion regulates bone metabolism.

Introduction
Bone metabolism balance is an important factor in maintaining the health of the body. Osteoporosis is a group of systemic skeletal diseases characterized by low bone mass, degeneration of the bone microstructure, increased bone fragility and fracture sensitivity (1,2). The main pathogenesis is an imbalance of bone remodeling. Bone remodeling mainly comprises bone formation and bone resorption, both of which are initiated and modulated by a number of factors, including inflammation, hormone levels and mechanical stimulation (3,4). A decrease in estrogen level is the main cause of osteoporosis in postmenopausal women. Estrogen reduction affects the biological behavior of osteoblasts, osteoclasts and T cells by altering the levels of cytokines, such as TNF-α, IL-1, IL-6 and IL-17, which affects bone metabolism (5-8). During the early stages of collagen-induced arthritis (CIA), estrogen treatment can increase the number of Th17 cells in the lymph nodes and decrease the number of Th17 cells in the joints of CIA mice (9,10). Studies have demonstrated that C-C chemokine receptor type 6 (CCR6) serves an important role in the antigen-driven differentiation of B cells and can regulate the migration of dendritic cells and T cells in inflammatory and immune responses (11-13). In addition, estrogen can increase the expression levels of CCR6 and C-C chemokine ligand 20 (CCL20), the ligand of CCR6, in Th17 cells of the lymph nodes. The increase in CCR6 and CCL20 expression in the lymph nodes impels Th17 cells to stay in the lymph nodes and hinders their migration to the joints, thus reducing the recruitment of neutrophils into the joints and alleviating arthritis and erosion, providing potential treatment targets (14). In the physiological state, osteoclasts are involved in bone resorption and form local bone resorption lacunae. Additionally, osteoclasts release cytokines and chemotaxins, recruit osteoblasts to local bone resorption lacunae, and participate in new bone formation, thus maintaining the balance of bone metabolism (15-17). Previous studies have revealed that global loss of CCR6 in mice markedly decreases trabecular bone mass, coincident with reduced osteoblast numbers. CCL20 and CCR6 are co-expressed in osteoblast progenitors, and their levels increase during osteoblast differentiation, indicating the potential for CCL20/CCR6 signaling to influence osteoblasts via both autocrine and paracrine pathways. CCL20/CCR6 signaling may serve an important role in regulating bone mass accrual, potentially by modulating osteoblast maturation, survival and the recruitment of osteoblast-supporting cells (18). Further studies investigating the role of CCR6 in the pathogenesis of bone metabolism-related diseases could provide novel ideas and methods for the treatment of osteoporosis.
Mice
CCR6-/- and wild-type (WT) C57BL/6 mice, aged 10-12 weeks and weighing 20-30 g, were purchased from Jackson Laboratory. Mice were raised in carbonate plastic cages in the animal room (clean grade) of the Institute of Gynecology and Obstetrics of the Hospital Affiliated to Fudan University, Shanghai, China. The genotype of the CCR6-/- mice was identified according to the standard protocol provided by Jackson Laboratory, and the primer sequences used for identification are listed in Table 1. The experiments were performed in accordance with the Guidelines for the Care and Use of Laboratory Animals published by the US National Institutes of Health and were approved by the Fudan ethics committee. Throughout the study period, the mice were housed in a temperature-controlled (23 ± 0.5 °C) and humidity-controlled (43 ± 8%) environment with a 12-h light-dark cycle and ad libitum access to food and water.

Genomic DNA isolation and genotyping
Genotypes of CCR6-/- mice were confirmed by PCR analyses of genomic DNA. Tissue samples were collected and used for PCR analyses with 2X GC-rich PCR MasterMix (Tiangen Biotech Co., Ltd.). The PCR reaction was performed according to the protocol provided by Jackson Laboratory: 94 °C for 3 min; 10 cycles of 94 °C for 30 sec, 65 °C for 15 sec and 68 °C for 10 sec; another 28 cycles of 94 °C for 15 sec, 60 °C for 15 sec and 72 °C for 10 sec; and a final step of 72 °C for 2 min.

Primary osteoblast isolation and induced differentiation culture
Osteoblasts were collected from the calvaria of 2-day-old newborn mice as previously described (19). Skull bones were extracted and digested (five times; 10 min each time) in α-minimum essential medium (α-MEM) containing 0.1% collagenase and 0.2% dispase. The supernatant from the first 10-min digestion was discarded. Cells obtained from the remaining digestions were pooled, and 5 × 10^5 cells were seeded in phenol red-free α-MEM supplemented with 10% FBS (Gibco), 10 U/mL penicillin and 10 µg/mL streptomycin in 6-well culture plates until they reached 80% confluence. The osteogenic differentiation medium consisted of phenol red-free α-MEM with 10% serum, 20 mM ascorbic acid, 1 M β-glycerophosphate disodium salt hydrate and 1 mM dexamethasone.

Cell transfection
The CCR6-specific small interfering RNA (siRNA) and negative control (si-NC) were purchased from GenePharma (Shanghai, China). Cell transfection was conducted using Lipofectamine 2000 (Invitrogen, CA, USA) and was performed independently at least three times.

Alkaline phosphatase (ALP) staining and quantitative detection of ALP activity
For ALP detection, 5-bromo-4-chloro-3-indolyl phosphate (BCIP)/nitro blue tetrazolium (NBT) was the preferred staining substrate. After 7 days of osteogenic induction medium treatment in 24-well plates, cells were washed with 500 μL PBS and fixed with 4% paraformaldehyde for 20 min at room temperature, followed by three washes with 500 μL PBS. The fixed cells were incubated in BCIP/NBT buffer, prepared according to the kit's protocol (3 mL ALP staining buffer, 10 μL 300X BCIP buffer and 20 μL 150X NBT buffer), for >30 min at room temperature in the dark, until ALP-positive differentiated osteoblasts appeared blue-violet. The reaction was stopped by adding excess deionized water. The results were visualized using an HP scanner and recorded. For quantitative detection, a total of 5 × 10^5 primary osteoblasts at passage 2 were seeded into 6-well culture plates and cultured until they reached 90% confluence. The cells were digested with pancreatin and collected into a 1.5-mL Eppendorf tube. ALP activity in the cell lysate was quantitated using an Alkaline Phosphatase Assay kit (Beyotime Institute of Biotechnology) according to the manufacturer's protocol.

Alizarin Red S staining
The primary osteoblasts were isolated and the cell density was adjusted to 1 × 10^5 cells/mL. After 21 days of osteogenic induction medium treatment in 24-well plates, Alizarin Red S stain was added to each well and the plate was incubated in the dark for 10 min at room temperature. The staining buffer was removed carefully; mineralized osteoblasts appeared bright orange-red, while undifferentiated cells were slightly red or colorless.

RNA isolation and reverse transcription-quantitative PCR (RT-qPCR)
Total RNA was isolated using an RNA extraction kit (Axygen; Corning Inc.) according to the manufacturer's protocol. The concentration of total RNA was measured using a NanoDrop 2000c (Thermo Fisher Scientific, Inc.). RNA (1 µg) was reverse transcribed into cDNA using a reverse transcriptase kit (Promega Corporation). qPCR was performed using SYBR Premix Ex Taq (Takara Bio, Inc.). Gene expression was normalized to the level of the housekeeping gene GAPDH and analyzed using the standard 2^-ΔΔCT method. Primer sequences are listed in Table 2.

Cell viability assay
Primary osteoblasts were isolated as described under "Primary osteoblast isolation and induced differentiation culture", and the cell density was adjusted to 2 × 10^5 cells/mL. Subsequently, 100 μL of medium containing 2,000 cells was added to each well of a 96-well cell culture plate. After 24 h, when the cells had adhered completely, the medium was exchanged for medium containing 25 μg/mL vitamin C, 10 mM β-glycerol phosphate and 100 nM dexamethasone (differentiation medium). Each well contained 100 μL medium, and six replicate wells were analyzed for each group. After 12, 24, 48 and 96 h, 10 μL MTT solution (5 mg/mL; 0.5% MTT) was added to each well, followed by incubation in a cell incubator for 4 h. Subsequently, 100 μL formazan solution was added to each well, followed by incubation in a cell culture box until the formazan was completely dissolved, as observed under an ordinary optical microscope. The absorbance of each well was measured at a wavelength of 570 nm using an ELISA plate reader. Osteoblast activity was calculated relative to that of WT cells at 12 h and expressed as a percentage of the control group.

Treatment of primary osteoblasts with 1,25(OH)2D3/PGE2 in vitro
The primary osteoblasts were isolated as described above and the cell density was adjusted to 1 × 10^5 cells/mL. A 24-well plate was inoculated at 500 μL/well or a 6-well plate at 2 mL/well. The cells were cultured in an incubator at 37 °C with 5% CO2. After 48 h, the medium was replaced with medium containing 1 × 10^-8 M 1,25-dihydroxyvitamin D3 and 1 × 10^-6 M prostaglandin E2 (co-culture medium), which simulated the microenvironment of an osteoblast-osteoclast co-culture (20). Subsequently, the medium containing 1,25(OH)2D3/PGE2 was changed once every 48 h.

Statistical analysis
All data are presented as the mean ± SEM. Differences were assessed by Student's t-test using SPSS software (IBM Corp.). All experiments were repeated more than three times. p < 0.05 was considered to indicate a statistically significant difference.
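A minimal sketch of the 2^-ΔΔCT relative-expression calculation used for the RT-qPCR data above, assuming GAPDH as the reference gene; all Ct values in the example are hypothetical placeholders.

```python
# Minimal sketch of the standard 2^(-ddCt) method: normalize the target
# gene to GAPDH in each group (dCt), subtract the control dCt (ddCt), and
# report fold change as 2^(-ddCt).

def fold_change(ct_gene_sample: float, ct_gapdh_sample: float,
                ct_gene_control: float, ct_gapdh_control: float) -> float:
    d_ct_sample = ct_gene_sample - ct_gapdh_sample     # dCt, sample group
    d_ct_control = ct_gene_control - ct_gapdh_control  # dCt, control group
    dd_ct = d_ct_sample - d_ct_control                 # ddCt
    return 2.0 ** (-dd_ct)

# Example: Osterix in CCR6-/- vs WT osteoblasts (made-up Ct values).
print(f"{fold_change(26.0, 18.0, 24.5, 18.0):.2f}")  # -> 0.35 (reduced)
```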
Identification of CCR6-/- mice
The transgene identification of CCR6-/- mice was performed according to the standard protocol provided by Jackson Laboratory (stock number, 005793; strain name, B6.129P2-Ccr6tm1Dgen/J). The protocol details are presented in Figure 1A. The expected product size for the mutant (CCR6-/-) was 442 bp, the heterozygote (CCR6+/-) gave products of 228 and 442 bp, and the WT product was 228 bp (Figure 1B).

Deletion of CCR6 inhibits the differentiation of osteoblasts in vitro
The mRNA expression of CCR6 in primary osteoblasts from CCR6-/- mice was markedly decreased compared with WT controls (Figure 2A). The present study first examined the effect of CCR6 deficiency on the ALP activity of differentiated primary osteoblasts 7 days after osteogenic induction medium treatment. ALP staining revealed reduced osteoblastic differentiation in CCR6-/- mice compared with WT controls (Figure 2B). Additionally, ALP activity in the cell lysate was markedly decreased in CCR6-/- osteoblasts compared with WT controls (Figure 2C).

CCR6 deficiency inhibits mineralization of differentiated primary osteoblasts while having no effect on cell viability
Osteoblasts from WT and CCR6-/- mice were treated with induction differentiation medium for 21 days, and the numbers of calcium nodules were compared using Alizarin Red staining. Calcium nodules appeared orange-red following Alizarin Red staining. The staining demonstrated that the number of osteoblastic calcium nodules in CCR6-/- osteoblasts cultured in vitro was lower than that in osteoblasts from WT mice (Figure 3A). Quantitative comparison of the number of calcium nodules per well revealed that the number of calcium nodules in CCR6-/- osteoblast cultures was markedly lower than that in WT cultures (Figure 3B). Therefore, the present study suggests that CCR6 deletion may weaken osteoblast activity and inhibit osteoblast mineralization in vitro. Furthermore, the proliferation activity of primary osteoblasts from WT and CCR6-/- mice was assessed. The results demonstrated that there was no significant difference in cell proliferation activity between WT and CCR6-/- osteoblasts at 12, 24, 48 and 96 h (Figure 3C). This indicated that CCR6 deletion had no effect on the proliferation of mouse osteoblasts.

CCR6 deficiency decreases Osterix expression in differentiated primary osteoblasts
Runx2 and Osterix are important transcription factors in osteoblast growth and differentiation. Runx2 is expressed in the early stage of osteoblast differentiation, whereas Osterix is only expressed in the late stage. The primary osteoblasts of WT and CCR6-/- mice were induced to differentiate and cultured in vitro, and RT-qPCR was used to analyze the expression levels of Runx2 and Osterix in the two groups. The results revealed that, with prolonged culture time in vitro, there was no significant difference in the mRNA expression levels of Runx2 (Figure 4A). The mRNA expression levels of Osterix in CCR6-/- osteoblasts were markedly lower than those in WT osteoblasts after 21 days of culture (Figure 4B). Additionally, the present study analyzed the mRNA expression levels of the functional factor Collagen-1 during osteoblastic differentiation in vitro; there was no significant difference between CCR6-/- and WT osteoblasts (Figure 4C).
We also transfected MC3T3-E1 cells with siRNA against CCR6 to examine the expression of related proteins. The MC3T3-E1 cells were induced to differentiate; after 21 days of differentiation culture, the cells were transfected with si-CCR6 or si-NC, and the protein expression of Runx2, Osterix and Collagen-1 was then detected. There was no significant difference in the expression levels of Runx2 and Collagen-1 between the si-NC and si-CCR6 groups, while the expression level of Osterix in the si-CCR6 group was markedly lower than that in the si-NC group (Figure 4D).

Osteoprotegerin (OPG)/receptor activator of nuclear factor κB ligand (RANKL) levels decreased in CCR6-/- osteoblasts treated with 1,25(OH)2D3/PGE2
The OPG/RANKL/receptor activator of nuclear factor κB (RANK) system is an important signaling pathway in osteoclast differentiation. Osteoblasts can regulate osteoclastogenesis by expressing OPG and RANKL. The primary osteoblasts of WT and CCR6-/- mice were divided into a common culture group and a treated group. The common culture group was cultured with common culture medium, whereas the treated group received common culture medium supplemented with 1,25(OH)2D3/PGE2 to simulate the co-culture environment. Total RNA was collected after 0, 1 and 2 days of culture, and the mRNA expression levels of OPG and RANKL in osteoblasts were analyzed by RT-qPCR. The results demonstrated that the mRNA expression levels of OPG in CCR6-/- osteoblasts were not significantly different from those in WT osteoblasts in either the normal culture environment or the treated environment (Figure 5A). Furthermore, CCR6 deletion did not affect the mRNA expression levels of RANKL in osteoblasts in the normal culture environment, whereas it increased the mRNA expression levels of RANKL in the treated group (Figure 5B). The OPG/RANKL ratio was decreased in osteoblasts with CCR6 deletion compared with WT control cells in the treated group (Figure 5C), while no change was observed in the normal culture environment.

Discussion
Previous studies have demonstrated that osteoblasts and osteoclasts can secrete CCR6 and CCL20, and that the CCL20/CCR6 signaling pathway is closely associated with bone metabolism (18). When the overall level of CCR6 in mice decreases, trabecular bone mass decreases, consistent with a reduced number of osteoblasts. CCR6 and CCL20 are co-expressed in osteoblast progenitor cells, and their expression levels increase during the differentiation of osteoblasts, suggesting that the CCL20/CCR6 signaling pathway affects osteoblasts via autocrine and paracrine pathways (21,22). Whether CCR6 is involved in osteoblastogenesis, and what role it serves, remains to be elucidated. Therefore, the present study proposed the hypothesis that a decrease in CCR6 expression in osteoblasts suppresses osteoblast differentiation and regulates bone metabolism, leading to osteoporosis. The main cause of osteoporosis is an imbalance between bone formation by osteoblasts and bone resorption by osteoclasts. Osteoblasts are derived from mesenchymal stem cells (MSCs), which have the potential to differentiate into a variety of cells, such as chondrocytes, myoblasts or adipocytes. Osteoblast differentiation is divided into three stages:
cell proliferation, extracellular matrix formation and maturation, and mineralization. Each stage has a characteristic gene expression profile (23-25). Under suitable culture conditions, osteoblasts can secrete certain unique extracellular matrix proteins, including osteocalcin (OCN), ALP and a large amount of Collagen-1 (26,27). The extracellular matrix does not mineralize at the beginning of deposition and is rich in Collagen-1 (28). With the accumulation of calcium phosphate in the form of hydroxyapatite, the matrix mineralizes to form hard but lightweight deposits (both organic and inorganic), which are the main components of bone tissue (29,30). These osteoid calcium nodule deposits represent the end products of osteoblast proliferation and differentiation (31). In the present study, we isolated pre-osteoblasts from the calvaria of newborn CCR6-/- mice and measured ALP activity, calcium deposit formation, and the expression of osteoblastogenesis-related factors at 0, 3, and 7 days after differentiation. ALP activity decreased in CCR6-/- osteoblasts 7 days after differentiation compared with wild-type controls; combined with impaired calcium deposit formation, this indicated inhibited mineralization of differentiated osteoblasts.

During different stages of osteoblast growth and development, the sequential expression of different osteoblast-related genes has different effects on differentiation. Runx2 and Osterix are important transcription factors in osteoblast growth and differentiation. Runx2 is expressed in the early stage of differentiation, whereas Osterix is only expressed in the late stage (32-34). Runx2 is essential for osteoblast differentiation in chondrogenesis and intramembranous osteogenesis. Runx2 can directly stimulate the transcription of OCN, Collagen-1, osteopontin, collagenase 3 and suppression of tumorigenicity 2 during the differentiation of bone marrow MSCs (BMSCs) into osteoblasts (35-37). Osterix is a downstream transcription factor of Runx2 in osteoblasts and is required for osteoblast differentiation (38). If Runx2 and Osterix expression is inhibited, the growth and differentiation of osteoblasts is affected, leading to a differentiation disorder. Therefore, the present study isolated RNA from osteoblasts at different stages of differentiation and analyzed the expression levels of osteoblast differentiation-related genes using RT-qPCR. The results demonstrated that CCR6 deletion did not affect the transcription levels of Collagen-1 and Runx2 in osteoblasts, whereas the transcription levels of Osterix were markedly lower in CCR6-/- osteoblasts than in WT osteoblasts, indicating that CCR6 deletion inhibited Osterix expression in the late, mineralization stage of osteoblast differentiation in vitro.

CCR6 deletion can weaken the activity of osteoblasts and inhibit the mineralization of osteoblasts in vitro. The present study assessed the osteoblasts of the two groups using an MTT assay to observe the effect of CCR6 deletion on osteoblast proliferation. The results revealed no significant difference in proliferation activity between the two groups at the four time points (12, 24, 48 and 96 h), demonstrating that CCR6 deletion did not affect the proliferation of osteoblasts. Therefore, it was concluded that CCR6 deletion may directly affect the differentiation of osteoblasts but not their proliferation.
Postmenopausal estrogen deficiency is associated with increased bone resorption and increased production of pro-inflammatory factors, such as RANKL. A mature osteoclast is a multinucleated giant cell, induced and differentiated from bone marrow hematopoietic stem cells stimulated by macrophage colony-stimulating factor and RANKL. RANKL serves an important role in osteoclast generation. OPG, also known as osteoclast inhibitory factor, is a growth factor receptor belonging to the tumor necrosis factor receptor family (39,40). RANKL expressed by osteoblasts and BMSCs can promote the differentiation and activation of osteoclasts and inhibit osteoclast apoptosis. In addition, osteoblasts and BMSCs secrete and express OPG, which competitively binds RANKL, preventing binding between RANKL and RANK (41,42). The OPG/RANKL/RANK system is an important signaling pathway in osteoclast differentiation. Numerous hormones and immune factors affect bone metabolism in vivo by affecting the expression levels of OPG or RANKL (43,44). The present results demonstrated that CCR6 deletion did not affect the transcription levels of OPG in osteoblasts in either the simulated co-culture environment or the common culture environment, but increased the transcription of RANKL in mouse osteoblasts in the treated culture environment only. Therefore, it was speculated that CCR6 deletion could indirectly promote osteoclast formation by increasing the transcription levels of RANKL; the bone mass and bone microarchitecture of CCR6-/- and wild-type mice in vivo should be analyzed in further studies.

In conclusion, CCR6 deletion weakened osteoblast activity and inhibited osteoblast mineralization in vitro, whereas it did not affect the proliferation of osteoblasts. This suggests that CCR6 deletion may directly inhibit osteoblast differentiation by downregulating the expression levels of Osterix, a key transcription factor in osteoblast differentiation, and indirectly promote osteoclast production by increasing the transcription levels of RANKL in osteoblasts without affecting the transcription levels of OPG. However, the transcription levels of Collagen-1 and Runx2 were not significantly altered in CCR6-/- osteoblasts. Therefore, it was speculated that CCR6 deletion can alter the biological function of osteoblasts in osteogenesis in mice. The present study provides novel evidence to explain the mechanisms via which CCR6 deletion regulates bone metabolism.
Stability and Mismatch Discrimination of Locked Nucleic Acid-DNA Duplexes

Locked nucleic acids (LNA; symbols of bases, +A, +C, +G, and +T) are introduced into chemically synthesized oligonucleotides to increase duplex stability and specificity. To understand these effects, we have determined thermodynamic parameters of consecutive LNA nucleotides. We present guidelines for the design of LNA oligonucleotides and introduce free online software that predicts the stability of any LNA duplex oligomer. Thermodynamic analysis shows that the single strand-duplex transition is characterized by a favorable enthalpic change and by an unfavorable loss of entropy. A single LNA modification confines the local conformation of nucleotides, causing a smaller, less unfavorable entropic loss when the single strand is restricted to the rigid duplex structure. Additional LNAs adjacent to the initial modification appear to enhance stacking and H-bonding interactions because they increase the enthalpic contributions to duplex stabilization. New nearest-neighbor parameters correctly forecast the positive and negative effects of LNAs on mismatch discrimination. Specificity is enhanced in a majority of sequences and is dependent on mismatch type and adjacent base pairs; the largest discriminatory boost occurs for the central +C·C mismatch within the +T+C+C sequence and the +A·G mismatch within the +T+A+G sequence. LNAs do not affect specificity in some sequences and even impair it for many +G·T and +C·A mismatches. The level of mismatch discrimination decreases the most for the central +G·T mismatch within the +G+G+C sequence and the +C·A mismatch within the +G+C+G sequence. We hypothesize that these discrimination changes are not unique features of LNAs but originate from the shift of the duplex conformation from B-form to A-form.

A locked nucleic acid (LNA) is a useful chemical modification.1−5 Mixed oligonucleotides consisting of LNA, DNA, and RNA residues have improved polymerase chain reaction (PCR) experiments,6 single-nucleotide polymorphism assays,7,8 RNA interference,1,4 antisense mRNA technology,2 microRNA profiling and regulation,9,10 aptamers,11 LNAzymes,3 microarrays,12 and nanomaterials.13 These applications require that LNA oligonucleotides possess specific melting temperatures (Tm) and free energies of association for complementary sequences (ΔG°).5 The thermodynamic stability of nucleic acid duplexes has been described with the nearest-neighbor model, which takes into account the energetics of nearest-neighbor base pairs and assumes that interactions beyond neighboring nucleotides can be neglected.14−17 The total enthalpy and entropy of duplex annealing are calculated by summation of doublet terms:

$$\Delta H^{\circ} = \sum_{i=1}^{N_{\mathrm{bp}}-1} \Delta H^{\circ}_{i,i+1} + \Delta H^{\circ}_{\mathrm{init}} \quad (1)$$

$$\Delta S^{\circ} = \sum_{i=1}^{N_{\mathrm{bp}}-1} \Delta S^{\circ}_{i,i+1} + \Delta S^{\circ}_{\mathrm{init}} + \Delta S^{\circ}_{\mathrm{symmetry}} \quad (2)$$

where Nbp is the number of duplex base pairs. The first term on the right side of eq 1 is the sum over all internal nearest-neighbor doublets (ΔH°i,i+1). The second term (ΔH°init) represents the "initiation" enthalpy, which includes the formation of the duplex's first base pair, corrections for the extra hydrogen bond of G·C versus A·T in terminal base pairs,17 and terminal base-solvent interactions. The initiation parameter varies with the nature of the terminal base pairs.15,16 Equation 2 also includes an entropic symmetry correction (ΔS°symmetry) of −1.4 cal mol−1 K−1, which is added when a duplex consists of two identical, self-complementary oligonucleotides.
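A minimal sketch of the summation in eqs 1 and 2. The doublet and initiation values below are illustrative placeholders, not the published parameters (those are tabulated in refs 15−17).

```python
# Minimal sketch of the nearest-neighbor summation in eqs 1 and 2.
# The parameter values are placeholders for illustration only.

NN = {  # doublet (top strand, 5'->3') -> (dH kcal/mol, dS cal/mol/K)
    "AA": (-7.9, -22.2), "AT": (-7.2, -20.4),
    "TA": (-7.2, -21.3), "TT": (-7.9, -22.2),
}
INIT = (2.0, 5.0)  # placeholder initiation terms

def duplex_dH_dS(seq: str, self_complementary: bool = False):
    dH, dS = INIT
    for i in range(len(seq) - 1):
        h, s = NN[seq[i:i + 2]]  # sum over internal doublets (eqs 1-2)
        dH += h
        dS += s
    if self_complementary:
        dS += -1.4  # entropic symmetry correction (eq 2)
    return dH, dS

print(duplex_dH_dS("ATATAT", self_complementary=True))
```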
The nearest-neighbor model accurately predicts the thermodynamics and melting temperatures (±1.5 °C) of native oligonucleotides.15−19 It appears that the nearest-neighbor model also predicts well single-base mismatches18,20 and some chemical modifications, including single LNAs.21−24 Because LNAs increase duplex stability and change the specificity of base pairing,2,5 LNA nearest-neighbor parameters differ significantly from DNA parameters. The LNA parameter set is incomplete and does not cover many useful sequences. Thermodynamic parameters have been published for isolated LNA·RNA base pairs introduced into 2′-O-methyl RNA oligonucleotides25,26 and for isolated LNA·DNA base pairs.21 However, many applications benefit from other types of LNA modifications. For example, a triplet of LNA residues appears to maximize mismatch discrimination and improves single-nucleotide polymorphism assays.5 Fully LNA-modified probes can selectively capture genomic DNA sequences.27 To determine the parameters for consecutive LNAs, we measured the stability of duplexes using the fluorescence melting method.28 The energetics of the LNA effects was determined from the difference between LNA-modified and native (core) duplexes. Because we used standard experimental conditions (1 M Na+ and pH 7), the new parameters are compatible with existing DNA parameters.

MATERIALS AND METHODS
Oligonucleotides were synthesized at Integrated DNA Technologies, purified by HPLC,29 and dialyzed against storage buffer [10 mM Tris-HCl and 0.1 mM Na2EDTA (pH 7.5)].28 Concentrated oligonucleotide samples were tested by mass spectrometry (molecular weights were within 2 g/mol) and capillary electrophoresis (>90% pure). DNA concentrations were determined from predicted extinction coefficients (ε) and sample absorbance at 260 nm using the Beer-Lambert law.29,30 LNA nucleotides were assumed to possess the same extinction coefficients as DNA nucleotides. The coefficients of Texas Red (14,400 L mol−1 cm−1) or Iowa Black RQ (44,510 L mol−1 cm−1) were added to the ε of labeled oligonucleotides.
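A minimal sketch of the concentration determination just described: the Beer-Lambert law c = A/(ε·l), with the label's extinction coefficient added to the oligonucleotide's predicted ε at 260 nm. The oligonucleotide ε in the example is a hypothetical placeholder; the label coefficients are the values quoted above.

```python
# Minimal sketch: strand concentration from A260 via the Beer-Lambert law.

EPS_TEXAS_RED = 14400.0      # L mol^-1 cm^-1 (from the text)
EPS_IOWA_BLACK_RQ = 44510.0  # L mol^-1 cm^-1 (from the text)

def strand_conc_uM(a260: float, eps_oligo: float,
                   eps_label: float = 0.0, path_cm: float = 1.0) -> float:
    """Concentration (uM) = A / (eps * l), with the label's eps added."""
    return a260 / ((eps_oligo + eps_label) * path_cm) * 1e6

# Hypothetical 11-mer with predicted eps = 105,000 L mol^-1 cm^-1,
# labeled with Texas Red, measured A260 = 0.25:
print(f"{strand_conc_uM(0.25, 105000.0, EPS_TEXAS_RED):.2f} uM")  # ~2.09
```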
Primary Set of Oligonucleotides. Figure 1A shows the sequences studied. Fluorescent Texas Red dye (TXRD) is attached at the 5′ end of the top strand, and Iowa Black RQ quencher (IBRQ) is attached at the 3′ end of the complementary strand. This design efficiently quenches fluorescence when the strands are annealed because the dye and the quencher are in close contact. We use the notation of oligonucleotide manufacturers; LNA nucleotides are indicated with + in front of the base symbol (e.g., +A denotes an adenine LNA nucleotide). The cytosine of +C is 5-methylated because oligonucleotide manufacturers usually synthesize the methylated version of LNA cytosine. The set of DNA duplexes contains a triplet of consecutive LNAs located either in the interior of the strand labeled with Texas Red or in the interior of the complementary strand labeled with Iowa Black RQ. Eight possible LNA·DNA base pairs (X·Y ≡ +A·T, A·+T, +T·A, T·+A, +C·G, C·+G, +G·C, and G·+C) and 24 mismatches were introduced at the X·Y site. Core duplexes, which contained DNA·DNA base pairs (X·Y ≡ A·T, T·A, C·G, and G·C) and the same terminal Texas Red-Iowa Black RQ pair, were also measured. This design is economical; each oligonucleotide is used in several duplexes. Thirty-six duplexes were melted for each set except for set 3, which consisted of 27 duplexes because two of its sequences, GTAGGGGTGCT-IBRQ and GTA+G+G+GGTGCT-IBRQ, were not obtained with sufficient purity.

For sets 1−4, the same base flanks the X·Y site on the 5′ and 3′ sides. For sets 5−8, the flanking bases are different, and each of the four bases (A, T, C, and G) occurs once on the 5′ and 3′ sides of the X·Y base pairs. This design ensures that every possible nearest-neighbor interaction is present several times within the data set. Figure 1A also shows that duplex lengths range from 10 to 12 bp. Such short sequences are likely to melt in a two-state manner. Nevertheless, non-two-state behavior may occur even for short oligonucleotides if they form stable self-complementary structures, e.g., hairpins or dimers. OligoAnalyzer version 3.1 (http://www.idtdna.com/analyzer/Applications/OligoAnalyzer/) confirmed that our sequences do not form such structures. This paper follows previous conventions to represent duplex sequences.16 A slash divides the strands in an antiparallel orientation. The sequence is oriented 5′ to 3′ before the slash and 3′ to 5′ after the slash (for example, CA/GT represents the 5′-CA-3′/3′-GT-5′ doublet with two Watson-Crick base pairs). Mismatched nucleotides are underlined or colored red. Ribonucleotides are distinguished from deoxyribonucleotides by the "r" prefix, e.g., rA.

Melting Experiments. We followed our previously described method for fluorescence melting experiments.28 The melting buffer contained 1 M NaCl, 3.87 mM NaH2PO4, 6.13 mM Na2HPO4, and 1 mM Na2EDTA and was adjusted to pH 7.0 with 1 M NaOH.30 Buffer reagents of p.a. grade purity were purchased from ThermoFisher Scientific (Pittsburgh, PA). Melting experiments were performed at 13 different total single-strand concentrations (19, 30, 46, 70, 110, 160, 250, 375, 570, and 870 nM and 1.3, 2.0, and 3.0 μM). Duplex samples were prepared at the highest Ct of 3 μM. Complementary oligonucleotides were mixed in a 1:1 ratio in the melting buffer, heated to 95 °C, and slowly cooled to room temperature. Aliquots of the 3 μM solution were diluted with the melting buffer to make the 12 remaining samples. Low-binding Costar microcentrifuge tubes (catalog no. 3207, Corning, Wilkes Barre, PA) were used to reduce the binding of oligonucleotides to the tube surface. We pipetted 25 μL of each melting sample into two wells of a 96-well PCR plate (Extreme Uniform Thin Wall Plates, catalog no. B70501, BIOplastics BV, Landgraaf, The Netherlands); a significant discrepancy between wells alerted us to an erroneous measurement. Using the Bio-Rad iQ5 real-time PCR system, the fluorescence signal in the Texas Red channel was recorded every 0.2 °C while the temperature was increased from 4 to 98 °C and decreased back to 4 °C over two cycles. Subsequent temperature cycles were not used because they were unreliable; Tm sometimes increased, indicating the evaporation of water or degradation of the dye. The iQ5 system maintained a temperature ramp rate of 25 °C/h.

Analysis was conducted in Microsoft Excel. We programmed VBA software to automate melting profile analysis, including baseline selection using a second-derivative algorithm.28 The fraction θ was calculated as θ = (F − FL)/(FU − FL) from the fluorescence of the DNA sample (F), the fluorescence of the upper linear baseline (FU), and the fluorescence of the lower linear baseline (FL). If a duplex melts in a two-state manner, dissociation of the fluorophore from the quencher is likely coupled to the duplex-to-single-strand melting transition, and θ represents the fraction of melted duplexes. The melting temperature was defined as the temperature at which θ = 1/2.
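A minimal sketch, assuming NumPy arrays of temperature and fluorescence, of the baseline normalization and Tm definition described above. The baseline coefficients and the synthetic profile are placeholders; the paper's VBA software selects baselines with a second-derivative algorithm, which is not reproduced here.

```python
# Minimal sketch: normalize a melting profile to the fraction theta using
# linear upper/lower baselines, then locate Tm where theta = 1/2.

import numpy as np

def theta_and_tm(temp, fluor, lower_coef, upper_coef):
    """lower_coef/upper_coef: (slope, intercept) of the linear baselines.
    theta must increase with temperature for the interpolation to work."""
    f_l = lower_coef[0] * temp + lower_coef[1]
    f_u = upper_coef[0] * temp + upper_coef[1]
    theta = (fluor - f_l) / (f_u - f_l)
    tm = float(np.interp(0.5, theta, temp))  # temperature where theta = 1/2
    return theta, tm

# Synthetic sigmoidal profile with flat baselines at 0 and 1:
temp = np.linspace(20.0, 90.0, 351)
fluor = 1.0 / (1.0 + np.exp(-(temp - 65.0) / 2.0))
theta, tm = theta_and_tm(temp, fluor, (0.0, 0.0), (0.0, 1.0))
print(f"Tm = {tm:.1f} C")  # -> about 65.0
```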
The average standard deviation of the Tm values was 0.4 °C. Transition enthalpies, entropies, and free energies were determined from fits to individual melting profiles and from the dependence of the melting temperature on DNA concentration.14,28,31,32 These two analytical methods assume that melting transitions proceed in a two-state manner; that is, the intact duplex and unhybridized single strands are dominant, and partially melted duplexes are negligible throughout the melting transition. The methods also assume that transition enthalpies and entropies are temperature-independent. If the ΔH° or ΔS° values differed by more than 15% between these two methods, the duplex did not melt in a two-state manner.28,32,33 In that case, we excluded the ΔH° or ΔS° values from further analysis because they were inaccurate.

Stabilizing Effects of LNA Modifications. Locked nucleic acids increase duplex stability and alter the melting transition enthalpy, entropy, and free energy. As shown in Figure 1B, we determined these LNA contributions (ΔΔH°, ΔΔS°, and ΔΔG°37) from the difference between LNA-modified and core duplexes.28 LNA modifications were located at least five nucleotides from the terminal fluorophore and quencher. In this design, the terminal labels do not interact with the LNAs and do not influence the differential thermodynamic values between modified and core duplexes. Figure 1B shows an example of the analysis for the Set1−11 duplex. Entering ΔH° from Table S1 of the Supporting Information, we determined the experimentally measured differential enthalpic change ΔΔH°(A+T+G+TC/TACAG) to be −97.6 − (−86.4) = −11.2 kcal/mol. In the nearest-neighbor model, this enthalpic contribution is a sum of the enthalpic contributions of base pair doublets:

$$\Delta\Delta H^{\circ}(\mathrm{A{+}T{+}G{+}TC/TACAG}) = \Delta\Delta H^{\circ}(\mathrm{A{+}T/TA}) + \Delta\Delta H^{\circ}(\mathrm{{+}T{+}G/AC}) + \Delta\Delta H^{\circ}(\mathrm{{+}G{+}T/CA}) + \Delta\Delta H^{\circ}(\mathrm{{+}TC/AG}) \quad (3)$$

Rearrangement of eq 3 places the unknown LNA parameters on the left side:

$$\Delta\Delta H^{\circ}(\mathrm{{+}T{+}G/AC}) + \Delta\Delta H^{\circ}(\mathrm{{+}G{+}T/CA}) = \Delta\Delta H^{\circ}(\mathrm{A{+}T{+}G{+}TC/TACAG}) - \Delta\Delta H^{\circ}(\mathrm{A{+}T/TA}) - \Delta\Delta H^{\circ}(\mathrm{{+}TC/AG}) \quad (4)$$

The right side of eq 4 contains the experimentally measured enthalpic change and two previously determined nearest-neighbor parameters.21 McTigue, Peterson, and Kahn investigated the thermodynamics of interactions between LNA·DNA and DNA·DNA base pairs. We used their parameters to account for the LNA-DNA interactions that occur at the beginning and at the end of a section of consecutive LNAs. Substituting the parameters from their 32NN set (Table 4 of ref 21) into eq 4 gives

$$\Delta\Delta H^{\circ}(\mathrm{{+}T{+}G/AC}) + \Delta\Delta H^{\circ}(\mathrm{{+}G{+}T/CA}) = -11.2\ \mathrm{kcal/mol} - \Delta\Delta H^{\circ}(\mathrm{A{+}T/TA}) - \Delta\Delta H^{\circ}(\mathrm{{+}TC/AG}) \quad (5)$$

with the numerical values of ΔΔH°(A+T/TA) and ΔΔH°(+TC/AG) taken from ref 21. A similar equation was constructed for each LNA duplex. Analogous equations were set up for the ΔΔS° and ΔΔG°37 contributions.

Determination of LNA Nearest-Neighbor Parameters. Selecting two bases from the set of four (A, T, C, and G) with replacement leads to the creation of 16 nearest-neighbor doublets.34 Because the antiparallel strands of native DNA duplexes exhibit structural symmetry, some doublet sequences are identical, e.g., AC/TG and GT/CA. Therefore, 10 nearest-neighbor parameters are sufficient to represent internal DNA·DNA doublets. No such symmetry exists for LNA·DNA base pairs; the +A+C/TG doublet differs from the +G+T/CA doublet. Sixteen nearest-neighbor parameters are therefore needed for consecutive LNA·DNA base pairs. We measured 62 perfectly matched LNA duplexes, 60 of which melted in a two-state manner. Their thermodynamic values were used to determine the parameters. Each of the 16 LNA doublets was well represented in this data set. First, we examined the enthalpic effects. An equation of the form of eq 5 was constructed for each LNA duplex.
This thermodynamic analysis produced a set of 60 linear equations:

$$\mathbf{M}\,\mathbf{H}_{\mathrm{n-n}} = \mathbf{H}_{\mathrm{exp}} \quad (6)$$

where M is a 60 × 16 matrix of the numbers of occurrences of each LNA nearest-neighbor doublet in the 60 duplexes, H_n-n is the vector of 16 unknown parameters, and H_exp is the column vector of experimentally measured enthalpic contributions. Because the number of unknown parameters (16) was less than the number of equations (60), eq 6 was overdetermined.35 We solved it using singular-value decomposition (SVD)36 by minimizing χ²:

$$\chi^{2} = \left\| \boldsymbol{\sigma}_{H}^{-1} \left( \mathbf{M}\,\mathbf{H}_{\mathrm{n-n}} - \mathbf{H}_{\mathrm{exp}} \right) \right\|^{2} \quad (7)$$

where σ_H is the diagonal matrix whose elements are the experimental errors of ΔΔH°. Because these errors were similar, they were set to a constant value of 3 kcal/mol, and the SVD fit was not error-weighted. Singular-value decomposition was conducted using a Microsoft Excel add-in, the Matrix.xla package, version 2.3.2 (Foxes Team, L. Volpi, http://digilander.libero.it/foxes). Calculations were repeated using the Excel LINEST function, yielding the same values. We also examined matrix M for degeneracies. The rank of matrix M was 16. Because the rank was equal to the number of unknown parameters, the matrix had no zero singular values, and the parameters were unique and linearly independent.34−36 Next, we replaced the ΔΔH° values with ΔΔS° or ΔΔG°37 values in eqs 5−7. Analogous analyses gave us the nearest-neighbor parameters for entropies and free energies.

Error Analysis. Error estimates of the parameters were obtained from bootstrap simulations.37 These calculations estimate the dependence of the parameter values on the data set. Many bootstrap data sets were created from the original data set. A different value of each parameter was usually determined from each bootstrap data set, and the bootstrap estimate of the parameter error is given by the standard deviation of all these parameter values. In our simulations, the bootstrap data sets were the same size as the original data set; i.e., each set contained data from 60 duplexes. The duplex data were randomly drawn, with replacement, from the original data set; the entire experimental data set was used in each drawing. This procedure produced bootstrap data sets in which some duplex data from the original data set were present multiple times and other data were not selected. We generated 5 × 10^4 bootstrap data sets. Equation 6 was solved for each data set using SVD, and the 16 parameters for the consecutive LNAs were determined. If the rank of M was less than 16, the particular bootstrap data set did not contain all possible nearest-neighbor doublet sequences; the thermodynamic parameters could not be determined in this case, so the bootstrap set was excluded from analysis and a replacement data set was drawn. Fewer than 3% of the data sets were excluded. Standard deviations and averages were calculated from the bootstrap parameter estimates. The average parameters determined from the bootstrap analysis agreed with the parameters determined from the original data set.
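A minimal sketch, assuming NumPy, of the overdetermined least-squares fit of eq 6 and the bootstrap resampling just described. The design matrix, measured contributions, and errors are synthetic stand-ins for the paper's 60 × 16 system; numpy.linalg.lstsq performs an SVD-based solution.

```python
# Minimal sketch of the eq-6 fit and bootstrap error estimate. All data
# here are synthetic; the real M, ddH, and errors come from the melting
# experiments and Table S1.

import numpy as np

rng = np.random.default_rng(0)
n_duplex, n_param = 60, 16
M = rng.integers(0, 3, size=(n_duplex, n_param)).astype(float)
true_h = rng.normal(-3.0, 2.0, size=n_param)
ddH = M @ true_h + rng.normal(0.0, 3.0, size=n_duplex)  # "measured" data

def fit_nn(M, y):
    """Least-squares solution of M h = y (SVD-based, via lstsq)."""
    h, *_ = np.linalg.lstsq(M, y, rcond=None)
    return h

h_fit = fit_nn(M, ddH)

# Bootstrap: resample duplexes with replacement, refit, take the std dev.
boot = []
while len(boot) < 2000:
    idx = rng.integers(0, n_duplex, size=n_duplex)
    if np.linalg.matrix_rank(M[idx]) < n_param:
        continue  # skip rank-deficient sets, as in the paper
    boot.append(fit_nn(M[idx], ddH[idx]))
param_err = np.std(boot, axis=0)
print(h_fit[:3], param_err[:3])
```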
Validation Melting Experiments. Validation sets were measured by ultraviolet spectroscopy as previously described. 30 Absorbance at 268 nm was recorded every 0.1°C using a Beckman DU 650 spectrophotometer. The temperature was changed at a rate of 25°C/h in the range from 10 to 98°C using a high-performance temperature controller (Beckman-Coulter, Brea, CA). Both heating and cooling melting profiles were collected. Sloping baselines were subtracted from the melting profiles, 30 and the melting temperature was defined as the temperature at which the fraction of melted duplexes equaled 0.5.

Nearest-Neighbor Parameters for Single-Base Mismatches. There are 12 possible LNA·DNA mismatches (+A·A, +C·C, +G·G, +T·T, +A·C, +C·A, +A·G, +G·A, +C·T, +T·C, +G·T, and +T·G). In our design, mismatches were located in the center of LNA triplets. Enthalpic, entropic, and free energy effects were determined from the differences between the energetics of LNA mismatch duplexes and core DNA duplexes. As an example, let us consider the Set1−17 duplex containing the +G·G mismatch. The enthalpic contribution from the A+T+G+TC/TAGAG duplex subsequence is calculated from the difference in the total enthalpy of the Set1−17 (TXRD-CGTCA+T+G+TCGC) and Set1−10 (TXRD-CGTCATGTCGC) duplexes (Table S1 of the Supporting Information)

ΔΔH°(A+T+G+TC/TAGAG) = ΔH°(Set1−17) − ΔH°(Set1−10)    (9)

The nearest-neighbor model assumes that this contribution is the sum of four nearest-neighbor doublets. We used the parameters of McTigue et al. 21 for two doublets (A+T/TA and +TC/AG). An equation similar to eq 4 was constructed. The left side contains two unknown parameters

ΔΔH°(+T+G/AG) + ΔΔH°(+G+T/GA) = ΔΔH°(A+T+G+TC/TAGAG) − ΔΔH°(A+T/TA) − ΔΔH°(+TC/AG)    (10)

Equation 10 was built for each mismatched duplex. Sequences having the +G·G mismatch were grouped into the subset. The resulting system of linear equations was overdetermined and was solved by SVD analysis. Eight unknown nearest-neighbor parameters (+A+X/TY, +C+X/GY, +G+X/CY, +T+X/AY, +X+A/YT, +X+C/YG, +X+G/YC, and +X+T/YA) were obtained (+X ≡ +G, and Y ≡ G). This procedure was repeated for 12 +X·Y mismatch types, and 96 parameters (8 × 12) were determined. The thermodynamic values of LNA duplexes containing mismatches can be predicted from eqs 1 and 2 using the new parameters. SVD analysis indicated that the number of linearly independent equations and the rank of matrix M was seven for mismatches. The eight fitted nearest-neighbor parameters are useful, but they are not a unique solution 34,35,40 because a constraint equation relates the numbers of the eight doublets (N)

N(+A+X/TY) + N(+C+X/GY) + N(+G+X/CY) + N(+T+X/AY) = N(+X+A/YT) + N(+X+C/YG) + N(+X+G/YC) + N(+X+T/YA)    (11)

The constraint decreases the number of unique parameters to seven for each mismatch type. Equation 11 is valid for duplexes containing mismatches within consecutive LNAs. Unique, linearly independent parameters can be constructed from linear combinations of the eight nonunique parameters.
A similar constraint limits the number of unique parameters for some DNA mismatches. Allawi and SantaLucia proposed seven linearly independent sequences for DNA mismatches. 41 They added a C·G base pair to nonunique doublets to create linearly independent triplets. Using a similar procedure, we added the +C·G base pair to LNA doublets and created seven unique triplets (+A+X+C/TYG, +C+X+C/GYG, +G+X+A/CYT, +G+X+C/CYG, +G+X+G/CYC, +G+X+T/CYA, and +T+X+C/AYG). A single LNA mismatch lies in the center. Using SVD analysis, seven parameters for those triplets were determined for each +X·Y mismatch type (Table S3 of the Supporting Information). The energetics of any LNA +X·Y mismatch sequence (+K+X+M/LYN) could also be calculated from the unique triplet parameters

ΔG°37(+K+X+M/LYN) = ΔG°37(+K+X+C/LYG) + ΔG°37(+G+X+M/CYN) − ΔG°37(+G+X+C/CYG)    (12)

The Nearest-Neighbor Parameters for Consecutive LNAs. Thermodynamic values were measured for the primary oligonucleotide set using fluorescence. 28 The melting process was monitored using Texas Red dye and Iowa Black RQ quencher, which were attached at the termini of duplexes. These labels appear to be optimal for melting experiments, as other fluorophores (FAM, HEX, and TET) do not provide reliable thermodynamic values and may compromise the two-state nature of melting transitions. 28 Fluorescence versus temperature plots always exhibited single, S-shaped transitions that were reversible. Figure 2 presents examples of averaged melting profiles. The pictured duplexes have a TXRD-CGTCA+T+A+TCGC base sequence. The DNA matched duplex (dashed line) is more stable than the LNA duplex containing the +A·A mismatch (dotted line) in Figure 2. This stability order is sequence-dependent and not universally observed. If LNAs cause large duplex stabilization and a single mismatch destabilizes a duplex less, the mismatched LNA duplex will be more stable than the matched DNA duplex of the same base sequence. This occurs often for +G·T, +T·G, +G·G, and +G·A mismatches. Thermodynamic values were extracted from melting profiles. First, the enthalpy, entropy, and free energy were estimated from fits to individual melting profiles. 28 We fitted only data within the transition where fraction θ ranged from 0.15 to 0.85. Second, ΔH°, ΔS°, and ΔG°37 were determined from graphs of 1/Tm versus ln(CT/4). These graphs were linear over a 150-fold range of 13 DNA concentrations (Figure S1 of the Supporting Information). When a 1/Tm data point deviated from the fitted straight line by more than twice the propagated error, it was removed from the fit as an outlier. Fewer than 1% of all graph points were excluded. Melting temperatures and thermodynamic values for the primary data set are presented in Table S1 of the Supporting Information. The enthalpy, entropy, and free energy are negative because they are reported for the annealing reaction, which is customary practice. Our thermodynamic analysis assumed a two-state nature of melting transitions. When this assumption is valid, both 1/Tm versus ln(CT/4) plots and fits to melting profiles yield the same results. If thermodynamic values differed more than 15% between these two methods, the specific duplex did not melt in a two-state fashion, and its thermodynamic data were removed from further analysis, averages, and fitting of nearest-neighbor parameters. For the primary data set, average differences between the two methods in ΔH°, ΔS°, and ΔG°37 values were 7.2, 8.3, and 2.5%, respectively.
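The concentration-dependence method amounts to a linear fit of 1/Tm against ln(CT/4), the form valid for non-self-complementary duplexes. The sketch below recovers ΔH° and ΔS° from synthetic melting temperatures generated with assumed annealing values; real inputs would be the measured Tm values at the 13 concentrations.

```python
import numpy as np

R = 1.987e-3  # gas constant, kcal mol^-1 K^-1

# 1/Tm = (R/dH) * ln(CT/4) + dS/dH for non-self-complementary duplexes.
# Synthetic data for illustration; dH, dS are assumed annealing values.
dH_true, dS_true = -70.0, -0.190          # kcal/mol, kcal mol^-1 K^-1
CT = np.logspace(-7, -4.8, 13)            # ~150-fold concentration range
inv_Tm = (R / dH_true) * np.log(CT / 4) + dS_true / dH_true

slope, intercept = np.polyfit(np.log(CT / 4), inv_Tm, 1)
dH_fit = R / slope                        # recovers dH from the slope
dS_fit = intercept * dH_fit               # recovers dS from the intercept
print(f"dH = {dH_fit:.1f} kcal/mol, dS = {dS_fit * 1000:.1f} cal/(mol K)")
```

With noise-free synthetic points, the fit returns the input values exactly; with real data, the agreement between this fit and the profile fits is the two-state consistency check described above.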
Duplexes exhibiting deviations from the two-state melting behavior are listed in Table S1 of the Supporting Information. Non-two-state melting transitions may occur when the cooperativity of the melting process is low and the duplex melts in several stages. The oligonucleotides can also fold into alternative stable structures, broadening the melting transition or splitting it into two S-shaped transitions. We did not observe a second transition in any melting profile. Duplexes also have a terminal dye−quencher pair that can interact with neighboring base pairs; this could change duplex melting behavior and local base pair cooperativity. Because fluorescence depends on dye−quencher distance and orientation, the fluorescent signal is more sensitive to non-two-state behavior than the UV absorbance signal. If dissociation of the dye from the quencher does not coincide with duplex melting, discrepancies in thermodynamic analysis are likely to occur and thermodynamic values could be inaccurate. The majority of duplexes in the data set (>93%) exhibited two-state melting transitions, and the average ΔH°, ΔS°, and ΔG°37 values of those duplexes were used to determine nearest-neighbor parameters. Table 1 shows the nearest-neighbor parameters for consecutive LNAs. Standard errors were estimated from bootstrap analysis. The free energy values calculated from the Gibbs thermodynamic relation (ΔΔG°37 = ΔΔH° − 310.15ΔΔS°) agreed within 0.09 kcal/mol with the ΔΔG°37 values determined from SVD analysis. This agreement confirms the consistency of our method. Because ΔΔG°37 is negative for all nearest-neighbor doublets in Table 1, consecutive LNAs always stabilize a DNA duplex, regardless of base sequence. The most stabilizing doublets are +C+C/GG (ΔΔG°37 = −2.3 kcal/mol) and +G+G/CC (ΔΔG°37 = −2.0 kcal/mol). The smallest LNA impact is seen for the +A+A/TT (−0.6 kcal/mol) and +T+T/AA (−0.8 kcal/mol) sequences. Effects of LNAs on ΔΔG°37 are approximately proportional to the duplex fraction of G·C base pairs. Introduction of LNAs stabilizes cytosine-guanine base pairs ∼0.9 kcal/mol more than adenine-thymine base pairs. The ΔΔS° values vary widely from −23.5 to 0.7 cal mol⁻¹ K⁻¹.

Software Implementation of New Parameters. Thermodynamic parameters in Table 1 are differential thermodynamic parameters; i.e., they represent deviations from native DNA duplexes. To calculate the total enthalpy for any LNA-modified sequence, one predicts the transition enthalpy for the native DNA duplex (ΔH°) according to eq 1 and adds the differential parameters (ΔΔH°) to take into account LNA effects

ΔH° = Σ ΔH°(DNA doublets) + Σ ΔΔH°(LNA-modified doublets)    (13)

Both sums of eq 13 contain the same doublet sequences; the difference is in LNA modification (CA/GT vs +C+A/GT). Parameters for the same base sequences can be combined. Addition of differential LNA parameters (ΔΔH°) and DNA nearest-neighbor parameters 16 gives full nearest-neighbor LNA parameters (ΔH°)

ΔH°(+K+X/LY) = ΔH°(KX/LY) + ΔΔH°(+K+X/LY)    (14)

where +K+X/LY is a nearest-neighbor doublet. We present full thermodynamic parameters for consecutive and isolated LNA modifications in Table 2. It is faster and takes fewer computer resources to calculate thermodynamic values from full thermodynamic parameters than from differential ones. As an example, we present calculations for the perfectly matched 5′-TA+C+AGG-3′ duplex

ΔH° = ΔH°(ET/EA) + ΔH°(TA/AT) + ΔH°(A+C/TG) + ΔH°(+C+A/GT) + ΔH°(+AG/TC) + ΔH°(GG/CC) + ΔH°(GE/CE)    (15)

The first and last parameters represent initiation interactions using the concept of a fictitious end base (E). 15,16,34 Transition entropies and free energies can also be cast into full parameters using analogous relationships.
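A minimal sketch of prediction from full parameters (eqs 13−15) follows: '+' marks an LNA base, 'E' the fictitious end base, and every numeric value is a hypothetical placeholder rather than a Table 2 entry.

```python
# Sketch of prediction from full nearest-neighbor parameters (eqs 13-15).
# '+' marks an LNA base; 'E' is the fictitious end base used for initiation.
# All parameter values below are hypothetical placeholders, not Table 2 data.

dH = {
    "ET": 0.2, "GE": 0.2,                      # initiation doublets (E = end)
    "TA": -7.2, "GG": -8.0,                    # DNA doublets
    "A+C": -9.5, "+C+A": -11.0, "+AG": -9.0,   # LNA-containing doublets
}

def doublets(seq):
    """Split e.g. 'E,T,A,+C,+A,G,G,E' into top-strand doublet names."""
    bases = seq.split(",")
    return ["".join(pair) for pair in zip(bases, bases[1:])]

def predict_dH(seq):
    """Sum full nearest-neighbor parameters over all doublets (eq 13/14)."""
    return sum(dH[d] for d in doublets(seq))

# 5'-TA+C+AGG-3' flanked by fictitious end bases (eq 15)
print(predict_dH("E,T,A,+C,+A,G,G,E"))   # -44.3 with the placeholder values
```

Because each doublet is looked up once in a single table, this full-parameter form avoids the second pass over the sequence that the differential form of eq 13 would require.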
Accuracy of Thermodynamic Parameters for Consecutive LNAs. To verify the analysis and the applicability of the nearest-neighbor model, we used the new parameters to predict the thermodynamics of the primary data set. The new LNA parameters accurately predicted ΔH°, ΔS°, and ΔG° values for these short duplexes. The average relative errors were 3.3, 3.5, and 2.9%, respectively. This is comparable to the accuracy reported for nearest-neighbor parameters of native nucleic acids, where standard deviations of thermodynamic values ranged from 3 to 8%. 17 To estimate the robustness of the new parameters, it is important to test their performance with an independent validation set of duplex oligomers that were not used to derive the parameters. We measured 53 additional LNA-modified duplexes. The oligonucleotides did not have any fluorescent labels or quenchers attached. Their melting transitions were followed using UV spectroscopy. 30 These LNA duplexes ranged from 8 to 10 bp in length, from 10 to 88% in G·C content, and from 20 to 60% in LNA content. Figure 3 presents a comparison of experimentally measured melting temperatures with predictions. Good agreement is observed. Additional details are listed in Table S2 of the Supporting Information. The new parameters in Table 2 result in an average Tm prediction error of 2.1°C (χ² = 2549). Exiqon also developed a thermodynamic model of locked nucleic acids. 42 Because their parameters have not been publicly disclosed and the algorithm has not been described in detail, we relied on Tm predictions that were obtained online using their software. Comparison with experimental melting temperatures reveals that the Exiqon model tends to overestimate melting temperatures for our validation set. The average Tm prediction error is 4.2°C, and χ² is equal to 7981. This level of accuracy agrees with the values reported by the developers, where a standard deviation of 5.0°C was obtained for Tm predictions of chimeric LNA·DNA duplexes. 42 Assuming a normal distribution of measured melting temperatures, the probability P of the null hypothesis that this χ² difference occurs by random chance is less than 0.01. Thus, a two-tailed F-test for the ratio of χ² values 30,35 indicates that the new parameters from Table 2 predict melting temperatures more accurately than the Exiqon software.
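The χ² comparison can be reproduced with a standard F-test; assigning one degree of freedom per validation duplex is our assumption here, not a detail stated in the text.

```python
from scipy.stats import f

# Two-tailed F-test comparing chi-squared values of the two models.
# chi2 values are from the text; using one degree of freedom per duplex
# in the 53-sequence validation set is an assumption.
chi2_exiqon, chi2_new, df = 7981.0, 2549.0, 53

F = chi2_exiqon / chi2_new
p_two_tailed = 2 * f.sf(F, df, df)
print(f"F = {F:.2f}, p = {p_two_tailed:.2e}")   # p << 0.01
```

Under this assumption the ratio of about 3.1 gives a p-value far below 0.01, consistent with the conclusion stated above.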
Nearest-Neighbor Parameters for Single-Base Mismatches. From the primary data set, we determined nearest-neighbor parameters for single mismatches using SVD analysis. Table 3 shows eight doublet parameters for each of the 12 LNA mismatch types. The doublet format of nearest neighbors simplifies software implementation, but the eight parameters for mismatch doublets are not unique, as was demonstrated in Materials and Methods. The constraint equation (eq 11) limits the number of linearly independent parameters to seven for each mismatch type. The unique parameters were constructed in triplet format and are listed in Table S3 of the Supporting Information. To investigate trends and relationships of mismatch stabilities, we predicted thermodynamic values for all possible LNA triplets with a central mismatch. Matched base pairs flank the mismatch on both the 5′ and 3′ sides. There are four possibilities for each flanking base pair (+A·T, +T·A, +C·G, and +G·C). Sixteen triplets, therefore, exist for each mismatch type (+A+X+A/TYT, +A+X+C/TYG, +A+X+G/TYC, +A+X+T/TYA, +C+X+A/GYT, +C+X+C/GYG, +C+X+G/GYC, +C+X+T/GYA, +G+X+A/CYT, +G+X+C/CYG, +G+X+G/CYC, +G+X+T/CYA, +T+X+A/AYT, +T+X+C/AYG, +T+X+G/AYC, and +T+X+T/AYA). There are 4 × 3 = 12 mismatch types because three types exist for each LNA nucleotide (for example, +A·A, +A·C, and +A·G for LNA adenine). The total number of unique triplets is therefore 16 × 12 = 192. Contributions to the free energy of the duplex transition (ΔG°37) were predicted for these triplets containing LNA mismatches, DNA mismatches, and related perfectly matched sequences using parameters from Tables 2 and 3. The LNA triplets were sorted according to free energy contributions. The least stable LNA mismatch is +A+C+T/TCA (ΔG°37 = 2.7 kcal/mol). The same C·C mismatch context is also the most destabilizing for DNA·DNA single-base mismatches. 43 The most stable LNA mismatch is the +G·T mismatch within the context of +G+G+C/CTG (−5.5 kcal/mol). It is interesting that the most stabilizing DNA mismatch occurs in the same sequence context, but it is the G·G mismatch instead, GGC/CGG (−2.2 kcal/mol). Average ΔG°37 values over 16 triplet contexts produced a trend of decreasing stability for mismatches within consecutive LNA·DNA base pairs: +G·T ≫ +G·G > +T·G ≈ +G·A > +C·A > +T·T > +A·G ≈ +C·T > +A·A > +A·C ≈ +T·C > +C·C. The trend of relative stabilities of RNA·RNA mismatches closely resembles this trend: 44 rG·rU ≫ rG·rG > rU·rU > rA·rC > rC·rU > rA·rA ≈ rA·rG ≈ rC·rC. The stability trend of DNA·DNA mismatches shows some similarities: 43 G·G > G·T ≈ G·A > T·T ≈ A·A > T·C > A·C > C·C. The main differences between LNAs and DNAs are the higher relative stabilities of +G·T and +C·A mismatches and the lower relative stability of the +A·G mismatch. The order of stability of hybrid RNA·DNA mismatches is between the trends of RNA·RNA and DNA·DNA mismatches. 45,46 The most stable mismatch is the rG·T mismatch, like in RNAs, while the rC·A mismatch has relatively low stability, like in DNAs.

Mismatch Discrimination. To study the dependence of mismatch discrimination on oligonucleotide sequence, the free energy of mismatch discrimination (ΔΔG°) was defined as the difference in free energy between mismatched and matched duplexes. The ΔΔG° value quantifies the amount of destabilization due to a mismatch. Let us define G·G mismatch discrimination in the +T+G+T/AGA LNA triplet

ΔΔG°(LNA) = ΔG°37(+T+G+T/AGA) − ΔG°37(+T+G+T/ACA)    (16)

and in the isosequential DNA triplet

ΔΔG°(DNA) = ΔG°37(TGT/AGA) − ΔG°37(TGT/ACA)    (17)

Values of ΔΔG° are positive because the lower stability of the mismatch makes ΔG°37 less negative. The larger the ΔΔG° values, the stronger the destabilization and mismatch discrimination. A positive difference between eqs 16 and 17 [ΔΔG°(LNA) − ΔΔG°(DNA)] indicates that LNAs increased the level of mismatch discrimination. A negative difference means that LNA modifications decreased the level of mismatch discrimination. We have predicted these free energy differences for the entire set of 192 possible mismatch triplets.
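Equations 16 and 17 reduce to simple differences; the sketch below uses hypothetical triplet free energies (real values come from Tables 2 and 3 and ref 20) to show how the LNA − DNA discrimination difference is obtained.

```python
# Free energy of mismatch discrimination (eqs 16 and 17), with hypothetical
# triplet free energies (kcal/mol); real values come from Tables 2 and 3
# and the DNA mismatch parameters of ref 20.
dG37 = {
    "+T+G+T/ACA": -4.0,   # matched LNA triplet (placeholder)
    "+T+G+T/AGA":  0.1,   # +G.G mismatch triplet (placeholder)
    "TGT/ACA":    -2.5,   # matched DNA triplet (placeholder)
    "TGT/AGA":    -0.9,   # G.G mismatch triplet (placeholder)
}

ddG_lna = dG37["+T+G+T/AGA"] - dG37["+T+G+T/ACA"]   # eq 16
ddG_dna = dG37["TGT/AGA"] - dG37["TGT/ACA"]         # eq 17

# A positive difference means LNAs enhance discrimination of this mismatch
print(ddG_lna, ddG_dna, ddG_lna - ddG_dna)          # 4.1, 1.6, 2.5
```

Repeating this difference for all 192 triplets gives the distributions summarized next.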
Figure 4 shows the range of ΔΔG°(LNA) − ΔΔG°(DNA) values for each mismatch type. LNA modification enhances discrimination for 85% of sequences and weakens it for 8%. Free energy differences are insignificant, that is, between −0.2 and 0.2 kcal/mol, for 7% of mismatches. Figure 4 shows that LNAs negatively impact discrimination of +G·T mismatches and some +C·A mismatches. It appears that base pairs flanking a mismatch affect discrimination as well. The +G·C base pairs adjacent to a mismatch decrease the level of discrimination, while +A·T or +T·A base pairs increase it. To quantify this effect, we averaged ΔΔG°(LNA) − ΔΔG°(DNA) differences over the possible triplet sequences containing a specific flanking base pair. The order of increasing mismatch discrimination resulting from the flanking base pair is as follows: +G·C < +C·G < +A·T ≈ +T·A. The largest increases in the level of mismatch discrimination, i.e., the most positive ΔΔG°(LNA) − ΔΔG°(DNA) differences, are seen for the +C·C mismatch in the +T+C+C/ACG triplet (3.4 kcal/mol), the +A·G mismatch in +T+A+G/AGC and +T+A+T/AGA (3.4 kcal/mol), and the +T·C mismatch in +A+T+C/TCG (3.2 kcal/mol). LNAs significantly enhance discrimination of all +G·G, +G·A, +A·A, and +T·T mismatches as well. The free energies in Figure 4 were calculated at 37°C, the temperature of the human body. In some biological applications, for instance, the polymerase chain reaction, oligonucleotides are annealed at higher temperatures. Analysis at 60°C reveals a similar dependence of LNA discriminatory effects on mismatch type (data not shown). However, values of ΔΔG°(LNA) − ΔΔG°(DNA) increased by ∼0.5 kcal/mol for the +G·T, +C·A, and +A·C mismatches. This result suggests that the positive effect of LNA on mismatch discrimination increases with temperature. For example, LNAs improve mismatch discrimination, in relative terms with respect to DNA, for half of +G·T mismatches at 60°C, while such positive effects are rare at 37°C. In our analysis, we assumed negligible heat capacity effects (ΔCp ≈ 0). This has also been assumed for previously published thermodynamic parameters, although recent comprehensive studies 47 detected small heat capacity changes, ∼50 cal mol⁻¹ K⁻¹ bp⁻¹. Because similar mismatch discrimination trends are predicted at different temperatures, the veracity of this assumption does not seem to seriously influence the results of the mismatch analysis.

Validation of Nearest-Neighbor Parameters for LNA Mismatches. To test the accuracy of the mismatch parameters, we measured the stability of the LNA mismatches described in the previous paragraphs. Table 4 lists the sequences and their melting temperatures. Neither dye nor quencher was attached to these oligonucleotides. Their melting temperatures were determined using ultraviolet melting experiments. LNA modifications were predicted (1) to decrease the level of mismatch discrimination of VAL-A and VAL-B sequences, (2) not to affect discrimination of VAL-C and VAL-D sequences, and (3) to enhance mismatch discrimination of VAL-E, VAL-F, and VAL-G sequences. Considering the limitations of the nearest-neighbor model, 19,48 the predicted discrimination effects (ΔTm) agree with experimental measurements for all seven sequence sets. The new LNA mismatch parameters result in an average Tm prediction error of 2.9°C for the sequences in Table 4. The accuracy of DNA mismatch parameters 20 is the same. For DNA or LNA matched duplexes, average errors of predicted melting temperatures are less than 1.3°C. The lower accuracy of mismatch predictions suggests that mismatched duplexes are more likely to deviate from the assumptions of the nearest-neighbor model and two-state transitions. A small perturbation, like a single-nucleotide mismatch, does not usually break down the assumptions of the nearest-neighbor model, but it may increase the magnitude of interactions propagating beyond nearest-neighbor nucleotides.
These long-range interactions are often of electrostatic origin and likely become more significant in buffers with low counterion concentrations (<40 mM Na⁺). We expect weaker H-bonding interactions and increased nucleobase flexibility at the mismatch site. This potentially decreases cooperativity and increases deviations from the two-state melting behavior.

Characteristics of Effects of LNA on Duplex Stability. LNA modifications placed at every second or third nucleotide position are very effective in increasing duplex stability and affinity for complementary targets. 42 Mismatch discrimination is improved most if a triplet of consecutive LNAs is centered on the mismatch site. 5 A single LNA modification usually discriminates less. We were therefore motivated to study the thermodynamics of consecutive LNAs to expand the published nearest-neighbor model of single LNA modifications and improve our understanding of LNA·DNA duplex stability. We employed the fluorescence melting method to measure the stability of modified oligonucleotide duplexes. 28 This new technology allows measurements for large sets of duplexes with unprecedented speed, and its accuracy is similar to the accuracy of the ultraviolet optical melting method. Using the fluorescence method, the experimental errors of ΔH°, ΔS°, ΔG°, and Tm were 8%, 9%, 4%, and 0.4°C, respectively. If the duplex melts in the two-state manner, the thermodynamic values are in agreement between both methods. The transition enthalpies and entropies measured using the fluorescence method differed by <4% from the values determined by the UV melting method. 28 The free energy values agreed within 2.5% when the optimal Texas Red−Iowa Black RQ pair was attached to the duplex terminus. These differences are similar to or smaller than the errors seen in UV melting experiments, where the errors of ΔH°, ΔS°, and ΔG° are ∼8, ∼8, and ∼4%, respectively. 17,41 The fluorescence melting method relies on the dye−quencher pair attached to one of the duplex termini as shown in Figure 1B. When the duplex melts, the dye and the quencher dissociate, increasing the magnitude of the fluorescence signal. Although the terminal dye−quencher pair stabilizes the duplex, it is attached to both the LNA-modified duplex and the core duplex. The Texas Red−Iowa Black RQ labels therefore change the ΔH°, ΔS°, and ΔG° values of both duplexes by the same amount. The thermodynamic impact of LNA modification is determined from the difference between the LNA-modified and core duplexes. We have shown previously that these thermodynamic differences (ΔΔH°, ΔΔS°, and ΔΔG°) are not affected by terminal labels. 28 The stabilizing effect of labels cancels out in this analysis. Using SVD, we determined nearest-neighbor parameters for consecutive LNA·DNA base pairs. The new parameters accurately predict melting temperatures of chimeric LNA·DNA duplexes. The average error was ∼2°C, which is the best accuracy that can be achieved by the nearest-neighbor model. 19,48 If LNA modifications amount to a moderate perturbation of a DNA duplex, the new parameters are most accurate. Analysis of the validation data set (Table S2 of the Supporting Information) suggests that accuracy decreases slightly as the percentage of LNA modifications increases. The duplexes VAL-01−VAL-33 are predicted more accurately (average error of 1.5°C) than the VAL-34−VAL-53 duplexes (3.0°C). The LNA content is low for the VAL-01−VAL-33 subset (20−25%) and varies from 30 to 60% for the latter subset.
We also predicted melting temperatures for 11 duplexes from published sources where one strand was LNA-modified from 89 to 100%. Initiation parameters for terminal LNAs were assumed to be identical to DNA initiation parameters. 16 Table S4 of the Supporting Information shows the results. The average error of Tm predictions was higher for these duplexes (2.7°C) than the error seen for the set of VAL-01−VAL-33 duplexes (1.5°C). If an LNA strand is modified ≥50%, LNAs induce structural changes that could propagate beyond neighboring base pairs. In that case, the nearest-neighbor parameters and the model may be less accurate.

Thermodynamic parameters reveal the nature of the stabilizing effects. The single-strand-to-helix transition of nucleic acids is usually driven by favorable enthalpic changes associated with an increased level of stacking and H-bonding interactions. Entropic changes are unfavorable. Because single strands explore more degrees of freedom than the strands in the relatively stiff duplex structure, duplex formation incurs an entropic loss. Locked nucleic acids have been reported to alter both transition enthalpy and entropy, 21,49 so the origin of LNA effects is uncertain. The free energy change due to LNA residues can be divided into enthalpic (ΔΔH°) and entropic (−TΔΔS°) components, which are presented in the second and last columns, respectively, of Table 1. The values suggest that the stabilizing effect is of enthalpic origin. Consecutive LNAs induce favorable changes in the transition enthalpy, making it more negative by 1−9 kcal/mol per nearest-neighbor doublet. Changes in the entropic contribution to the free energy (the last column of Table 1) are either unfavorable or negligible. The values of −TΔΔS° range from 0 to 7 kcal/mol at 37°C and are smaller in magnitude than ΔΔH°. Thus, we conclude that the higher stability of consecutive LNA·DNA base pairs is mostly the result of favorable contributions to the transition enthalpy. This is the case for all nearest-neighbor doublets, confirming that enthalpy drives stabilization of consecutive LNAs regardless of base sequence. These thermodynamic observations are related to structural changes. Stabilizing enthalpic effects of LNAs are equated with enhanced stacking interactions, potentially improved H-bonding of base pairs, and weakened hydration of the duplex state. 50 The LNA cytosine C5-methyl group, which is not present in native DNA, may also increase the stacking energies due to additional van der Waals interactions with neighboring bases. 45 The entropic contributions of LNAs originate from backbone conformational preorganization, which is the result of restrictions of ribose flexibility in the C3′-endo (N-type) conformation. 3,51 Because the modified ribose is similarly constrained in the single-strand and in the duplex conformations, it has been argued that a smaller entropic loss occurs upon formation of LNA·DNA rather than DNA·DNA base pairs. While we observe that the stabilization of consecutive LNAs is driven by enthalpic changes, McTigue et al. reported that the stabilizing effects of a single LNA modification are mostly entropic in origin. 21 Taken together, these findings suggest that the entropic changes characterized by restriction of nucleotide local conformations are achieved by the introduction of a single LNA nucleotide. Additional adjacent LNAs stabilize the duplex further by favorable enthalpic changes.
This mechanism may explain the conflicting reports in the literature regarding the origin of LNA stabilization. Structural studies have shown that both isolated and consecutive LNA residues restrict the ribose conformation space and introduce structural changes in the double helix toward the A-form. For example, LNAs widen the minor groove and decrease the values of the rise and the twist. 51−54 A ¹H NMR experiment with the C+TGA+TA+TGC sequence, which contains only isolated LNA modifications, failed to show significant changes in base stacking. 52 In contrast, LNAs in the C+TGC+T+TC+TGC sequence containing consecutive modifications enhanced base stacking. 53 Our fluorescence experiments using 2-aminopurine also detected enhanced stacking interactions in LNA triplets. 5 These apparent discrepancies can be reconciled by assuming that the energetic impact of a single LNA in the duplex interior is dominated by entropic changes, and that the subsequent addition of consecutive LNAs stabilizes duplexes by favorable enthalpic changes that are associated with enhanced stacking interactions. Energetics of LNA modifications introduced at the duplex terminus may have a different character. Kaur et al. measured the impacts of isolated LNA modifications at various positions. 49 The interior modifications decreased the entropic loss in agreement with our rationale, but the stabilizing effects of the terminal modification were driven by favorable enthalpic changes. We have not studied consecutive LNAs at the duplex terminus. The A-form helical conformation that is preferred by LNA·DNA duplexes is also dominant in RNA·RNA and RNA·DNA duplexes. In fact, the ribose puckering of the LNA·DNA duplex closely resembles the puckering of the RNA·DNA hybrid. 54 However, the structural similarity does not imply the same thermodynamic parameters. The LNA·DNA nearest-neighbor doublets are on average 1.4 kcal/mol more stable than RNA·DNA doublets. 19 For example, the ΔG°37 of +C+C/GG is −4.1 kcal/mol, while rCrC/GG is only half as stabilizing, −2.1 kcal/mol. The sequence dependence of the parameters is also different. The least stable LNA doublet is +A+A/TT, while the rArA/TT doublet is more stable than five other RNA·DNA doublets. These significant differences reveal that thermodynamic parameters of RNA·DNA duplexes are not good approximations of LNA·DNA thermodynamics. The different composition of the ribose moiety, different patterns of hydration in the minor groove, the extra methyl group of +C, and subtle variations of the helical structure potentially explain these thermodynamic differences.

Enhanced Mismatch Discrimination Is Not Unique for Locked Nucleic Acids. We show in Results that LNA·DNA and RNA·RNA mismatches exhibit a similar trend of stabilities, which deviates from the stability trend of DNA mismatches. To inquire whether mismatch discrimination is similarly enhanced in RNA duplexes, like it is enhanced in LNAs, we predicted the free energy of mismatch discrimination (eq 16) for RNA, RNA·DNA, LNA, and DNA triplets. For each mismatch type, the ΔΔG° values were averaged over the 16 possible triplet sequences containing the central mismatch. Predictions were based on the established nearest-neighbor parameters for matched LNA, DNA, and RNA base pairs (Table 2 and refs 16, 17, and 19). For mismatches, the complete set of thermodynamic parameters is available for LNA·DNA and DNA·DNA pairs (Table 3 and ref 20).
Because parameters for many RNA·DNA mismatches are unknown, we averaged ΔΔG° for the eight rG·T sequence contexts reported by the Sugimoto group 45 and predicted the average ΔΔG° values for rA·A, rG·G, and rC·C mismatches. Their parameters were recently determined. 46 The rA·rA, rG·rG, and rC·rC RNA·RNA mismatches were approximated by the algorithm of Davis and Znosko. 44 Mathews, Sabina, Zuker, and Turner parameters were used for the rG·rU mismatch. 18 The RNA calculations were conducted with MELTING version 5.0.3. 55 Figure 5 shows the average free energies of duplex destabilization due to a mismatch. The general trend of increasing discriminatory power for the A·A, G·G, and C·C mismatches is as follows: DNA·DNA ≪ RNA·DNA < RNA·RNA ≤ LNA·DNA. These mismatches destabilize the LNA·DNA and RNA·RNA duplexes more than the DNA·DNA duplexes. To a lesser degree, the level of mismatch discrimination also increases in RNA·DNA duplexes. The opposite trend is seen for the wobble G·T base pair. The DNA·DNA mismatch shows the strongest discrimination. The +G·T, rG·T, and rG·rU mismatches discriminate less. This analysis suggests that the enhanced mismatch discrimination is not a unique property of locked nucleic acids but rather the result of structural changes of nucleic acids from the B-form to the A-form. DNA·DNA duplexes in water solutions are in B-like conformations. The RNA·DNA hybrids fold into structures that are intermediates of the A- and B-forms. The RNA·RNA duplexes occur in the A-form conformation, which is also the structure of LNA·DNA base pairs. 53,54 As the conformational equilibrium is shifted toward the A-form, the level of mismatch discrimination increases. This is likely the result of energetic changes in stacking interactions, H-bonding of base pairs, and the hydration envelope when the duplex turns to the A-like conformation. One significant structural change from the B-form to the A-form is the compaction of the rise between base pairs along the helical axis. The rise is significantly smaller in the A-form (0.26 nm) than in the B-form (0.34 nm). Because of the shorter distances, LNA nucleotides in the A-like structure may engage in stronger stacking interactions, which are disrupted by mismatches. If our hypothesis is correct, enhancements of mismatch discrimination can be expected for any modification that shifts the conformational equilibrium from the B-form to the A-form, e.g., 2′-O-methyl-RNA, 2′-O-[2-(methoxy)ethyl]-RNA, 2′-deoxy-2′-fluoro-RNA, and N3′→P5′-phosphoramidate-DNA. 56−59 As discussed earlier, +G·T mismatches are the exception; their level of mismatch discrimination decreases when LNA-modified guanine is introduced at the mismatch site. This could be a result of improved stacking interactions of guanine with neighboring bases. These stacking interactions are not significantly weakened by a thymine mismatch because the G·T pair is stabilized by two hydrogen bonds and is well-stacked in the duplex structure. Small pyrimidine bases are expected to stack less than large purine bases. This may explain the opposite discriminatory effects of LNAs in +T·G versus +G·T mismatches. Chemical differences among LNA, RNA, and DNA are the composition and conformation of the ribose moiety. Another difference is the C5-methyl group in pyrimidine nucleobases. In RNA, uracil and cytosine are typically unmethylated. In DNA, thymine is C5-methylated and cytosine is not. In LNA nucleotides, both thymine and cytosine are C5-methylated.
Wang and Kool investigated the thermodynamic effects of C5-methyl and 2′-OH groups in DNA and RNA duplexes. 60 The methyl group stabilized duplexes on average by 0.25 kcal/mol, and its effects on ΔG° were largely independent of 2′-hydroxyl effects. The C5-methyl appeared to enhance base stacking. Ziomek et al. studied 5-alkyl and 5-halogen analogues of uracil in (rArUrCrUrArGrArU)₂ duplexes. 61 The methyl group stabilized the RNA duplex slightly (ΔΔG° < 0.1 kcal/mol). Sugimoto et al. examined the thermodynamics of pyrimidine methyl groups in RNA·DNA mismatches. 45 The rG·dU mismatches were found to be less stable than rG·dT mismatches regardless of the neighboring sequence context. The free energy contribution of the thymine C5-methyl was estimated to vary from 0.1 to 0.5 kcal/mol. The methyl moiety likely has a similar thermodynamic impact on the LNA cytosine residue. 62 The extra methyl group of pyrimidines is not the driver of the mismatch discrimination trends. The increase in the level of discrimination occurs in purine mismatches (+A·A and +G·G) and in sequence contexts that do not contain methylated LNA cytosine. For example, LNAs increase the free energy of +A·A mismatch discrimination in the center of the +G+A+G triplet by 1.0 kcal/mol. Further, relative to DNA, the extra C5-methyl group is present in LNA cytosine, but not in RNA cytosine. In both cases, the level of mismatch discrimination increases; i.e., both LNA and RNA duplexes have more discriminatory power than DNA. The presence of the C5-methyl group does not appear to be essential for the discriminatory effects.

Oligonucleotide Design and Online Software. Sufficient mismatch discrimination is important for many oligonucleotide applications. Locked nucleic acids enhance discrimination through two impacts. First, LNAs increase the stability of oligonucleotide probes. This allows the use of shorter sequences with more discriminatory power, because a mismatch has a much larger impact on duplex stability in shorter sequences than in longer ones. 5 This length effect is very significant in duplexes with <30 bp. The ΔΔG° and ΔTm differences between matched and single-base mismatched duplexes can double when the duplex length is decreased from 25 to 17 bp. Second, locked nucleic acids can also increase specificity directly if they are located at or next to the mismatch site. We have discovered that a triplet of LNA residues containing the mismatch in the center has the largest discriminatory power; a single LNA modification usually discriminates less. 5 We therefore recommend using the LNA triplet at the mismatch site. This design will increase the discriminatory power for a majority of mismatches (in particular, for A·G, T·C, C·C, G·G, A·A, and T·T). The new results also pinpoint several anomalies. LNAs in some +G·T and +C·A mismatches impact discrimination negatively. In these cases, it is not advised to introduce the LNA modifications at the mismatch site, but LNAs could be placed ≥2 bp from the mismatch to increase the stability of the probe−target duplex and make the probe shorter. The short probe will likely exhibit more discriminatory power. Alternatively, the probe could be redesigned to target the complementary strand if it is available in the biological sample. This will change the +G·T mismatch into the +T·G mismatch; the latter is more likely to show positive effects of LNA on discrimination. It is also important to optimize the location of mismatches within the probe.
Mismatches at the terminus or adjacent to the terminus (penultimate mismatches) show significantly less discrimination than mismatches in the duplex interior. 5,20 It is preferable to place mismatches at least 3 bp from the termini of the probe−target duplex. Although a mismatch site in the center of the duplex maximizes the discrimination, it is not essential for the mismatch to be located exactly in the center of the oligonucleotide probe. As long as the mismatch is positioned in the interior of the duplex and not next to the termini, its discriminatory power (ΔΔG°) will be very close to the maximum. To help design optimal LNA oligonucleotides, free software is available at the IDT websites http://biophysics.idtdna.com and http://www.idtdna.com. The web tools predict melting temperatures, free energies, and the extent of hybridization using the latest nearest-neighbor parameters, including the parameters from Tables 2 and 3. It is important to enter the conditions of the experiments (e.g., cation and DNA concentrations) to obtain relevant predictions. Users can test the effects of LNA modifications and mismatches at any location within their sequence. Potential LNA probes can be compared with unmodified probes to estimate the benefits of modifications. The probes can be ranked by their mismatch discrimination energetics (ΔΔG° and ΔTm) and tuned to the hybridization temperature of a specific application. It is often optimal if the probe has a melting temperature 3−5°C above the annealing temperature. The perfectly matched probe−target duplex will be stable, while the mismatched duplex is likely to be unstable under those conditions and will not give a false positive signal. Many applications also require that chimeric oligonucleotides bind effectively and exclusively to DNA complements. The design must therefore exclude sequences that can form stable hairpins, dimers, and other self-folding structures. This is important because LNA·LNA base pairs are more stable than isosequential LNA·DNA base pairs. 63 Because thermodynamic parameters for LNA·LNA base pairs, LNA bulges, and hairpin loops are unknown, it is not currently possible to accurately predict the propensity of an LNA oligonucleotide to form self-folding structures. A simple approach is to avoid long stretches of consecutive LNAs. This approach makes stable LNA·LNA duplexes less likely to appear but also unnecessarily impedes probe design. Accurate predictions of LNA·LNA base pair stability would be useful. The tendency of a base sequence to form hairpins can be estimated with the hairpin function of the IDT OligoAnalyzer tool. 64 The self-dimer function shows the potentially stable structures that can form between two molecules. The heterodimer function estimates interactions between the probe and the primers. If a predicted structure contains several consecutive LNA·LNA base pairs, it could be stable enough to compete with the formation of the probe−target duplex, and the assay would be negatively impacted. For such sequences, a single LNA modification could be a better choice than the LNA triplet. The design of real-time PCR hydrolysis probes (e.g., TaqMan probes) calls for additional considerations. This family of assays relies on the 5′ exonuclease activity of the polymerase, which degrades the probe and releases the dye attached to the 5′ terminus of the probe.
Locked nucleic acids cannot be introduced at the 5′ terminus of the probe or at the adjacent nucleotide because they would increase nuclease resistance and interfere with the desired probe degradation.

Future Challenges. Although the new parameter set is a significant addition toward a complete thermodynamic model of LNA modifications, parameters for some important LNA structures have yet to be determined (mismatches adjacent to a single LNA modification, LNA·LNA base pairs, bulges, and tandem mismatches). We also do not have parameters for LNAs at duplex termini, although such modifications are employed in PCR primers. The parameters in Table 3 were determined for mismatches located in the interior of a duplex and will not be accurate at the terminus. A mismatch in the terminal or penultimate position often affects duplex stability less than the same mismatch located in the interior, i.e., ≥3 bp from the terminus of the duplex. 20

ASSOCIATED CONTENT. Supporting Information: thermodynamic values for studied duplexes, figures of 1/Tm versus ln CT fits, unique thermodynamic parameters for triplets containing single-base mismatches, and melting temperatures of validation data sets. This material is available free of charge via the Internet at http://pubs.acs.org.

ACKNOWLEDGMENTS. We thank Derek M. Thomas for assistance with capillary electrophoresis and mass spectrometry tests of oligonucleotide samples.
2014-10-01T00:00:00.000Z
2011-09-19T00:00:00.000
{ "year": 2011, "sha1": "0f6f740c8b66d6797f183f364c5e904739371505", "oa_license": "acs-specific: authorchoice/editors choice usage agreement", "oa_url": "https://doi.org/10.1021/bi200904e", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "0f6f740c8b66d6797f183f364c5e904739371505", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
252730571
pes2o/s2orc
v3-fos-license
Biochar and Compost in the Soil: A Bibliometric Analysis of Scientific Research

Biochar is a carbonized material obtained from the pyrolysis of biomass produced in a limiting environment of zero or very low oxygen. Its interest lies in its versatility for different applications in water treatment, soil pollution remediation, mitigation of greenhouse gases, etc. The synergy of this product with other amendments such as compost has been studied for different applications in the soil, including environmental remediation, crop yield, etc. The aim of the research is to identify the relevant aspects of the scientific literature on biochar, compost and soil through a bibliometric analysis, for which 753 articles were selected from the Scopus database using the keywords "biochar", "compost" and "soil". This research used R software, specifically the Bibliometrix package, to perform descriptive analysis, analysis of sources and authors, document metrics, citation and co-citation analysis, co-occurrence networks, co-word analysis, and collaboration analysis. Results showed that Zhang Z is the author with the greatest number of documents and the highest H index. Science of The Total Environment, Bioresource Technology, and Agronomy are the three most relevant sources. The keywords with the strongest links and most frequent use were biochar (538 occurrences), composting (349 occurrences), compost (436 occurrences), charcoal (295 occurrences), and soil (255 occurrences). China is the country with the most collaboration. It is hoped that the bibliometric review will help to identify current research trends and provide information on the application of biochar and compost in the soil.

Introduction. A large amount of solid waste is generated as the world population increases and as a consequence of the consumption behavior of inhabitants, so efficient management of solid waste is crucial for its successful disposal (Hoornweg et al., 2013). Biochar fits well as a method that contributes to adequate waste management, valorizing waste through pyrolysis and generating an amendment for the soil; the numerous strategies to use the final biochar product facilitate zero waste and the development of a circular economy (Hu et al., 2021). Biochar is the product obtained from biomass pyrolysis under limiting conditions or in the absence of oxygen (Lehmann et al., 2011). The interest in biochar lies in the fact that it can be produced from different biomass, such as crops, agricultural remnants, and industrial and municipal solid waste (Yaashikaa et al., 2020). Its application has shown efficiency in different areas such as water treatment; thus, many studies have been dedicated to researching the application of biochar for the removal of contaminants from aqueous solutions (Tan et al., 2015) and the treatment of soils polluted with heavy metals. Kong et al. (2021) have mentioned that the most important factor determining the remediation effect in soils is the diversity of physical and chemical properties of biochar, such as the surface area, porosity and functional groups, which vary with the type of biomass consumed in pyrolysis and the control of process parameters. Furthermore, its applications include the reduction of greenhouse gas emissions through carbon capture and storage (CCS) and increased soil fertility in an environmentally friendly manner (Lee et al., 2019).
IPCC (2019) provides updated guidance on specific issues to consider in national greenhouse gas inventories; the guide now includes the reporting of non-CO2 emissions from biochar production and CO2 and CH4 emissions from flooded land. From the past to the present, most people have used charcoal as the main source of energy for cooking because charcoal has several advantages, such as easy storage, high calorific value, and low cost. It is easy to store charcoal for longer periods, and it is more durable than wood. The difference between biochar and charcoal lies in their application: the former is used as a soil amendment, while the latter is used as fuel (Sangsuk et al., 2020). There are other amendments that can be applied to the soil, such as charcoal, activated carbon, manure, ashes, lime, and compost (Palansooriya et al., 2020). Due to its wide versatility and applications, biochar has been tested with different amendments such as compost, and the interest in combining the two has been increasing over the years. Compost is a product obtained from municipal, agricultural or forestry organic waste (Hu et al., 2022), and it is produced under aerobic conditions (Lim et al., 2016). The application of biochar and compost in the soil can occur in two ways. The first is a mixture of compost and biochar, that is, biochar-compost, where the biochar and the compost are produced separately from different residues and, once each meets its quality requirements, they are mixed for soil application. The second interaction is when biochar is added to the composting process, which is known as co-composting. The biochar-compost amendment interaction can improve the physical and chemical properties of the soil by providing certain nutrients. It can also be used to recover degraded soils, which makes more agricultural land available while increasing crop yields, so that the need for expansion of the agricultural land area decreases (Khan et al., 2016). As for co-composting, the integration of biochar into the compost at the production stage should result in a mature amendment suitable for general soil improvement, with the added value of maximum potential as a biosorbent for metals in solution (Wang et al., 2019). Bibliometrics can be defined as the application of statistical techniques to understand and analyze global research in a particular field from publications retrieved from a database of academic literature. Bibliometrics helps to identify current research trends, provides information on specific and general aspects over time, and contributes to the development of important areas. Recently, bibliometric studies have been carried out on the topic of biochar research (Abdeljaoued et al., 2020; Qin et al., 2022), including trends in research on the effects of biochar in the soil (Yan et al., 2020). On the other hand, according to the literature, biochar and compost have positive effects on different research topics, for example, on the mobility and toxicity of metals (Beesley et al., 2014), on the physical, chemical, and microbiological properties during the co-composting of spent mushroom compost and biochar (Zhang and Sun, 2014), and on soil quality, crop yield, and greenhouse gases in agricultural soils (Agegnehu et al., 2016). That is why the need arises in the literature for a bibliometric review focused on the analysis of both amendments and their application in the soil.
In this sense, the aim of this research is to carry out a bibliometric analysis in order to explain recent trends, collaboration, and citations, among other relevant aspects.

Data source and search criteria. The bibliometric analysis carried out in this research followed the procedure described by Zupic and Čater (2014) shown in Fig. 1, which includes five phases. Data collection consisted of the use of the open-source software R, using the Bibliometrix package developed by Aria and Cuccurullo (2017). The data from the documents obtained were exported and analyzed in the bibliometric software. Data analysis was conducted to obtain a descriptive bibliometric analysis of all the articles. Data visualizations were conducted to analyze the productivity of authors and their evolution over time, as well as the group of researchers who remain active and who represent a large part of the general effort of scientific production; therefore, it was important to classify the most productive researchers, which was possible with Lotka's law (Kilicoglu and Mehmetcik, 2021). The productivity of the authors is described by this law (Lotka, 1926), since it explains the relationship between the number of authors and the number of articles. It is represented mathematically by the following equation:

y = C / x^n

where y is the expected percentage of authors publishing x articles (Kilicoglu and Mehmetcik, 2021), and n and C are constants (Kilicoglu and Mehmetcik, 2021; Urbizagástegui-Alvarado, 1999). In addition, the analysis of the authors was carried out based on their H index, author-institution-country collaboration, corresponding author's country, most cited countries, and most relevant affiliations. Furthermore, Bradford's law was analyzed to obtain the main journals in a period, classified according to their productivity. This law divides the journals into three zones, and each zone contains the same number of articles (Kilicoglu and Mehmetcik, 2021). Bradford's law is represented mathematically by the following equation:

k = (e^γ · Ym)^(1/P) = (1.781 · Ym)^(1/P)

where γ is 0.5772, Ym is equal to the productivity of the most productive journal, and P is the number of groups (Andrés, 2009). Source dynamics analysis was conducted to evaluate the trend of the sources over the years. Regarding the analysis of documents, the top 10 were shown. In addition, the trend of the articles regarding biochar, compost and soil in its beginnings and the direction of research in recent years are discussed. Citation and co-citation analysis was conducted to obtain a bibliometric historiography. This mapping creates a historical network of direct citations from the most cited works and then visualizes the network in chronological order (Garfield, 2016). Furthermore, co-occurrence network and co-word analysis were conducted to create a density map showing the co-occurrence of keywords. In the item density visualization, items are represented by their label in a similar way as in the network visualization (Su et al., 2022). Collaboration analysis was carried out to visualize which institutions have the greatest impact in collaboration. Finally, interpretation was conducted to compare the information with other studies.
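Both laws are easy to evaluate numerically. The sketch below assumes the classical Lotka exponent n = 2, truncates the distribution at 10 articles per author, and uses a placeholder value for the most productive journal's output Ym.

```python
import math

# --- Lotka's law: expected share of authors publishing x articles ---
# Classical exponent n = 2 assumed; C normalizes shares over 1..10 articles.
n = 2
C = 1 / sum(1 / x**n for x in range(1, 11))
for x in (1, 2, 6):
    print(f"authors with {x} article(s): {100 * C / x**n:.1f}%")
# ~64.5% with one article -- the same order as the 73% observed here

# --- Bradford's law: multiplier k between successive journal zones ---
gamma, Ym, P = 0.5772, 30, 3   # Ym (top journal's output) is a placeholder
k = (math.exp(gamma) * Ym) ** (1 / P)
print(f"Bradford multiplier k = {k:.2f}")
```

With the placeholder Ym, each successive Bradford zone would need roughly k ≈ 3.8 times more journals than the previous one to contribute the same number of articles.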
Results and Discussion

Descriptive analysis. Table 1 shows the main information of the bibliometric analysis that was carried out in the research. A total of 753 articles published within the period 2008−2021 were extracted from the Scopus database. It is highlighted that a total of 2643 authors published in that period, using a total of 1938 keywords and generating an average of 27.95 citations per document. Fig. 2 shows the number of articles in the period from 2008 to 2021. It is interesting to note that there is a growing trend of publications regarding biochar, compost and soil, with a marked increase in articles published in 2020 and 2021. On the other hand, in 2008 there was only one article, and the topic started with the keywords "biochar Brazil" and "carbon" (Steiner et al., 2008).

Authors. The top 20 authors with the most published papers in biochar, compost and soil application research are shown in Fig. 3. In the first place is Zhang Z, with a total of 19 scientific articles. In the second place is Chen H, with a percentage distribution of 2.1% and 16 documents. It is necessary to highlight that this graph considers all types of authorship (corresponding author, co-authors, etc.). With a smaller amount and the same distribution are Ali S, Lebrun M, and Nandillon R, with a total of 10 documents each and a percentage contribution of 1.3%. The production of the top authors over time is shown in Fig. 4. This analysis is important because it allows showing the evolution of each author over the years. In Fig. 4, a bluer circle indicates a greater total citation (TC) count per year, while a larger circle indicates a greater number of articles (N. Articles). From 2008 to 2021, the most productive author was Zhang Z, with a total of 19 articles, while for the year 2017 alone, the most productive author was Chen H, with a total of 8 articles. Glaser B has been one of the pioneering researchers on topics related to biochar, compost and its application in the soil, with an article published in 2008 (Steiner et al., 2008), and has continued to publish since. In 2016, the two articles of Ok YS collected the greatest number of citations per year (67.29). The productive authors Zhang Z and Chen H collaborated on three articles in 2017, focusing on the co-composting process, where they assessed the effect of biochar on bacterial and fungal diversities using sludge and organic waste. They also assessed the application of biochar and zeolite and their mixture on nitrogen conservation and organic matter transformation during pig manure composting (Kumar Awasthi et al., 2017), and the application of biochar in the composting of dewatered fresh sewage sludge (DFSS) and wheat straw. From 2015 onwards, most authors show an active production trend until 2021, except for Covelo EF, who did not publish articles in 2020 and 2021, and Bourgerie S, who began publishing in 2019 with many articles. The application of Lotka's law to the data, as illustrated in Fig. 5, indicates the number of articles to which each author contributed. In this research, 73% of authors contributed one study, 15% contributed two articles, and less than 1% of authors contributed at least six articles. A common way to evaluate the impact of authors is through their indices, such as the H index, which measures the productivity and the citation impact of publications and is based on the set of the most cited articles (quantity) and the number of citations (quality) received in other research; this is why this index is really important for bibliometric analyses. Fig. 6 shows the top authors ranked by H index. Collaboration with other countries is shown in Fig. 8.
China leads with the largest number of documents, a total of 175, which includes 124 single-country publications and 51 publications from collaborations between countries; the USA is in second place with a total of 44 articles, including 35 single-country publications and 9 publications from collaborations between countries. Fig. 9 illustrates the total number of citations that each country received on the topic of biochar, compost and soil research. The countries that exceed 2000 citations are China, the United Kingdom and Australia. China is in first place, with a total of 5779 citations. According to Fig. 10 (most relevant affiliations), the institution that leads the list is Northwest A&F University with a total of 197 articles. Fig. 12 shows the source local impact by H index. Science of The Total Environment is in first place with an H index of 28, the second place is occupied by Bioresource Technology with an H index of 24, and the third place is occupied by Chemosphere, with an H index of 21. Some sources have already positioned themselves in these topics, so authors may decide to publish their research there to gain visibility. Bradford's law is one of the methods used to determine the leading sources in a topic during a period of time. Fig. 13 lists the leading sources in biochar, compost and soil research, since Bradford's law classifies sources according to their productivity. The first, central zone contains a limited number of sources, while each subsequent zone contains an increasing number of sources. It is illustrated that Science of The Total Environment, Bioresource Technology, and Agronomy are the sources with the highest productivity within the field of study of this research. Documents A citation represents the recognition of an author's contribution to the field of science, so determining which publications have been the most cited is important not only for the authors, who gain reputation, but also for evaluating the impact of a journal. Table 2 lists the top 10 most cited publications. The research of Beesley et al. (2010) has the highest number of citations, with a total of 793. It corresponds to one of the first studies on "biochar"; its aim was to assess the effects of biochar, compost and their mixtures on the mobility, bioavailability and toxicity of inorganic (Cu, As) and organic contaminants. Its main conclusion was that biochar has a greater potential than green waste compost to beneficially reduce the bioavailability of organic and inorganic contaminants in this multi-element contaminated soil, being especially effective in reducing phytotoxic concentrations of Cd and water-soluble Zn as well as heavier organic contaminants. The research of Steiner et al. (2008), with a total of 435 citations, is in second place. The topic of this study was to establish a field test in the central Amazon in order to study the influence of charcoal (vegetable carbon) and compost produced from forest biomass, fruit residues, manure and kitchen residues on the retention of nitrogen in the soil; it was concluded that the greater retention of this element significantly improved the nitrogen cycle in the plots that received charcoal. Karami et al. (2011) are in third place, with a total of 388 citations.
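Stepping back to Bradford's law used for Fig. 13 above: the zoning can be sketched in a few lines of Python before the most cited documents are examined in more detail. Sources are ranked by productivity and split into three zones holding roughly equal numbers of articles; the journal counts below are hypothetical.

```python
def bradford_zones(source_counts, zones=3):
    """Rank sources by article count and split them into Bradford zones
    that each hold roughly the same number of articles."""
    ranked = sorted(source_counts.items(), key=lambda kv: kv[1], reverse=True)
    total = sum(count for _, count in ranked)
    result = {z: [] for z in range(1, zones + 1)}
    cumulative, zone = 0, 1
    for name, count in ranked:
        result[zone].append(name)
        cumulative += count
        if cumulative >= zone * total / zones and zone < zones:
            zone += 1
    return result

journals = {"STOTEN": 60, "Bioresour. Technol.": 45, "Agronomy": 30,
            "Chemosphere": 20, "Journal A": 15, "Journal B": 10}
print(bradford_zones(journals))
```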
The research of Karami et al. (2011), with the keywords "biochar", "compost", "heavy metal", "porewater remedy", and "ryegrass", assessed the effect of green waste compost by itself and in combination with biochar, using ryegrass. It concluded that the two amendments have opposing, metal-specific suitability for treating this contaminated soil, depending on whether a maximum reduction in plant tissue metal concentration or a maximum reduction in the harvestable amount of metal is required. Regarding the keywords of these studies, the terms biochar and compost predominate. However, in 2014 (position 10 of Table 2), the keyword "co-composting" was included. This co-process was also studied by Agegnehu et al. (2016) in the journal Science of The Total Environment. When research began on biochar, compost, and soil, the topics used to concern general applications of biochar, compost and their mixture on plant performance, such as in Avena sativa L. (Schulz and Glaser, 2012), the assessment of the yield of Lactuca sativa and Brassica chinensis (Carter et al., 2013), and crop yield, soil fertility and greenhouse gas emissions (Agegnehu et al., 2015). In addition, research assessed biochar in the composting process. For example, Jindo et al. (2012) assessed the quality of a composting mixture prepared with poultry manure and different local organic wastes by adding biochar. Another common topic was the status of heavy metals in the composting process, for example, by adding biochar and humic acid (Hou et al., 2014). Likewise, studies focused on the removal of heavy metals such as Cu, Ni, Pb and Zn from the soil, assisted with Brassica juncea L. (Rodríguez-Vila et al., 2015), on the reduction of the bioavailability of Cd, Cu, Zn and Pb in wetland soils, and on phytoremediation assisted with biochar and compost, using Helianthus annuus (Chirakkara and Reddy, 2015). Recent articles focus on more specific aspects of species performance. Biochar, compost and mycorrhizae are mixed to avoid Bary disease in soybeans (Safaei Asadabadi et al., 2021). In addition, the performance of low concentrations of lead in contaminated soils is evaluated using biochar, compost and rhizobacteria (Zafar-ul-Hye et al., 2021). In the composting process, the application of biochar in the maturation phase of aerobic composting assisted by an electric field is evaluated (Fu et al., 2021). Citation and co-citation analysis Co-occurrence network The analysis of the co-occurrence of keywords is shown in Fig. 16. This analysis is a tool to identify critical points and research frontiers (Ye et al., 2020). The more two keywords co-occur, the closer their relationship (Chen et al., 2016). The words with the highest co-occurrence are biochar, composting, and compost. The colors indicate different clusters (green, blue and red), and each cluster is generated from the relationships of the elements, yielding sets of closely related elements. In the period from 2008 to 2021, 753 documents and 4320 keywords related to biochar, compost and soil were found. The keywords with the greatest link strength and most frequent use were biochar (538 occurrences), compost (436 occurrences), composting (349 occurrences), charcoal (295 occurrences), and soil (255 occurrences). Co-word analysis A Multiple Correspondence Analysis (MCA) was performed to understand the conceptual structure of this research. Fig. 17 illustrates 3 clusters.
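The co-occurrence counts reported for Fig. 16 above come from simple pairwise counting of keywords within each article; a minimal Python sketch with made-up keyword lists is shown below, before the three MCA clusters are described.

```python
from collections import Counter
from itertools import combinations

def keyword_cooccurrence(keyword_lists):
    """Count how often each unordered pair of keywords appears in the same article."""
    pairs = Counter()
    for keywords in keyword_lists:
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

articles = [["biochar", "compost", "soil"],
            ["biochar", "composting", "heavy metal"],
            ["biochar", "compost", "charcoal"]]
print(keyword_cooccurrence(articles).most_common(3))
```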
The words in the green cluster included soil pollutant, soil pollutants, bioremediation, soil remediation, soil pollution, heavy metal, and copper. This group was related to the bioremediation of contaminated soils. The words in the red cluster are the following: chemistry, nonhuman, unclassified drug, priority journal, microbial community, biochars, phosphorous, composting, biochar, compost, soils, soil amendment, biomass, organic carbon, charcoal, soil, article, controlled study. This group was oriented toward the interactions of the amendments in the soil. Finally, the blue cluster has the following words: nutrients, carbon, nitrogen fertilizers, manure, manures, animal. This cluster was oriented toward the sources from which the biochar or compost were obtained and toward some basic properties of the effects that the amendments can produce. In Fig. 18, the thematic map groups the keywords of the authors according to their relevance and field of research. This map is divided into four sections: i) motor themes, ii) basic themes, iii) emerging or declining themes, and iv) very specialized/niche themes. In this research, the keywords "composting", "charcoal", "soil", "article", and "controlled study" belong to the motor themes category. This quadrant is characterized by high centrality and density, which means that these are well-developed and important topics. On the other hand, "biochar", "compost", "soil amendment", "soils", and "biomass" fall within the category of basic themes, which are important for research but not well developed; this is consistent with the fact that biochar-compost issues such as mixtures and application are still scarce in the scientific literature. These themes present low density and high centrality, and more research and analysis are needed in the future. In the lower left quadrant are emerging or declining themes. In this research, the themes "microbiology", "bacteria", "enzyme activity", "microbial community", and "nonhuman" emerge. For example, Jiang et al. (2022) point out that the combined addition of biochar and garbage enzyme (GE) improves the humification and succession of the fungal community during the composting of sewage sludge; however, few studies pay attention to the effect of GE on the humification process, and the influence of the combined addition of GE and biochar on the composting process has also not been well evaluated, so more research is needed in this field. Furthermore, some research on applying bacteria (microorganisms) assessed the effect of green waste biochar and wood biochar, together with compost and the plant growth-promoting rhizobacterium Bacillus subtilis, on tomato yield (Solanum lycopersicum L.) (Rasool et al., 2021); assessed the integrated application to the soil of biochar, compost, fruit and vegetable waste, and Bacillus subtilis (SMBL 1), alone and in combined form (Anwar et al., 2021); and evaluated the response of the structure of the arbuscular mycorrhizal fungal community to the application of fertilizers, biochar and compost in a karst mountainous area over 24 months (Yan et al., 2021). In the case of co-composting, biochar and solid digestate from anaerobic digestion were also assessed (Casini et al., 2021). Fig. 19 illustrates the network of authors who have researched biochar, compost, and their application to the soil for the period from 2008 to 2021. Each color represents a cluster of associated authors. The larger the circle, the greater the number of citations received.
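Returning to the thematic map of Fig. 18: its four quadrants follow a simple rule comparing each cluster's centrality and density against the median values. The thresholding by medians is the usual convention in this kind of strategic diagram and is assumed here, not stated in the study; the example values are hypothetical.

```python
def thematic_quadrant(centrality, density, centrality_median, density_median):
    """Place a keyword cluster in one of the four thematic-map quadrants."""
    if centrality >= centrality_median and density >= density_median:
        return "motor theme"             # well developed and important
    if centrality >= centrality_median:
        return "basic theme"             # important but not well developed
    if density >= density_median:
        return "niche theme"             # specialized, internally developed
    return "emerging or declining theme"

print(thematic_quadrant(0.8, 0.3, 0.5, 0.5))  # -> 'basic theme', like "biochar"
```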
In this network, the lines between the researchers represent links, and the distance between researchers reflects the strength of their relationship. It is observed that the publications are dispersed and that collaboration is restricted to certain groups of authors. Collaboration analysis Another dimension studied in the bibliometric analysis is the collaboration between institutions. Fig. 20 shows different clusters, whose size is related to the number of collaborations that have been carried out. It is observed that there is collaboration between certain groups of institutions and that the University of Agriculture leads in this aspect. Finally, the collaboration between countries is shown in Fig. 21. It is observed that China continuously collaborates with different countries, while countries like Hong Kong, Belgium, Japan, Indonesia, Denmark, and Malaysia should strengthen ties. Conclusions There is little bibliometric literature on the synergy that may exist between biochar and other amendments, such as compost, for application in the soil. This study carried out a bibliometric analysis using the keywords "biochar", "compost" and "soil" for the period 2008-2021, identifying 753 articles. According to the descriptive analysis, a strong trend of biochar, compost and soil publications is observed in the last two years. For the author metrics, Zhang Z is the author with the largest number of documents, 19 in total, with a distribution of 2.5% in the present study. Zhang Z also has the highest H index, as does Glaser B, who has been one of the pioneering researchers on topics related to biochar, compost and its application in the soil. Zhang Z and Chen H are the authors who have collaborated most with other countries. For the analysis of sources, Science of The Total Environment is the one with the largest number of documents and the highest H index, and since 2016 it has positioned itself in the theme of biochar, compost and its application to the soil. For the citation and co-citation analyses, continuous branching is shown from the years 2016 and 2017. For the co-occurrence network, the keywords with the greatest link strength and most frequent use were biochar (538 occurrences), compost (436 occurrences), composting (349 occurrences), charcoal (295 occurrences), and soil (255 occurrences). On the other hand, a strong trend is observed in clearly differentiated fields (multiple correspondence analysis): bioremediation, analysis of soil parameters, and analysis of the quality of manure and compost. For the collaboration analysis, China is the country with the highest collaboration worldwide, and there are collaboration gaps with some countries such as Hong Kong, Belgium, Japan, Indonesia, Denmark, and Malaysia. An interesting finding is that recent articles tend to focus strongly on microbiological analysis, enzymes, and bacteria; these are emerging issues that should be strengthened to better understand the synergy of biochar, compost and microbial activity in the coming years. As pointed out by Jiang et al. (2022), the combined addition of biochar and garbage enzyme (GE) improves the humification and succession of the fungal community during sewage sludge composting. However, few studies pay much attention to the effect of GE on the humification process.
The influences of the combined addition of GE and biochar on the composting process have also not been well evaluated. Finally, more information and research are needed regarding the microbiological field and the interactions of bacteria and enzymes in the application of biochar and compost.
Rabies Virus Infection Induces Type I Interferon Production in an IPS-1 Dependent Manner While Dendritic Cell Activation Relies on IFNAR Signaling As with many viruses, rabies virus (RABV) infection induces type I interferon (IFN) production within the infected host cells. However, RABV has evolved mechanisms by which to inhibit IFN production in order to sustain infection. Here we show that RABV infection of dendritic cells (DC) induces potent type I IFN production and DC activation. Although DCs are infected by RABV, the viral replication is highly suppressed in DCs, rendering the infection non-productive. We exploited this finding in bone marrow derived DCs (BMDC) in order to differentiate which pattern recognition receptor(s) (PRR) is responsible for inducing type I IFN following infection with RABV. Our results indicate that BMDC activation and type I IFN production following a RABV infection are independent of TLR signaling. However, IPS-1 is essential for both BMDC activation and IFN production. Interestingly, we see that the BMDC activation is primarily due to signaling through the IFNAR and only marginally induced by the initial infection. To further identify the receptor recognizing RABV infection, we next analyzed BMDC from Mda-5−/− and RIG-I−/− mice. In the absence of either receptor, there is a significant decrease in BMDC activation at 12 h post infection. However, only RIG-I−/− cells exhibit a delay in type I IFN production. In order to determine the role that IPS-1 plays in vivo, we infected mice with pathogenic RABV. We see that IPS-1−/− mice are more susceptible to infection than IPS-1+/+ mice and have a significantly increased incidence of limb paralysis. Introduction Type I interferon (IFN) was first identified as a "factor" that rendered cells resistant to viral infection [1]. It is now known that following viral infection, cells induce type I IFN, which in turn upregulates the expression of numerous antiviral proteins [2]. This class of cytokines comprises several genes, including multiple IFN-α genes, a single IFN-β gene, and the less well-defined IFN-ω, -ε, -τ, -δ, and -κ (for review, see [3]). In addition to having antiviral functions, type I IFNs play a part in activating the adaptive immune response following infection [4,5,6]. For instance, IFN-α/β can strengthen the innate immune response by activating antigen presenting cells (APC). Additionally, following maturation in the presence of type I interferon and GM-CSF, monocyte-derived DCs more effectively stimulate an antigen-specific CD8+ T cell response than when differentiated with GM-CSF alone [7]. Viral infection can trigger the type I IFN response via various pattern recognition receptors (PRR), namely Toll-like receptors (TLR) and RIG-I-like receptors (RLR). In the case of negative stranded RNA viruses, the members of the TLR family that are generally involved in viral recognition, TLR-3 and TLR-7, are found on the endosomal membrane. To initiate the signaling cascade, TLR-3 binds double stranded RNA molecules [8], whereas TLR-7 recognizes immunomodulatory compounds (i.e., imiquimod) [9] or single-stranded RNA molecules [10]. Although negative stranded RNA viruses do not produce double stranded RNA as part of their normal replication cycle, it is likely that abnormal replication products resulting from errors by viral RNA-dependent RNA polymerases give rise to some level of double stranded RNA in virus-infected cells [11].
TLR-3 and TLR-7 initiate signaling through different adaptor molecules, Trif and MyD88, respectively; however, the pathways converge on the phosphorylation of IRF-3. Following phosphorylation, IRF-3 forms protein dimers, which allow for its transport into the nucleus where it can bind to the IFN-β promoter [12]. Alternatively, RNA viruses can be recognized in the cytoplasm by RLRs, namely RIG-I and Mda-5 [13]. These helicase-like proteins recognize double-stranded RNA and 5′ tri-phosphate groups [14]. In the case of rabies virus (RABV), the negative stranded RNA virus of interest in this study, the leader RNA remains unmodified [15,16] and thus provides a potential ligand for these RLRs. Signaling by RIG-I and Mda-5 is mediated through the mitochondria-bound protein IPS-1, which is also referred to as MAVS, Cardif, or VISA [17,18,19,20]. Similar to what is seen in TLR signaling, RLR signaling culminates with the activation and nuclear translocation of IRF-3 [19]. Rabies virus is a member of the Rhabdoviridae family. RABV has a relatively simple genome, encoding just 5 proteins: the nucleoprotein, phosphoprotein (P), matrix protein, glycoprotein and the RNA dependent RNA polymerase. Infection with RABV can induce IFN-α/β production rapidly in vivo. Furthermore, it was seen that a mouse's ability to induce type I IFN, as measured by serum concentrations 4 days post infection, positively correlates with the animal's resistance to RABV [21]. The type I IFN response is also important in driving immunity, as mice injected with anti-mouse IFN-α/β antibody prior to infection with RABV were more sensitive to the virus than mice injected with a control antibody [22]. However, RABV has the ability to antagonize type I IFN induction [23]. Thus, shortly after infection of fibroblast cells, RABV-P prevents IRF-3 phosphorylation in order to suppress IFN-α/β production [23]. Although IFN is induced after RABV infection, RABV is able to suppress the IFN response shortly after infection. Therefore, in order to study the receptors responsible for the initial induction of IFN, several groups have used recombinant viruses with lower levels of RABV-P. Using this method, IFN-β promoter activity was observed following recombinant RABV infection of VERO cells transfected with wildtype RIG-I, but not in cells transfected with dominant-negative mutant RIG-I [24], thus indicating a role for RIG-I in mounting an innate immune response to RABV. Additionally, following infection of human postmitotic neurons with RABV, Prehaud et al. saw an increased production of IFN-β and TLR-3 mRNAs [25]. Furthermore, the expression of TLR-3 on cerebellar cortex tissues of individuals that had died of rabies, but not on an individual that died of cardiac arrest, verifies the viral induced expression of TLR-3 in human brains in vivo [26]. This upregulation of TLR-3 following infection suggests a possible role for TLR-3 signaling in the innate recognition of RABV; however, TLR-3 activation needs to be further studied to conclusively define such a role. Although these results hint at the receptors responsible for interferon expression, there is no evidence that other PRR receptors, such as TLR-7 and Mda-5, do not also play a role. Furthermore, since the recombinant viruses used in some of these studies exhibit decreased pathogenicity, it is possible that a wildtype virus may act differently following infection.
In order to study the IFN-inducing pathways triggered by RABV, we needed to identify a cell type in which RABV-P is unable to antagonize type I IFN signaling. Of note, it has been seen that following infection of dendritic cells (DCs) with influenza, another negative stranded RNA virus, the DCs become infected, but this infection is non-productive [27]. Here, we sought to determine whether APCs were productively infected with RABV. Similar to previous reports that human DCs are susceptible to RABV infection [28,29], we saw that mouse DCs became infected; however, we also observed that very little viral progeny was released due to limited viral replication. Due to the overall suppression of viral transcription in RABV-infected DCs, there are presumably low levels of RABV-P that may not be able to inhibit interferon induction. Thus, we decided to utilize infection of DC to study the IFN-inducing capabilities of RABV and found that RLRs are responsible for viral recognition in DCs. RABV infection of antigen presenting cells results in type I IFN production It has been previously shown that RABV-P can inhibit the phosphorylation of IRF-3 in fibroblast cells [23], thus crippling the induction of IFN-α/β. However, RABV is able to infect a variety of cells including neurons [30] and antigen presenting cells (APC) [28,29] in addition to fibroblasts. Thus, we wanted to determine whether RABV is able to inhibit IFN signaling in other cell types including DCs, which are known to induce the adaptive immune response. In order to check for type I IFN production, we first infected a variety of cell types including fibroblasts (BSR), neuronal cells (NA), macrophages (Raw264.7) and DCs (JAWSII) with a RABV vaccine strain-based vector, SPBN. Following infection with RABV, cell supernatants were collected and subsequently UV-treated in order to deactivate any infectious virus but retain secreted cellular proteins, such as type I IFN. We then transferred the supernatants to reporter cells, which are sensitive to IFN. Twenty-four hours after supernatant transfer, reporter cells were infected with recombinant vesicular stomatitis virus expressing GFP (VSV-GFP, [31]) for 5-8 h. VSV replication is highly sensitive to type I IFN [32], and thus, in the presence of type I IFN, the replication of VSV is suppressed [4]. Following infection with RABV, macrophages as well as DCs, but not fibroblasts or neuronal cells, produce type I IFN that inhibits VSV-GFP replication, as indicated by the lack of GFP expression (Figure 1A). Of note, when BSR, NA, Raw264.7, or JAWSII cells are originally treated with UV-deactivated RABV, the supernatants from these cells are unable to block VSV replication (Figure 1B); therefore, IFN is secreted only after RABV replication. In order to account for the increased amounts of type I IFN produced following RABV infection of macrophages and DCs when compared to the amount produced by fibroblast and neuronal cells, we did a one-step growth curve following infection of the various cell types. Supernatants from infected cells were titered on BSR cells, which are insensitive to type I IFN [4]. We detected that, although all four cell types were infected, only BSR and NA cells produce infectious virus (Figure 2A).
There are two possible explanations for the defect in viral production observed here: either a block in viral replication or a defect in viral assembly.

Author Summary. Rabies virus (RABV) is a neurotropic RNA virus responsible for the deaths of at least 40,000 to 70,000 individuals globally each year. However, the innate immune response induced by both wildtype and vaccine strains of RABV is not well understood. In this study, we assessed the pattern recognition receptors involved in the host immune response to RABV in bone marrow derived dendritic cells (DC). Our studies revealed that Toll-like receptor (TLR) signaling is not required to induce innate responses to RABV. On the other hand, we see that IPS-1, the adaptor protein for RIG-I-like receptor (RLR) signaling, is essential for induction of innate immune responses. Furthermore, we found that RIG-I and Mda-5, both RLRs, are able to induce DC activation and type I interferon production. This finding is significant as we can target unused pattern recognition receptors with recombinant RABV vaccine strains to elicit a varied, and potentially protective, immune response. Lastly, we show that IPS-1 plays an important role in mediating the pathogenicity of RABV and preventing RABV-associated paralysis. Overall, this study illustrates that RLRs are essential for recognition of RABV infection and that the subsequent host cell signaling is required to prevent disease.

In order to compare viral transcription and replication in fibroblast and dendritic cell lines we used quantitative PCR. In fibroblast cells, we saw that the amount of RABV-N messenger RNA (mRNA) transcripts increased an average of 1.95 logs from 8 hours post infection (hpi) to 48 hpi. Similarly, the quantity of RABV-N genomic RNA transcripts (gRNA) increased an average of 1.2 logs from 8 hpi to 48 hpi. These data indicate that following infection of fibroblast cells both viral transcription (mRNA) and replication (gRNA) occur. On the other hand, when looking at the quantity of RABV-N found in dendritic cells following infection, we see that there was no increase in the number of mRNA or gRNA viral transcripts when comparing 8 hpi to 48 hpi. Thus, it appears that RABV is able to enter APCs, but only limited viral transcription occurs following entry. It is reasonable to assume that decreased levels of transcription might result in low levels of RABV-P. It has been previously shown that recombinant RABV expressing low amounts of RABV-P is unable to inhibit type I IFN induction [23]. Furthermore, we show by Western blotting that cell lysate from RABV-infected JAWSII cells contained undetectable levels of RABV P at 48 hpi (Figure 2B). On the other hand, we were able to detect RABV P in lysates from infected BSR cells as early as 12 hpi. These results support the conclusion that the very low level of RABV P within infected APCs is not able to block the induction of type I IFN and is therefore responsible for the increase in type I IFN production by these cells following infection. In order to better understand the interaction of RABV with host cells following infection, we sought to identify the pathway(s) responsible for type I IFN induction in infected cells. Since we detected that DCs make large amounts of IFN following RABV infection, we decided to use bone marrow derived DCs (BMDC) in our studies. To differentiate BMDCs, we cultured the cells in the presence of 10 ng/ml GM-CSF.
After 7 days the majority of cells had matured to DCs, as shown by the expression of CD11b+ CD11c+ (Figure 3). In order to identify the PRR that recognizes RABV, we isolated BMDC from mice deficient in various signaling components of PRR pathways. In each experiment, cells were stimulated, and the CD11c+ cell population (Figure 3) was analyzed for production of type I IFN and expression of CD86, a costimulatory molecule that is upregulated on activated DCs. Induction of type I IFN and DC activation following RABV infection is independent of TLR-3 and MyD88 signaling First, we analyzed the role that TLR signaling plays in BMDC activation and type I IFN production following a RABV infection. It has been previously reported that following infection of human postmitotic neurons with RABV, there is an increased production of IFN-β and TLR-3 mRNAs. In addition, treatment of neurons with poly(I:C), a TLR-3 agonist, generated a cytokine profile similar to that seen following RABV infection [25]. Thus, we differentiated BMDCs from TLR-3−/− and congenic wildtype mice and infected the cells with RABV. We then analyzed the infected cells for the presence of CD86 (Figure 4A). As shown in Figure 4C, there is no significant difference in the expression of CD86 on the cell surface of RABV-infected BMDCs derived from TLR-3−/− or wildtype mice. As expected, TLR ligands that signal via other TLR receptors, namely TLR-4 (LPS), TLR-9 (ODN1826), and TLR-7/8 (R848), equally activate BMDCs derived from wildtype (wt) or TLR-3−/− mice. Interestingly, poly(I:C), a known ligand for TLR-3, was able to activate BMDC isolated from TLR-3−/− mice as well as wt mice. However, it has been previously shown that poly(I:C) can also signal through Mda-5 and that Mda-5 is the dominant receptor for mediating type I IFN induction following poly(I:C) stimulation in BMDCs [33,34].

Figure 3. BMDC differentiation and gating. BMDC were derived from various mice by culture in media containing 10 ng/ml GM-CSF for 7 days. (A) Following culture, the majority of viable cells (as determined by forward and side scatter) were CD11b+ CD11c+. (B) When analyzing the activation state or infection rates of BMDCs following stimulation, the cells were first gated for viability (using forward and side scatter) and then gated for CD11c expression. Only CD11c+ cells were used in the analysis. One representative Balb/c mouse is shown here, but this gating strategy was consistently used for all BMDC samples.

As the RLR pathway remains intact in TLR-3−/− mice, BMDC activation in TLR-3−/− cells following poly(I:C) stimulation is not inexplicable, but rather highlights the need for a better TLR-3 agonist. Taken as a whole, and based on the fact that RABV infection activated BMDC derived from both wt and TLR-3−/− mice equally, we conclude that TLR-3 signaling is not required for the activation of BMDCs following a RABV infection. To our knowledge, TLR-7 has never been investigated in the context of a RABV infection, and thus the role that it plays in type I IFN induction and DC activation following RABV infection is unknown. To analyze the function that TLR-7 has in the induction of type I IFN and DC activation, we isolated BMDCs from MyD88−/− and C57BL/6 mice. We detected an equal upregulation of CD86 on BMDCs from MyD88−/− and wildtype mice (Figure 4B).
As expected, activation of MyD88−/− BMDCs is significantly reduced following stimulation with ODN1826 and R848, ligands for TLR-9 and TLR-7/8 respectively, both of which signal via MyD88 [35] (Figure 4D). Thus we conclude that, similar to TLR-3 signaling, the activation of BMDCs following a RABV infection occurs independently of MyD88 signaling. In order to determine if TLR-3 and MyD88 signaling might have an impact on type I IFN production, supernatant from infected BMDCs was collected at various times post infection, and a VSV-sensitivity assay was performed. As seen with BMDC activation, both TLR-3 and MyD88 are dispensable in the induction of type I IFN (Table 1). We did not detect any VSV-GFP replication on reporter cells following pre-treatment with supernatant from TLR-3−/−, MyD88−/−, or wildtype BMDC, indicating the presence of type I IFN in the supernatant. Induction of type I IFN and DC activation following RABV infection requires the IPS-1 pathway Having excluded TLRs as the required receptors mediating BMDC activation and type I IFN production, we next looked at the potential role for RLR signaling. Hornung et al. showed that a recombinant RABV expressing low levels of RABV-P signals via RIG-I to induce IFN-β promoter activity following infection. Furthermore, it was shown that the 5′-triphosphate on the leader sequence of RABV was the ligand for RIG-I [24]. To determine whether the RIG-I pathway is also activated in DCs following RABV infection, we isolated BMDCs from IPS-1+/+, +/−, or −/− mice. Our results indicate that following infection with RABV, IPS-1+/+ and IPS-1+/− BMDCs express high levels of CD86 on their surface (Figure 5A-B). Of note, IPS-1+/− BMDCs are slightly less activated than IPS-1+/+ cells. On the other hand, IPS-1−/− BMDCs express significantly lower levels of CD86 on their surface at all time points (Figure 5A-B). The TLR ligands LPS, ODN1826, and R848 equally activated all IPS-1 BMDC samples, indicating that the defect in the IPS-1−/− BMDCs is specific to the RLR pathways (Figure 5B). As such, when cells are stimulated with RLR agonists, there is a defect in the activation of IPS-1−/− BMDCs when compared to IPS-1+/+ or +/− BMDCs. We see a low CD86 upregulation following both poly(I:C) stimulation and infection with an NS1-deficient strain of influenza (ΔNS1/PR8) (Figure 5B). It has been reported previously that poly(I:C) can signal via Mda-5 [33] and ΔNS1/PR8 signals exclusively via RIG-I [34]. Taken together, these data indicate that BMDC activation is dependent on IPS-1 signaling following a RABV infection. In order to determine whether type I IFN production by BMDC is also dependent on IPS-1 mediated signaling, we assayed for the presence of type I IFN in the supernatants of infected IPS-1 BMDCs by VSV-GFP sensitivity assays and quantified the amount of IFN-β by ELISA. Supernatant obtained from IPS-1+/+ and IPS-1+/− BMDCs infected with RABV was able to inhibit VSV-GFP replication, and thus contained type I IFN. On the other hand, the VSV-GFP replication on reporter cells was not inhibited by pre-treatment with supernatants from RABV-infected IPS-1−/− BMDCs (Figure 5C). Likewise, IPS-1+/+ BMDCs produce on average 250 pg/ml IFN-β while the IPS-1−/− BMDCs produced less than 16.7 pg/ml, if any, IFN-β (Figure 5C). These results indicate that RABV-infected IPS-1−/− BMDCs do not secrete type I IFN.
Also consistent with the results seen for BMDC activation, IPS-1−/− cells stimulated with RLR agonists produced less type I IFN compared to IPS-1+/+ or +/− BMDCs (Table 2). It has been shown that IPS-1 mediated pathways are also capable of activating the NF-κB signaling cascade [36]. Thus, we quantified the amount of IL-6 in the supernatant of RABV-infected BMDC isolated from IPS-1+/+, +/− and −/− mice (Figure 5D). We see that there is a significant decrease in IL-6 produced by IPS-1−/− BMDCs compared to IPS-1+/+ BMDCs. However, IPS-1−/− cells do secrete some IL-6 following infection with RABV; thus, in contrast to type I IFN induction, IPS-1-independent pathways appear to be used to induce NF-κB activation. IPS-1 is the adaptor molecule for both Mda-5 and RIG-I Mda-5 mediated induction of IFN-β has been described to occur in response to plus-stranded RNA viruses like picornaviruses, whereas it is reported that RIG-I is responsible for type I IFN induction in response to rhabdovirus infection [34]. However, the function of Mda-5 in the innate immune response to rhabdoviridae has not yet been elucidated. Furthermore, the role of these PRRs following a RABV infection in DCs remains unknown. Therefore, we wanted to determine which of the two receptors recognizes RABV. For this approach, BMDCs from Mda-5−/− mice and RIG-I−/− mice were isolated. As shown in Figure 6A, Mda-5−/− BMDCs express high levels of CD86 on their surface at 24 and 48 hpi. Of note, there is a significant reduction of CD86 surface expression on Mda-5−/− BMDCs at 12 hpi when compared to wildtype cells. Likewise, RIG-I−/− BMDCs also have a defect in BMDC activation at 12 hpi, while CD86 expression at 24 and 48 hpi is equal for RIG-I−/− and RIG-I+/+ cells (Figure 6B). In addition, it appears that while Mda-5−/− cells are able to induce type I IFN expression at 12 hpi, RIG-I−/− cells have an early defect in type I IFN induction. Importantly, by 48 hpi, RIG-I−/− BMDC do produce enough type I IFN to suppress VSV-GFP replication (Figure 6C). This indicates that RABV can induce BMDC activation and type I IFN via both Mda-5 and RIG-I ligation. Furthermore, any perturbation in IPS-1 mediated signaling cascades seems to affect the early response (12 hpi) to RABV. RABV infection by itself is sufficient to induce type I IFN production, while positive feedback is necessary for sufficient BMDC activation Once type I IFN is produced, it will further activate the infected cell via autocrine signaling through IFNAR. Ligation of the IFNAR initiates the Jak/STAT signaling cascade, which culminates in the upregulation of antiviral genes. In addition to antiviral genes, Jak/STAT signaling also upregulates proteins required for type I IFN induction, thus providing positive feedback for the type I IFN pathway [2]. In order to determine how much IFN induction is directly related to RABV infection and how much is due to positive feedback driven by IFN-α/β production, we infected BMDCs derived from IFNAR−/− mice, which eliminates the contribution of positive feedback. Interestingly, BMDC isolated from IFNAR−/− mice produce enough type I IFN to block VSV-GFP replication on reporter cells after 12, 24, and 48 h (Figure 7A). However, we detected a significant decrease in the CD86 cell surface expression of IFNAR−/− BMDC when compared to wt BALB/c mice (Figure 7B-C). Thus, although RABV infection is sufficient to induce type I IFN, the cells need an amplification signal in order to undergo maturation.
Additionally, we see a significantly greater infection by RABV in IFNAR−/− cells, presumably due to their inability to induce antiviral gene expression (Figure 7D). Role of IPS-1 signaling in vivo Lastly, we wanted to determine the impact that the RIG-I and Mda-5 pathways play in the in vivo response to RABV utilizing IPS-1−/− mice. Interestingly, we detected that IPS-1−/− BMDC, which do not produce type I IFN, have significantly more RABV-N expression post infection (Figure 8A). This indicates that in the absence of IFN-α/β induction, viral replication in DCs occurs at a faster rate, which should also increase viral pathogenicity. Therefore, we infected IPS-1−/−, +/−, and +/+ mice intramuscularly with SPBN-N2c, a recombinant RABV that is modestly pathogenic after peripheral inoculation [37]. Figure 8B shows that about 60% of the IPS-1+/+ or +/− mice lived, while only 45% of the IPS-1−/− mice survived infection. More dramatically, nearly 90% of the IPS-1−/− mice had hind limb paralysis 11 days post infection while the IPS-1+/+ and +/− mice exhibited only about 45% paralysis (Figure 8C). These data indicate that RABV infection of IPS-1−/− mice is more pathogenic than RABV infection in wildtype mice. Discussion It has been previously seen that RABV can infect APCs [28,29]; however, the impact of the infection on generating an innate immune response to RABV had not been delineated. We show here that following RABV infection, APCs, unlike fibroblasts or neuronal cells, are able to produce copious amounts of type I IFN. We also determined that infected APCs do not produce novel viral progeny. A similar phenotype has also been seen following influenza infection of DCs. BMDCs become infected by the influenza strain PR8, as seen by co-expression of influenza HA and the DC marker N418 on 72% of cells. However, infected BMDC do not release viral progeny, as seen by the failure of infected DC supernatants to induce hemagglutination of chicken red blood cells [27]. Non-productive infection of DCs may have significant biological relevance over the course of an infection. Since RABV infection within APCs is easily controlled, the cells become a source of viral antigen, with little risk of spreading infection to neighboring cells. Taken together, APCs seem to be of critical importance during a RABV infection both for the prolonged production of type I IFN and as a source of viral antigen. In this study, we used APCs as a tool to study the PRRs used to recognize RABV following infection. Interestingly, we see that TLR-3 has no role in inducing a type I IFN response or DC activation despite its previously recognized upregulation following RABV infection [25]. However, recent publications may explain this potential discrepancy. It was reported that TLR-3 is required for the formation of Negri bodies in RABV-infected cells and that these bodies are the site of viral replication [38,39]. Furthermore, TLR-3−/− mice are less susceptible to infection with pathogenic RABV, as seen by increased survival and lower viral titers in the brains of TLR-3−/− animals compared to wt mice [39]. Thus, the requirement for TLR-3 by RABV may explain why it is upregulated following infection despite the fact that it is not required for a type I IFN response. We next sought to identify whether TLR-7 was critical for DC activation and type I IFN production. To our knowledge, no one has directly examined the role of TLR-7 following a RABV infection.
Of note, TLR-7 signaling does play a role in the cellular recognition of a closely related rhabdovirus, VSV. Infection of wildtype plasmacytoid DCs (pDC) with VSV induced the production of IFN-α. However, infection of pDCs from TLR-7−/− or MyD88−/− mice resulted in no cytokine production [40]. This indicates that single-stranded RNA derived from VSV is able to trigger TLR-7 signaling. However, in the case of RABV it appears that MyD88-dependent signaling, and thus TLR-7, is dispensable for IFN-α/β production following infection. On the other hand, RLR signaling via IPS-1 is critical for both the activation of DCs and production of type I IFN by infected DCs. It was shown previously that RIG-I signaling is necessary for IFN-β promoter activity in VERO cells following recombinant RABV infection [24]. However, we show here using RIG-I−/− derived DCs that Mda-5 is also able to induce DC activation and type I IFN production. This is interesting, as Mda-5 is generally recognized as a receptor for positive stranded RNA viruses, not negative stranded RNA viruses like RABV. Of note, another negative stranded RNA virus of the Paramyxoviridae family, Sendai virus, requires Mda-5 signaling for the sustained expression of type I IFN [41]. Our data indicate that RABV can be recognized by either RIG-I or Mda-5 following infection. The use of both RIG-I and Mda-5 receptors has also been observed following infection with West Nile virus (WNV). Following WNV infection, RIG-I−/− cells had a delayed upregulation of host anti-viral genes; however, the ability to respond was conserved. This indicated that another receptor was involved in recognition of WNV, and this receptor was identified as Mda-5 [42,43]. Following RABV infection in the absence of either RIG-I or Mda-5, there is a delay in the activation of BMDCs. Furthermore, RIG-I−/− BMDCs have an early defect in type I IFN production. Thus, it appears that in response to a RABV infection, both RIG-I and Mda-5 are utilized in order to rapidly induce high levels of IFN-α/β production and DC activation. Of note, IPS-1+/− cells exhibited a phenotype that was intermediate between IPS-1−/− and IPS-1+/+ mice. This observation also supports the requirement for rapid induction of IFN-α/β following a RABV infection. The heterozygous cells lack one of the IPS-1 alleles, and this may result in less functional protein in the heterozygous mice compared to homozygous wildtype mice. This again highlights the importance of a rapid response following viral infection in order to control viral replication and spread.

Table 2. Level of VSV-GFP replication on reporter cells following pre-treatment with UV-inactivated supernatants from TLR-agonist-stimulated BMDC.

The type I IFN response occurs in two phases after infection: the induction of IFN-α/β following recognition of the pathogen by a PRR, and then autocrine or paracrine signaling by IFN-α/β through the IFNAR to induce upregulation of many other genes. Included among the genes that are upregulated in response to IFNAR signaling are several genes required for PRR signal transduction [2]. In this manner, the infected cell undergoes positive feedback to increase both the host response and PRR signaling. We wanted to identify which arm of the IFN response was responsible for the effects we observed following RABV infection of DCs: viral induction or IFN-α/β amplification.
We saw that both wt and IFNAR−/− mice are able to induce type I IFN production, thus highlighting the host's ability to rapidly induce IFN-α/β following infection with RABV and indicating that the amplification of IPS-1 signaling by IFNAR signaling is not a critical factor in the induction of type I IFN. Surprisingly, we see that in the absence of IFNAR signaling, there is very little BMDC activation. Thus, it appears that DC activation occurs via IFN-α/β signaling and is not a direct consequence of viral infection. This fact highlights the importance of a type I IFN response in initiating the adaptive immune response following infection with RABV. Lastly, we sought to determine the biological relevance of IPS-1 mediated PRR signaling following infection with a pathogenic strain of RABV. Although this experiment did not focus specifically on type I IFN production by DCs, it indicates how IPS-1 signaling, and thus IFN-α/β production and DC activation, impacts the prognosis of infected animals. We saw that 87% of the IPS-1−/− mice in the study became paralyzed, whereas only about 45% of the IPS-1+/+ or +/− mice exhibited signs of paralysis. There is some data suggesting that paralysis following a RABV infection is an early symptom of disease. In humans who present with the less common paralytic rabies, survival time is slightly longer [44]. Although not significant, this supports the notion that an early, rapid type I IFN response is an important factor mediating RABV disease outcome. Of note, as opposed to the vaccine strain of RABV used in the BMDC experiments, the pathogenic RABV strain, SPBN-N2c, infects mostly neurons [37], and we showed here that RABV is able to suppress the type I IFN response in neurons by 12 hpi (Figure 1). Despite this limitation, there is no other model to study RABV pathogenicity. The role that antigen presenting cells play in initiating the immune response to RABV in vivo should also be investigated further. It is known that pathogenic RABV is less immunogenic than vaccine strains of RABV [45]; thus it is likely that pathogenic RABV avoids or alters infection of DC in order to elicit a lesser immune response. In summary, we show here that RABV replication is cell type dependent; namely, RABV is able to antagonize the induction of type I IFN in fibroblast and neuronal cells but is unable to inhibit IFN-α/β induction in APCs. Furthermore, in APCs RABV infection is nonproductive due to a defect in viral transcription, and no viral production is observed. Infection of BMDCs allowed us to delineate that RABV is recognized by either RIG-I or Mda-5 and that both receptors are required for a rapid type I IFN response to RABV. This finding has significant implications for the development of a RABV-based vaccine vector. In light of these results, a recombinant RABV expressing a TLR agonist may allow for RABV recognition via TLRs. Such a response may potentiate the type I IFN response and induce better protection in a vaccine setting. We also show here that BMDC activation is secondary to IFN-α/β induction and requires IFNAR. In addition, IPS-1 mediated signaling does have a role in vivo, as it seems to play a critical role in preventing RABV pathogenesis following RABV challenge. Cell lines The fibroblast cell line used in these studies is a cell clone of BHK-21 (ATCC: CCL-10), BSR. The neuronal cell line used in these studies is a neuroblastoma cell line referred to as NA [46].
The antigen presenting cell lines used here were JAWSII (ATCC: CRL-11904) and Raw264.7 (ATCC: TIB-71). IFN sensitivity assay Cellular supernatants were assessed for the ability to inhibit vesicular stomatitis virus (VSV) replication as described previously [4]. Briefly, the cell line of interest was infected with the vaccine strain of RABV, SPBN, at a multiplicity of infection (MOI) of 10, and supernatant was collected at various time points post infection. Alternatively, supernatant from infected BMDCs was used. The supernatants were UV-deactivated with a 254 nm UV light source for 15 min. UV-deactivated viral supernatant was then diluted 1:10 in RPMI-1640 and added to a reporter cell line (either NA cells for cell line experiments or 3T3 cells for BMDC experiments). Following the 24 h pre-treatment, reporter cells were infected with VSV expressing GFP at a MOI of 5 for 5-8 h. VSV replication was determined by fluorescence under a UV light source. ELISA assays For the IFN-β ELISA (PML Laboratories), the manufacturer's protocol was followed with the following modification: 50 µl of sample or standard was loaded into the 96-well plate. For the IL-6 ELISA (eBioscience), the manufacturer's protocol was followed. Briefly, 5 µg/ml coating antibody was added to MaxiSorb (Nunc) plates and kept at 4 °C overnight. Wells were then washed with 0.05% Tween-20/PBS and blocked with Assay Buffer (eBioscience) for 2 hours. Plates were again washed with 0.05% Tween-20/PBS, and then 100 µl of standard or sample and 50 µl of Biotin-Conjugate were added to the plate. Plates were incubated at room temperature for 2 hours on a microplate shaker set at 200 rpm and then washed with 0.05% Tween-20/PBS. Subsequently, wells were incubated with Streptavidin-HRP at room temperature for 1 hour on a microplate shaker set at 200 rpm. The wells were washed and developed with 100 µl of Substrate Solution for 10 min, followed by the addition of 100 µl of Stop Solution. Absorbance at 450 nm was recorded for each well. For both ELISAs, a fourth-order non-linear regression curve (Prism software, GraphPad version 4.00) was fit to the standard curve and used to determine the concentration of the unknown samples. One-step growth curve BSR, NA, JAWSII and Raw264.7 cells were infected with SPBN at a MOI of 10. Following a 60 min incubation at 37 °C, the virus was aspirated, and cells were washed twice with PBS to remove any virus that had not yet infected the cells. Media was then added to the cells, and, at the indicated time points, 0.3 ml of supernatant was removed and stored at 4 °C. The aliquots were titered in duplicate on BSR cells. Quantitative real-time PCR Messenger and genomic RABV-N RNA in SPBN (MOI 10)-infected BSR and JAWSII cells was determined by TaqMan probe-based real-time PCR as described previously [4,37]. Western blotting Western blotting was performed as described previously [51]. Bone marrow derived DC (BMDC) differentiation and infection BMDCs were differentiated as described previously [52]. Briefly, bone marrow (BM) was obtained from the mouse's tibia and femur. Following red blood cell lysis using ACK lysis buffer (Invitrogen), the BM cells were cultured in 24-well Costar plates at a density of 1 million cells per ml in the presence of 10 ng/ml GM-CSF (Peprotech). During the 7-day culture, the cells were washed once by aspirating 600 µl of media from the wells and adding 1 ml of fresh media supplemented with 10 ng/ml GM-CSF.
On the seventh day of culture, the non-adherent and semi-adherent cells were collected and used as the BMDC population. Flow cytometry Following differentiation of BMDC, cells were characterized for expression of DC markers. Briefly, cells were washed in FACS buffer (2% BSA/PBS) and blocked at 4 °C for 30-60 min with 2 µl rat anti-mouse CD16/CD32 (Fc block) (BD Biosciences Pharmingen) in 100 µl FACS buffer. Cells were then stained with APC-CD11b, PerCP-B220, and FITC-CD11c (BD Biosciences Pharmingen) for 30 min at RT. After staining, cells were washed with FACS buffer and fixed with Cytofix (BD Biosciences) for 16-18 hours at 4 °C. Samples were washed and resuspended in 300 µl of FACS buffer. Samples were analyzed on a BD FACSCalibur, and a minimum of 50,000 events were counted. Following infection, BMDCs were analyzed for the expression of activation markers. At each given time point, BMDCs were removed from the wells with cell scrapers and spun at 1600 rpm for 5 min. Cells were then blocked at 4 °C for 30-60 min with Fc block in 100 µl FACS buffer. Cells were then stained with APC-CD11c and PE-CD86 (BD Biosciences Pharmingen) for 30 min at RT. After staining, cells were washed with FACS buffer and fixed with Cytofix (BD Biosciences) for 16-18 hours at 4 °C. Cells were then washed twice in Perm/Wash Buffer (BD Biosciences) and then stained with FITC-anti-RABV-N (Centocor, Inc.) for 30 min at RT. After staining, cells were washed with Perm/Wash buffer and then resuspended in 300 µl of FACS buffer. Samples were analyzed on a BD FACSCalibur, and 20,000-30,000 APC+ events were counted. Statistical analysis All data were analyzed with Prism software (GraphPad, version 4.00). To compare two groups of data, we used an unpaired, two-tailed t-test. For all tests, the following notations are used to indicate significance between two groups: *p < 0.05, **p < 0.01, ***p < 0.001.
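Two of the computational steps described in the methods above can be sketched briefly in Python. First, the standard-curve fit used for the ELISAs: the paper reports a fourth-order non-linear regression in Prism; the four-parameter logistic shown here is the common choice for ELISA data and is an assumption, as are the absorbance and concentration values.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: a = response at zero dose, d = response at
    saturation, c = inflection concentration, b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

conc = np.array([15.6, 31.25, 62.5, 125.0, 250.0, 500.0, 1000.0])  # pg/ml standards
od450 = np.array([0.08, 0.15, 0.27, 0.49, 0.83, 1.30, 1.78])       # hypothetical readings

params, _ = curve_fit(four_pl, conc, od450, p0=[0.05, 1.0, 200.0, 2.0], maxfev=10000)

def concentration_from_od(y, a, b, c, d):
    """Invert the 4PL curve to back-calculate a sample concentration."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(concentration_from_od(0.60, *params))  # pg/ml for an unknown sample
```

Second, the unpaired, two-tailed t-test with the paper's significance notation; the group values below are hypothetical.

```python
from scipy import stats

def compare_groups(group_a, group_b):
    t, p = stats.ttest_ind(group_a, group_b)  # unpaired, two-tailed by default
    stars = "***" if p < 0.001 else "**" if p < 0.01 else "*" if p < 0.05 else "ns"
    return t, p, stars

print(compare_groups([412, 398, 440, 425], [210, 233, 198, 221]))
```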
Relation between Body Composition Trajectories from Childhood to Adolescence and Nonalcoholic Fatty Liver Disease Risk NAFLD has become the leading cause of chronic liver disease in children, as a direct consequence of the high prevalence of childhood obesity. This study aimed to characterize body composition trajectories from childhood to adolescence and their association with the risk of developing nonalcoholic fatty liver disease (NAFLD) during adolescence. The participants were part of the 'Chilean Growth and Obesity Cohort Study', comprising 784 children who were followed prospectively from age 3 years. Annual assessments of nutritional status and body composition were conducted, with ultrasound screening for NAFLD during adolescence revealing a 9.8% prevalence. Higher waist circumference measures were associated with NAFLD from age 3 years (p = 0.03), all skin folds from age 4 years (p < 0.01), and DXA body fat measurements from age 12 years (p = 0.01). The fat-free mass index was higher in females (p = 0.006) but not in males (p = 0.211). The second and third tertiles of the fat mass index (FMI) had odds ratios for NAFLD during adolescence of 2.19 (1.48-3.25, 95% CI) and 6.94 (4.79-10.04, 95% CI), respectively. Elevated waist circumference, skin folds, and total body fat were identified as risk factors for future NAFLD development. A higher FMI during childhood was associated with an increased risk of NAFLD during adolescence. Introduction The current obesity epidemic is a major global health problem; the World Health Organization (WHO) considers it one of the most serious public health challenges of the 21st century [1]. As in many countries, in Chile the number of children who are overweight and obese has steadily increased over time. The prevalence of these conditions in children aged 5-6 years increased from 47.3% in 2011 to 65.8% in 2022 [2]. Nonalcoholic fatty liver disease (NAFLD) is defined as the accumulation of fat exceeding 5% within hepatocytes in the absence of other liver pathologies and chronic alcohol consumption [3], and is considered the hepatic manifestation of the metabolic syndrome. Epidemiologic data, derived from pediatric studies using noninvasive and invasive tests to diagnose NAFLD, indicate a prevalence of 5% to 10% in the general pediatric population, increasing up to 40% in obese or overweight children [4][5][6]. The highest rate in pediatric patients is in the Hispanic population, from 12% in the general population up to 52% in adolescents with obesity [4]. Obesity is distinguished by an excess of adiposity and is intricately linked to the onset of systemic insulin resistance, recognized as a central factor in the pathogenesis of NAFLD. Insulin resistance can trigger an increase in de novo lipogenesis in the liver, leading to a reduction in β-oxidation and ultimately resulting in the accumulation of fat in the liver [7]. Recently, a worldwide consensus panel has recommended replacing the term NAFLD with the term metabolic dysfunction-associated steatotic liver disease (MASLD). MASLD recognizes the intricate link between metabolic dysfunction and liver steatosis, encompassing factors beyond fat accumulation [8].
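The abstract above reports odds ratios with 95% confidence intervals for the FMI tertiles. As a sketch of how such an estimate is derived from a 2×2 table using the standard log-OR (Woolf) method, the Python snippet below uses hypothetical counts; the study itself likely obtained its estimates from regression modelling.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI from a 2x2 table: a/b = cases/non-cases in the exposed
    group, c/d = cases/non-cases in the reference group (Woolf log method)."""
    or_est = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_est) - z * se_log)
    hi = math.exp(math.log(or_est) + z * se_log)
    return or_est, lo, hi

# hypothetical counts for the top vs bottom FMI tertile
print(odds_ratio_ci(40, 220, 12, 250))
```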
Childhood is a period of rapid growth with marked changes in body composition [9]. It has been shown that excessive weight gain during childhood is a strong risk factor for the development of NAFLD [10,11]. However, there is scarce information regarding the role of body composition and anthropometric trajectories in the development of this pathology. The main objective of this study was to describe the anthropometry and body composition trajectories throughout childhood and adolescence in relation to the risk of NAFLD in adolescence.

Study Population

The Growth and Obesity Cohort Study (GOCS) is a longitudinal follow-up initiative that began in 2006, encompassing 1195 children (≈50% female) born between 2002 and 2003. The participants attended 54 public nursery schools in the Southeast area of Santiago, Chile, and are representative of low to middle socioeconomic levels [12]. The study subjects met the specific inclusion criteria, which comprised being a single birth, having a gestational age between 37 and ≤42 weeks, having a birth weight of ≥2500 g, and having no physical or psychological conditions that could significantly impact their growth. The participants were followed annually with assessments that included anthropometric and body composition evaluations. Between 2016 and 2019, participants were recruited prospectively to evaluate the prevalence of NAFLD in adolescence. Participants who presented with the following characteristics were excluded:
• Previous history of chronic liver disease other than NAFLD
• Significant alcohol consumption: approximately 20 g/day
• Elevation of liver enzymes secondary to drug therapy
• Any type of malignant disease

Anthropometric and Body Composition Assessment

Weight and height were measured during the annual check-ups, which were conducted from 2006 at the Institute of Nutrition and Food Technology (INTA) by trained dietitians using standardized measurement protocols. The measurements were performed using a digital scale (TANITA 418 BC, precision 0.1 kg; manufactured by TANITA Corporation, Japan, and sourced from IL, USA) and a portable stadiometer (SECA 222, precision 0.1 cm; manufactured by SECA GmbH & Co., Ltd.
and sourced from SECA United States). From these data, the body mass index (BMI) was calculated as the ratio of weight (in kg) to height squared (in m²), and the z-score was estimated according to the growth curves of the World Health Organization (WHO) in 2007 [13]. The waist circumference (WC) was measured using a wrap-around metallic tape measure (model W606PM; Lufkin, precision 0.1 cm) just above the iliac crest at the end of a normal expiration. The measurements of the suprailiac, subscapular, biceps, and triceps skinfolds were taken using calipers (Lange caliper, 1 mm graduation), grasping the respective fold perpendicularly between the index finger and thumb. The fat mass (FM) and fat-free mass (FFM) were quantified using bioelectrical impedance analysis with the TANITA 418 BC. The FM index (FMI) was calculated as the ratio of the fat mass (in kg) to height squared (in m²). Likewise, the FFM index (FFMI) was calculated as the ratio of the FFM (in kg) to height squared (in m²). Additionally, body composition was evaluated using DXA (Lunar Prodigy dual-energy X-ray absorptiometry scan). The measurements of weight and height were obtained from the conception of the cohort (approximately 4 years of age) to the time of evaluation. The waist circumference was measured from the age of 4, skinfold measurements were conducted from age 4 to 14 years, bioelectrical impedance analysis was conducted from age 4 years until the time of each evaluation, and DXA scans were performed from age 9 to 13 years in girls and from age 11 to 16 years in boys.

Hattori Charts

Generally, growth is described in terms only of body weight, which is then normalized for height to obtain BMI. This does not consider the deposition of the FM and the FFM and the underlying body compartment changes [14]. Hattori's body composition charts adjust both FFM and FM for height, which allows the assessment of the nature of weight gain with age in the reference child, and the evaluation of the agreement between BMI and body fatness in samples of subjects of a given age [15]. By correcting the FM and the FFM for height, the nature of the weight gain can be established. Hattori charts were used to describe the trajectory of body composition from age 5 to 15 years, based on the results of the bioelectrical impedance analysis.

Abdominal Ultrasound and NAFLD Diagnosis

NAFLD was defined as an echogenic liver compatible with steatosis in an abdominal ultrasound (US). US was obtained using an Acuson S-2000 unit (6-2 MHz convex and 9-4 MHz linear transducers), where the echogenicity of the liver was compared with the echogenicity of the renal cortex [16]. Two expert pediatric radiologists confirmed the diagnosis. The thickness of superficial and deep intraabdominal fat was also measured using US at the supraumbilical region, according to the previously established method [17–19].

Ethics

This research was approved by the Ethics Committee of the School of Medicine of the Pontificia Universidad Católica de Chile (ID: 16-030) and of the Institute of Nutrition and Food Technology (INTA) of the Universidad de Chile. Signed informed consent and assent were obtained from the parents and the children, respectively, prior to enrollment.
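As a concrete illustration of the height-normalized indices defined in the body composition assessment above, the following minimal Python sketch computes BMI, FMI, and FFMI from weight, height, and fat mass; the function name and the sample values are ours, not the study's.

```python
# Hypothetical helper illustrating the index definitions used above
# (BMI, FMI, FFMI); the sample values are invented, not cohort data.

def body_indices(weight_kg: float, height_m: float, fat_mass_kg: float) -> dict:
    """Return BMI, FMI and FFMI, each normalized by height squared (kg/m^2)."""
    h2 = height_m ** 2
    fat_free_mass_kg = weight_kg - fat_mass_kg  # FFM = weight - FM
    return {
        "BMI": weight_kg / h2,
        "FMI": fat_mass_kg / h2,
        "FFMI": fat_free_mass_kg / h2,
    }

# Example: a 40 kg child, 1.40 m tall, with 10 kg of fat mass
print(body_indices(40.0, 1.40, 10.0))
# {'BMI': 20.41, 'FMI': 5.10, 'FFMI': 15.31} (approximately)
```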
Statistical Analysis

The association of variables was determined by dividing the participants into two groups: those with a diagnosis of NAFLD (NAFLD group) and those who did not present with the disease (control group). Numerical variables with a normal distribution were expressed as the mean and standard deviation. Variables presenting with an asymmetric distribution with extreme values were shown as the median and interquartile range. Student's t test for independent samples was used to examine the associations of categorical-numerical variables with a normal distribution, and the Wilcoxon rank test was used for those with an asymmetric distribution. The association of categorical variables was evaluated using the chi-square test.

A binary logistic regression model was used to determine the risk of developing NAFLD associated with a higher FMI between 5 and 10 years of age, adjusted for age, sex, maternal pregestational BMI, gestational diabetes (GD), gestational weight gain, and exclusive breastfeeding (EBF) until the sixth month. For this analysis, the numerical variable FMI was transformed into categorical variables expressed in tertiles, with the first tertile serving as the reference.

A significance level of <0.05 was considered for all statistical tests. The statistical power of the study was calculated post hoc because the analyses were performed using the sample size that was initially calculated for the cohort. The FMI and NAFLD analyses achieved a minimum statistical power of 80% with a confidence interval of 95%. Data were analyzed using STATA 15.0 (StataCorp. 2017. Statistical Software: Release 15. College Station, TX, USA: StataCorp LLC). In addition, the Python programming language (version 3.8) and the IPython libraries NumPy (version 1.23.2), SciPy (version 1.9.0), and pandas (version 1.4.3) were used to process and analyze the database, and matplotlib was used to create the charts.

Nutritional and NAFLD Diagnosis

A total of 784 participants were included (380 males; average age 15.4 ± 0.98 years; range 13.2 to 17.9 years). The average BMI was 22.3 ± 4.2 for males and 24.4 ± 4.7 for females. In the analyzed sample, 27.5% (216 participants) were classified as overweight. Additionally, 12.7% (100 participants) were identified as having obesity, and 2.4% (19 participants) were classified as having severe obesity. The prevalence of NAFLD was 9.8% (77/784). There were no significant differences in the prevalence of NAFLD between males and females (9.2% vs. 10.4%, p = 0.577).

At the time of diagnosis, the prevalence of NAFLD was significantly higher in adolescents with obesity (38.1%) compared with adolescents who were overweight (10.3%) or normal weight (2.2%) (p < 0.001 for all comparisons). When comparing the characteristics of the participants, the NAFLD group exhibited a higher BMI, higher BMI z-scores, increased waist circumference, and greater amounts of subcutaneous and visceral fat (Table 1). The results are presented as n, mean and standard deviation or as n, median and interquartile range.
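To make the tertile-based model described in the Statistical Analysis subsection concrete, the sketch below shows how such an adjusted logistic regression could be set up in Python with pandas and statsmodels. The study itself ran this step in STATA 15.0, and the file and column names here are hypothetical.

```python
# Hedged sketch of the adjusted tertile-based logistic regression described
# in the Statistical Analysis subsection; names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gocs_cohort.csv")  # hypothetical file

# FMI increase between ages 5 and 10, cut into tertiles (first = reference)
df["fmi_tertile"] = pd.qcut(df["fmi_gain_5_10"], q=3, labels=["T1", "T2", "T3"])

model = smf.logit(
    "nafld ~ C(fmi_tertile, Treatment('T1')) + age + C(sex) "
    "+ maternal_bmi + C(gestational_diabetes) + gest_weight_gain + C(ebf_6m)",
    data=df,
).fit()

print(np.exp(model.params))         # adjusted odds ratios per tertile
print(np.exp(model.conf_int()))     # 95% confidence intervals for the ORs
```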
Anthropometry and Body Composition Trajectories from Childhood to Adolescence

For both males and females, the NAFLD group exhibited elevated BMI z-scores at ages 4, 10, and 16 years (p < 0.001 for all ages). Additionally, the NAFLD group demonstrated an increased waist circumference from 3 years onward (p < 0.05 for all groups) and displayed significantly higher levels of subcutaneous fat across the four skinfold measurements. Significant differences were observed from age 4 to 12 years in both males and females (Table 2).

Regarding the DXA evaluation in males, the NAFLD group had higher levels of total body fat (percentage of fat) and trunk, arm, and leg fat annually from age 12 to 15 years (all p < 0.05). Regarding the DXA evaluation in females, the NAFLD group exhibited elevated levels of percentage of fat and trunk fat at age 10 years (p < 0.05). By the age of 12 years, they demonstrated higher levels of percentage of fat, trunk fat, arm fat, and leg fat (p < 0.05). The results are presented as n, mean, and standard deviation or as n, median, and interquartile range. WC = waist circumference, PCSI = suprailiac skinfold, PCSE = subscapular skinfold, PCB = biceps skinfold, PCT = triceps skinfold.

When we analyzed the trajectory of body composition measured using bioelectrical impedance analysis with the Hattori charts from age 5 to 15 years, we observed that the NAFLD group had higher FM levels in males and females (p = 0.001 and p < 0.001, respectively). Regarding the FFM, the NAFLD group had higher values in females (p = 0.002), but not in males (p = 0.05) (Figures 1a and 2a); the percentage of fat at age 5 years was higher in the NAFLD group in males (p = 0.003) and females (p < 0.001). Similarly, the NAFLD group had higher FMI values in males (p = 0.003) and females (p = 0.001). For the FFMI, females with NAFLD had higher values than female controls (p = 0.006), but no significant differences were found for males (p = 0.206) (Figures 1b and 2b). Comparing the groups by similar weight (Figures 1a and 2a) or similar BMI (Figures 1b and 2b), both groups had different body compositions in terms of all parameters.
Higher Fat Mass Index during Childhood and the Risk of Developing NAFLD in Adolescence

In terms of the FMI, the NAFLD group demonstrated elevated values from age 5 years through adolescence. In males, the disparities were statistically significant at age 5 years (p = 0.002), age 10 years (p < 0.001), and age 15 years (p < 0.001). Likewise, significant differences for females were noted at ages 5, 10, and 15 years (all p < 0.001). We grouped the sample into FMI increase tertiles during the first 10 years of life and evaluated the risk of developing NAFLD in adolescence. For the second tertile, the odds ratio (OR) for developing NAFLD in adolescence was 1.92 (1.34–2.74, 95% CI), and for the third tertile, it was 6.12 (4.46–8.39, 95% CI), when compared with the first tertile. After adjusting for age, sex, maternal pregestational BMI, gestational diabetes, weight gain during pregnancy, and exclusive breastfeeding up to the sixth month, the ORs increased to 2.19 (1.48–3.25, 95% CI) and 6.94 (4.79–10.04, 95% CI) for the second and third tertiles, respectively (Table 3). When we conducted the same analysis based on the annual FMI measurements, the results demonstrated that the risk is increased throughout childhood and is highest at the age of 5 years, with an OR of 2.9 (1.77–4.76, 95% CI) (Table 4).

Discussion

NAFLD has emerged as the primary cause of chronic liver disease in children and is directly attributable to the substantial prevalence of childhood obesity. This study investigated the associations between body composition trajectories at various stages of infancy and childhood and the occurrence of NAFLD in adolescence. We found a general prevalence of NAFLD of 9.8%, with no differences according to sex; these findings are similar to other reports in the literature [5,20]. We also found that children who developed NAFLD in adolescence had a higher BMI during childhood, mainly due to a higher FM relative to FFM. A previous study of the same cohort by our group showed that the presence of obesity starting at 2 years of age strongly increased the risk of developing NAFLD in adolescence [11].
Adipose tissue performs several metabolic functions, such as the production of adipokines and cytokines involved in proinflammatory status and extrahepatic injury. Thus, adipocyte hypertrophy may contribute to the development of NAFLD [21,22]. Visceral fat has been reported as an important risk factor for insulin resistance, type 2 diabetes, and cardiovascular disease [23,24]. In contrast, subcutaneous fat of the lower extremities has been associated with an increased sensitivity to insulin [25]. The finding that a higher FM gain during childhood is related to the risk of developing NAFLD in adolescence coincides with the publication of Huang et al. [26], who showed that childhood adiposity trajectories are associated with adolescent insulin resistance, a recognized risk factor for NAFLD.

In this study, subcutaneous fat during childhood, as measured by skinfold testing, was associated with the subsequent development of NAFLD. A cohort of 1167 Australian adolescents showed this association from 3 years of age for the suprailiac skinfold [27]. Similarly, it was found that a larger waist circumference was associated with the development of NAFLD based on data available from age 14 years. In our cohort, we found this association as early as age 3 years. The waist circumference is a simple measurement to obtain and reflects the accumulation of fat in the trunk; however, it does not differentiate between visceral and subcutaneous adipose tissue [28]. The severity of obesity, specifically abdominal obesity, determines a higher risk of NAFLD progression [29]. One cross-sectional study in adolescents with obesity using different body composition measurements showed that waist circumference, trunk fat (measured by DXA), and intra-abdominal fat (measured by ultrasound) predicted the presence of NAFLD [30].

In this analysis, we observed significant differences in body composition at early stages, which emerged as a notable risk factor for NAFLD. Patients diagnosed with NAFLD showed a greater accumulation of fat mass even as early as 5 years of age, and these differences became even more pronounced during adolescence when compared with the control group. These findings suggest that early-life FM accumulation may be associated with an increased susceptibility to NAFLD later in life.

The period of puberty is characterized by various physiological changes, including increased insulin resistance, elevated blood pressure, and changes in cholesterol levels. These factors can contribute to an increased risk of developing metabolic syndrome, which is often associated with the development of NAFLD [31]. Furthermore, we observed marked differences in body composition between males and females in both groups. Females tended to have higher levels of FM, while males exhibited higher levels of FFM. These disparities may be attributed to differences in sex hormone production [10].

In the evaluation of body composition by DXA, a greater accumulation of body fat and trunk fat during childhood was associated with the subsequent development of NAFLD. Girls who developed NAFLD showed higher levels of body fat, particularly fat centralized in the trunk, at 10 years of age; however, after age 12 years, this accumulation was distributed more homogeneously throughout all compartments. In contrast, boys who developed NAFLD had higher levels of fat, but it was homogeneously distributed throughout all compartments.
When evaluating the highest tertiles of FMI, we found that the accumulation of adiposity from an early age increases the risk of developing NAFLD in adolescence. A study of 2160 adults (34.5% with NAFLD) explored the link between body composition and fatty liver. NAFLD risk correlated inversely with fat-free tissue and directly with fat tissue. The risk of NAFLD increased when total fat exceeded 32% and 26% in women and men, and when abdominal fat surpassed 21% and 13% in women and men, respectively [32]. A study of 100 children with obesity investigated the impact of body composition, particularly the distribution of body fat, and insulin resistance on NAFLD. The results indicated that body fat, particularly abdominal fat, played a role in the development of insulin resistance and subsequent NAFLD [33].

While the assessment of laboratory variables exceeds the scope of this study, it is crucial to note that the new definition of MASLD incorporates the determination of cardiometabolic variables, such as plasma HDL-cholesterol, plasma triglycerides, and fasting serum glucose [8]. Therefore, it is highly relevant to consider these factors in the assessment of metabolic liver dysfunction.

The strengths of this study encompass the prospective gathering of high-quality data, a longitudinal design, an extended follow-up duration, a large participant pool, and the representativeness of the Chilean pediatric population.

One limitation of this study was that the diagnosis of NAFLD relied on ultrasound rather than the gold standard methods of liver biopsy or magnetic resonance imaging with estimated proton density fat fraction. While liver biopsy is considered the most accurate method for detecting NAFLD, it is an invasive procedure that carries risks and is not suitable for large-scale epidemiological studies due to ethical and practical considerations. MRI is a non-invasive technique for assessing liver fat content, but it may be difficult to access in some settings due to its high cost and limited availability. Moreover, neither liver biopsy nor MRI is generally considered suitable as a screening test for NAFLD in the pediatric population [34,35]. Another limitation is that the data for the calculation of FMI were only available from age 5 years onwards, preventing the establishment of the initial point of the FMI increase associated with the risk of developing NAFLD in adolescence.

Conclusions

The trajectories of childhood weight gain and adiposity are associated with the development of NAFLD in adolescence. A larger waist circumference and higher levels of body fat, trunk fat, and subcutaneous fat during childhood are associated with the presence of NAFLD in adolescence. A higher FMI during childhood significantly increases the risk of developing NAFLD in adolescence, with the highest risk at the age of 5 years. Future trials of interventions for controlling adiposity gain during childhood would be helpful in better understanding its effect on NAFLD risk in adolescence.

Informed Consent Statement: Informed consent and assent were obtained from all subjects involved in this study.
Figure 1. (a) Hattori plot for the mean (circles: •; squares: ▪) of the fat-free mass (FFM) and the fat mass (FM) in males with NAFLD and in controls. The X axis shows the FFM, and the Y axis shows the FM, both expressed in kg. The diagonal lines indicate the weight (kg) and the percentage of FM (% fat). (b) Hattori plot for the mean (circles: •; squares: ▪) of the FFM index (FFMI) and the FM index (FMI) in males with NAFLD and in controls. The X axis shows the FFMI, and the Y axis shows the FMI, both expressed in kg/m². The diagonal lines indicate the BMI (kg/m²) and the percentage of fat (% fat).

Figure 2. (a) Hattori plot for the mean (circles: •; squares: ▪) of the fat-free mass (FFM) and the fat mass (FM) in females with NAFLD and in controls. The X axis shows the FFM, and the Y axis shows the FM, both expressed in kg. The diagonal lines indicate the weight (kg) and the percentage of FM (% fat). (b) Hattori plot for the mean (circles: •; squares: ▪) of the FFM index (FFMI) and the FM index (FMI) in females with NAFLD and in controls. The X axis shows the FFMI, and the Y axis shows the FMI, both expressed in kg/m². The diagonal lines indicate the BMI (kg/m²) and the percentage of fat (% fat).

Funding: This research was funded by the Fondecyt projects 1200839 (J.C.G.) and 11190856 (G.A.), and the Research Project Contest in Pediatric Nutrition for Young Researchers of Latin America, LASPGHAN 2020. M.F.'s postgraduate studies were funded by CONICYT-PFCHA/Magister Nacional/year 2019-file 79190112.

Institutional Review Board Statement: This study was conducted in accordance with the Declaration of Helsinki. The Ethics Committee of the School of Medicine of the Pontificia Universidad Católica de Chile (ID: 200312012) and of the Institute of Nutrition and Food Technology (INTA) of the Universidad de Chile approved the protocol and the informed consent used in the study on 20 May 2016.

Table 1. Anthropometric characteristics of adolescents, by sex.

Table 2. Waist circumference and skinfolds from childhood to adolescence in groups with and without NAFLD, by sex.

Table 3. Risk of developing NAFLD in adolescence based on FMI tertiles during childhood. Adjusted for age, sex, maternal pregestational BMI, gestational diabetes, weight gain during pregnancy, and exclusive breastfeeding up to the sixth month.

Table 4. Risk of presenting NAFLD in adolescence according to annual FMI during childhood. * Adjusted for age, sex, maternal pregestational BMI, gestational diabetes, weight gain during pregnancy, and exclusive breastfeeding up to the sixth month.
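The Hattori charts of Figures 1 and 2 can be reproduced in outline with matplotlib, which the study used for its charts. The sketch below only shows how the iso-BMI and iso-%fat guide lines of panel (b) are constructed; all plotted points are invented placeholders, not cohort data.

```python
# Illustrative sketch of a Hattori chart like Figures 1b and 2b: FFMI on the
# x axis, FMI on the y axis, with diagonal iso-BMI and iso-%fat guide lines.
import numpy as np
import matplotlib.pyplot as plt

ffmi = np.linspace(10, 20, 200)
fig, ax = plt.subplots()

for bmi in (15, 20, 25, 30):        # iso-BMI lines: FMI = BMI - FFMI
    ax.plot(ffmi, bmi - ffmi, "k--", lw=0.5)
for pct in (10, 20, 30, 40):        # iso-%fat lines: FMI = FFMI * p / (100 - p)
    ax.plot(ffmi, ffmi * pct / (100 - pct), "k:", lw=0.5)

# Invented example trajectories, one per group
ax.plot([14.5, 15.5], [4.0, 6.5], "o-", label="NAFLD (example)")
ax.plot([14.0, 15.0], [3.0, 4.5], "s-", label="Control (example)")
ax.set_xlabel("FFMI (kg/m$^2$)")
ax.set_ylabel("FMI (kg/m$^2$)")
ax.set_ylim(0, 12)
ax.legend()
plt.show()
```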
Gonadotropin-releasing hormone agonist treatment and ischemic heart disease among female patients with breast cancer: A cohort study

Abstract

Background: The risk of ischemic heart disease (IHD) due to the impact of gonadotropin-releasing hormone (GnRH) agonists among female patients with breast cancer remains a controversy.

Methods: Information from the Registry for Catastrophic Illness, the National Health Insurance Research Database (NHIRD), and the Death Registry Database in Taiwan was analyzed. Female patients with breast cancer were selected from the Registry for Catastrophic Illness from January 1, 2000, to December 31, 2018. All the breast cancer patients were followed until new-onset IHD diagnosis, death, or December 31, 2018. A Kaplan–Meier survival curve was drawn to show the difference between patients treated with and without GnRH agonists. Cox regression analysis was used to investigate the effects of GnRH agonists on the incidence of IHD.

Results: A total of 172,850 female patients with breast cancer were recognized, with a mean age of 52.6 years. Among them, 6071 (3.5%) had received GnRH agonist therapy. Kaplan–Meier survival curves showed a significant difference between patients with and without GnRH therapy (log-rank p < 0.0001). Patients who received GnRH therapy had a significantly decreased risk of developing IHD compared with those without GnRH therapy (HR = 0.18; 95% CI = 0.14–0.23). After adjusting for age, treatment, and comorbidity, patients who received GnRH therapy still had a significantly lower risk of developing IHD (AHR = 0.5, 95% CI = 0.39–0.64).

Conclusion: The study showed that the use of GnRH agonists for breast cancer treatment was significantly associated with a reduced risk of IHD. Further research is required to investigate the possible protective effect of GnRH on IHD.

| INTRODUCTION

Breast cancer is the most common cancer among females worldwide, accounting for 25.4% of all cancers in women, with more than two million newly diagnosed cases. 1 In Asia, female patients with breast cancer were younger compared with patients from Western countries. Luminal histology subtypes were also more predominant among patients in Western countries. 2 For patients with premenopausal or perimenopausal endocrine-positive breast cancer, gonadotropin-releasing hormone (GnRH) agonists are increasingly administered in combination with tamoxifen 3 or cyclin-dependent kinase 4/6 inhibitors 4,5 in the adjuvant or metastatic settings. GnRH agonists inhibit the pituitary GnRH receptors and suppress the downstream effects of follicle-stimulating hormone (FSH) and luteinizing hormone (LH), resulting in decreased estrogen production in premenopausal ovaries. 6 Previous studies have shown diverse results regarding the effects of GnRH agonists on the cardiovascular system in hormone-dependent cancer management. A previous animal study showed that GnRH agonists may be associated with atherosclerotic effects. 7 Several observational studies showed that GnRH agonists were related to increased cardiovascular disease risk in patients with prostate cancer. 8-10 However, a meta-analysis of randomized trials reported no significant associations between GnRH agonists and the risk of cardiovascular disease. 11 Most evidence suggesting an association between GnRH agonists and cardiovascular disease for male patients with prostate cancer came from population-based studies.
8,9,12,13 Several meta-analyses of observational studies disclosed that GnRH agonists were related to an increased incidence of non-fatal cardiovascular disease. 14,15 Whether or not GnRH agonists are associated with an excess risk of cardiovascular morbidity remains a highly controversial question. 11 To the best of our knowledge, limited literature addressing the associations between GnRH agonists and the risk of cardiovascular disease in patients with breast cancer is available. Therefore, this study intended to determine the relationship between GnRH agonists and the risk of IHD in female patients with breast cancer.

| Data source

Data from the Registry for Catastrophic Illness, the National Health Insurance Research Database (NHIRD), and the Death Registry Database in Taiwan were analyzed. The NHIRD contains healthcare data of more than 99% of the population in Taiwan, including both inpatient and outpatient medical records. 16,17 The NHIRD contains patient information such as diagnoses, drug administration, and examinations. The Institutional Review Board of TCH certified this research (no. TCHIRB-10709107-W).

| Study subjects

Female subjects 18 years and older with a diagnosis of breast cancer between January 1, 2000, and December 31, 2018, were identified from the Registry for Catastrophic Illness (ICD-9-CM and ICD-10-CM codes for female breast cancer: 174 and C50.x1x, respectively). All the cancer diagnoses recorded in the Registry of Catastrophic Illness were confirmed by pathologists. 18 The Death Registry Database in Taiwan confirmed cases of death. Study subjects were followed until new-onset IHD diagnosis, death, or December 31, 2018.

| Outcome variables

The incidence of IHD was recognized from the NHIRD. IHD was defined as a diagnosis occurring more than once in inpatient medical records or more than three times in outpatient medical records (ICD-9-CM codes 411-414 except 414.1x, and ICD-10-CM codes I20-I25 except I21, I25.3, and I25.4). 19

| Main explanatory variable

Information regarding GnRH agonist prescriptions was gathered from the NHIRD. The total administered daily dose of GnRH agonists was calculated and expressed as the defined daily dose (DDD): 0.134 mg for leuprorelin and triptorelin, and 0.129 mg for goserelin, as suggested by the Anatomical Therapeutic Chemical Classification/Defined Daily Doses (ATC/DDD) system. 20
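As an illustration of this exposure metric, a minimal Python sketch of the DDD calculation follows; the per-drug values are the ATC/DDD figures quoted above, while the function and data layout are our own.

```python
# Minimal sketch of the cumulative DDD calculation described above.
# DDD values (mg) follow the ATC/DDD figures quoted in the text.
DDD_MG = {"leuprorelin": 0.134, "triptorelin": 0.134, "goserelin": 0.129}

def cumulative_ddd(prescriptions):
    """prescriptions: iterable of (drug_name, total_mg_dispensed) tuples."""
    return sum(total_mg / DDD_MG[drug] for drug, total_mg in prescriptions)

# Example: one 3.75 mg depot of leuprorelin corresponds to ~28 DDDs
print(round(cumulative_ddd([("leuprorelin", 3.75)]), 1))  # ≈ 28.0
```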
| Potential confounders

The potential confounders were age, socioeconomic status, breast cancer therapy, including lumpectomy and radiotherapy, and comorbidities. The socioeconomic status included income level and residence. Income level was categorized as low, intermediate, and high (≤19,200; 19,201 to <40,000; ≥40,000 New Taiwan Dollars [NTD]). Residence was categorized as urban, suburban, and rural. Comorbidities were recognized only if the condition occurred more than once in an inpatient setting or more than three times in outpatient medical records. 21

| STATISTICAL ANALYSIS

First, the demographic data of the study subjects were shown as continuous data with mean and standard deviation (SD) or categorical data with numbers and percentages. Patients with and without GnRH agonist treatment were compared using the two-sample t-test and Pearson χ² test. The incidence of IHD was calculated as events per 1000 person-years. Kaplan–Meier survival curves were drawn to show the difference between patients treated with and without GnRH agonists. Cox regression analysis was used to calculate hazard ratios (HRs) and 95% confidence intervals (CIs). Dose-response relations were also evaluated between GnRH agonist exposure (as a continuous variable) and incident IHD. Death events were analyzed as competing risk events. 22 Stratified analyses were performed according to age and comorbidities in case of interactions. Sensitivity analysis was performed by excluding patients with missing breast cancer stage data and including cancer stage in the multivariable Cox regression analysis. The data analyses were conducted using the SAS 9.4 software package (SAS Institute).

| RESULTS

A total of 196,539 female patients with breast cancer were recognized from the Registry for Catastrophic Illness between January 1, 2000, and December 31, 2018. After excluding those with antecedent IHD (n = 22,687), those younger than 18 years old (n = 15), and those with incomplete data (n = 987), 172,850 patients were included in the analysis. Table 1 shows the baseline features of participants. The overall mean (SD) age was 52.6 (11.5) years, and 3.5% of the subjects received treatment with a GnRH agonist. The mean (SD) of the DDDs for GnRH agonists was 41.5 (6.4) among patients receiving hormone treatment. Moreover, the mean (SD) follow-up times were 4.98 (3.80) years in patients receiving GnRH agonists and 7.19 (5.63) years in those not receiving GnRH agonists. Compared with patients not receiving GnRH agonists, those receiving GnRH agonists were younger and more likely to receive lumpectomy and radiotherapy. Moreover, patients receiving GnRH agonists had a lower proportion of comorbidities. Patients treated without GnRH agonists were more likely to live in rural areas and have lower incomes. During the study follow-up period, 12,605 female patients with breast cancer had new-onset IHD, including 63 (1.05%) patients receiving GnRH agonists and 12,542 (7.52%) patients not receiving GnRH agonists. The incidence rate of IHD per 1000 person-years was 2.10 in patients receiving GnRH agonists and 10.46 in those not receiving GnRH agonists (p < 0.001). In addition, the time to incident IHD was significantly longer in patients receiving GnRH agonists than in those not receiving GnRH agonists (p < 0.001, log-rank test; Figure 1). The univariable Cox proportional hazards model showed that female patients with breast cancer undergoing GnRH agonist therapy had a significantly decreased risk of incident IHD (HR: 0.18, 95% CI: 0.14-0.23).
After adjusting for age, sex, and comorbidities, patients using GnRH agonist therapy still had a significantly lower risk of incident IHD (AHR: 0.50; 95% CI: 0.39-0.64) (Table 2). Patients with higher income levels had a lower risk of incident IHD. Other factors associated with a decreased risk of incident IHD consisted of lumpectomy and radiotherapy. Moreover, risk factors for incident IHD consisted of age ≥ 50 years, diabetes, chronic kidney disease (CKD), hypertension, dyslipidemia, cerebrovascular disease, chronic obstructive pulmonary disease (COPD), and liver cirrhosis. A significant linear dose-response effect per DDD increase in GnRH agonists for incident IHD (AHR, 0.91; 95% CI, 0.84-0.98; p = 0.011) was also noted. Figure 2 shows the results of the stratified analysis. GnRH agonists were significantly associated with a lower risk of incident IHD in all the subgroups, except in those with CKD or COPD, respectively. Sensitivity analysis was performed after adjustment for the stage of breast cancer. Patients with missing stage data were excluded from the analysis (n = 104,726), leaving 68,124 participants in the multivariable Cox regression analysis. After adjusting for stage of breast cancer, the result showed that female patients with breast cancer undergoing GnRH agonist therapy had a significantly decreased risk of incident IHD (HR: 0.57, 95% CI: 0.38-0.84, p = 0.004) (Table S1).

| DISCUSSION

This study found that female patients with breast cancer receiving GnRH agonists had a lower risk of developing IHD than patients not receiving GnRH agonists. GnRH agonists bind to GnRH receptors in the pituitary gland, resulting in the secretion and initial surge of FSH and LH, which stimulates the production of serum testosterone or estrogen. Subsequently, the negative feedback at the pituitary gland causes downregulation of GnRH receptors. On the contrary, no initial testosterone surge is found after administration of GnRH antagonists. 14 The distinct impacts of GnRH agonists in our study and of bilateral oophorectomy on IHD might be partially explained by the fact that serum FSH and LH are persistently suppressed after GnRH agonist administration but upregulated after bilateral oophorectomy. 23 Potential alternative mechanisms explaining the findings of our study were adipogenesis 24 and atherosclerosis. 25 Dysregulated fat deposits in the arterial wall cause atherosclerosis and IHD. 26 Peripheral blood mononuclear cells (PMNs) and pro-inflammatory T helper 1 lymphocytes both express GnRH receptors. The activation of these receptors is involved in the activation of PMNs, lymphocytes, and cytokine production, such as an increase in IFN-γ and a decrease in IL-4. 27,28 Studies of the differing effects of GnRH-I and GnRH-II demonstrated that GnRH-I enhanced proliferation of PMNs and IL-2Rγ expression, while GnRH-II attenuated proliferation of PMNs and IL-2Rγ expression. 29 A large population study evaluating the side effects of bilateral oophorectomy-induced menopause in premenopausal women before age 50 without hormone replacement therapy (HRT) demonstrated a statistically significant increased risk of multimorbidity, including hyperlipidemia and diabetes mellitus. The increased risk of coronary artery disease became statistically significant only in adjusted analyses restricted to females receiving oophorectomy before the age of 45.
23,30 The deleterious effects of natural estrogen deprivation after menopause in the Study of Women's Health Across the Nation (SWAN) comprise increased body and cardiovascular fat and alterations in body weight and waist circumference. 31-33 The association between lumpectomy and IHD risk had not been investigated in previous studies. The procedure of lumpectomy may not be associated with the pathogenesis of IHD. In this study, we tried to include detailed treatment procedures, including surgical procedure, radiotherapy, and medical treatment. The detailed surgical procedure was not available in our dataset. Further research is warranted to explore the impact of lumpectomy on IHD risk. Previous studies had demonstrated that exposure of the heart to ionizing radiation during radiotherapy for breast cancer increases the subsequent rate of ischemic heart disease. 34 But the results of this study showed that radiotherapy appeared to be associated with a lower risk of IHD. The detailed radiation therapy regimen, including dose and area, was not available in this dataset. Even radiotherapy for distant bone metastases was included in the analysis, which may bias the estimated IHD risk of radiotherapy.

Figure 1. Kaplan–Meier curves for time to diagnosis of incident ischemic heart disease in patients receiving and not receiving GnRH agonists. GnRH, gonadotropin-releasing hormone.

Table 2. Univariate and multivariate analyses of risk factors associated with ischemic heart disease among patients with breast cancer.

This study enrolled a large number of patients with breast cancer and had a long follow-up duration, from 2000 to 2018. The diagnoses of breast cancer were confirmed by pathology reports in the Registry for Catastrophic Illness, and the diagnoses of comorbidities were confirmed by medical records to ensure the validity of this study. Additionally, socioeconomic status and treatment strategies were included as potential confounders. Our study has several limitations. First, similarly to other retrospective population studies, patients were not randomized between treatment groups. Patients allocated to the GnRH treatment group had significantly higher income levels and urbanization and received more lumpectomy and radiotherapy. However, these patients were younger and had fewer comorbidities, including diabetes mellitus and dyslipidemia. Nonetheless, multivariate analysis demonstrated treatment with GnRH agonists as an independent predictive factor associated with a lower risk of IHD. The stratified analysis also showed that GnRH agonists were significantly associated with a lower risk of IHD in all subgroups of patients. Second, we used ICD codes to identify the diagnosis of IHD in the administrative database. Although patients with less frequent visits were less likely to be diagnosed with IHD, the frequency of visits ranged from once every month to every 3 months. Patients receiving GnRH agonists usually received treatment at a one-month interval, which made the attribution of the lower risk of IHD to a lower frequency of visits less likely. The generalizability of this study to other regions requires further confirmation because most of the study subjects were Taiwanese. Our study provides a preliminary report for evaluating breast cancer treatment, considering the scarce literature currently available regarding the associations between GnRH agonists and the risk of IHD among women with breast cancer.
In conclusion, our large population study is the first to report that treatment with GnRH agonists in patients with breast cancer was associated with a significantly reduced risk of IHD after adjusting for multiple confounders. Furthermore, decisions about endocrine therapy for breast cancer should weigh the benefits of disease-specific survival against the long-term risk of cardiovascular events. Patients receiving endocrine therapy should try to avoid risk factors for cardiovascular disease. Further research to delineate and confirm the causality and mechanisms is needed.
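A hedged sketch of the survival analysis outlined in the Statistical Analysis section follows, using the Python lifelines package; the study itself used SAS 9.4. Variable and file names are hypothetical, and the competing-risk treatment of death events described in the text is not shown.

```python
# Illustrative Kaplan-Meier comparison plus multivariable Cox model;
# all column and file names are hypothetical placeholders.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("breast_cancer_cohort.csv")  # hypothetical file

kmf = KaplanMeierFitter()
for label, grp in df.groupby("gnrh_agonist"):
    kmf.fit(grp["followup_years"], event_observed=grp["ihd"], label=str(label))
    kmf.plot_survival_function()

res = logrank_test(
    df.loc[df.gnrh_agonist == 1, "followup_years"],
    df.loc[df.gnrh_agonist == 0, "followup_years"],
    event_observed_A=df.loc[df.gnrh_agonist == 1, "ihd"],
    event_observed_B=df.loc[df.gnrh_agonist == 0, "ihd"],
)
print(res.p_value)  # log-rank comparison of the two curves

cph = CoxPHFitter()
cph.fit(
    df[["followup_years", "ihd", "gnrh_agonist", "age", "diabetes",
        "hypertension", "dyslipidemia", "lumpectomy", "radiotherapy"]],
    duration_col="followup_years",
    event_col="ihd",
)
cph.print_summary()  # hazard ratios with 95% CIs
```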
Wave dispersion in pulsar plasma: 2. Pulsar frame

Wave dispersion in a pulsar plasma is discussed emphasizing the relevance of different inertial frames, notably the plasma rest frame ${\cal K}$ and the pulsar frame ${\cal K}'$ in which the plasma is streaming with speed $\beta_{\rm s}$. The effect of a Lorentz transformation on both subluminal, $|z|<1$, and superluminal, $|z|>1$, waves is discussed. It is argued that the preferred choice for a relativistically streaming distribution should be a Lorentz-transformed Jüttner distribution; such a distribution is compared with other choices including a relativistically streaming Gaussian distribution. A Lorentz transformation of the dielectric tensor is written down, and used to derive an explicit relation between the relativistic plasma dispersion functions in ${\cal K}$ and ${\cal K}'$. It is shown that the dispersion equation can be written in an invariant form, implying a one-to-one correspondence between wave modes in any two inertial frames. There are only three modes in the plasma rest frame, and it is argued that a claimed 'fourth' mode in the pulsar frame is a spurious result of an invalid approximation.

Introduction

In an accompanying paper (Rafat et al. 2017, hereinafter Paper 1) we discuss wave dispersion in the rest frame, denoted K, of a pulsar plasma, emphasizing the importance of the intrinsic spread in electron (and positron) energies, ⟨γ⟩ ≫ 1. In this paper we discuss aspects of the plasma physics that involve Lorentz transforming between frames. In particular, we consider the effects of the Lorentz transformation between K and the pulsar frame, K′, in which the plasma is streaming outwards at speed β_s, where we use "speed" to refer to a velocity component along the direction of the magnetic field relative to the speed of light. As in Paper 1, we describe the wave dispersion in terms of the frequency ω, phase speed z = ω/kc and angle θ of propagation. The Lorentz transformation relates ω, z and θ in K to ω′, z′ and θ′ in K′.

As in Paper 1 we suggest that the default choice for a relativistic distribution of particles in K should be a 1D Jüttner distribution. Here we argue that the default choice for the distribution function for a beam, or other streaming distribution of highly relativistic particles, is that obtained by applying a Lorentz transform to a 1D Jüttner distribution in K. Alternative choices for a relativistic distribution function in K include a power-law (Kaplan & Tsytovich 1973, §17), a relativistic Gaussian (Lominadze & Pataraya 1982; Asseo & Melikidze 1998) and water-bag (Arons & Barnard 1986) and bell (Gedalin et al. 1998) distributions. For non-streaming distributions the effect of the different choices is primarily on the form of the relativistic plasma dispersion function (RPDF), and these effects are relatively minor (Gedalin et al. 1998). However, different choices have a much larger effect for streaming distributions. We find that the Lorentz-transformed distribution function is very much broader than the streaming Gaussian distribution usually assumed. We discuss the implications of this for beam-driven instabilities in a pulsar magnetosphere. One notable implication is the effect on the "separation" condition, for two relatively streaming distributions to become separated, rather than overlapping (in u = γβ), so that one can be identified as a beam propagating through the other (the background).

Wave dispersion in K′ may be treated using three different (but equivalent) approaches.
One approach is to treat the wave dispersion in K and Lorentz transform the wave solutions to K′. Two effects of the Lorentz transformation on a wave are well-known in the context of escaping pulsar radio emission: the effect (Lorentz boost) on the frequency (Lesch et al. 1998) and the effect (aberration) on the direction of propagation (Cordes 1978; Gupta & Gangadhara 2003). The transformation of the phase speed is a trivial application of the relativistic addition of velocities, z′ = (z + β_s)/(1 + zβ_s), but some care is needed in the application to wave dispersion because either ω′ or z′ may be opposite in sign to ω or z. Formally, ω′ < 0 may be treated by using the symmetry of the dispersion equation under ω′, k′ → −ω′, −k′ to relate the positive- and negative-frequency solutions, by requiring that the physical solution of the dispersion relation (in any frame) correspond to a positive frequency.

The other approaches involve deriving the wave dispersion directly in K′, with the two alternatives relating to the way the dielectric tensor is identified in K′. One way is to Lorentz transform the distribution function and use the transformed distribution function in calculating the dielectric tensor in K′. The other way is to Lorentz transform the dielectric tensor from K to K′. The latter approach involves transforming the relativistic plasma dispersion function (RPDF) z²W(z) in K to z′²W′(z′) in K′. We establish the equivalence of these approaches in general, by showing that the dispersion equation may be written in invariant form, and we illustrate the equivalence for specific wave modes.

The equivalence of the two ways of relating wave dispersion in K and K′ implies an inconsistency in the literature: it is found that there are only three modes in K (Paper 1) whereas there have been claims of a "fourth" longitudinal mode in K′ (Beskin et al. 1993; Lyne & Graham-Smith 2006). In principle, a "fourth" mode could arise from a mis-interpretation of the transformation of ω = ω_L(z) > 0 in K into ω′ < 0 in K′. However, we argue that this is not the case and that the "fourth" mode is a spurious result of invalid approximations made in evaluating the RPDF in K′.

In §2 we write down the Lorentz transformation between K and K′ for a wave and also for a 1D distribution function. In §3 we argue that a beam should be modeled as a Lorentz-transformed Jüttner distribution, and we introduce a multi-beam model composed of several such distributions. In §4 we estimate the separation condition for two such (relatively streaming) distributions to be regarded as non-overlapping, and point out that this condition is more restrictive than might be anticipated. We write down the Lorentz transformation of the dielectric tensor and of the dispersion equation in §5. In §6 we show that the "fourth" mode is a spurious result of an approximation made in K′. We discuss our results and summarize our conclusions in §7.

Lorentz transformation between rest and pulsar frames

In this section we write down the Lorentz transformation between K and K′, which is assumed to be streaming at speed β_s (in the negative direction, towards the pulsar surface) relative to K. We also discuss the transformation of a 1D Jüttner distribution between K and K′.

Figure 1. The relation between z and z′, plotted for β_s = 0.9; the box enclosed by the dashed lines at z, z′ = ±1 is the subluminal range.
Lorentz transformation to the pulsar frame

The Lorentz transformation from the unprimed frame K to the primed frame K′ moving along the magnetic field at speed β_s, applied to a wave described by frequency ω and wavenumber components k_∥ and k_⊥, parallel and perpendicular, respectively, to the relative velocity, gives

ω′ = γ_s(ω + β_s c k_∥), k_∥′ = γ_s(k_∥ + β_s ω/c), k_⊥′ = k_⊥, (2.1)

with γ_s = (1 − β_s²)^{−1/2}. In terms of the variables z = ω/kc and θ in the unprimed frame and z′ = ω′/k′c and θ′ in the primed frame, equations (2.1) and the inverse transforms imply

z′ = (z + β_s)/(1 + zβ_s), (2.2)

together with a corresponding aberration relation between θ and θ′. The relation between z and z′ is illustrated (for β_s = 0.9) in Figure 1. The relation separates into two branches. One branch includes the subluminal range, −1 < z < 1 with −1 < z′ < 1, and two superluminal ranges, one where both z, z′ are negative, −1/β_s < z < −1 with −∞ < z′ < −1, and another where both z, z′ are positive, 1 < z < ∞ with 1 < z′ < 1/β_s. The other branch is for superluminal negative z and superluminal positive z′, −∞ < z < −1/β_s with 1/β_s < z′ < ∞, respectively. Assuming a source on the near side of the pulsar, only waves with z′ > 0 can reach the observer; these include not only forward-propagating waves in K, z > 0, but also backward-propagating waves with −β_s < z < 0 in K, which become forward-propagating waves, z′ > 0, in K′.

Subluminal waves

The subluminal range −1 < z < +1 in K maps onto the subluminal range −1 < z′ < +1 in K′. However, z and z′ can have opposite signs. The phase speed z = 0 that separates forward- and backward-propagating waves in K maps onto z′ = β_s in K′, and the phase speed z′ = 0 that separates forward- and backward-propagating waves in K′ corresponds to z = −β_s in K. Forward-propagating waves with 0 < z′ < β_s in K′ correspond to backward-propagating waves −β_s < z < 0 in K. However, this interpretation requires further comment. Note that the inverse of the transformation given by equation (2.1), specifically ω = γ_sω′(z′ − β_s)/z′ and k_∥ = γ_sk_∥′(1 − z′β_s), implies that z has the opposite sign to z′ due to ω < 0, k_∥ > 0. The negative frequency requires interpretation. It is conventional to describe a wave in terms of a positive frequency, and it is always possible to do so because the dispersion equation is unchanged under ω, k → −ω, −k and hence is an even function of z with positive- and negative-frequency solutions ω = ±ω_M(z), for some wave mode M. Confusion arises because negative z can be due to either ω or k being negative. A formal way of allowing for the change in sign of the frequency under a Lorentz transformation is to distinguish between forward- and backward-propagating wave modes with dispersion relations ω = ω_{M±}(z) > 0. One then requires that if the Lorentz transformation causes the frequency to change sign, one reinterprets this as a change in mode, from forward-propagating, M+, to backward-propagating, M−.

The mapping z → z′ for 1 − z, 1 − z′ ≪ 1 becomes strongly distorted for γ_s ≫ 1. Important features of the wave dispersion discussed in Paper 1 occur for γ_φ = (1 − z²)^{−1/2} ≫ 1, and an approximate form for the Lorentz transformation is desirable for this case. The relations (2.2) for 1 − z, θ ≪ 1 may be approximated by

γ_φ′ ≈ 2γ_sγ_φ, θ′ ≈ θ/2γ_s, (2.3)

where we assume γ_φ, γ_s ≫ 1. Thus phase speeds z ≈ 1 near the speed of light, γ_φ ≫ 1, in K transform into phase speeds much closer to the speed of light, γ_φ′ ≈ 2γ_sγ_φ ≫ γ_φ, in K′.
The approximation (2.3) applies to the parallel Alfvén (or A) mode, whose dispersion relation is discussed in Paper 1. Similarly, the maximum frequency of the L mode is determined by the maximum of the RPDF, at z = z_m, γ_φ = γ_m in K, and at z′ = z_m′ = (z_m + β_s)/(1 + z_mβ_s), with γ_φ′ = γ_m′ ≈ 2γ_sγ_m, in K′. The features of the wave dispersion in the small range of 0 < 1 − z ≪ 1 discussed in Paper 1 are squeezed into an extremely narrow range of phase speeds 0 < 1 − z′ ≪ 1 in K′.

Superluminal waves

The superluminal ranges in K and K′ also map into each other, but in a less obvious way than for subluminal waves. In this case changes in sign between z and z′ occur at (z, z′) = (±∞, 1/β_s), or at (z, z′) = (−1/β_s, ±∞). The frequency cannot change sign, and the introduction of ± modes is not relevant. In the application to pulsars, superluminal waves are relevant to oscillations that are primarily in time. Purely temporal oscillations correspond to k = 0, or z = ±∞, in K and to k′ = 0, or z′ = ±∞, in K′, and these may be identified as the conditions for the cutoff frequencies in the two frames. However, the cutoff frequencies in the two frames are not the same (in any meaningful sense) and the relation between them is not obvious. Specifically, assuming k = 0 in K and k′ = 0 in K′ implies frequencies that are related by ω′ = γ_sω and ω = γ_sω′, respectively. In a pulsar plasma the only cutoff (in the radio range) is in the L mode, at ω = ω_x = ω_p⟨1/γ³⟩^{1/2} ≈ ω_p/⟨γ⟩^{1/2} in K; since this corresponds to k = 0, the associated frequency in K′ is ω′ = γ_sω_x. On the other hand, k′ = 0 in K′ corresponds to z = −1/β_s in K, and to a frequency ω = ω_L(−1/β_s) ≈ ω_L(−1) = ω_1, for γ_s ≫ 1, in K, and hence to ω′ ≈ ω_1/γ_s in K′. We remark that the relation ω′ ≈ γ_sω applies for nearly temporal oscillations (large z) in K and the relation ω ≈ γ_sω′ applies for nearly temporal oscillations (large z′) in K′. There is a rapid transition between these relations near z = −z′, with |z| = |z′| = 1 + 1/γ_s. This rapid transition near z ≈ −1, z′ ≈ 1 is evident (for β_s = 0.9) in the upper-left branch in Figure 1.

Distribution function in the pulsar frame

The distribution g(u) in the rest frame may be rewritten in the pulsar frame by noting that it is invariant under Lorentz transformations along the direction of the magnetic field. In 4-tensor notation, let u^μ = (u⁰, u) denote a 4-velocity, with u⁰ = γ, u = γβb, where b is the unit vector along the magnetic field. We denote the invariant constructed from two 4-vectors v^μ and w^μ by vw = v⁰w⁰ − v·w. The 4-velocity corresponding to a system at rest is u₀^μ = (1, 0) and the 4-velocity of a system moving at speed β_s is u_s^μ = (γ_s, γ_sβ_sb). The parameters γ, β and γ′, β′ are related by the Lorentz transformation:

γ′ = γ_sγ(1 + β_sβ), β′ = (β + β_s)/(1 + β_sβ). (2.4)

For any distribution function in K that depends only on the energy, it is convenient to write this dependence in terms of γ = u₀u. We note the invariant u_su′ = u₀u, constructed from the 4-velocity u^μ = (γ, γβb) and from the 4-velocity u′^μ = (γ′, γ′β′b). It is convenient to write the distribution function g(u) in K as g(γ), when it depends only on the energy, and to rewrite this as g(u₀u). The distribution function g′(u′) in K′ becomes g(u_su′), with u_su′ = γ_sγ′(1 − β_sβ′). The normalization of g(u) is fixed by the number density, ∫du g(u) = n, in K. The number density in K′ is n′ = ∫du′ g′(u′). For a 1D Jüttner distribution, g(γ) ∝ exp(−ργ), this gives

g′(u′) = n exp[−ργ_sγ′(1 − β_sβ′)]/2K₁(ρ), (2.5)

where K₁ is a modified Bessel function.
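Before turning to streaming distributions, the phase-speed mapping of §2.1 is easy to check numerically. The short Python sketch below (ours, not from the paper) evaluates z′ = (z + β_s)/(1 + zβ_s) for β_s = 0.9, reproducing the special values quoted in §2.2, and compares the approximation γ_φ′ ≈ 2γ_sγ_φ of (2.3) with the exact result.

```python
# Numerical check of the phase-speed map (2.2) and the approximation (2.3).
import numpy as np

def z_prime(z, beta_s):
    """Relativistic velocity-addition map applied to the phase speed (eq. 2.2)."""
    return (z + beta_s) / (1.0 + z * beta_s)

beta_s = 0.9
gamma_s = 1.0 / np.sqrt(1.0 - beta_s**2)

# Special values quoted in the text: z = 0 -> z' = beta_s, z = -beta_s -> z' = 0
for z in (0.0, -beta_s, -0.95, 0.999, 2.0, -2.0):
    print(f"z = {z:+.3f}  ->  z' = {z_prime(z, beta_s):+.5f}")

# gamma_phi' ~ 2 * gamma_s * gamma_phi for 1 - z << 1 (eq. 2.3);
# agreement improves as gamma_s and gamma_phi grow.
z = 0.9999
gamma_phi = 1.0 / np.sqrt(1.0 - z**2)
gamma_phi_p = 1.0 / np.sqrt(1.0 - z_prime(z, beta_s) ** 2)
print(gamma_phi_p, 2.0 * gamma_s * gamma_phi)
```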
Streaming Jüttner distribution

In this section we re-interpret the Lorentz-transformed Jüttner distribution (2.5) as a streaming Jüttner distribution and argue that this should be the preferred choice to model streaming particles in a pulsar plasma. We start by writing down a multi-beam model that consists of a sum of such transformed Jüttner distributions with different streaming speeds. We then discuss the properties of a single such streaming distribution and compare it with a relativistically streaming Gaussian model that has been used in the pulsar literature.

Multi-beam model

A multi-beam model for the total distribution function of particles is assumed to consist of a number of components that are streaming relative to each other. Such a model applies in a specific frame, which we leave undefined, with each streaming speed relative to a point at rest in this frame. Let a specific distribution function, g_α(u), correspond to a streaming Jüttner distribution with streaming speed β_α, inverse temperature ρ_α and number density n_α. The contribution of species α to the total distribution function is obtained by Lorentz transforming the Jüttner distribution in the rest frame to the frame in which it is streaming with speed β_α. Using equation (2.5), this gives

g_α(u) = (n_α/γ_α) exp[−ρ_αγ_αγ(1 − β_αβ)] / 2K₁(ρ_α), (3.1)

where K₁ is a modified Bessel function and n_α/γ_α is the number density in the rest frame of species α. The multi-beam model corresponds to a sum of such distributions:

g(u) = Σ_α g_α(u). (3.2)

We discuss specific examples involving two such distributions in the next section.

Relativistically streaming distributions

In discussing choices for the distribution function of a relativistic beam in a pulsar plasma, it is helpful to start from nonrelativistic counterparts. In the absence of streaming, the default choice in the nonrelativistic case is a Maxwellian distribution, ∝ exp(−ρβ²/2) in the notation used in this paper. The corresponding model for a beam is a distribution streaming with speed β_α; this is ∝ exp[−ρ(β − β_α)²/2], which is obtained by applying a Galilean transformation to the Maxwellian distribution. We discuss several different choices of relativistic (non-streaming and streaming) distributions that are generalizations of the Maxwellian case. The standard relativistic generalization in the non-streaming case is a Jüttner distribution, which is obtained from the nonrelativistic Maxwellian distribution by replacing β²/2 by γ − 1, noting the expansion γ = 1 + β²/2 + … for β² ≪ 1. This is equivalent to writing the Maxwellian distribution in the form ∝ exp(−ε/T) and replacing the nonrelativistic energy, ε = mc²β²/2, by its relativistic counterpart, ε = γmc². Our choice for a relativistically streaming distribution is obtained by applying a Lorentz transformation to the resulting Jüttner distribution. A relativistically streaming Jüttner distribution is qualitatively different from its nonrelativistic counterpart, notably in the absence of any approximate symmetry. Specifically, a streaming 1D Maxwellian distribution, ∝ exp[−ρ(β − β_α)²/2], is symmetric about β = β_α, but there is no such symmetry for a relativistically streaming Jüttner distribution. Another choice of relativistic generalization of a Maxwellian distribution involves replacing the 3-speed β by the 4-speed u = γβ. In the absence of streaming this gives a Gaussian distribution ∝ exp(−u²/2u_th²), with u_th² = 1/ρ_α regarded as a free parameter in the model.
This generalization applied to a streaming Maxwellian gives a streaming Gaussian, which is a favored choice in the pulsar literature (e.g., Lominadze & Pataraya 1982; Asseo & Melikidze 1998):

g_α(u) ∝ exp[−(u − u_α)²/2u_th²]. (3.3)

The parameter u_th² may also be interpreted as the average ⟨(u − u_α)²⟩ over this distribution function. Note that the form (3.3) is obtained by two sequential replacements: including the streaming through β → β − β_α and including relativistic effects through {β, β_α} → {u, u_α}. A different result is obtained if one makes these generalizations in the opposite order, cf. equation (3.4). We note two differences between the relativistically streaming Gaussian (3.3) and a streaming Jüttner distribution. First, like its nonrelativistic counterpart, a relativistically streaming Gaussian is symmetric about u = u_α, whereas there is no such symmetry for a streaming Jüttner distribution. Second, a streaming Jüttner distribution is related to its non-streaming counterpart by a Lorentz transformation, but there is no such relation for a relativistic Gaussian. Specifically, the Lorentz-transformed Gaussian is obtained by replacing its dependence on u = γβ in terms of primed quantities using γ = γ_αγ′(1 − β′β_α) and β = (β′ − β_α)/(1 − β′β_α), where a prime denotes quantities in the frame in which the distribution is streaming. The Gaussian distribution, ∝ exp(−u²/2u_th²), does not transform into the streaming Gaussian distribution (3.3). The Lorentz transform of any given distribution g_α(u) is not g_α(u − u_α), but rather g_α(ũ) with ũ = γγ_α(β − β_α). A relativistic Gaussian in its rest frame transforms into

g(u) ∝ exp[−γ²γ_α²(β − β_α)²/2u_th²] ≈ exp[−(γ² − γ_α²)²/8γ²γ_α²u_th²], (3.4)

where the final form applies for {γ², γ_α²} ≫ 1. A distribution of the form (3.4) has some similarities to the streaming Jüttner distribution (3.1). However, we see no reason to prefer the distribution (3.4) over the streaming Jüttner distribution (3.1).

In Figure 2 we plot the Gaussian (solid and dashed) and Jüttner (dotted) distributions for ρ_α = 0.1. In the left panel we choose u_α = 0, for which the two expressions for the Gaussian distribution given by equations (3.3) and (3.4) coincide: u_th² = 1/ρ_α (solid) and u_th² = 1/ρ_α² (dashed); the Jüttner distribution (2.5) is given by the dotted curve. Comparison of the three cases shows that for small |u| the width of the Jüttner distribution is intermediate between a Gaussian with u_th² = 1/ρ_α and a Gaussian with u_th² = 1/ρ_α², with the Jüttner distribution having much broader wings at larger |u|. The number density is proportional to the area under the curve, ∝ γ_α. The change when streaming is included is shown in the right panel for u_th² = 1/ρ_α with u_α = 100 (black curves) and 200 (blue curves). The solid curves show plots of the Gaussian distribution as given by equation (3.3) and the dashed curves show the form given by equation (3.4). The corresponding plots for the Jüttner distribution are given by the dotted curves. It is clear that the Lorentz-transformed Gaussian distribution (3.4) is much broader, with its width increasing as u_α increases, whereas the width of the shifted Gaussian (3.3) is independent of u_α. Below the peak at u = u_α, the positive slope of the Jüttner distribution is much smaller than for either Gaussian, and above the peak the Jüttner distribution decreases much more slowly with u than either Gaussian. The width of the Lorentz-transformed Gaussian remains comparable to that of the Jüttner distribution when plotted as a function of the logarithm of u = γβ.
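These relative widths are easy to reproduce. The following sketch (an illustration mirroring the right panel of Figure 2, with ρ_α = 0.1 and u_α = 100) evaluates the three forms (3.3), (3.4) and (3.1), normalizes each to unit maximum, and prints the interval of u over which each exceeds half its maximum.

```python
import numpy as np

rho_a, u_a = 0.1, 100.0
gamma_a = np.sqrt(1.0 + u_a**2)
beta_a = u_a / gamma_a
u_th = 1.0 / np.sqrt(rho_a)                     # u_th^2 = 1/rho_alpha

u = np.linspace(1e-3, 2500.0, 100000)
gam = np.sqrt(1.0 + u**2)
bet = u / gam

g33 = np.exp(-(u - u_a)**2 / (2.0 * u_th**2))                          # streaming Gaussian (3.3)
g34 = np.exp(-(gam * gamma_a * (bet - beta_a))**2 / (2.0 * u_th**2))   # Lorentz-transformed Gaussian (3.4)
g31 = np.exp(-rho_a * gamma_a * gam * (1.0 - beta_a * bet))            # streaming Juttner (3.1), unnormalized

for name, g in (("(3.3)", g33), ("(3.4)", g34), ("(3.1)", g31)):
    g = g / g.max()
    span = u[g > 0.5]
    print(name, "above half maximum for u = %.1f to %.1f" % (span.min(), span.max()))
```

All three peak near u = u_α, but the half-maximum span of (3.3) is a few u_th, while those of (3.4) and (3.1) extend over more than an order of magnitude in u, consistent with the description above.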
The streaming Gaussian distribution (3.3) is a poor approximation to a streaming Jüttner distribution for ρ_α ≪ 1. In particular, the slope of the distribution, dg(u)/du, for either Gaussian distribution is a poor approximation to the slope for the Jüttner distribution. This slope is directly relevant to a beam-driven instability, suggesting that the growth rate for a Jüttner distribution is poorly approximated by a streaming Gaussian model. We suggest that the choice of a relativistically streaming Gaussian distribution (3.3) is made primarily for mathematical convenience. The choice (3.3) applies only in a specific frame, in the sense that it does not retain its form under a Lorentz transformation. We adopt the view that the default choice for a relativistic distribution is a Jüttner distribution in the rest frame of the plasma, and that the default choice for a relativistically streaming distribution is that obtained by applying a Lorentz transformation to the distribution function in the rest frame. The fact that the resulting streaming distribution is very much broader than the rest-frame distribution is a characteristic feature, which applies to but is not restricted to a Jüttner distribution.

Examples of relativistically streaming distributions

In Figure 3 we plot the distribution function (3.1) for ρ_α = 0.1 and for several values of u_α = γ_αβ_α. The left panel shows a non-streaming distribution, β_α = 0 (solid), and two streaming distributions, γ_αβ_α = 3 (dashed) and γ_αβ_α = 10 (dotted). The non-streaming distribution is symmetric about the origin, u = 0; a slight asymmetry develops for a small streaming speed, and for γ_αβ_α ≈ 1/ρ_α ≈ γ_α the asymmetry is substantial. In the case u_α ≈ γ_α ≈ 10 the distribution function is almost negligible for u < 0, increases with increasing u > 0 to a maximum near u = u_α ≈ 10, and then decreases slowly for u ≫ u_α. The right panel of Figure 3 shows the cases u_α = 10 (solid), 30 (dashed), 100 (dotted) on a larger scale. In each case the distribution function has a maximum at u = u_α. Note that the normalization in Figure 3 is chosen to show the relative shapes of the distributions: each is normalized so that its maximum is unity. The number density in each case is proportional to the area under the curve, which is ∝ γ_α for a streaming Jüttner distribution; with normalization to a fixed number density the maxima would be ∝ 1/γ_α.

Figure 4: Comparison of the approximate forms (3.7) and the exact form (3.1) of a Jüttner distribution with ρ_α = 0.1 and u_α = 100. The solid curve is the exact distribution, the dashed curve is the approximation for γ ≪ γ_α, and the dotted curve is the approximation for γ ≫ γ_α.

For comparison we consider the same highly relativistic approximation to the Lorentz-transformed Gaussian distribution (3.4). In place of equation (3.7), this gives

g(u) ∝ exp[−(γ² − γ_α²)²/8γ²γ_α²u_th²]. (3.8)

The distribution (3.8) is very much broader than the conventional form (3.3) for a relativistically streaming Gaussian distribution, as is evident from the way they fall off for γ² ≫ γ_α²: ∝ exp(−γ²/2u_th²) for (3.3) and ∝ exp(−γ²/8γ_α²u_th²) for (3.8).

"Separation" of relatively streaming distributions

In the familiar bump-in-tail instability, in which Langmuir waves grow due to a beam of fast electrons in a nonrelativistic plasma, growth requires a minimum in the total distribution function between the thermal background and the fast beam.
In this section we discuss the generalization of this "separation" condition to the relativistic case for Jüttner distributions. We first estimate the condition for separation between two counter-streaming distributions.

Counter-streaming distributions

An idealized counter-streaming distribution consists of two streaming Jüttner distributions, α = ±, with n_± = n̄/2, |β_±| = β̄, and the same temperature ρ_± = ρ ≈ 1/⟨γ⟩ ≪ 1. The resulting distribution function is

g_cs(u) = (n̄/2γ̄) {exp[−ργ̄γ(1 − β̄β)] + exp[−ργ̄γ(1 + β̄β)]} / 2K₁(ρ). (4.1)

We first discuss how the distribution changes as the speed ū = γ̄β̄ increases from zero to γ̄ ≈ 1/ρ ≫ 1. We then transform to the frame where one of the distributions is at rest and consider the highly relativistic case. For ū = 0 the two distributions are identical, and their sum is a single Jüttner distribution, corresponding to the solid black curve on the left in Figure 5. As shown in Figure 5, the curves move apart with increasing ū, becoming almost completely separated for ū ≳ 10. This "separation condition" is important in estimating the conditions under which the combined distribution can be interpreted as a beam propagating through a background distribution. The separation condition was discussed by Lazar et al. (2010), who considered the 3D counterpart, but this difference is unimportant here. Separation occurs for γ̄β̄² > 1/ρ, as shown on the right in Figure 5, where the disappearance of the peak at u = 0 is evident; the normalization of the distributions is chosen such that the maxima, at u = ±ū, are unity. For {ū, ⟨γ⟩} ≫ 1 we may write this separation condition as ū/⟨γ⟩ ≳ 1.

Transformation to rest frame of one beam

The properties of the counter-streaming distribution are useful in discussing the weak-beam model. The idea is that by transforming to the frame in which the backward-propagating distribution is at rest, the backward-propagating distribution is re-interpreted as the background distribution, with the forward-propagating distribution regarded as the beam. The weak-beam case follows by multiplying the latter distribution by the ratio of the beam to background densities. The relative speed between the two distributions becomes the beam speed

β_b = 2β̄/(1 + β̄²), γ_b = γ̄²(1 + β̄²) ≈ 2γ̄². (4.2)

Let a quantity in the frame in which the backward-propagating beam is at rest be denoted by a prime. Then in equation (4.1) one has

γ̄γ(1 + β̄β) = γ′, γ̄γ(1 − β̄β) = γ_bγ′(1 − β_bβ′). (4.3)

Let n₀ be the number density of either beam in the rest frame of that beam. Using the fact that g_cs(u′) = g_cs(u) is an invariant, in the primed frame equation (4.1) becomes

g_cs(u′) = {n₀ exp(−ργ′) + (n_b/γ_b) exp[−ργ_bγ′(1 − β_bβ′)]} / 2K₁(ρ), (4.4)

with n_b = γ_bn₀ in this case. Equation (4.4), with primes omitted, becomes a weak-beam model for n_b/γ_bn₀ ≪ 1.

Separation condition

The condition ū ≈ γ̄ ≫ ⟨γ⟩ for two identical counter-streaming Jüttner distributions to become well separated transforms into γ_b ≫ 2⟨γ⟩² in the frame in which one of the beams is at rest. The Lorentz transformation to the new frame makes this separation condition appear to be more extreme than in the counter-streaming frame. This separation condition is a direct result of the Lorentz transformation, and is not specific to Jüttner distributions, as may be seen by considering counter-streaming Gaussian distributions. For counter-streaming Gaussian distributions of the form given by equation (3.3), with α = ±, u_± = ±ū, n_± = n̄/2 and the same u_th, the separation condition is closely analogous to that for the corresponding nonrelativistic counterpart, in which a Gaussian distribution is equivalent to a Maxwellian distribution. The two distributions become well separated when the streaming speeds exceed the thermal spreads, corresponding to ū ≫ u_th. This condition transforms into γ_b ≫ 2u_th², which is equivalent to γ_b ≫ 2⟨γ⟩² for Jüttner distributions.
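A minimal numerical sketch of the counter-streaming distribution (4.1) illustrates the separation condition; the choice ρ = 1 here is an illustrative assumption (the value used in Figure 5 is not stated above).

```python
import numpy as np

rho = 1.0                        # illustrative value only
u = np.linspace(-60.0, 60.0, 120001)
gam = np.sqrt(1.0 + u**2)
bet = u / gam

def g_cs(u_bar):
    """Counter-streaming sum (4.1), unnormalized: two beams at beta = +/- beta_bar."""
    g_bar = np.sqrt(1.0 + u_bar**2)
    b_bar = u_bar / g_bar
    return (np.exp(-rho * g_bar * gam * (1.0 - b_bar * bet))
            + np.exp(-rho * g_bar * gam * (1.0 + b_bar * bet)))

for u_bar in (1.0, 3.0, 10.0, 30.0):
    g = g_cs(u_bar)
    dip = g[np.argmin(np.abs(u))] / g.max()     # central value relative to the peaks
    g_bar = np.sqrt(1.0 + u_bar**2)
    cond = rho * g_bar * (u_bar / g_bar)**2     # separation measure rho*gbar*bbar^2
    print(u_bar, dip, cond)                     # the dip -> 0 once cond exceeds unity
```

The printed central-dip ratio falls from near unity to a negligible value as ργ̄β̄² passes through 1, echoing the condition γ̄β̄² > 1/ρ.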
Weak-beam model

In a weak-beam model there are only two components, which we denote by α = 0, b, where α = 0 refers to the background and α = b refers to the beam. The frame of interest is identified as the rest frame of the background in this case. The distribution function is then

g(u) = g₀(u) + g_b(u) = {n₀ exp(−ργ) + (n_b/γ_b) exp[−ργ_bγ(1 − β_bβ)]} / 2K₁(ρ), (4.5)

with ε_n = n_b/γ_bn₀ ≪ 1, where the first term is g₀(u) and the second term is g_b(u). For ρ₀ = ρ_b = ρ, this result also follows from equation (4.4) by omitting the primes and allowing arbitrary n_b/γ_bn₀ ≪ 1. A conventional approach to treating wave dispersion in this case is based on an expansion in ε_n ≪ 1. To zeroth order the beam is ignored, such that the wave dispersion is determined by the background plasma alone. To first order the beam contributes a correction to the frequency, which includes both imaginary and real parts, with the former determining the growth rate of any beam-driven instability. In the case of a maser instability, due to negative absorption, growth requires dg(u)/du > 0 at the resonant frequency, determined by u = γβ with β = z (a numerical sketch of this condition is given at the end of this section). In Figure 6 we plot the weak-beam distribution function (4.5) for ρ = 0.1, for two values of ε_n, 0.1 on the left and 0.01 on the right, and for u_b = 100, 200. For ε_n = 0.1 the minimum and maximum (at u = u_b) in g(u) that would be present in the absence of the background have almost disappeared for u_b = 100, but are still present for u_b = 200. For ε_n = 0.01 the minimum and maximum for u_b = 100 are nearly smoothed out. A decrease in the relative density, ε_n, of the beam affects the separation condition: whereas for ε_n = 1 the two contributions are equal for γ = (γ_b + 1)^(1/2)/√2 ≈ (γ_b/2)^(1/2), this equality moves to higher γ ≈ (γ_b/2)^(1/2) + (⟨γ⟩/2) ln(1/ε_n) with decreasing ε_n. This tends to suppress a beam-driven instability.

Transformed dispersion relations

We illustrate the transformation of the dispersion relations from the rest frame K to the pulsar frame K′ in the special case of parallel propagation. The dispersion relations in K are z = z_A for the A and X modes, and ω = ω_L(z) for the L mode. The dispersion relation z = z_A in K may be rewritten, using equation (2.2), as (z′ − β_s)/(1 − z′β_s) = z_A, or z′ = z′_A = (z_A + β_s)/(1 + z_Aβ_s). The dispersion relation evaluated in K′ is Λ′₁₁ = 0 or Λ′₂₂ = 0 for θ′ = 0. This becomes the dispersion relation (5.14), in which Ω_e is unchanged by the Lorentz transformation, and in which ω_p² is regarded as a constant, determined by the normalization of the distribution function to the number density in K. With n′ = γ_sn one has ω′_p² = γ_sω_p², and one could replace the constant ω_p² in equation (5.14) by ω′_p²/γ_s, but we do not find it helpful to do so. Using the identity (5.15), one finds that the dispersion relation (5.14) reproduces the dispersion relation in K, confirming that the dispersion relations derived in the two frames are equivalent. The dispersion relation for the L mode in K′ may be written as ω′_L²(z′) = ω_p²z′²W′(z′). Using the relations (5.12) or (5.13) and (2.1)–(2.4), one confirms that this is related to ω_L²(z) = ω_p²z²W(z) by the Lorentz transformation. For oblique modes the equivalence can be confirmed explicitly using the primed form of the dispersion equation derived in Paper 1, which may be written in terms of z′_A = (z_A + β_s)/(1 + z_Aβ_s).
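Referring back to the weak-beam model, the following sketch (an illustration; the parameters are chosen to echo Figure 6) evaluates the distribution (4.5) and locates the inverted-slope region dg(u)/du > 0 required for maser growth.

```python
import numpy as np

rho, u_b = 0.1, 200.0
gamma_b = np.sqrt(1.0 + u_b**2)
beta_b = u_b / gamma_b

u = np.linspace(0.5, 800.0, 400000)
gam = np.sqrt(1.0 + u**2)
bet = u / gam

for eps_n in (0.1, 0.01):
    # g(u) of (4.5), unnormalized: background Juttner plus a weak streaming beam
    g = np.exp(-rho * gam) + eps_n * np.exp(-rho * gamma_b * gam * (1.0 - beta_b * bet))
    dg = np.gradient(g, u)
    pos = u[dg > 0.0]
    if pos.size:
        print(eps_n, "dg/du > 0 for u in [%.0f, %.0f]" % (pos.min(), pos.max()))
    else:
        print(eps_n, "no inverted-slope region: the beam is smoothed out")
```

Shrinking ε_n narrows (and eventually removes) the inverted-slope region, which is the suppression of the beam-driven instability described above.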
Non-existence of a "fourth" wave mode

The suggestion that there is a "fourth" wave mode in a pulsar plasma (Beskin et al. 1993; Lyne & Graham-Smith 2006) arises when approximations are made in treating the wave dispersion in the pulsar frame K′. In this section we show that the approximations leading to this conclusion are not valid. The relevant approximations are made in two places. One is effectively the assumption ⟨γ⟩ → ∞ in the averages, so that the dispersion relation for the A mode reduces to z′² = 1 and that for the X mode reduces to z′² = 1/cos²θ′. The other is in the average in the definition (5.11) of W′(z′). Inside the average one assumes β = 1 − 1/2γ² → 1, giving

W′(z′) ≈ ⟨1/γ³⟩/(1 − z′)². (6.1)

If the approximation β → 1 were made consistently in equation (6.1), it would also imply 1/γ³ → 0, and hence W′(z′) → 0 for z′ ≠ 1. To proceed with further discussion of this approximation, we ignore this inconsistency and use exact relations to rewrite the right-hand side of equation (6.1). When compared with the actual dispersion relation for the L mode, the approximate dispersion relations that follow from the resulting form (6.2) are misleading. For example, the dispersion curve for the L mode crosses the light line ω = k_∥c, and this cannot be approximated by two lines nearly parallel to and on either side of the light line. Such an approximation is incompatible with a crossover of the dispersion curves for the A and L modes. These comments are based on the case of parallel propagation and also apply for slightly oblique propagation. Rather than there being reconnected O and Alfvén modes, as the exact treatment implies, there remain three solutions corresponding to the Alfvén mode along the light line and the other two solutions, e.g., given by equation (6.6), on either side of the light line. We conclude that the "fourth" mode is a spurious consequence of invalid approximations made in deriving the form (6.2). It is the approximations that are misleading, not the choice of frame. In treating the wave dispersion correctly it is essential to take the correct form of the RPDF into account. The approximate form W′(z′) ∝ 1/(1 − z′)², or W(z) ∝ 1/(1 − z)², is misleading.

Discussion and conclusions

In this paper we extend the discussion in Paper 1 of waves in the rest frame of a pulsar plasma to treat several problems that involve Lorentz transforming between frames. We discuss the transformation between the rest frame, K, and the pulsar frame, K′, in detail, emphasizing the transformation of the phase speed of the waves. We show that the dispersion equations in the two frames are proportional to each other, implying a one-to-one correspondence between wave modes in the different frames. The wave properties in the pulsar frame may be found either by treating the wave dispersion in K and transforming to K′, or by treating the wave dispersion directly in K′. We demonstrate the equivalence of these two procedures explicitly for simple cases, including a case that involves the transformation of the RPDF between the two frames. In §3 we apply a Lorentz transformation to an arbitrary 1D distribution (including a 1D Jüttner distribution), g(u), in the rest frame to derive the corresponding streaming distribution, g′(u′) = g(u), in K′. We argue that relativistic streaming should be included in this way, that is, by applying a Lorentz transformation to a rest-frame distribution. A surprising implication is that such a Lorentz-transformed distribution is much broader (in K′) than the original distribution (in K).
Specifically, a distribution confined to a range of u of order ⟨γ⟩ ≫ 1 in K is spread over a range of u′ of order γ_s⟨γ⟩ in K′. In Paper 1 we emphasize the importance of including the relativistic spread in Lorentz factors, ⟨γ⟩, on the properties of wave dispersion, and in this paper we show that the effects of ⟨γ⟩ ≫ 1 on the distribution function can be surprisingly large when streaming is included. In particular, the Jüttner distribution g(u) ∝ exp(−ργ) transforms into the much broader g′(u′) ∝ exp[−ρ(γ_s² + γ′²)/2γ_sγ′] (for γ_s, γ′ ≫ 1). A conventional choice of a relativistically streaming distribution is a Gaussian distribution of the form (3.3), that is, g(u) ∝ exp[−(u − u_α)²/2u_th²]. However, the only rationale for the choice of such a distribution seems to be mathematical convenience. There is no obvious physical justification for such a distribution, and there seems to be no physical reason for assuming such a distribution in preference to a distribution obtained by Lorentz-transforming a rest-frame distribution. We adopt the view that a Lorentz-transformed Jüttner distribution should be the preferred choice for a relativistically streaming distribution. A relativistically streaming Gaussian distribution is intrinsically much narrower (by a factor of roughly 1/γ_s) than any streaming distribution function obtained by Lorentz transforming, and such a choice should either be avoided or given a specific physical justification.
Genome-Wide Detection of SNP Markers Associated with Four Physiological Traits in Groundnut (Arachis hypogaea L.) Mini Core Collection

In order to integrate genomics in the breeding and development of drought-tolerant groundnut genotypes, identification of genomic regions/genetic markers for drought surrogate traits is essential. We used 3249 diversity array technology sequencing (DArTseq) markers for a genetic analysis of the 125-accession ICRISAT groundnut mini core collection, evaluated in 2015 and 2017, for genome-wide marker-trait association for some physiological traits and to determine the magnitude of linkage disequilibrium (LD). Marker-trait association (MTA) analysis, probability values, and percent variation modelled by the markers were calculated using the GAPIT package via the KDCompute interface. The LD analysis showed that about 36% of loci pairs were in significant LD (p < 0.05 and r² > 0.2) and 3.14% of the pairs were in complete LD. The MTA studies revealed 20 significant MTAs (p < 0.001) with 11 markers. Four MTAs were identified for leaf area index, 13 for canopy temperature, one for chlorophyll content and two for normalized difference vegetation index. The markers explained 6.6% to 20.8% of the phenotypic variation observed. Most of the MTAs identified on the A subgenome were also identified on the respective homeologous chromosome on the B subgenome. This could be due to a common ancestor of the A and B genomes, which explains the linkage detected between markers lying on different chromosomes. The markers identified in this study can serve as useful genomic resources to initiate marker-assisted selection and trait introgression of groundnut for drought tolerance after further validation.

Introduction

Groundnut or peanut (Arachis hypogaea L.) is an important food legume grown worldwide and is a rich source of protein for both humans and animals. Groundnut seed contains high-quality edible oil (50%), easily digestible protein (25%), and carbohydrate (20%) [1]. The crop was grown on 27.9 million hectares worldwide with a total production of 47.1 million metric tons [1]. Developing countries account for 96% (26.8 million hectares) of the groundnut area and 92% of global production, with the semi-arid tropics (SAT) region cultivating about 90%. Despite developing countries being the largest producers of groundnut, the average yield per hectare (China, 2490 kg ha⁻¹ and Nigeria, 840 kg ha⁻¹) is low when compared with the United States of America (3673 kg ha⁻¹) [1]. Climate change is a major threat to groundnut yield and quality in the SAT regions. Among the factors contributing to low yield, drought adversely affects the crop's performance [2]. Shortages in the amount and distribution of rainfall in SAT regions have increased in the recent past, thereby exacerbating climate risks including crop failures [3]. Drought stress has an adverse influence on water relations, photosynthesis, mineral nutrition, metabolism, growth and yield of groundnut [4]. Plants, being sessile, have evolved specific acclimation and adaptation mechanisms to respond to and survive short- to long-term drought stresses [4]. Some physiological responses that allow adaptation to water deficit include root traits, stomatal conductance, SPAD (Soil Plant Analysis Development) chlorophyll meter reading, leaf area, and canopy temperature, and these are important measures of the agronomic response of yield under moisture stress [5].
Chlorophyll content and fluorescence parameters determine the integrity of the internal apparatus for photosynthesis and provide a precise platform for the detection and quantification of plant tolerance to drought stress [6], and have been suggested as a useful indicator of photosynthetic capacity in groundnut [7]. Canopy temperature (CT) has been reported to be a marker for drought tolerance through its negative correlation with transpirational cooling and carbon dioxide exchange rate [8]. Genotypes that maintain cooler canopies under stress conditions possess a high potential for water stress tolerance and high yield [8]. Healthy vegetation, measured by the normalized difference vegetative index (NDVI) under drought conditions, correlates with high photosynthetic potential and yield of groundnut [8]. Drought reduces leaf area by constraining mitosis, cell proliferation, leaf expansion and carbohydrate supply [9], and genotypes with wider leaf areas under water stress have the capacity for high photosynthesis. These drought surrogate traits are adaptation mechanisms used by groundnut (and other crops) to survive drought conditions [8]. In order to integrate genomics in the breeding and development of drought-tolerant groundnut genotypes, identification of genomic regions associated with drought tolerance traits is essential. With the advent of genomic tools, marker-assisted breeding (MAB) has been deployed to enhance the efficiency of selection for target traits in groundnut [6,10–13]. Very few informative and good-quality single nucleotide polymorphism (SNP) markers are available in groundnut, in contrast to the availability of thousands of simple sequence repeats (SSRs) [12]. However, SNPs can be more easily generated than SSRs and are usually preferred due to low cost. In recent times, restriction site-associated DNA sequencing (RADseq) [14] and genotyping by sequencing (GBS) [15] methods have allowed researchers to identify and genotype thousands of SNPs in plants. Diversity Arrays Technology (DArT), which is based on genome complexity reduction and SNP detection through hybridization of PCR fragments [16], has been used in genome-wide association studies (GWAS), construction of dense linkage maps and mapping of quantitative trait loci (QTL) [10,17–19]. DArTseq is used for SNP discovery and genotyping, which enables considerable discovery of SNPs in a wide variety of non-model organisms and provides measures of genetic divergence and diversity within the major genetic groups that comprise crop germplasm [16]. The DArTseq technology from DArT produces data on bi-allelic SNP markers as well as the older dominant DArT markers. GWAS has lent itself to extensive application in genome-environment and genome-phenotype association mapping to identify loci of local adaptation and stress conditions in crops. The GWAS approach begins with phenotyping traits of interest, followed by a forward genetic analysis to identify loci and candidate genes [20] by marker-trait association (MTA), the approach adopted in the present study [21,22]. GWAS has improved the identification of MTAs with genomic regions by utilizing natural populations, without the need for making large numbers of combinations from bi-parental mating. Furthermore, the magnitude of linkage disequilibrium (LD) present in genetic resources is an important prerequisite to deduce the genetic makeup, composition and genomic predictions of traits of interest during selection [23].
Linkage disequilibrium per se could also be used as a predictor of the resolution at which significant genomic regions with influence on traits can be detected through marker-trait association analysis [23]. Plant genetic resources are widely used in breeding programs for imparting resistance to various stresses [24,25]. Over the years, large numbers of groundnut accessions have been evaluated for resistance to biotic and abiotic stresses [25]. However, considering the large number of groundnut accessions available in gene banks (>15,000), many precious accessions might never be evaluated for traits of interest, as it is cumbersome to screen such a huge collection under field conditions. Hence, researchers have developed core [26] and mini core collections [27] of groundnut that represent the genetic variability of the entire collection and serve as handy germplasm sets for evaluating important biotic and abiotic stresses. The selection of resistant sources through systematic screening of mini core collection accessions is in practice for infusing genetic diversity [24]. The International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) evaluated 184 accessions of the mini core collections, which led to the identification of some accessions with a high SPAD chlorophyll meter reading (SCMR) [28]; these accessions are currently being utilized in the breeding program for drought tolerance. The development of climate-smart and stable groundnut varieties will assist in meeting the growing demands of the increasing population against the threats of climate change. Identification of genetic markers that are linked to important traits affected by climate change in groundnut is needed for a reliable approach to the development of new varieties. In this study, SNP markers were used for the genetic analysis of a groundnut mini core collection from ICRISAT for genome-wide marker-trait association for some physiological traits associated with drought tolerance and to determine the magnitude of LD present in the genetic resource.

Plant Materials and Phenotyping

The research was carried out in the research field of ICRISAT located at Minjibir (latitude 12°19′ N, longitude 8°63′ E) in 2015 and at Bayero University, Kano (BUK; latitude 11°58′ N, longitude 8°25′ E) in 2017, both locations in the Sudan Savanna of Nigeria. The long-term mean annual rainfall of both locations is about 800 mm, with variations about this value of up to ±30%. The recorded weather information is presented in Figure S1a,b for Minjibir and BUK, respectively. The soil at Minjibir was a Typic Ustipsamments and that at BUK a Typic Kanhaplustalf. One hundred and twenty-five groundnut mini core accessions, including five check varieties, were evaluated using a 25 × 5 randomized incomplete block design with three replications. The description of the mini core collection along with the checks is presented in Table S1. The mini core collection was obtained from ICRISAT Kano and used in the current study based on seed availability. Sowing was done at Minjibir on 16 July 2015 and at BUK on 21 July 2017. Each plot in a replication consisted of a single row measuring 5 m in length, with a row spacing of 0.75 m, a plant distance of 0.1 m and a 1 m alley between replications. Recommended practices for growing groundnut were strictly followed. A basal application of nitrogen, phosphorus, and potassium was made to all plots at planting, at the rate of 20 kg N ha⁻¹, 40 kg P₂O₅ ha⁻¹ and 40 kg K₂O ha⁻¹.
Hand weeding was done using hoes at the 3rd, 8th, and 12th weeks after sowing (WAS) to prevent weed infestation and competition between plants and weeds. Data were collected for canopy temperature (CT) using a leaf thermometer (Meco IRT550 Infrared Thermometer, Sunshine Instruments, Tamil Nadu, India), SPAD chlorophyll meter reading (SCMR) using a SPAD 502 PLUS chlorophyll meter (Spectrum Technologies, Inc., Aurora, IL, USA), normalized difference vegetative index (NDVI) using a hand-held optical sensor unit (Model 505, NTech Industries, Inc., Ukiah, CA, USA), and leaf area index (LAI) using a leaf area meter (ACCUPAR LP-80, Meter Group, Inc., Pullman, WA, USA). The data were collected at 60 days after sowing (peg formation stage) from five randomly selected plants in each plot.

DNA Extraction and Genotyping

Groundnut leaves were collected at 2 WAS into 96-deep-well sample collection plates and sent to the Integrated Genotyping Service and Support (IGSS) platform located at the Biosciences eastern and central Africa (BecA-ILRI) Hub in Nairobi for genotyping. DNA was extracted using the NucleoMag Plant genomic DNA extraction kit, yielding genomic DNA in the range of 50–100 ng/µL. DNA quality and quantity were checked on 0.8% agarose. Libraries were constructed and the DArTseq protocol executed according to Kilian et al. [29]: the DArTseq complexity reduction method, through digestion of genomic DNA and ligation of barcoded adapters, was followed by PCR amplification of the adapter-ligated fragments. Libraries were sequenced using single-read sequencing runs for 77 bases. Next-generation sequencing was carried out on a HiSeq 2500. The IGSS platform uses GBS DArTseq™ technology, which provides rapid, high-quality, and affordable genome profiling, even from the most complex polyploid genomes. DArTseq marker scoring was achieved using DArTsoft version 14, an in-house marker-scoring pipeline. Two types of DArTseq markers were scored: SilicoDArT markers (scored as present or absent; 1, 0) and biallelic SNP markers, which were scored for the presence of the reference allele, the alternative allele, or both. In the genomic representation of the sample, both SilicoDArT and SNP markers were aligned to the reference genomes of Arachis duranensis (V14167, A-genome ancestor) and A. ipaensis (K30076, B-genome ancestor, https://www.peanutbase.org/) to identify chromosomes.

Data Analysis

The analysis of the phenotypic data was done to obtain the best linear unbiased prediction (BLUP) values for each accession by fitting the following mixed linear model in the R "lme4" package:

y_ijk = µ + g_i + e_j + r(e)_jk + ge_ij + error_ijk, (1)

where g_i is the effect of the ith line, e_j is the effect of the jth environment, r(e)_jk is the effect of the kth replication nested in the jth environment, ge_ij is the genotype-by-environment interaction and error_ijk is the error associated with each observation. The analysis was run in R with the lme4 package [30]. Entry-mean broad-sense heritability was calculated as

H² = σ²_g / [σ²_g + σ²_ge/e + σ²_error/(re)], (2)

where σ²_g is the variance among lines, σ²_ge is the genotype-by-environment interaction variance, σ²_error is the error variance, r is the number of replications, and e is the number of environments. The phenotypic correlation between traits was also determined.
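As a worked example of equation (2), the entry-mean heritability can be computed directly from variance components; the variance values below are hypothetical, since the fitted components are not reproduced here.

```python
# Worked example of equation (2): entry-mean broad-sense heritability.
def entry_mean_h2(var_g, var_ge, var_err, r, e):
    """H^2 = var_g / (var_g + var_ge/e + var_err/(r*e))."""
    return var_g / (var_g + var_ge / e + var_err / (r * e))

# r = 3 replications and e = 2 environments, as in this study;
# the variance components themselves are illustrative only.
print(entry_mean_h2(var_g=4.0, var_ge=1.5, var_err=6.0, r=3, e=2))  # ~0.70
```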
Linkage Disequilibrium and Marker-Trait Association

Polymorphism information content (PIC) and principal component analysis (PCA) were carried out on the genotypic data using KDCompute. The unweighted pair-group method was used to cluster the accessions into a dendrogram. The parameter r² was used to estimate LD between SNPs on each chromosome via the software package TASSEL 5.0 [31]. Marker-trait association analysis, probability values, and the percent variation modelled by both SilicoDArT and biallelic SNP markers were calculated using the GAPIT package via the KDCompute interface (https://kdcompute.igss-africa.org/kdcompute/home). The GWAS threshold for a significant marker-trait association was p < 0.001, without multiple-testing correction, due to the small population size used in the present study. The first three principal components and the relationship matrix were included in the model to account for population structure. SNPs with a minor allele frequency (MAF) < 5% or missing data > 20% were excluded from the analyses. Missing values were imputed using the nearest-neighbor algorithm in TASSEL 5.0 [31].

Phenotypic Evaluation

The results of the phenotypic evaluations showed highly significant differences (p < 0.01) between lines for CT, SCMR, and NDVI but no significant genetic variation for LAI (Table 1). The interaction between lines and the environment was significant (p < 0.01) only for SCMR (Table 1). The heritability of the traits was moderate to high, except for LAI, which had a low heritability (0.03) (Table 1). The four traits were normally distributed (Figure S2). The phenotypic correlation of SCMR with LAI and CT was negative and non-significant, but was positive with NDVI. Among the mini core collections, ICG 9926 had the highest CT (Table S1). A negative and significant correlation was observed between CT and LAI, as well as between CT and NDVI. The correlation between LAI and NDVI was negative and non-significant (Table 2). A cluster analysis of the accessions based on their geographical origin revealed two groups, with group two having sub-groups 2A and 2B (Figure 1).

Marker Data

The DArTseq genotyping produced 3591 biallelic SNP markers, of which 3396 had a call rate that exceeded 70%. The average PIC of the 3396 markers was 0.077. Of the 3396 markers, just 396 had a MAF that exceeded 0.05. A total of 3124 markers were successfully assigned to a chromosome by mapping them to the A and B genomes: 368 (11.8%) were aligned to only the A genome, 449 (14.4%) to only the B genome, and 2308 (73.8%) to both genomes. Over 73% of the markers that aligned with both the A and B genomes were assigned to homoeologous chromosomes, and the correlation of their positions on those two sets of homologues was 0.87. In the principal component (PC) analyses of the data from the 3124 markers assigned to a chromosome(s), the first PC accounted for 61% of the variation and the first two PCs accounted for 78% of the variation (Figure 2). Cluster analysis of the marker data suggested two groups of lines, with one group having two subgroups named 2A and 2B (Figure S3).
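The PIC and MAF figures above (and those for the silico markers below) can be illustrated with a minimal sketch, using the simplified biallelic form PIC = 1 − p² − q² = 2pq; the exact definition used by DArTsoft may differ, and the genotype codes here are simulated.

```python
import numpy as np

def maf_and_pic(dosages):
    """MAF and simplified biallelic PIC (= 2pq) from 0/1/2 genotype codes."""
    p = np.nanmean(dosages) / 2.0        # frequency of the counted allele
    maf = min(p, 1.0 - p)
    pic = 1.0 - p**2 - (1.0 - p)**2      # = 2*p*q for a biallelic locus
    return maf, pic

rng = np.random.default_rng(0)
snp = rng.binomial(2, 0.04, size=125).astype(float)   # a rare allele across 125 lines
print(maf_and_pic(snp))   # a low MAF gives a low PIC, cf. the ~0.07 averages reported here
```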
The DArTseq genotyping also produced 12,693 dominant silico markers with a call rate that exceeded 70%. Only 2349 (18.5%) of these had a minor allele frequency (MAF) > 0.05. The average PIC of the 2349 markers was 0.070. A total of 12,611 markers were given chromosome assignments: 1709 (13.6%) were aligned to only the A genome, 2502 (19.8%) to only the B genome, and 8400 (66.7%) to both genomes. Over 76% of the markers aligned with both the A and B genomes were assigned to homoeologous chromosomes, and the correlation of their positions on those two sets of homologues was 0.91. This reflects the common origin of the A and B genomes. There was some evidence for the inter-genome exchange of genes between non-homoeologous chromosomes. A set of 46 markers located in a 16.72 Mbp region of chromosome A08 was also found in a 73.95 Mbp region of B07. The correlation of positions for the 46 markers was 0.769. Another set of 20 markers spanning a 3.93 Mbp region of chromosome A02 and an 11.67 Mbp region of B09 was also found (Figures S4 and S5). The correlation of positions for these 20 markers was −0.604, showing that any putative exchange involved an inversion.
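Before the LD results reported in the next section, the following sketch illustrates the pairwise r² statistic, computed here as the squared correlation of genotype codes, a common simplification of the estimator TASSEL reports; the data are simulated.

```python
import numpy as np

def ld_r2(geno_a, geno_b):
    """Pairwise LD as the squared correlation of genotype codes at two loci."""
    mask = ~(np.isnan(geno_a) | np.isnan(geno_b))
    r = np.corrcoef(geno_a[mask], geno_b[mask])[0, 1]
    return r * r

rng = np.random.default_rng(1)
snp1 = rng.integers(0, 2, 125).astype(float)   # dominant silico scores for 125 accessions
snp2 = snp1.copy()
flip = rng.random(125) < 0.2                   # 20% of lines made discordant
snp2[flip] = rng.integers(0, 2, int(flip.sum())).astype(float)
print(ld_r2(snp1, snp2))   # near 1 for tightly linked loci, near 0 for unlinked loci
```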
Linkage Disequilibrium

Linkage disequilibrium analysis conducted using 305,919 loci pairs within chromosomes showed that 36.3% of loci pairs had significant LD (Table S1). Furthermore, 9592 (3.14%) of the pairs were in complete LD (r² = 1). There was a rapid decline in LD with distance, and the correlation analysis revealed a negative correlation (r = −0.0795) between LD (r²) and physical distance, as well as between r² and the p-value (r = −0.5381), revealing the existence of linkage decay (Figure S6).

Marker-Trait Association

Due to the non-significant genotype-by-environment interaction for all traits except SCMR, the GWAS was performed using phenotypic BLUPs estimated over all environments. The marker-trait association (MTA) analysis was done for both the dominant silico markers and the biallelic SNP markers. However, significant associations were only detected from the dominant silico markers and none from the biallelic SNP markers. Only the MTAs from the dominant silico markers with p-values < 0.001 (Table 3) were considered significant for all traits (details of significant MTAs with p-values between 0.05 and 0.001 are presented in Table S2). We found 20 MTAs with 11 markers (Table 3). Two markers (M1, M2) identified four possible loci for LAI. Marker M1 was associated with chromosomes A03 and B03 with allelic effects of −1.33 and −1.31, respectively, while M2 was associated with chromosomes A06 and B07 with allelic effects of 1.96 and 1.97, respectively. The individual markers explained 6.8% to 7.3% of the total phenotypic variation observed (Table 3). Thirteen loci within seven markers (M3-M9) showed MTAs with CT, and six of these markers (M3-M8) were located in chromosomal regions on both the A and B genomes. The CT markers explained 9.6% to 16.6% of the variation observed and all had negative allelic effects. One marker (M10), on chromosome B05, was found to be associated with SCMR and explained 20.8% of the observed phenotypic variation, with an allelic effect of −11.65. Another marker (M11), located on chromosomes A04 and B02, was associated with NDVI and had an allelic effect of 0.16 for both chromosomes. Of all 20 MTAs detected, the B genome had 11 and the A genome nine. An equal number of associations was found on each genome except for CT and SCMR. The A genome had six MTAs and the B genome had seven MTAs for CT. The MTA with SCMR was exclusively detected on the B genome.
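For illustration, a strongly simplified single-marker association scan is sketched below. It regresses trait BLUPs on marker scores and applies the p < 0.001 threshold used above; unlike the GAPIT model, it omits the principal components and the relationship matrix, and all data are simulated.

```python
import numpy as np
from scipy import stats

def marker_trait_assoc(genotypes, phenotype):
    """genotypes: (n_markers, n_lines) 0/1 silico scores; phenotype: (n_lines,) BLUPs."""
    pvals = []
    for g in genotypes:
        slope, intercept, r, p, se = stats.linregress(g, phenotype)
        pvals.append(p)
    return np.array(pvals)

rng = np.random.default_rng(7)
n_lines, n_markers = 125, 2349
G = rng.integers(0, 2, (n_markers, n_lines)).astype(float)
y = 0.8 * G[5] + rng.normal(0.0, 1.0, n_lines)   # marker 5 carries a simulated true effect
p = marker_trait_assoc(G, y)
print(np.where(p < 1e-3)[0])   # marker 5 should be recovered; a few chance hits are expected
```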
Discussion

The core, mini core, and reference collections developed by several germplasm resource centers are significant sources of genetic variation. By screening these groundnut sources, sources of tolerance or resistance to traits needed for the development of climate-smart varieties can be identified. Physiological traits such as LAI, SCMR, canopy conductance and canopy temperature are important measures of the agronomic response of yield under water stress [32]. The phenotypic evaluation showed that the groundnut collections varied significantly for the physiological traits except for LAI (Table 1). Nageswara Rao et al. [7] also reported similar specific leaf area between peanut genotypes. These physiological traits (LAI, CT, SCMR, and NDVI) are important in improving productivity and are used as indirect indices for improving drought tolerance in peanut [8]. Traits such as SCMR and NDVI determine the photosynthetic potential of plants, have been reported to be highly associated with yield, and can be effective as in-season predictors of yield [33,34]. Some of the identified mini core accessions with desirable physiological traits can be integrated into breeding programs for the development of drought-tolerant varieties. The heritability of CT, SCMR, and NDVI ranged from moderate to high, and the contribution of genetics to the phenotypic variation in LAI was low. The low correlation coefficients between most of the traits suggest that the traits are fairly independent (Table 2). The low genetic diversity revealed by both SNP and silico markers and their indistinct separation on the PC plot suggest that the population was not highly structured. The two groups of mini core collections, one of which comprised two subgroups, suggested by the marker data may be due to the different origins of the collections, as evident from the cluster analysis of the accessions based on origins. Cluster 1 consisted solely of accessions from India, while cluster 2B consisted of only USA accessions. The accessions from other countries were all grouped in cluster 2A. Pandey et al. [10] also observed three groups in groundnut using SSR and DArT markers. The A genome appears more conserved, as it had fewer markers than the B genome. Both marker systems assigned a similar portion of the markers to both genomes, pointing to a common ancestry of the collections. While many polymorphic markers were detected, a large portion had MAF < 0.05, and the average PIC values for both types of markers were very low, about 0.07. Taken together, the results of the current study support earlier reports on the low polymorphism rate and low genetic diversity of groundnut [35–37]. In general, most markers assigned to the A and B genomes were assigned to homoeologous positions, and their positions in the A and B genomes were highly correlated, suggesting that their positions in the genomes have been mostly conserved since evolving from their common ancestor. Of the markers mapped to non-homoeologous chromosomes in the A and B genomes, there was some evidence for the exchange of chromosome segments between chromosomes A02 and B09, and between A08 and B07. Linkage disequilibrium analysis showed that about 36% of loci pairs were in significant LD (p < 0.05) and 3.14% of the pairs were in complete LD, with an average distance of 31 kb among these pairs, indicating that LD extends quite some distance in groundnut. This is not surprising given the low polymorphism rate and PIC values in groundnut, which mean that detectable recombination is likely to be very low. Several studies have reported LD decay with distance [10,38,39], which agrees with the findings of the present study. GWAS has created a considerable need for downstream studies, including genetics, physiology, and biochemistry, to ascertain genotype-phenotype associations that can be used to decipher the underlying mechanisms of intricate traits such as yield and stress responses [31]. The current study revealed 20 significant MTAs (p < 0.001) involving 11 markers. The p-values used for identifying significant MTAs were not adjusted using multiple-testing corrections because of the small sample size used in the study and because we wanted to be able to identify QTLs of moderate effect. Our analysis cannot distinguish whether a single marker on more than one chromosome exhibits an MTA independently or as a result of additive contributions from the two homoeologous chromosomes. Markers associated with physiological traits showed an uneven distribution among chromosomes and between the genomes.
Chromosome B05 of the B genome houses three markers that showed MTAs with CT and SCMR. In addition, two markers each were found on chromosomes A03, B03 and A05. Genome-wise comparison showed that all eleven markers detected were found on the B genome, while only nine were found on the A genome. Three of these markers (M2, M7, and M11) were found on non-homoeologous chromosomes. It is possible that some chromosome rearrangement caused the marker sequence to appear on different homeologs in the course of the evolution of groundnut, although an error in sequencing or bioinformatics alignment could result in a similar outcome. Validation studies will, therefore, be needed to establish whether these markers identify one locus or perhaps a locus duplicated in the two genomes. The allelic effects of the markers identified for CT and SCMR were negative, which shows that these markers identify genomic regions with decreasing effects on these traits. Plants with lower CT are preferred in drought-prone areas because the lower CT enables the plant to reduce its transpiration rate and therefore conserve moisture [8]. For SCMR, however, groundnut genotypes with higher SCMR are preferred, to maintain the healthy vegetation that promotes kernel yield. The marker identified for NDVI had an increasing effect on the phenotype, while for LAI, M1 had a decreasing effect and M2 an increasing effect. Though the difference between the accessions was not phenotypically significant for LAI, the two markers were identified as associated with LAI. A previous study reported additive and additive × additive gene actions for specific leaf area in groundnut [40]. We would assume that the contrasting effects of M1 and M2 might act additively and lead to the non-significance of the overall difference in LAI between the accessions. Nevertheless, the identified markers could be used for the selection of LAI genomic regions in groundnut. Furthermore, after validation, all the identified markers can be deployed in marker-assisted breeding for the selection of groundnut genotypes with desirable physiological traits. Pandey et al. [10] used SSR markers to identify some significant MTAs for physiological traits, including LAI and SCMR, in groundnut, which were also observed in the present study. All four MTAs identified for LAI were associated with two markers (M1 and M2), among which the MTAs on chromosomes A06 and B07 may be the same as those previously identified by Pandey et al. [10] for total leaf area and leaf area, respectively. Eight of the thirteen MTAs identified for CT are associated with four markers located on homoeologous chromosomes in both the A and B genomes. The one MTA associated with M9 was located on chromosome B05 but was absent on its homeolog. Interestingly, M10 accounts for >20% of the phenotypic variation in SCMR and is different from the previously reported locus on A06 [11]; it is therefore suggested to be novel. Markers M5 and M6 were in high LD and may represent the same MTA. The marker associated with NDVI was located on both the A and B genomes but on different, non-homoeologous chromosomes. From the analysis, most of the MTAs identified on the A subgenome were also identified on the respective homoeologous chromosome of the B subgenome. Agarwal et al. [13] reported that a significant proportion of marker loci with physical locations assigned to the chromosomes of one genome mapped to the respective homeologous positions on chromosomes of the other genome.
Our results support this hypothesis, as the correlation of marker positions between the A and B genomes exceeded 0.87 for both marker types. Most of the homeologous MTAs were seen between chromosomes A03 and B03, which is similar to the findings of Agarwal et al. [13]. Other homeologous MTAs detected in the present study are located between chromosomes A02 and B02, and A05 and B05. Homeologous mapping of QTLs in groundnut has also been reported between chromosomes A07 and B07, and A08 and B08 [41]. We found some evidence for genetic exchanges occurring between the groundnut genomes, as reported earlier [38]. Also, many similar markers were placed on the genetic map on different chromosomes. The possible translocation we noted does not appear to be terminal or reciprocal, unlike the translocations noted by Farre et al. [42]. Translocations of markers have been previously reported in groundnut [13,41]. Some of these observed 'translocated' markers might also be due to mis-assignments arising from the highly repetitive structure of the groundnut genome [37]. In the present study, the DArT markers showed higher reproducibility and consistency than the SNP markers. The DArTseq approach generated a large set of useful SNPs with broad genome coverage, representing both coding and non-coding regions, thereby allowing for an accurate assessment of the structure and quantity of genetic diversity in the mini core collection. This study identified a total of 20 highly significant marker-trait associations for four physiological traits of importance in groundnut: LAI, CT, SCMR, and NDVI. Chromosome B05 of the B genome contained the most markers associated with drought surrogate physiological traits in groundnut. The markers identified in this study can serve as useful genomic resources to initiate marker-assisted selection and trait introgression of groundnut for drought tolerance. The identified MTAs can also be used for fine mapping and cloning of the underlying genes. Further studies are required to validate the significant markers identified in the present study using a larger population size.

Supplementary Materials: The following are available. Table S1: Name, botanical grouping, origin and physiological performances of the mini core collections. Table S2: LD of markers for the A and B genomes. Table S3: Significant marker-trait associations. Figure S1: Weather information for (a) Minjibir 2015 and (b) BUK 2017. Figure S2: Histograms of BLUPs of leaf area index (LAI), canopy temperature (CT), chlorophyll content (SPAD) and NDVI from the groundnut mini core collection. Figure S3: Dendrogram from unweighted pair-group clustering of accessions from the mini core collections. Figure S4: Distribution of markers on the A genome of the groundnut accessions. Figure S5: Distribution of markers on the B genome of the groundnut accessions. Figure S6: Scatter plot showing the association between linkage disequilibrium (r²) and distance (a) and the significance of the r² value (b).
Myocardial adaptation as assessed by speckle tracking echocardiography after isolated mitral valve surgery for primary mitral regurgitation

The risk of left ventricular (LV) and right ventricular (RV) maladaptation after surgery for isolated primary mitral regurgitation (PMR) is poorly defined. We aimed to evaluate LV and RV contractile function using speckle-tracking analysis, alongside quantification of exercise tolerance, in patients with PMR after mitral valve surgery. All consecutive patients with symptomatic PMR undergoing mitral valve surgery between July 2015 and May 2017 were prospectively enrolled. Sequential echocardiographic studies along with clinical assessment were performed before and three months after surgery. Mean age of the 138 patients was 65.8 ± 12.7 years, and 66 (47.8%) were female. Mean LV ejection fraction decreased from 57 ± 12% to 50 ± 11% (p < 0.001), LV global longitudinal strain deteriorated from −19.2 ± 4.1% to −15.7 ± 3.8% (p < 0.001), and mechanical strain dispersion increased from 88 ± 12 to 117 ± 115 ms (p = 0.004). There was a reduction in tricuspid annular plane systolic excursion from 22 ± 5 mm to 18 ± 4 mm (p < 0.001), as well as a slight deterioration of RV free wall mean longitudinal strain from −16.9 ± 5.6% to −15.7 ± 4.1% (p = 0.05). The rate of moderate to severe tricuspid regurgitation decreased significantly (p < 0.005). Regarding exercise tolerance, the New York Heart Association class improved (p < 0.001) and the walking distance increased (p < 0.001). During mid-term follow-up after surgery for PMR, a deterioration of LV and RV contractile function measures could be observed. However, the clinical status, LV dimensions, and concomitant tricuspid regurgitation improved, which in particular implies a more effective RV contractile pattern.

Introduction

Primary mitral regurgitation (PMR) due to mitral valve degeneration is the most common etiology in patients undergoing mitral valve surgery [1]. Surgical mitral valve repair, or replacement if repair is unfeasible, is the treatment of choice in symptomatic severe PMR [2]. Yet patients with mitral regurgitation are often referred for surgery too late because of allegedly preserved left ventricular (LV) function in echocardiographic controls [3]. Because standard echocardiographic parameters used for the assessment of LV function are load dependent, LV ejection fraction may substantially overestimate myocardial performance [4,5]. However, the risk of functional LV maladaptation, the reaction of right ventricular (RV) function, and the resulting clinical implications after mitral valve surgery for isolated mitral regurgitation are poorly defined [6]. On the other hand, evaluation of RV function, particularly after cardiac surgery, is challenging due to the complexity of RV geometry, the high sensitivity of the RV to hemodynamic changes, and ventricular interdependence [7]. Speckle-tracking-based myocardial deformation analysis has meanwhile become an established method for evaluating myocardial function. Speckle-tracking-based assessment of longitudinal strain is independent of the insonation angle and can be applied retrospectively to digitally archived standard grey-scale images [8]. Hence, we aimed to evaluate LV and RV contractile function using longitudinal strain by speckle-tracking analysis, together with the clinical status of patients with isolated PMR, before and 3 months after mitral valve surgery.
Methods

Assessment of exercise tolerance by the New York Heart Association (NYHA) classification, the 6-min walking test, and echocardiographic examinations were prospectively performed before and 3 months after surgery in all consecutive patients with severe PMR who underwent isolated mitral valve surgery between July 2015 and May 2017. The decision for surgical treatment was made for each case individually after heart team discussion. The study was approved by the local Ethics Committee of Ruhr University Bochum and carried out in accordance with the Declaration of Helsinki. All data were included in a database registered at www.clinicaltrials.gov (NCT02296710).

Standard echocardiography

All study participants underwent standard transthoracic echocardiography (EPIQ 7, Philips Electronics, Netherlands). The echo studies were performed by highly qualified medical staff and analysed by the same echocardiographer with long-standing experience. The analyses and grading of the mitral regurgitation were performed according to the recommendations of the American and European Societies of Echocardiography [9,10]. In cases with irregular cardiac rhythm (e.g., atrial fibrillation, frequent atrial or ventricular ectopy), at least five loops were recorded and average values were used. LV ejection fraction was assessed using Simpson's method. LV stroke volume was calculated by subtracting the LV end-systolic volume from the end-diastolic volume. The Nyquist limit was set at around 50-60 cm/s in color Doppler settings. To characterize RV function, tricuspid annular plane systolic excursion (TAPSE) and RV fractional area change (RV-FAC) were measured along with RV free wall longitudinal strain analysis.

Strain analyses

LV global longitudinal strain (GLS) was assessed as previously described using the speckle-tracking algorithm provided within the QLAB system (QLAB Version 10.2) [11]. In three apical views (four-chamber, three-chamber, and two-chamber), the end-diastolic frame was selected and the endocardial contour was tracked manually (Fig. 1a-c). RV free wall longitudinal strain assessment was performed using an RV-focused view with optimized RV endocardial borders according to the recommendations of the European and American Societies of Echocardiography (Fig. 1d) [12]. The other frames of the cine loop were tracked automatically and adjusted manually if needed. Additionally, time to peak strain was documented for each LV segment. Mechanical strain dispersion was calculated as the difference between the highest and the lowest time-to-peak strain values assessed across the three apical planes [13].

Statistical analysis

Statistical analysis was performed using SPSS software (Version 21, IBM Corporation, Armonk, NY, USA). Continuous variables are reported as mean ± standard deviation; categorical variables are presented as frequencies and percentages. Baseline data were tested for normal distribution using the Kolmogorov-Smirnov method. Student's t-test for unpaired and paired parametric samples, their nonparametric analogues (Mann-Whitney and Wilcoxon signed-rank tests), or the chi-square test were used for group comparisons. A p-value < 0.05 was considered significant for all comparisons.
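To make the strain-dispersion definition in the Strain analyses subsection concrete, here is a minimal Python sketch computing mechanical strain dispersion from per-segment time-to-peak strain values pooled across the three apical views. The segment names and millisecond values are invented for illustration; vendor software such as QLAB performs this computation internally.

```python
# Hypothetical time-to-peak longitudinal strain (ms) per LV segment,
# pooled from the apical four-, three-, and two-chamber views.
time_to_peak_ms = {
    "basal_septal": 355, "mid_septal": 362, "apical_septal": 370,
    "basal_lateral": 340, "mid_lateral": 348, "apical_lateral": 390,
    "basal_inferior": 352, "mid_inferior": 358, "apical_inferior": 365,
    "basal_anterior": 345, "mid_anterior": 350, "apical_anterior": 402,
}

# Mechanical strain dispersion: difference between the latest and the
# earliest segmental time to peak strain across all apical planes.
values = list(time_to_peak_ms.values())
dispersion_ms = max(values) - min(values)

print(f"mechanical strain dispersion = {dispersion_ms} ms")  # 402 - 340 = 62 ms
```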
Results

A total of 156 consecutive patients with primary mitral regurgitation were admitted and evaluated for mitral valve repair between July 2015 and May 2017. Five of them also required myocardial revascularization, and four patients presented with a combined valve disease that had to be addressed. Eight patients refused participation in the study, and one patient was found to suffer from mitral valve endocarditis. Finally, 138 patients were included in the analyses. The baseline characteristics, including parameters of mitral regurgitation severity, are shown in Tables 1 and 2. The patients' mean age was 65.8 ± 12.7 years, and 66 (47.8%) of them were female. Mean EuroSCORE II was 2.6 ± 2.8%, defining a low to intermediate perioperative risk. Mean LV ejection fraction was 57 ± 12%, and the degree of mitral regurgitation was characterized by an effective regurgitant orifice area of 43 ± 3 mm², a regurgitant volume of 67 ± 7 ml, and a mean biplane vena contracta of 7.3 ± 0.5 mm. Of the entire group, 95 patients (68.9%) underwent mitral valve repair and 43 (31.1%) valve replacement. Details of the echocardiographic parameters before and after surgery are presented in Table 3 (left ventricle) and Table 4 (right ventricle). Three months after surgery, 121 patients (87.7%) had no residual MR, and in 17 patients (12.3%) only trivial MR was detectable. LV end-diastolic volume markedly decreased from 157 ± 57 ml to 138 ± 51 ml (p < 0.001) following valve surgery, while the other morphological parameters, such as end-systolic diameter, septal thickness, and posterior wall thickness, did not change. Regarding exercise tolerance, NYHA classification (at baseline, 66.3% were in NYHA class III or IV; 3 months after surgery, 85.2% were in NYHA class I or II; p < 0.001) and walking distance in the 6-min walking test (372 ± 32 m to 425 ± 117 m; p < 0.001) improved significantly (Fig. 2a, c).

Discussion

Because of the poorly defined risk of ventricular dysfunction after mitral valve surgery and its clinical impact, we evaluated the adaptation of the left and right ventricle, together with the clinical status, in patients with severe mitral regurgitation before and 3 months after surgical mitral valve treatment.

Left ventricular dysfunction after mitral valve surgery

Mean LV GLS in our patients was −19.2% at baseline and deteriorated after mitral valve surgery, indicating LV dysfunction. This is in accordance with the retrospective observation of Witkowski et al., who described a GLS worse than −19.9% as an independent predictor of LV dysfunction in severe primary mitral regurgitation [14]. Hiemstra et al. described LV GLS as independently associated with all-cause mortality and cardiovascular events in a cohort of 593 patients who underwent mitral valve surgery, with a median follow-up of 6.4 years (hazard ratio 1.13; 95% confidence interval: 1.06 to 1.21; p < 0.001). In that study, LV ejection fraction and LV GLS showed a similar deterioration of contractile function [3]. In a retrospectively analysed cohort of 506 patients with a wide range of cardiac comorbidities and a median follow-up of 3.5 years, Kim et al. postulated GLS to better predict cardiac events and all-cause mortality than standard echocardiographic parameters (multivariate Cox model HR 1.229; 95% CI: 1.135 to 1.331; p < 0.001). The authors concluded that this measure could help estimate the optimal timing for mitral valve surgery [15].
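As an aside on reading such per-unit hazard ratios: an HR of 1.13 per 1% absolute worsening of GLS compounds multiplicatively. The snippet below, using the figure quoted above from Hiemstra et al. purely as an illustrative input, shows how to translate a per-unit estimate into the relative hazard for a larger GLS difference; it is a generic interpretation aid, not a re-analysis of those data.

```python
# Per-unit hazard ratio for LV GLS (1.13 per 1% absolute worsening of GLS),
# taken from the Hiemstra et al. estimate quoted above.
hr_per_unit = 1.13

# Relative hazard implied for a patient whose GLS is 5 percentage points
# worse than another's (e.g., -15% vs. -20%): the HR compounds multiplicatively.
delta_gls = 5
hr_total = hr_per_unit ** delta_gls

print(f"HR for a {delta_gls}-point GLS difference: {hr_total:.2f}")  # ~1.84
```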
Interestingly, mechanical strain dispersion also increased after mitral valve surgery (Table 4). Prolonged mechanical strain dispersion is a sign of heterogeneous systolic myocardial contraction due to fibrosis development and is associated with cardiac arrhythmias [16]. Therefore, strain dispersion could provide important information about cardiac remodeling during patient evaluation for mitral valve surgery [17]. However, despite the functional impairment of the left ventricle, the patients showed pronounced clinical improvement in NYHA class and 6-min walking distance (Fig. 2a, c). Moreover, LA and LV diameters and volumes decreased after mitral valve surgery, demonstrating relevant reverse remodelling. By eliminating the regurgitant fraction of the overall stroke volume, LV enlargement receded, allowing for normal stress shortening [18,19]. Since stroke volume and ejection fraction are required for antegrade flow only, myocardial performance is optimized and economized [20]; nevertheless, according to our results, at least temporary postoperative medical therapy to support myocardial unloading and reverse remodelling seems advisable.

Right ventricular dysfunction after mitral valve surgery

Mitral regurgitation leads to volume overload of the LA [18]. The LA is initially able to keep the pressure stable through its enlargement, but over time the pressure in the pulmonary venous system rises, which eventually leads to an increased pulmonary artery pressure [7]. In the absence of volume overload after surgery, the pressure in the pulmonary vascular bed, and consequently in the right ventricle, decreases. Right ventricular dimensions and functional tricuspid regurgitation are consecutively reduced [7]. However, as on the left side, some measures of RV function decreased. While FAC did not change, RV free wall strain and TAPSE were reduced. This deterioration is probably explained by geometric changes of the RV due to the pericardial incision and the loss of pericardial support [21]. Depending on the pericardial incision and the surgical access path, parameters of longitudinal RV function can show a decrease despite overall normal global right ventricular function [21]. Another aspect is the reduced mobility of the septum due to the increased LV impairment. In addition, the incompletely understood effect of cardioplegia may have played a role [3,22-24]. The septal wall is involved in the mechanism of "squeezing out" the right ventricle. Together with the apex, the septal wall serves as an abutment to counteract the tension of the bellows-type right ventricle and thus transports the blood towards the pulmonary arteries. About 24% of RV function is contributed by the septal wall [7]. Our mid-term follow-up data on exercise tolerance demonstrate a clear clinical improvement, which implies an economization and higher effectiveness of RV myocardial performance. Accordingly, tricuspid regurgitation also improved after surgery, probably because of improved hemodynamics and the absence of volume overload, which is also a sign of recovered clinical status [25,26].

Limitations

The study is descriptive and was not designed to explain the phenomena it observes; it can therefore only generate hypotheses. In addition, further studies should investigate whether and to what extent the deteriorated functional parameters persist during longer-term follow-up and whether this has a long-term impact on survival.

Conclusion

During mid-term follow-up after surgery for PMR, a deterioration of LV and RV contractile function measures could be observed.
However, the clinical status, LV dimensions, and concomitant tricuspid regurgitation improved significantly, which in particular implies a more effective RV contractile pattern.

Author contributions: All authors have made substantial contributions to the manuscript, are responsible for the contents, and have read and approved the manuscript for submission to the International Journal of Cardiovascular Imaging.

Funding: Open Access funding enabled and organized by Projekt DEAL. The study was supported by the Medical Faculty, Ruhr-Universität Bochum, Germany (FoRUM programme F811-14).

Data availability: All presented data are available and will be provided on request.

Conflict of interest: None.
Organizational Well-Being in a Public Research Agency: The Point of View of Administrative Staff and Researchers

Abstract: The aim of this paper is to investigate organizational well-being in a Public Research Agency, exploring the points of view of two different categories of workers, administrative staff and researchers, employed in the same organization. We hypothesized that, in a complex organization, the kind of work performed, along with other factors, could influence the representation of organizational well-being. The study involved 24 administrative staff and 37 researchers of the Italian National Research Council (CNR), the largest Public Research Agency in Italy. Seven focus groups were carried out according to the key areas of organizational well-being in the CNR, and the collected data were analyzed using the qualitative data analysis software NVivo9. The results of this study seem to confirm the authors' hypothesis. Indeed, even though the framework of organizational well-being is the same for the two categories of employees considered, there are differences in the meaning and in the importance given by stakeholders to each dimension of the construct. As a whole, the specificity of the points of view might be explained by considering not only the different working conditions and kinds of work performed, but also the different cultural values of the Research Institutes and of the Central Administration. These aspects should be taken into account in the preparation of tools for the evaluation of organizational well-being, above all in complex organizations, so that the research tools at the organization's disposal are representative of the entire population. A set of recommendations for improving organizational well-being in complex organizations is provided.

Introduction

Over the last few years, interest in the topic of organizational well-being has increased not only in a national context but also in an international context, becoming the subject of several theoretical and empirical studies (Schaufeli, 2004; Horn et al., 2004). This construct has been studied in relation to the construct of psychological well-being, showing that feeling good at work has benefits for both the person and the organization (Avallone and Paplomatas, 2005; Diener and Seligman, 2004). Indeed, in a healthy organization employees feel well, take delight in their work, and are committed to their organization. At the same time, if employees are physically and psychologically well, they bring passion, motivation, and volition to their working environment, contributing to the efficiency and productivity of the entire organization. According to this perspective, developed in the context of functional psychology (Rispoli, 2001), personal and corporate well-being are not opposed but mutually reinforcing. In support of this perspective, recent research has focused on the link between job performance, psychological well-being, and organizational commitment, underlining that the absence of organizational well-being can cause decreased productivity, a high absenteeism rate, poor working motivation, poor availability to take on work, and lack of trust (Meyer et al., 2002; Wright and Hobfoll, 2004; Mowday et al., 2013). Therefore, part of the interest in organizational well-being is due to its practical consequences for the life and functioning of the entire organization.
One of the biggest difficulties associated with the study of organizational well-being is related to the definition and conceptualization of the construct. In effect, it is a multidimensional (Donald et al., 2005; Wilson et al., 2004) and dynamic construct, consisting of several interdependent levels and influenced by the context. Some authors have defined this construct as the overall health of an organization, comprising many constructs including organizational climate (i.e., the overall ambiance of an organizational system, what it feels like to be at work; Steele and Jenks, 1977), social climate (i.e., perceived social support and morale among employees; Stokols et al., 2002), employee productivity, performance, turnover, and absenteeism. Others have written about organizational well-being as "the whole of the cultural nucleus, processes and organizational practice that animate coexistence in the working context, promoting, maintaining and improving the quality of life and the physical, social and psychological well-being of working communities" (Avallone and Bonaretti, 2003, p. 42). These characteristics have made difficult not only a shared conceptualization of the construct, but also the construction of survey instruments for the evaluation of organizational well-being. The Italian Public Administrations (PA) had to deal with this problem after the introduction of Legislative Decree 150/2009, which motivated them to develop research projects aimed at evaluating and promoting organizational well-being. This was a key moment for Italian organizations, above all for the possibility of turning a legal obligation into a real opportunity to provide public administrations with tools for organizational analysis and employee feedback. Many Italian PAs decided to evaluate their organizational health through the Magellano project, sponsored by the Department of Public Administration, using the Multidimensional Organizational Health Questionnaire as a research tool (Avallone and Bonaretti, 2003). Participants in this project were, above all, local authorities, health services, and schools, whereas only 4.56% were universities and 1.30% were research agencies. Other organizations, above all research agencies such as the Italian National Research Council (CNR), decided to involve their employees in the definition of the areas and dimensions of organizational well-being and developed original assessment tools able to take into account the multidimensionality of the construct and the specificity of the context (Colì and Rissotto, 2013). One of the problems that needs to be faced when dealing with complex organizations is the coexistence of different categories of workers, for whom organizational well-being could have different meanings. The CNR is the largest Public Research Agency in Italy, employing 7,996 people, 60% of whom are researchers and 40% administrative staff. These characteristics, together with others, such as the articulation of the Agency into the Central Administration and the research network, the deployment of researchers in more than 100 Research Institutes located nationally, the numerous external collaborations with other public administrations, universities, and industries, the multidisciplinary nature of the studies performed, and the different theoretical backgrounds of the employees, make the CNR a complex organization and a shared definition of organizational well-being difficult.
Starting from these considerations, we hypothesized that not only the roles (Colì and Rissotto, 2014a), but also the kind of work performed, could influence the representation of organizational well-being. In particular, we explored and compared the points of view of CNR administrative staff and researchers, taking into account the key areas of organizational well-being in this Agency as identified in a previous study (Colì and Rissotto, 2013).

Materials and Methods

A qualitative research design was chosen because we wanted an in-depth understanding of employees' points of view, exploring the research topic from the perspective of the interviewee. Coherently with this approach, we made knowledge claims by adopting a constructivist perspective, generating meanings from the data collected in the field (Creswell, 2013). Taking into account the assertion that the professional profile of the employees is a variable that could influence organizational well-being, we made use of purposive and quota sampling, which are suitable for our study. Two sub-groups belonging to different professional profiles, administrative staff and researchers, were identified, and participants were extracted from a list of CNR employees, proportionally for each group. Sixty-one CNR employees, 24 with administrative profiles and 37 with research profiles, were involved in 7 focus groups. This qualitative research tool was chosen because it is suited to exploring social processes and to promoting the emergence of shared meanings (Corrao, 2000). The main aim of the focus groups was to explore the representation of organizational well-being held by these two different categories of workers, identifying, for each area, the key factors of organizational well-being in the Agency.

Table 1. Focus groups and sample characteristics
N° focus groups | N° participants | Profile              | Unit of affiliation
3               | 24              | Administrative staff | Central Administration
4               | 37              | Researchers          | Research Institutes
Total: 7        | 61              |                      |

Overall, as shown in Table 1, 3 of the 7 focus groups were carried out with administrative profiles of the Central Administration, while 4 of the 7 focus groups were carried out with researchers. Among the participants, 57% were male and 43% were female. The age of 77% of them exceeded 45 years. The focus groups followed a semi-structured interview guide, which was open and flexible in line with the research method chosen. The focus groups, taped and transcribed, lasted about 1 h and 30 min. Using the qualitative data analysis software NVivo9 (Coppola, 2011), interview transcripts were categorized and coded according to the key areas of organizational well-being in the CNR. Through a process of attributing meaning to the text based on a review of the interview data, dimensions of organizational well-being were identified and distinguished for the two categories of workers, administrative staff and researchers. An interpretive content analysis was also performed; extracts of participants' phrases are quoted in italics, between quotation marks.
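As a rough sketch of how the "prominence" comparisons in the following sections can be derived from coded transcripts, the snippet below tallies coded references per dimension for each group and normalizes them within each group. The dimension names follow the paper, but the counts and the flat export format are hypothetical, since NVivo coding matrices can be exported in several ways.

```python
from collections import Counter

# Hypothetical coded references: one (group, area, dimension) tuple per
# coded transcript segment, as might be exported from an NVivo coding matrix.
coded_segments = [
    ("administrative", "Tomorrow", "Innovation"),
    ("administrative", "Tomorrow", "Innovation"),
    ("administrative", "Tomorrow", "Future outlook"),
    ("researchers", "Tomorrow", "Future outlook"),
    ("researchers", "Tomorrow", "Future outlook"),
    ("researchers", "Tomorrow", "Innovation"),
]

counts = Counter(coded_segments)

# Relative prominence of each dimension within a group: the share of that
# group's coded references falling on the dimension.
for group in ("administrative", "researchers"):
    total = sum(n for (g, _, _), n in counts.items() if g == group)
    for (g, area, dim), n in sorted(counts.items()):
        if g == group:
            print(f"{group:15s} {area}/{dim}: {n}/{total} = {n/total:.0%}")
```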
Results

Tomorrow Area

Both administrative staff and researchers spoke about the "Tomorrow" area but dedicated attention to different dimensions supporting organizational well-being. In particular, researchers gave prominence to the "Future outlook" dimension, whereas administrative staff gave prominence to the "Innovation" one (Table 2).

Future Outlook

Administrative staff spoke about the importance of developing a new, clear, and shared vision of the direction the Agency should take: "It seems to me that the Agency is the mirror of our country. […] Let us sit down and try to figure out where we want to go. We know where we come from, but where do we want to go?" This point of view was also shared by researchers, who underlined the absence of expectations regarding their working future and the sensation of uncertainty typical of temporary workers, as well as the consequent frustration and lack of work motivation (Table 3): "Researchers are frustrated because they cannot see the way forward, where to go. There is no motivation, we all feel adrift. We stay here and we try to survive."

Innovation

Administrative staff also spoke about the importance of technological innovation aimed at sharing information among administrative staff and between administrative staff and researchers. They reported punitive attitudes towards innovation in general, which thwarted the introduction of changes that could improve daily work (Table 3): "We need to be braver. […] This punitive attitude is maniacal and stops us from working effectively. As time goes on it gets worse. There was a period in which they told us to be more enterprising and we had the courage to introduce some innovations, but, in actual fact, now it's become something that is unsustainable."

Staff Management Area

Both administrative staff and researchers spoke about the "Staff management" area but dedicated attention to different dimensions supporting organizational well-being. In particular, administrative staff spoke about all three dimensions of this area, that is, "Recruitment and staff turnover", "Staff appraisal and professional growth", and "Evaluation", with slightly more prominence given to the second one. Researchers also spoke about these three dimensions, giving greater prominence instead to "Evaluation" (Table 4).

Recruitment and Staff Turnover

Administrative staff spoke about the absence of a culture of Human Resources Management (HRM) as a whole, from recruitment planning to staff replacement, from staff turnover to work continuity. For them, these aspects had various consequences that hindered organizational well-being, such as the loss of knowledge and competences: "Knowledge is tied to people, when a person leaves, knowledge leaves, we lose documents, we lose procedures. […] We lose something important, skills." Researchers also spoke about the absence of a plan for new staff recruitment and for the management of temporary workers. They reported the lack of policies for the recruitment and management of people with disabilities, but also the excessive turnover of managers and the related negative consequences, such as the loss of continuity of the leadership's vision (Table 5): "The Agency has changed four presidents in five years, each one with his own perspective. […] There's a general disorientation and it's difficult to give staff the continuity of an Agency vision."

Staff Appraisal and Professional Growth

Administrative staff and researchers focused on human resources management able to value each employee and to promote their professional growth.
To support the management of human resources in this way, administrative staff mostly proposed the use of non-economic incentives, such as participation in training courses, appreciation, and promotion ("In my opinion, economic incentives will never be a reality in the Italian Public Administration, but there are other interesting incentive schemes that can be applied."), whereas researchers mostly proposed the use of economic incentives according to productivity (Table 5): "If we fail in differentiating salaries, nothing will change. I have very capable and productive researchers, but why do they receive the same salary as the others? So, if we want to make this Agency really productive, we need to differentiate salaries."

Evaluation

Administrative staff spoke about a psychological evaluation of the entire staff, above all for people with mental health problems: "We need a psychologist that periodically evaluates the employees. He should evaluate all the staff, particularly people with mental fragility, who can create not only problems in the workplace, but can also represent a risk." They discussed two different kinds of evaluation, of the individual and of the entire working group, and evaluation criteria, such as the need to be objective. Evaluation could have different purposes, such as picking out employees who do not want to work and employees who overwork, defining how economic benefits or rewards are allocated, or identifying bad working conditions, for example those characterized by the absence of adequate work facilities. Evaluation could also have negative effects; for example, it could generate hostility or competition between employees: "The fact that there is no evaluation is good for all of us because, let's face it, this situation ensures we all get the essential. So it generates neither conflict, nor competition." Researchers also discussed evaluation, but they pointed out different aspects. They focused on the evaluation of the entire Agency and of the Research Institutes, and they spoke about a past evaluation of the Institutes that did not produce any changes at all: "Let's remember that we went through an evaluation of the Research Institutes that lasted many years, which cost a lot of money and of which we don't know anything, in the sense that no change has happened. […] The evaluation was intended to make a screening for how to use funds, but this didn't happen." Furthermore, they spoke about the evaluation of research results and the criteria used in this process. In regard to this last aspect, the debate focused on the impact factor criterion, which seemed to favor some fields of research and penalize others, and on the necessity of finding more complex criteria able to take into account different aspects, such as the applicability of the research results: "The evaluation of research activities is rather complicated, because there are niche sectors with low coefficients of impact even if the research is still valid." "Other aspects, such as the applicability of the research, should be taken into account. […] If the impact factor remains the only evaluation criterion, it is clear that some sectors will be favored over others." Evaluation was also associated with the not very transparent criteria used in public competitions, both for the recruitment of new staff and for career advancement.
Like the administrative staff, researchers spoke about the evaluation of employees, in respect to which they proposed a working-group evaluation rather than an individual one. In their opinion too, evaluation could be a useful tool to combat work inefficiency (Table 5).

Inside and Outside Area

Administrative staff and researchers gave prominence to different dimensions in the "Inside and outside" area. The main difference between these two professional profiles concerned the "Communication and sharing" dimension and the "Sense of belonging" one: researchers gave more prominence to the second, whereas for administrative staff the first dimension was more important. There were no significant differences in the prominence given by administrative staff and researchers to the "Relationships and integration" dimension (Table 6).

Communication and Sharing

Administrative staff spoke about the importance of sharing knowledge and information and of looking for formal and informal communication channels or spaces that could facilitate this process, such as periodic institutional meetings or unofficial debates in the Agency canteen or café. Direct and constant communication with managers could also, in their view, sometimes simplify the flow of information between different hierarchical levels. They also underlined the importance of sharing knowledge not only with newcomers, but also with employees of the same office, in order to increase the intellectual capital of the Agency without wasting existing knowledge: "The need for exchange is really felt by all of us. I saw that many times without exchanging views and without meeting, we did the same work. No-one knew what other coworkers were doing. There is a waste of energy and resources. We don't converge on the same goal." Researchers focused on the possibility of sharing not only knowledge, but also equipment and research tools with other research groups. They also noted the importance of the territorial proximity of the Research Institutes, which could promote information exchange and collaboration between different groups (Table 7): "Proximity allows us to know what a coworker is doing, perhaps by chatting at lunch, without necessarily searching for his publications. We end up collaborating more."

Relationship and Integration

Both administrative staff and researchers spoke about the importance of mutual collaboration aimed at sharing and integrating their respective knowledge and expertise. Administrative staff focused, above all, on integration among coworkers and between managers and employees, and on the importance of a good company climate: "There is no contact with our supervisors. […] If you meet your manager a few times a year it is already quite something. My manager hardly knows me, hardly knows the staff." "I think it is important, when you go to work, to find a person you can talk to, laugh and joke with, because you have to stay 8 hours in a room together. At least a person you get on with, go for a coffee with, rather than go alone. This aspect is psychologically fundamental I think." Researchers underlined the need for integration between working groups, aimed at developing shared projects, and between Research Institutes and Departments (Table 7): "An important aspect is integration between colleagues. […] We are competing within our Institution. We should unite and present ourselves as one Institute rather than as an inexistent critical mass.
We're missing this, to be united, especially in European projects in which a critical mass is required. This is the case for the individual researcher, but also at the department level. I remember that, five years ago, there was an attempt to coordinate departments, but nothing came of it."

Sense of Belonging

Both administrative staff and researchers focused on the sense of belonging. The former referred especially to a sense of belonging to the entire organization ("We are enthusiastic about working in this institution, we really love the CNR."), whereas the latter referred especially to a sense of belonging to their working group and to their work, which they continue to perform with great passion despite various kinds of difficulty (Table 7): "Research is work that, if you do seriously, really involves you. Therefore you do it regardless of your salary, regardless of whether you have a laboratory at your disposal, regardless of whether you have to deal with administrative staff or with a manager."

Resources Area

Both administrative staff and researchers spoke about the three dimensions in this area: "Financial", "Human", and "Space". The first group focused more on "Human" resources and "Space", while the second group focused above all on "Financial" resources (Table 8).

Financial

Administrative staff focused on the economic crisis that produced staff cuts and a reduction in internal training opportunities: "With the cuts to the Public Administration, we have had problems with both staff and training, two things that right now are quite lacking." Economic resources were also seen as necessary to guarantee the contractual continuity of temporary workers and to avoid the loss of skills and expertise (Table 9).

Human

Both administrative staff and researchers spoke about the necessity of having at their disposal not only economic resources, but also human resources, that is, skills and expertise. Researchers also spoke about the importance of having administrative skills at their disposal as support for research activities: "One aspect that creates a lot of inconvenience is when researchers don't find in administrative staff adequate support for bureaucratic matters that become more and more burdensome every day. […] This has an impact on the mood of the researcher that sometimes is forced to perform alone the administrative aspects of a research project." Administrative staff spoke about the possibility of creating an archive of CNR employees' expertise, in order to share and make the best use of the skills present in the Agency (Table 9).

Space

Both administrative staff and researchers referred to the importance of work spaces, which need to be suited to the number of people and adequate for the kind of working activities. Spaces were also seen as important for promoting good social relationships between coworkers (Table 9).

Work Area

Administrative staff and researchers gave prominence to different dimensions of the "Work" area that support organizational well-being. In particular, the first group gave more attention to "Working methods", whereas the second focused mostly on "Job satisfaction" (Table 10).

Job Satisfaction

Administrative staff spoke about job dissatisfaction, which could be reduced through, for example, a more comfortable working environment or better management of daily work activities: "If you take a tour of the corridors of CNR, you'll realize that everybody's complaining that things aren't going well.
[…] Personally, I think that, right now, I'm not doing the best I can do and so I feel unsatisfied. Once, together with another manager, I was responsible for all the administration and my day was full and satisfying. Now it's not like that anymore." Researchers referred to various aspects of their job that contribute to job satisfaction, such as working autonomy, flexible use of working hours, the creativity inherent in research activities, relations with other researchers in the national and international context, and the possibility of continuous training (Table 11): "The CNR allowed me to continue studying and carry out activities I like. I feel lucky for this and other aspects, such as the working autonomy, research freedom, international contacts and the world-wide reach of what we do."

Working Methods

Administrative staff spoke about the need for an appropriate distribution of workload and for planning able to avoid periods of overwork or periods of lack of work. Planning work objectives with coworkers and having a working method emphasizing teamwork and cooperation were important too: "We have lost the ability to plan our work in relation to urgencies. In some periods we work at an intense pace and this is the cause of great agitation, confusion, fatigue to achieve work goals. […] Then there are some months in which there's nothing to do in terms of work activities." Researchers focused on the need for a working method able to take into account working priorities. In particular, they reported the problem of the time spent writing new research projects, time taken away from other important activities such as the writing of scientific articles. This generated further difficulties related, for example, to the continuity of their research themes. They also underlined the importance of a flexible use of working time in improving their productivity (Table 11): "Self-management of my time leads me to work more than I would if I were chained to my chair eight hours a day. The trust they put in me makes my work time productive. I don't know how to express it, but it is like that, it makes me feel empowered and has positive effects on my satisfaction in work."

Roles

Administrative staff focused on the absence of well-defined and recognized roles, which in some cases could obstruct the flow of work activities. The continuous changes in the Agency, such as those to the statute, made the distinction between roles and between functions more difficult: "I think the important thing is recognition of role within the organization. Meaning to recognize, in some way, the person who has a specific role, who participates in work activities and who contributes to the achievement of those results." Researchers also spoke about the importance of well-defined and recognized roles, referring in particular to administrative staff and researchers and to the importance of their collaboration in the implementation of research projects. They also spoke about the importance of appreciation of their role both in the Agency and in society (Table 11): "I sometimes have the feeling that society actually doesn't perceive our work as useful, in terms of the training offered and of contribution to the development of society and the economy."

Participation and Accountability Area

Both administrative staff and researchers spoke about the "Participation and accountability" area, each giving attention to different dimensions supporting organizational well-being.
In particular, administrative staff spoke about all three dimensions of this area, "Decisions", "Accountability", and "Risk and prevention", with more attention given to the third. Researchers, on the other hand, spoke about the first two dimensions, giving more attention to "Decisions" (Table 12).

Decisions

Administrative staff spoke about the importance of having a person in the organization able to take decisions in a short time, so as not to impede daily work. The participation of employees in decision-making processes was important too, especially when decisions could have consequences for workers: "A very critical aspect is, in my opinion, the absence of decisions. No-one makes decisions and in this way an organization cannot operate. […] There are important decisions to make and they continue to get postponed." Researchers also spoke about decisions and about participation as real involvement in decision-making. They also focused on the criteria of a decisional process, which needs to be transparent, explicit, and shared (Table 13): "It doesn't seem to me we belong to anything. […] When I go to a meeting of the Institute, we talk and talk, but it is all useless, because everything we decide at the meeting has already been decided before."

Accountability

Both administrative staff and researchers spoke about the attribution of accountability according to different roles and positions: "There are people that if you gave them a job for which they were truly responsible, right from the person who makes photocopies, they would stay longer in the job, they would be happier, organizational well-being would increase." Researchers focused in particular on the need to distinguish and make explicit the accountability of administrative staff and researchers as a way to improve collaboration and productivity (Table 13).

Risk and Prevention

Administrative staff spoke about the risk represented by the presence, in the working environment, of people with mental health problems. They underlined the necessity of preventive intervention by the head of security in order to avoid harm to workers (Table 13): "First of all, there is the need to identify situations before they happen. […] For people who have mental fragility and difficulty relating with colleagues and that can not only create problems, but also pose a risk in the workplace, thus for the protection of ourselves and of the institution. But nothing is done."

Discussion

The results of this study highlight that the framework of the construct of organizational well-being is the same for different categories of employees working in the same organization. Even though the structure of organizational well-being is the same, differences emerged in this study relating to representations of the construct. In particular, these differences were seen in the importance given by the two groups of stakeholders to each dimension of organizational well-being and in different contents and meanings. Regarding the different importance given to dimensions by these two categories of workers, we can suggest explanations for each area of organizational well-being. For the "Tomorrow" area, the prominence given by researchers to the "Future outlook" dimension could be explained by taking into account that temporary workers, for whom there is little certainty of future employment, are more numerous among researchers than among administrative staff. The importance of the job future in promoting organizational well-being has also been underlined in the literature.
A number of studies (e.g., Ashford et al., 1989; Barling and Kelloway, 1996; Hellgren et al., 1999) have found that job insecurity was associated with negative perceptions of physical and mental health, as well as lowered job satisfaction and higher levels of turnover intention. Perceived insecurity concerning one's future role in the organization appeared to make employees less inclined to remain with the organization (Arnold and Feldman, 1982; Dekker and Schaufeli, 1995). The prominence given by the administrative staff to the "Innovation" dimension could be due to the need for flexibility in the Central Administration, a structure where a bureaucratic and rigid culture prevails. For the "Staff management" area, the prominence given by the researchers to the "Evaluation" dimension might be explained by taking into account that evaluation is an important and much-discussed theme in academic communities (Kaukomen, 1997) and that the evaluation process can have repercussions not only on researchers and their work, but also on the Research Institutes and on the entire Agency. Evaluation results are increasingly used as inputs in research management (Van Steen and Eijffinger, 1998), but evaluation is also used to decide funding following performance assessments of researchers, projects, programs, departments, and institutions (Geuna and Martin, 2003). The prominence given by the administrative staff to the "Staff appraisal and professional growth" dimension could be explained by the scarcity of internal rewards, which are instead more present in research activities. In the literature, rewards are one of the variables that improve organizational well-being. In particular, regarding non-monetary rewards, research has shown that people are moved by incentives other than wage, such as social approval, fairness, and other non-monetary aspects of their jobs (Gächter and Falk, 2002). For the "Inside and outside" area, the prominence given by the researchers to the "Sense of belonging" dimension could probably be explained by the fact that the sense of belonging is closely related to the kind of work, which is more engaging and fascinating in research work than in administrative work. A sense of belonging to something beyond oneself is not only an important element of employee engagement and of the promotion of organizational well-being, but also a basic human need (Baumeister and Leary, 1995). The administrative staff instead gave prominence to the "Communication and sharing" dimension, probably because aspects related to the circulation of information are more problematic in the Central Administration, where those who hold important information tend to keep it to themselves because it can help them maintain a position of power. Strategies involving open communication (DeJoy et al., 1995; Schurman and Israel, 1995) and broad-based participation (Vandenberg et al., 1999) have been shown to be important for promoting organizational well-being. On the contrary, deficiencies in communication can result in a maldistribution of knowledge and, as a consequence, thwart organizational well-being (Kivimäki and Elovainio, 1995). For the "Resources" area, the prominence given by researchers to the "Financial" dimension is probably due to the consequences that the lack of economic resources has not only on their daily work, but also on their long-term work, orienting their research themes (Massy, 1996).
For the "Work" area, aspects related to "Job satisfaction" were more important for the researchers, probably because of the kind of work and the working context, bearing in mind that, in Italy, researchers are not well paid and the Agency does not offer them incentives, for example in terms of career advancement or even verbal recognition. Intrinsic motivation therefore becomes an important aspect able to promote organizational well-being (Gächter and Falk, 2002). The correlation between job satisfaction and both economic and non-economic incentives has been shown in the literature (Locke, 1976). The prominence given by administrative staff to "Working methods" could probably be explained by the need, in the Central Administration, for an efficient organization of working activities. In effect, the Central Administration has a strict organizational structure, characterized by a not very flexible use of working time and not very permissive working methods. For the "Participation and accountability" area, the prominence given by researchers to the "Decisions" dimension is probably due to their lack of involvement in decision-making processes. The involvement of employees in the decisions that affect them has been underlined in the literature (Harter et al., 2003) as important for promoting organizational well-being. In particular, this dimension is closely related to the sense of belonging and has an impact on workers' levels of interest and ownership in organizational outcomes (Wrzesniewski et al., 1977). The importance given by administrative staff to the "Risk and prevention" dimension is probably due to the rigid structure of the Central Administration and the consequent need to bring all processes under control, for example in order to prevent any problems that people with mental health conditions might cause. Organizational culture has been shown to be an important element affecting the work experiences of employees who are different from the majority (Spataro, 2005). In particular, the CNR would seem to belong to the culture of differentiation, in which disability is not recognized as a value for the organization (Colì and Rissotto, 2014b). With respect to the different meanings given by these two categories of workers, the main differences were related, for example, to the "Future outlook" dimension, seen by administrative staff as the general vision of the Agency and by researchers in terms of the future of work for employees. Other differences were related to the "Recruitment and staff turnover" dimension, seen by the administrative staff in terms of general human resources management and by the researchers in terms of the management of vulnerable classes of workers, such as temporary workers and employees with disabilities. The points of view of the administrative staff and the researchers also differed on the "Communication and sharing" dimension, in respect of which administrative staff spoke about information and knowledge, while researchers also spoke about equipment and research tools. Regarding "Relationship and integration", administrative staff spoke about this dimension in terms of integration between colleagues and between managers and employees, whereas researchers spoke about it in terms of integration between working groups and between Research Institutes and Departments. Other differences relate to the "Working methods" dimension, seen in terms of the distribution of workload and the planning of work by the administrative staff, and in terms of the use of time by researchers.
To sum up, this study highlighted differences in the way administrative staff and researchers represent organizational well-being, both in terms of the importance given to each dimension and in terms of the content and meaning attributed to the dimensions themselves. This study shows similarities with other studies performed in the same field, in particular with regard to the aspects promoting organizational well-being. However, in the literature there are no other similar studies investigating differences in the points of view of different categories of workers in the same organization. As a whole, the specificity of the points of view of the two categories of workers considered in our study might be explained by considering not only the different working conditions and kinds of work performed, but also the different cultural values of the Research Institutes and of the Central Administration. This specificity should be taken into account in the evaluation of the organizational health state, above all in complex organizations where different categories of workers, performing different kinds of work, could have different representations of the construct of organizational well-being. Different points of view should be equally represented and integrated in the preparation of research tools for evaluation. Different tools for the main categories of workers should also be considered, as well as the integration of quantitative research tools with qualitative ones.

Conclusion

On the basis of the main results of this study, we provide a set of recommendations that could be applied to improve organizational well-being in the CNR and in other similarly complex organizations:

• Transmit to employees a clear vision of the Agency, also in terms of the future working outlook
• Promote communication and collaboration, not only between different categories of workers, such as administrative staff and researchers, but also among coworkers, between working groups, and between managers and employees
• Activate knowledge management processes able to make tacit knowledge explicit and to share existing knowledge
• Plan the recruitment of new staff on the basis of the real needs of the Agency
• Develop policies for the recruitment and management of people with disabilities and for the employment of temporary workers
• Provide an incentive system able to value each employee and to promote their professional growth
• Use participatory evaluation as a tool for better human resources management and for improving the quality of work
• Create a comfortable working environment, including spaces for socialization
• Support a clear definition of roles, competences, and accountability
• Foster the participation of employees in Agency decision-making processes

The proposed interventions, to be effective, should take into consideration the specificity of each working context and the different points of view of employees. The authors recommend further studies in similarly complex organizations, such as research agencies or universities, in order to verify the results of this study in other working contexts and to stimulate debate around this theme.
2017-05-19T12:49:05.458Z
2015-10-22T00:00:00.000
{ "year": 2015, "sha1": "57b1914decd71c25d28bfd0a94f26ff5365afff3", "oa_license": "CCBY", "oa_url": "http://thescipub.com/pdf/10.3844/jssp.2015.381.394", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "57b1914decd71c25d28bfd0a94f26ff5365afff3", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Sociology" ] }
10112059
pes2o/s2orc
v3-fos-license
Wild and Hatchery Populations of Korean Starry Flounder (Platichthys stellatus) Compared Using Microsatellite DNA Markers
Starry flounder (Platichthys stellatus) is an important sport and food fish found around the margins of the North Pacific. Aquaculture production of this species in Korea has increased because of its commercial value. Microsatellite DNA markers are a useful DNA-based tool for monitoring the genetic variation of starry flounder populations. In this study, 12 polymorphic microsatellite DNA markers were identified from a partial genomic starry flounder DNA library enriched in CA repeats and used to compare allelic variation between wild and hatchery starry flounder populations in Korea. All loci were readily amplified and demonstrated high allelic diversity, with the number of alleles ranging from 6 to 18 in the wild population and from 2 to 12 in the farmed population. A total of 136 alleles were detected at the 12 microsatellite loci in the two populations. The mean observed and expected heterozygosities were 0.62 and 0.68, respectively, in the hatchery samples and 0.67 and 0.75, respectively, in the wild samples. These results indicate lower genetic variability in the hatchery population than in the wild population. Significant shifts in allelic frequencies were detected at eight loci, which resulted in small but significant genetic differentiation between the wild and hatchery populations (FST = 0.043, P < 0.05). Further studies with additional starry flounder sample collections are needed for a comprehensive determination of the genetic differences between the wild and hatchery populations. These microsatellite loci may be valuable for future population genetic studies, for monitoring genetic variation for successful aquaculture management, and for the preservation of aquatic biodiversity.
Introduction
The Korean starry flounder Platichthys stellatus (Pallas 1788) belongs to the family Pleuronectidae. This pleuronectid flatfish is distributed in countries surrounding the Pacific Ocean, stretching from the northeastern coast of Korea and Japan to the Sea of Okhotsk, and from the Chukchi Sea, Bering Sea, and Aleutian Islands south to Los Angeles Harbor, California, USA [1]. This species appears to prefer shallow water (less than 73 m); however, it has been recorded as deep as 274 m, and young fish are often intertidal [2]. In Korea, P. stellatus is an important fishery resource that has been considered a target for prospective aquaculture diversification using production techniques similar to those previously developed for closely related pleuronectids, including Paralichthys olivaceus. Therefore, interest has been directed toward resource enhancement. Complete culturing, including reproduction control and captive spawning, hatching, and larval and juvenile rearing, is possible and, recently, artificially hatched juveniles of P. stellatus were released into Korean coastal breeding grounds for a sustainable fishery [3,4]. Although the mass release of P. stellatus juveniles reared in hatcheries is expected to have an immediate effect on stock abundance, it could also cause changes in the genetic structure of wild populations. The reduced genetic diversity observed in most hatchery stocks may have detrimental effects on commercial traits such as growth rate, survival, and disease resistance, which can pose a great risk for aquaculture [5,6]. Thus, genetic monitoring of hatchery stocks and natural populations is recommended to preserve genetic variation in natural populations [7].
Therefore, it is vital to investigate the genetic variability of wild and cultured starry flounders for the management of wild populations and successful aquaculture. Molecular markers are an important tool for evaluating levels and patterns of genetic diversity and have been used to study genetic diversity in a number of fish species [8]. Among the various molecular markers now available for studying genetic diversity in fish, microsatellites (MS) are the markers of choice because of their high polymorphism and codominant inheritance [9,10]. Microsatellites have been used to monitor genetic differences between hatchery stocks and wild populations in various fish species [11][12][13]. However, despite the importance of starry flounder for commercial aquaculture in Korea, only a limited number of MS markers are available [14]. Furthermore, no study has focused on the genetic variability and population structure of this species. Therefore, additional highly informative microsatellite markers need to be developed and screened to identify the markers that are most informative for various other applications, including studies of genome mapping, parentage, kinship, and stock structure. The present study is aimed at identifying new microsatellite loci and comparing the genetic similarities and differences of wild and hatchery starry flounder populations in Korea.
Microsatellite Marker Isolation
In total, more than 500 white colonies were obtained from the transformation with the Korean (CA)n-enriched genomic DNA library, approximately 200 of which were screened by PCR for the presence of a repeat-containing insert. Sequencing of the inserts from these 200 colonies revealed 130 loci containing MS arrays with a minimum of five repeats, corresponding to an enrichment efficiency of 26%. These were primarily 2-bp repeat motifs, some of which were combined with other 2-bp repeat motifs. Primers were designed and tested for 53 loci that exhibited adequately long (>20 bp) and unique sequence regions flanking the MS array. Seventy-seven loci were discarded because the MS sequences were so close to the linker sequence that primers could not be designed for amplification. After initial PCR assays, only 17 primer sets (KPs1, KPs2, KPs3A, KPs5A, KPs12B, KPs15, KPs17A, KPs18, KPs20, KPs23, KPs25, KPs27, KPs29, KPs31, KPs32, KPs33, and KPs36) successfully yielded variable profiles. The remaining 36 primer sets gave either inconsistent or no PCR products, despite adjusting the dNTP concentrations and using an annealing temperature gradient. An initial evaluation of the polymorphic status of each locus was done by genotyping 16 individuals randomly selected from the wild population. All loci were polymorphic with the exception of KPs2, KPs5A, KPs23, KPs29, and KPs31, each of which had only one allele; the polymorphic loci showed great allelic variety with clear peak patterns and were thus suited for further investigation. The primer sequences, repeat motifs, annealing temperatures, fluorescent labels, and GenBank accession numbers for the 17 newly identified MS loci are summarized in Table 1. A homology search using the BLAST program showed that none of these 17 sequences had similarity to any sequence in GenBank. Generally, in the case of magnetic bead-based enrichment, the types and ratios of biotin-labeled probes and the positive-clone selection strategy can affect the success of cloning and the efficiency of enrichment.
In this study, we created MS libraries enriched for CA repeat sequences by following the protocol of Hamilton et al. [15] with modifications that have been previously described [16,17]. Of the positive clones obtained, about 26% contained microsatellite repeats (130 of 500); this number is comparable with the number obtained from the normalized cDNA library enriched for CA repeats for Asian seabass, Lates calcarifer (26.5%) [18], but lower than that for flounder (74%) [19] and tilapia (96%) [17]. Apart from the enrichment procedure itself, the differences in enrichment efficiency are probably a result of the use of different biotin-labeled oligonucleotide probes and probe ratios. In the case of tilapia, a variation of the hybrid capture method was used, which is likely a reflection of the relative complexity of several enriched libraries with different size selections of the restricted genomic DNA. In the genome of bivalves, however, remarkable differences in microsatellite density among closely related species have been suggested [20].
Genetic Variation within Populations
Samples of 48 wild and 30 hatchery-bred P. stellatus collected from around the eastern coast of Korea were screened for variation at the 12 new polymorphic MS loci. The 12 primer sets yielded variable profiles; reruns were conducted for 20% of the samples to ensure that the allele scoring was reproducible. No differences were observed, indicating that there were no genotyping errors. Samples that failed to amplify after the rerun were not included, which made it unlikely that poor DNA quality affected our results. The MICRO-CHECKER analysis showed that some loci may have been influenced by one or more null alleles in both the wild and hatchery samples; our data showed that loci KPs1, KPs17A, and KPs32 in the farmed samples and loci KPs1, KPs17A, and KPs36 in the wild population were affected. Loci KPs1 and KPs17A appeared to be influenced in both the wild and hatchery samples, indicating that using these loci for population genetic analyses that assume Hardy-Weinberg equilibrium (HWE) may prove problematic. However, loci KPs32 and KPs36 were affected by null alleles in only one sample each; thus, they were included in further analyses. All 12 MS loci were found to be highly polymorphic in both populations. A total of 136 different alleles were observed, and the average number of alleles per locus was 11.3. The number of alleles varied from two at the KPs18 and KPs20 loci to 18 at the KPs12B locus (Table 2). Not all loci were equally variable. In particular, KPs12B, KPs17A, KPs25, and KPs27 displayed greater allelic diversity, as well as higher levels of heterozygosity. The observed heterozygosity ranged from 0.24 at locus KPs18 to 0.94 at KPs27, whereas the expected heterozygosity varied from 0.20 at locus KPs18 to 0.89 at KPs27 (Table 2). Because of the difference in sample size between the wild and hatchery populations, allelic richness (AR) was employed to compare the populations independently of sample size. Overall allelic richness varied from 2 to 14.81 (Table 2). In this study, a high level of genetic diversity (mean heterozygosity = 0.75; mean allelic number (NA) = 10.75) was detected in the wild population, although this NA is dramatically lower than the NA reported for marine fish (NA = 19.9 ± 6.6, averaged over 12 species) [21].
However, the mean observed (Ho = 0.67) and expected heterozygosity (He = 0.75) of starry flounder were comparable to those of other marine fish species (H = 0.77 ± 0.19, averaged over 12 species) [13,21], suggesting that these polymorphic microsatellites may be sufficient to reveal the intraspecific diversity of P. stellatus. Inbreeding coefficients (FIS) varied among markers from −0.06 (KPs27) to 0.41 (KPs17A) in the hatchery samples and from −0.18 (KPs3A) to 0.66 (KPs17A) in the wild samples. The average FIS, including all markers, was 0.11 in the hatchery samples and 0.06 in the wild samples (Table 2). There was significant heterogeneity between wild and hatchery allele frequencies for eight loci following sequential Bonferroni correction for multiple tests (P < 0.004) (Table 3, Figure 1). The maximum number of alleles was detected at locus KPs12B (n = 18), and the allele frequencies at this locus were clearly different between the wild and hatchery populations. Distinct differences in allele frequencies between the wild and hatchery populations were also observed at the loci KPs3, KPs15, KPs18, KPs20, KPs25, KPs32, and KPs36. Common alleles were shared across the wild and hatchery populations. More unique alleles were observed in the wild population (49) than in the hatchery population (7), although their frequencies were very low (Table 2). Allele frequency distributions indicated the presence of 32 rare alleles (frequency < 5%) out of a total of 87 alleles over all loci (mean 36.8%) in the farmed sample, whereas 68 rare alleles out of a total of 129 alleles (mean 52.7%) were observed in the wild sample (data not shown). Rare alleles were detected at most loci and were never associated with a particular locus in either population. No significant linkage disequilibrium between any pair of the 12 microsatellite loci was detected (P > 0.004). Significant departures from HWE, even after Bonferroni correction (P < 0.004), were found at three loci (KPs1, KPs17A, and KPs32) in the hatchery samples and three loci (KPs1, KPs17A, and KPs36) in the wild samples, all due to heterozygote deficiency. In hatchery populations, homozygote excess is commonly caused by a limited number of founders or founder effects. In wild populations, homozygote excess could be explained by population effects such as the Wahlund effect or inbreeding, by the effective population size, or by artificial and natural selection during seed production and cultivation [22]. However, these explanations seem unlikely because the other loci were consistent with HWE expectations. Thus, a likely explanation for the observed heterozygote deficit in our microsatellite data is primer-site sequence variation resulting in null alleles. Null alleles, a locus-dependent effect found frequently at microsatellite DNA loci, are the most likely cause of heterozygote deficiency in HWE tests [23]. A high frequency of null alleles complicates many types of population genetic analyses that rely on HWE because false homozygotes are common [24]. Indeed, our MICRO-CHECKER analysis revealed the presence of null alleles at those loci with a significant heterozygote deficit. Therefore, most deviations from HWE in this study might have been due to the presence of null alleles resulting from base substitutions or deletions at the PCR priming sites in the regions flanking the microsatellites. The importance of null alleles as an explanation for heterozygote deficiency has been reported for other marine fish [19,25].
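For readers unfamiliar with these statistics, the following minimal Python sketch shows how observed heterozygosity (Ho), Nei's unbiased expected heterozygosity (He), and the inbreeding coefficient (FIS = 1 − Ho/He) can be computed for a single locus. It is an illustration only, with invented genotypes, and is not the GENEPOP/FSTAT pipeline actually used in this study.

from collections import Counter

def locus_stats(genotypes):
    """Ho, unbiased He, and FIS for one locus.

    genotypes: list of (allele_a, allele_b) tuples, one per diploid
    individual, with alleles labeled by PCR product size in base pairs.
    """
    n = len(genotypes)
    ho = sum(a != b for a, b in genotypes) / n  # fraction of heterozygotes

    # Allele frequencies over the 2n gene copies at this locus.
    counts = Counter(allele for pair in genotypes for allele in pair)
    total = 2 * n
    freqs = [c / total for c in counts.values()]

    # Nei's unbiased expected heterozygosity: (2n / (2n - 1)) * (1 - sum p_i^2).
    he = (total / (total - 1)) * (1 - sum(p * p for p in freqs))

    fis = 1 - ho / he if he > 0 else float("nan")  # Wright's FIS
    return ho, he, fis

# Toy data: six individuals scored at one CA-repeat locus.
genotypes = [(150, 152), (150, 150), (152, 154), (150, 154), (152, 152), (150, 152)]
ho, he, fis = locus_stats(genotypes)
print(f"Ho = {ho:.2f}, He = {he:.2f}, FIS = {fis:.2f}")  # Ho = 0.67, He = 0.68, FIS = 0.02

A positive FIS indicates a heterozygote deficit relative to HWE expectations, which, as discussed above, can also be produced artifactually by null alleles.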
Considering that this study was limited by the number of populations screened, the genetic diversity parameters for each population may be refined by data from additional populations, which may allow a more precise genetic characterization of the MS loci used. Therefore, our results should be interpreted with caution. Further study is required to assess the genetic resources of wild populations and the influence of aquaculture on the genetic structure of this important fishery species.
Genetic Differentiation between the Wild and Hatchery Populations
The wild population had a higher number of alleles and a higher allelic richness than the hatchery-bred population. A statistically significant (P < 0.05) reduction in allelic richness was observed in the hatchery-bred starry flounder (mean = 7.25) compared to the wild population (mean = 9.50). The mean observed and expected heterozygosities were 0.667 and 0.749, respectively, in the wild samples and 0.620 and 0.677, respectively, in the hatchery samples. However, genetic diversity in terms of heterozygosity was not markedly reduced (P > 0.05), in contrast to the significant reduction in allelic richness. The loss of rare alleles in a population can greatly influence allelic richness but has little effect on heterozygosity [26]. A non-significant reduction in heterozygosity in cultured stock relative to a wild population has also been reported in other studies [27]. FST estimates were significantly different between the hatchery and wild populations whether or not the two loci with potential null alleles (KPs1 and KPs17A) were included (FST = 0.043, P < 0.01 and FST = 0.053, P < 0.01, respectively) (Table 2). The significant FST estimates indicate the presence of genetic differentiation between the populations, which was likely a result of reduced genetic variation. For starry flounder in Uljin, Korea, the progeny produced for release had a different genetic composition, with significant reductions in genetic diversity compared with the wild population, although with no significant reduction in mean heterozygosity (P > 0.05; Table 2). In fact, the loss of alleles is more important than a change in allele frequencies, because the latter can change again through random drift, whereas no way exists to recover a lost allele. The decline of genetic variation in cultured stock may be caused by the increased effect of genetic drift resulting from the use of a small number of parental individuals and from the artificial selection existing in the hatchery environment [28]. Reduced genetic variation can result in reduced performance in aquaculture because this variation is the source of important traits such as growth rate and disease resistance [5,29]. For the proper management of stock enhancement programs, monitoring of genetic structure and diversity must be considered in addition to biological, ecological, and fishery factors. Therefore, samples from the wild population should be taken and analyzed with genetic markers before the fish are used as broodstock. A sample of hatchery-reared fish should then be taken for genetic analysis. This information will be useful in evaluating the feasibility of the enhancement program to maintain the genetic diversity of wild populations, as well as in improving hatchery management for the production of high-quality starry flounder.
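As a rough illustration of the FST statistic reported above (not the permutation-based ARLEQUIN estimator actually used in this study), the sketch below computes Wright's FST for two populations at a single locus as FST = (HT − HS)/HT, where HS is the mean within-population expected heterozygosity and HT is the expected heterozygosity of the pooled allele frequencies; the genotypes are invented.

from collections import Counter

def allele_freqs(genotypes):
    """Allele frequencies from a list of diploid (allele_a, allele_b) tuples."""
    counts = Counter(a for pair in genotypes for a in pair)
    total = sum(counts.values())
    return {allele: c / total for allele, c in counts.items()}

def expected_het(freqs):
    """Expected heterozygosity: 1 minus the sum of squared allele frequencies."""
    return 1 - sum(p * p for p in freqs.values())

def fst_two_pops(pop1, pop2):
    f1, f2 = allele_freqs(pop1), allele_freqs(pop2)
    hs = (expected_het(f1) + expected_het(f2)) / 2  # mean within-population He
    pooled = {a: (f1.get(a, 0) + f2.get(a, 0)) / 2 for a in set(f1) | set(f2)}
    ht = expected_het(pooled)  # He of the pooled allele frequencies
    return (ht - hs) / ht if ht > 0 else 0.0

wild = [(150, 152), (150, 154), (152, 156), (154, 156)]
hatchery = [(150, 150), (150, 152), (152, 152), (150, 152)]
print(f"FST = {fst_two_pops(wild, hatchery):.3f}")  # FST = 0.091

In this toy example, the hatchery sample has lost the 154 and 156 alleles, and the resulting FST is of the same order as the value reported for the real populations (0.043), illustrating how allele loss drives differentiation.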
Sample Collection and DNA Extraction
For the isolation of high-molecular-weight genomic DNA and the construction of a microsatellite-enriched partial genomic library, fin clips were collected from an individual starry flounder from Uljin, Korea. Samples of 48 wild and 30 hatchery-bred starry flounders were collected at the East Sea Fisheries Research Institute of the National Fisheries Research and Development Institute in Uljin, Korea, in 2006. Wild starry flounders were sampled from broodstock that had been captured off the eastern coast of Korea since 2000, and farmed samples were obtained from the first generation of hatchery-reared stock produced in 2004. In general, both wild-caught and captive-cultured sources of broodstock were used for artificial reproduction. However, details of the exact proportions of these two sources of individuals and their contributions to the cultured stock were not available. All samples were placed in absolute ethanol and kept frozen at −20 °C until DNA extraction. The TNES-urea buffer method [30] was used to isolate high-molecular-weight DNA for microsatellite isolation. For genotyping, total DNA from the fin clips of each sample was extracted using a MagExtractor-Genomic DNA Purification Kit (TOYOBO, Osaka, Japan) on an automated DNA extraction system, the MagExtractor MFX-2100 (TOYOBO, Osaka, Japan). Extracted genomic DNA (20 μg) was stored at −20 °C until further use for PCR.
Microsatellite-Enriched Genomic Library Construction and Microsatellite Sequencing
A partial genomic library enriched for CA repeats was constructed using a slightly modified enrichment procedure with pre-hybridization polymerase chain reaction (PCR) amplification, as described previously [16,17]. The extracted DNA was digested with the restriction enzymes AluI, RsaI, NheI, and HhaI (New England Biolabs, Beverly, MA, USA). DNA fragments in the range of 300-800 bp were isolated and purified using the QIAquick Gel Extraction Kit (Qiagen, Hilden, Germany). The selected fragments were ligated to an adaptor (SNX/SNX rev linker sequences), and the linker-ligated DNA was amplified using SNX as a linker-specific primer for PCR. For enrichment, the DNA was denatured, and biotin-labeled repeat sequences ((CA)12GCTTGA) [31] were hybridized to the PCR products. The hybridized complex was separated with streptavidin-coated magnetic spheres (Promega, Madison, WI, USA). After washing, the bound, enriched DNA was eluted from the magnetic spheres and re-amplified with an adaptor sequence primer. PCR products were purified using a QIAquick PCR Purification Kit (Qiagen, Hilden, Germany).
Isolation of Microsatellite-Containing DNA Fragments and Microsatellite Sequencing
The purified PCR products were digested with NheI, cloned using an XbaI-digested pUC18 vector (Pharmacia, Piscataway, NJ, USA), and transformed into Escherichia coli DH5α competent cells. White colonies were screened for the presence of a repeat insert by PCR using the universal M13 primer and non-biotin-labeled dinucleotide primers. PCR products were examined on 2% agarose gels, and inserts producing two or more bands were considered to contain a microsatellite locus. Positive clones were cultured and purified. Plasmids from insert-containing colonies were recovered using the QIAprep Spin Miniprep Kit (Qiagen, Hilden, Germany) and sequenced using the BigDye Terminator Cycle Sequencing Ready Reaction Kit (ver.
3.1; Applied Biosystems, Foster City, CA, USA) and an automated sequencer (ABI Prism 310 Genetic Analyzer, Applied Biosystems, Foster City, CA, USA).
Primer Design and Allele Scoring
Primers were designed based on the sequences flanking the MS motifs using the OLIGO software package (ver. 5.0; National Biosciences, Plymouth, MN, USA). A gradient PCR was performed for each primer pair to optimize the annealing temperature (ranging from 50 to 60 °C) using eight starry flounders captured from Uljin, Gyeongsangbuk-do, Korea. The PCR amplification was performed using a PTC 200 DNA Engine (MJ Research, Ramsey, MN, USA) in a 10-µL reaction containing 0.25 U of Ex Taq DNA polymerase (TaKaRa Biomedical, Shiga, Japan), 1× PCR buffer, 0.2 mM dNTP mix, 100 ng of template DNA, and 10 pmol of each primer, where all forward primers were labeled with 6-FAM, NED, or HEX dyes (Applied Biosystems, Foster City, CA, USA). The PCR reaction ran for 11 min at 95 °C, followed by 35 cycles of 1 min at 94 °C, 1 min at the annealing temperature (Table 1), and 1 min at 72 °C, with a 5-min final extension at 72 °C. Microsatellite polymorphisms were screened using an ABI PRISM 3100 Automated DNA Sequencer (Applied Biosystems, Foster City, CA, USA), and alleles were designated by PCR product size relative to a molecular size marker (GENESCAN 400 HD [ROX], Applied Biosystems, Foster City, CA, USA). Fluorescent DNA fragments were analyzed using the GENESCAN (ver. 3.7) and GENOTYPER (ver. 3.7) software packages (Applied Biosystems, Foster City, CA, USA).
Sample Comparisons
Samples were screened for variation at the newly developed MS loci. MICRO-CHECKER 2.2.3 [32] was used to detect genotyping errors due to null alleles, stuttering, or allele dropout, using 1000 randomizations. For genetic diversity parameters, the number of alleles per locus (NA), the size of alleles in base pairs (S), the frequency of the most common allele (F), and the number of unique alleles (U) were determined for each local sample at each locus using the program GENEPOP (version 4.0) [33]. GENEPOP was also used to identify deviations from Hardy-Weinberg equilibrium (HWE; exact tests, 1000 iterations) and to compute the observed and expected heterozygosities, indicating an excess or deficiency of heterozygotes. FSTAT (version 2.9.3.2) [34] was used to calculate the inbreeding coefficient (FIS) [35] per locus and sample, as well as allelic richness (AR) [36], which is suitable for comparing the mean number of alleles among populations regardless of sample size. ARLEQUIN was used to assess linkage disequilibrium for all pairs of loci, with the empirical distribution obtained by a permutation procedure [37], and to calculate single-locus and global multilocus values (FST; 1000 permutations) [35]. Significance levels were adjusted for multiple tests using sequential Bonferroni correction [38]. The significance of differences in the genetic diversity of the two samples was tested using the Wilcoxon signed-rank test.
Conclusions
In conclusion, genetic studies on starry flounder with microsatellite DNA markers are very rare, with only one recent report on the development of microsatellites as a tool for discriminating hybrids between olive flounder and starry flounder [14]. No detailed information is available to date on the genetic diversity of wild and cultured stocks of starry flounder.
In this study, a microsatellite-enriched genomic library of starry flounder was constructed, and a total of 12 highly polymorphic microsatellite loci were characterized and used to study genetic differences between the wild and hatchery populations. This study demonstrated that genetic changes, including reduced genetic diversity and significant differentiation, have taken place in the hatchery starry flounder stock compared to the wild population, owing to random genetic drift during aquaculture practices. For starry flounder in Korea, changes in the genetic variation of the cultured stock relative to the wild population should not be neglected in the stock enhancement program. Continued monitoring of genetic variation with additional starry flounder sample collections, from a broader perspective, is essential for the establishment of suitable guidelines for resource management and selective breeding. The use of these novel microsatellite markers will certainly facilitate this purpose.
2014-10-01T00:00:00.000Z
2011-12-09T00:00:00.000
{ "year": 2011, "sha1": "d4685a02baf6200829ca6a4bc7dacb1a11186e78", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1422-0067/12/12/9189/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d4685a02baf6200829ca6a4bc7dacb1a11186e78", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
229241240
pes2o/s2orc
v3-fos-license
Investigating the Alignment Between the CELPIP-General Reading Test and the Canadian Language Benchmarks: A Content Validation Study
Test scores alone are insufficient for supporting test users in making meaningful decisions about test takers. They can also leave test takers insufficiently informed of their own proficiency levels and abilities. Aligning a test to an external proficiency framework links the test scores to a set of language criteria, lending greater meaning to the scores (Kane, 2012) and allowing scores from different tests to be indirectly compared. As a result, the past decade has seen an emerging interest in test alignment (Brunfaut & Harding, 2014; Papageorgiou et al., 2015; Tannenbaum & Wylie, 2004, 2008). Importantly, the relationship between the test and the proficiency framework is not an observable fact, but an assertion for which we, as test developers and researchers, must continuously provide evidence. The present study uses a variation of the scale anchoring method to evaluate the content validity of a high-stakes, large-scale test, the Canadian English Language Proficiency Index Program (CELPIP)-General, the scores of which are linked to the Canadian Language Benchmarks (CLB). This approach uses a combination of quantitative and qualitative methods: the former selects anchor items that are most discriminating between adjacent score bands, and the latter draws on expert judgements to map the selected items to the levels of the external proficiency framework to which the test is linked. This method is particularly helpful in contexts where new test items are continuously being added to the item bank or the number of items is too large to be individually reviewed by an expert panel in a validation study. We start by introducing the CELPIP-General test as well as the CLB and their use in Canada. Next, we discuss test linking, content validity, and the use of the scale anchoring method in large-scale test settings. We then present a modified scale-anchoring approach for validating test content in relation to external performance standards using data from the CELPIP-General reading test. To better prepare researchers and practitioners to use the CLB in a similar context, we end the paper by discussing the challenges associated with using the CLB in projects that rely on expert judgement.
The CELPIP-General Test
The CELPIP-General test is designed to measure the communicative competence, or functional English proficiency, required for successful participation in Canadian communities where English is used as a medium of communication in various social, educational, or workplace contexts. Following Bachman and Palmer's model, communicative competence refers to an individual's ability to integrate language knowledge and skills in order to understand and produce language to achieve communicative goals (Bachman & Palmer, 1996). This implies the comprehension and production of not only the forms and structures of the language but also its objectives and rhetorical conventions.
Communicative competence is also described as functional language proficiency, or "the expression, interpretation, and negotiation of meaning involving interaction between two or more persons belonging to the same (or different) speech community" (Savignon, 1997, p. 272). The communicative approach to language teaching and assessment views language as a vehicle for meaning-making and focuses on the development and measurement of learners' functional proficiency in authentic contexts (Savignon, 1991, 1997). Consistent with the underlying theory and construct of the test, CELPIP-General test tasks assess the skills needed for the interpretation and production of language as it is used in a variety of general or day-to-day interactions in common social and workplace contexts. The interpretations of CELPIP scores are criterion-referenced to the 12 benchmarks of the CLB, and these scores are used for Canadian immigration and citizenship purposes. The CELPIP-General test scores have been linked to the CLB through standard-setting studies (Chen, 2016; Paragon Testing Enterprises, 2013a, 2013b). Multiple methods were used in these standard-setting studies to establish the correspondence between CELPIP scores and CLB levels. For the listening and reading tests, both of which consist of multiple-choice questions, Paragon Testing Enterprises (hereafter, Paragon) used a modified Angoff method (Angoff, 1971) to link the CELPIP scores to the CLB levels and consolidated the results using the Direct Consensus method (Sireci et al., 2004). For the speaking and writing tests, which are based on raters' evaluations of test taker performances, Paragon used a modified Judgmental Policy Capturing procedure (Hambleton & Pitoniak, 2006) in the initial standard-setting studies and triangulated the results using the Body of Work method (Kingston et al., 2001). These standard-setting procedures allowed Paragon to establish a correspondence between CELPIP test scores and CLB levels, providing initial evidence of the alignment between the two.
The CLB and Their Use in Canada
Language proficiency frameworks are established sets of criteria that describe the language ability of learners at various levels. These language standards are developed by experts in the field to help scholars and practitioners share a common understanding of language abilities across the proficiency spectrum. Several language proficiency frameworks are currently used in Canada, including the Common European Framework of Reference for Languages (CEFR), the Échelle québécoise des niveaux de compétence en français des personnes immigrantes adultes (EQ), and the Canadian Language Benchmarks (CLB)/Niveaux de compétence linguistique canadiens (NCLC; the French-language counterpart of the CLB; Centre for Canadian Language Benchmarks [CCLB], 2012; Centre des niveaux de compétence linguistique canadiens, 2012). Among them, the CEFR has been most widely adopted and used for multiple languages and contexts worldwide, providing an international standard for the description of second language proficiency. In Canada, the EQ provides a common framework of reference for describing the French language competence of immigrants to Quebec, and the CLB/NCLC provide the national language standards for adult users of English/French as a second language (ESL/FSL) in work, study, and social contexts. Like the CEFR, the CLB have been used in the development of a wide range of language curricula and assessment tools.
In contrast to the CEFR, which was designed to be a generic language reference document, the CLB are designed specifically for the English language and contextualized within work, study, and social contexts in Canadian society. Consequently, while the CEFR has been criticized for failing to account for the influence of context on language proficiency (termed "context validity" by Weir, 2005), the CLB embed the demands of the context within the proficiency descriptors. The CLB describe language progression not only in terms of increasingly precise, complex, lengthy, and flexible language use but also in terms of the increasing demands associated with the context of the communicative task. Validity evidence for the CLB (and NCLC) has been reported in multiple sources (Bournot-Trites et al., 2015; Bournot-Trites & Barbour, 2012, and Elson, 2012a, 2012b, as cited in Bournot-Trites, 2017; North & Piccardo, 2018). The initial construct and content validity of the CLB (and NCLC) was established through a three-stage validation process undertaken by a Canadian team of experts in 2010. The first stage involved the development of a common theoretical framework for the CLB and NCLC, which was subsequently reviewed and validated against the relevant literature, as well as against the CLB and NCLC descriptors, by teams of independent experts. The second stage involved a comparison of the common theoretical framework against other common proficiency frameworks, namely, the CEFR, the EQ, and the American Council on the Teaching of Foreign Languages (ACTFL) Proficiency Guidelines (Bournot-Trites et al., 2015). In the third stage, content experts developed sets of exemplars for each of the 12 CLB/NCLC benchmarks, including reading and listening texts and tasks as well as speaking and writing prompts and corresponding samples of learner performances. These exemplars were then trialled with over 100 practitioners across Canada to confirm the appropriateness of the exemplars representing each benchmark with respect to language instructors' firsthand experience with learners at these levels. According to Bournot-Trites et al., the revised and validated CLB/NCLC conform to the standards for reliability and validity imposed by the Standards for Educational and Psychological Testing (hereafter Standards; American Educational Research Association [AERA] et al., 1999). They also report that the results of the validation process support the use of the CLB/NCLC as national language standards in Canada as well as for other purposes, including use in high-stakes contexts. For example, the CLB/NCLC may serve as a reference for the desired indicators of ability at different assessment levels of a high-stakes language test. As a set of national language standards, one of the primary users of the CLB is the Language Instruction for Newcomers to Canada (LINC) program for adult newcomers (permanent residents or Convention Refugees), funded by the Government of Canada (Immigration, Refugees and Citizenship Canada, Government of Canada, 2018). LINC curriculum guidelines are based on the CLB and developed in consultation with CLB experts. The guidelines instruct program coordinators and teachers to develop and plan course content consistent with the criteria of the CLB (Hajer et al., 2002). Additionally, the CLB have been used to develop various assessment tools. These assessments take many forms, including learner self-assessment, portfolio assessment, and instructor-based assessment.
For example, the Canadian Language Benchmarks Placement Test (CLBPT), designed by the CCLB, is a low-stakes placement tool used for entry into language programs such as LINC (Bruni & Irwin, 2007). The CLB are also heavily involved in government-mandated assessment provisions (e.g., for immigration and professional certification purposes). For example, to support decisions regarding immigration and citizenship applications, the Government of Canada sets English (and French) language proficiency standards in reference to the CLB (and its French counterpart, the NCLC) levels (Government of Canada, 2020). Scores on standardized tests, including the CELPIP and the International English Language Testing System (IELTS), are accepted as proof of English language proficiency. Although neither of these tests is an assessment of the CLB per se, through the process of score linking, their scores have been aligned with the CLB and can be interpreted in relation to the CLB levels (Chen, 2016; Paragon Testing Enterprises, 2013a, 2013b). In these government-mandated testing contexts, test scores are used to support high-stakes decisions. A misclassification of a test taker's language proficiency, as expressed in CLB levels, could result in the delay or rejection of the individual's application. Thus, the validity of the test scores and, accordingly, the evidence that supports the alignment between the test scores and the CLB are of the utmost importance.
Linking Test Scores to External Proficiency Standards
Test scores are summative indicators of an individual's proficiency levels; however, they are abstract and may not always convey a clear meaning. Even with the labels and brief descriptions that are typically provided in a score report, it may still be difficult for stakeholders (e.g., test takers and score users) to interpret the scores clearly and consistently. On the other hand, language proficiency frameworks and standards often detail the criteria for achieving each performance level as well as the strengths and limitations of learners at each level. They also provide some indication to learners of the skills and abilities needed to progress to higher levels. Linking test scores to such frameworks and standards facilitates the interpretability of the scores and enables indirect comparisons across tests of similar constructs. Although it is possible to compare scores from different tests indirectly when they are linked to a common scale, such indirect comparisons must be interpreted with caution. Test linking, like many other measurement procedures, is rarely able to precisely translate every score from one scale to the other (i.e., there exists some degree of inaccuracy or measurement error), and indirect comparisons may amplify such discrepancies. This is particularly concerning when the tests linked by indirect comparisons differ in their constructs, formats, and/or reporting scales. Despite the caution against indirect comparisons, the linking of test scores to external standards is considered one method of test validation. It is widely accepted that the interpretation and use of test scores are an essential consideration in the validity argument (Bachman & Palmer, 2010; Kane, 1992, 2002b, 2006, as cited in Kane, 2013). For example, according to Bachman and Palmer's Assessment Use Argument (AUA) framework, interpretations about the ability to be assessed must be sufficient for the decision to be made.
The warrant for this claim is that the interpretation of the scores provides sufficient information for score users to make the required decisions concerning test takers. Linking test scores to external language standards is, therefore, "relevant to the sufficiency of interpretations" (Papageorgiou & Tannenbaum, 2016, p. 117) and facilitates the proper use of test scores, as the link enhances score interpretation (Kane, 2012). While the act of linking test scores to external standards can be considered a form of test validation in and of itself, the results of the linking study must also be validated. According to the manuals for relating language tests to the CEFR (Figueras et al., 2009; North & Jones, 2009), validation of the linking study results should be a regular part of the linking process. Broadly speaking, one could validate the results of a linking study in one of two ways: replication or independent validation. The replication approach involves repeating the linking study while varying some of its features (e.g., recruiting another group of experts, selecting different sets of items and/or responses, or adopting a different linking method). The second approach involves conducting an independent validation study (e.g., evaluating the consequences of applying the linking results to a different population; Kane, 1994). The alignment of test scores and external standards is not a relationship that exists objectively and statically; rather, it is a value-driven claim that must be continually supported by evidence. For tests that have new content and items continuously being developed and administered to test takers, a one-time validity check of the test alignment at the end of a linking study is not sufficient. Ideally, all items created according to the same test specifications are expected to be interchangeable and to remain so over time. Thus, the results of a linking study based on a specific subset of items should be generalizable to all items, and the relationship between the test scores and the external proficiency framework should be stable over time. In reality, this may not always be the case. Although many testing organizations have rigorous procedures to ensure the high quality of test content, including the evaluation of content compliance with specifications, it is still possible that, over time, the features of items (i.e., their target knowledge, skills, and performance levels) may drift slightly away from those used initially to establish the linkage between the test scores and the proficiency standards. Although linking test scores to external standards primarily concerns the comparability of the performance levels, a shift in the features of the test content could still threaten the appropriateness of the previously established alignment (Dorans, 2018; Liu & Walker, 2007). Therefore, to support the validity of the linking results, the test content and items must continue to reflect the relevant domains and criteria described in the chosen proficiency framework.
Content Validity Evidence to Support Linking Results
Many researchers (Brown, 1996; Kane, 2013; Lissitz & Samuelsen, 2007) and the Standards (AERA et al., 2014) recognize content validity as a major source of support for test score validity. Brown defines content validity as the extent to which the test content is representative of what the test intends to measure, and he describes it as one of the "three main strategies" for validating test scores (p. 232).
Within Kane's argument-based framework for test validation, the extrapolation inference claims that the test is representative of the construct such that test scores can be taken to represent language ability in the target domain. This claim assumes that test performance reflects the criteria for language proficiency (Kane, 2013). Content-based validity evidence, therefore, supports the extrapolation inference of the validity argument by confirming the relationship between the test content and the language requirements of the target domain. According to the Standards, content-based validity evidence concerns the adequacy with which the test covers the content domain and the appropriateness of the content difficulty with respect to the target domain (AERA et al., 2014). Content validity evidence may be particularly relevant for tests that are linked to external proficiency frameworks, as the interpretation and use of test scores rely on the alignment to the performance standards. Considering that the majority of test alignment occurs in retrospect, when a previously developed test is linked retroactively to an external framework, some discrepancy is likely to exist between their constructs or target domains. Test linking, especially when based on statistical analyses of scores (e.g., linking through equipercentile equating), focuses on aligning scores between different reporting scales and does not fully evaluate or account for the qualitative differences underlying these scales (e.g., small differences in their purposes, target populations, and domain coverage). A mismatch between the test at hand and the content domain as described by the external proficiency framework could limit the interpretability and usefulness of the linking results. In other words, the validity of the link relies on the correspondence between the content of the test and the proficiency descriptions/criteria detailed by the framework to which the test is linked. As such, to establish a stronger basis for this correspondence, test developers and researchers should collect additional evidence to investigate the adequacy and appropriateness of test content coverage in relation to the chosen proficiency framework. To demonstrate content validity, researchers and test developers typically enlist well-trained colleagues (e.g., subject matter experts) to make judgments about the degree to which the test items match the test objectives or target construct. Compared with validation studies that focus on evaluating final test scores using correlation-based approaches, well-designed content validation studies often draw on multiple sources of data (e.g., expert judgement, test taker performance, and item statistics) and allow for a more detailed analysis of test items. By providing more fine-grained information, this type of analysis can help test developers better understand and, if necessary, improve the extent to which the test covers its intended scope. However, it is challenging to apply this approach directly to the content validation of large-scale tests, where qualitatively reviewing all test content is impossible or inefficient. Large-scale tests often have a great number of test items, and new items are constantly added to the pool. Without an explicit and well-laid-out strategy, investigating the content validity of such tests is daunting and may result in weak evidence to support or refute the validity of the test scores.
To this end, this study proposes an approach to the content validation of a large-scale language proficiency test based on the scale anchoring method. Scale anchoring is used internationally in educational testing contexts, including language and subject matter assessment (Gomez et al., 2007; Jaeger, 2003; Liao, 2010; Philips et al., 1993). Conventionally, it is a process that attributes meaning to test scores by identifying test items representative of particular score points along a score scale (Beaton & Allen, 1992; Kelly, 1999). The typical scale anchoring methodology involves two main steps. Performance data are first analyzed to identify items associated with particular score points, or anchor points. These are items that are likely to be answered correctly by test takers scoring at each anchor point, but not by test takers at the anchor point below; in other words, items that discriminate between performance levels. Next, a panel of experts examines the selected items to identify the language knowledge or skills demonstrated by these items. These are the language abilities that are said to anchor at each performance level. These abilities are then used to develop statements describing the language competence that would be expected of test takers at each level. Instead of applying the scale anchoring approach to deriving descriptors for each score level, we slightly modified the original methodology to focus on assessing the alignment between the test content and the external proficiency framework to which the test is linked (see the Method section for more details on our modified approach).
Method
The present study assesses the alignment between the content of the CELPIP-General reading test and the reading proficiency descriptors of the CLB. The CELPIP-General reading test was designed to assess test takers' ability to comprehend a variety of written English texts. In order to support continuous test administration, new test content must be constantly developed, and a large number of test items are used on a rotating basis. It would therefore be impossible to qualitatively assess each of these items in one validation study. Instead, we identify the anchor items for critical score levels using a modified scale-anchoring approach (see Beaton & Allen, 1992, and Kelly, 1999, for examples of standard scale-anchoring methods) and map these anchor items to the CLB levels through expert judgement. To seek evidence upon which to evaluate the content validity of the reading test, we focused on the comparison of two elements: (1) the items' anchor levels as determined by test taker response data and the scoring model (i.e., a two-parameter item response theory [2PL IRT] model for the CELPIP-General reading test), and (2) the assessment levels of the items as judged by experts using the CLB descriptors (CCLB, 2012).
Data
A pool of 341 reading items was analyzed. Each item was answered by an average of 3,172 test takers (minimum 198, maximum 6,611). The analysis focused on seven performance levels, from CELPIP 4 to 10, which correspond to CLB 4 to 10. These seven levels were selected because they cover the range of proficiency levels that are often used to support high-stakes decisions, such as those related to Canadian immigration and citizenship applications. A total of 35 items (5 items × 7 score levels) were selected to be reviewed by a panel of four CLB experts. The number of panellists was largely constrained by operational limitations, including the budget, time, and the availability of the experts.
As this was the first reported study to use the modified scale-anchoring method for content validation, we prioritized the panellists' experience and expertise with the CLB and the target population when recruiting them. All the panellists had extensive experience (at least five years) working with the CLB in the context of teaching, curriculum development, and assessment design. Additionally, they had intimate knowledge of the target test taker population (i.e., new Canadian immigrants) through their work as English language teachers and LINC program coordinators. All the panellists reviewed the 35 items and judged the target skills, knowledge, and contexts of each item, as well as the item's correspondence to the CLB descriptors.
The CELPIP-General Reading Items
The reading component of the CELPIP-General test is presented in testlet format. Each testlet consists of a passage and a corresponding set of items. The passages represent a variety of text types, including correspondence, brochures, articles, and opinion pieces. The items are written to evaluate test takers' reading comprehension in terms of their ability to understand the main ideas, identify details, and make inferences about the content. A total of 35 items were selected for the panel to review.
The CLB Reading Component
The CLB document is organized by language component (listening, speaking, reading, and writing). Each component is divided into 12 benchmarks, which are grouped into three stages of Basic (CLB 1-4), Intermediate (CLB 5-8), and Advanced (CLB 9-12) language ability. For each component, the CLB document is composed of three main sections: Profiles of Ability, Knowledge and Strategies, and the Canadian Language Benchmark pages. While the entire CLB document was available for reference during the study, the panellists started by mapping items according to the descriptions in the Profiles (see Figure 1 for an illustrative example and see CCLB, 2012, p. 86, for an example of Profiles in the CLB document). During the independent review stage, panellists were able to consult the Benchmark pages to make a judgement if they could not directly map an item to the descriptors provided in the summary tables of the Profiles. During the discussion stage, panellists were also able to refer to either the Profiles or the Benchmark pages to justify their evaluations of items.
Procedures
To evaluate the alignment of items to the performance standards of the CLB, we adopted a two-stage procedure similar to that of a typical scale anchoring study. Stage 1 focused on identifying anchor items. Anchor items are defined as items that demonstrate strong discrimination power at a given score level. For example, an anchor item for score level 7 is an item that a typical level 7 test taker would answer correctly while a typical level 6 test taker would get wrong. The CELPIP-General reading test uses a 2PL IRT model to predict test takers' proficiency and then transforms the predicted continuous score to the reporting scale by applying the cut scores established in previous standard-setting studies (Paragon Testing Enterprises, 2013a, 2013b). As such, in this study, we operationalize a "typical" test taker at a given score level as one whose proficiency is at the mid-point of the adjacent cut scores. In the above example, a typical level 7 test taker is represented as someone whose theta score (i.e., proficiency on the IRT theta scale) equals the mid-point of the cut scores for level 7 and level 8.
For each item, we first computed the probability of a correct response by test takers at a given proficiency level (i.e., the conditional probability) along the proficiency continuum, at each of the mid-points of the adjacent cut scores. The conditional probabilities were calculated based on the 2PL IRT model for dichotomous items,

$$P(X_i = 1 \mid \theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}}$$

where $P(X_i = 1 \mid \theta)$ represents the conditional probability of a correct response to item $i$ at proficiency level (theta) $\theta$, and $a_i$ and $b_i$ are the item discrimination and difficulty parameters of item $i$. Then, we grouped items based on their anchor levels. An item $i$ is deemed to anchor at level K if $P(X_i = 1 \mid \theta_K) > 0.50$ and $P(X_i = 1 \mid \theta_{K-1}) < 0.50$, where K is a score level from the reporting scale and $\theta_K$ represents a "typical" test taker at band level K (i.e., the test taker's theta score is at the mid-point of the adjacent cut scores). This implies that typical test takers at level K have a higher chance of answering this item correctly than of getting it wrong (i.e., $P(X_i = 1 \mid \theta_K) > 0.50$), while typical test takers at one level lower (K−1) are more likely to answer incorrectly than correctly (i.e., $P(X_i = 1 \mid \theta_{K-1}) < 0.50$). After grouping items by their anchor levels, we selected the five items that showed the highest discrimination power for each of the CELPIP levels. The difference between the conditional probabilities at adjacent levels (i.e., the level of focus and one level below),

$$\Delta P_i(K, K-1) = P(X_i = 1 \mid \theta_K) - P(X_i = 1 \mid \theta_{K-1}),$$

indicates an item's discrimination power in that particular proficiency range. For levels 4 through 10, a total of 35 items were selected for expert review (see Table 1). Stage 2 involved a qualitative analysis of the selected anchor items by a panel of CLB experts. This panel analyzed each item, and members offered their opinions as to which CLB competency statements and linguistic functions were assessed by each item and to which CLB level it corresponded. The items were reviewed in random order. Neither the panellists nor the facilitator was aware of the CELPIP levels that each item was selected to represent. The review was done in two steps. First, each panel member made their judgements individually. Then, they discussed their responses at an in-person meeting, during which the panellists could edit their responses. The panellists were requested to justify their evaluations but were not required to reach a consensus, and their final judgements were submitted individually after the meeting. These individual judgements were aggregated and then compared with the model-suggested anchor levels (i.e., the CELPIP levels that the items were selected to represent).
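As a concrete illustration of the Stage 1 selection rule, the following minimal Python sketch applies the 2PL model to made-up cut scores and item parameters; the operational CELPIP cut scores and item parameters are not public, so every number here is hypothetical.

import math

def p_correct(theta, a, b):
    """2PL conditional probability of a correct response, P(X_i = 1 | theta)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical cut scores on the theta scale for levels 4..11; the "typical"
# test taker at level K sits at the mid-point of the cut scores for K and K+1.
cuts = {4: -2.0, 5: -1.5, 6: -1.0, 7: -0.5, 8: 0.0, 9: 0.5, 10: 1.0, 11: 1.5}
typical = {k: (cuts[k] + cuts[k + 1]) / 2 for k in range(4, 11)}

def anchor_level(a, b):
    """Level K at which the item anchors: P > 0.50 at K and P < 0.50 at K-1."""
    for k in range(5, 11):  # a level below is needed, so start at 5 here
        if p_correct(typical[k], a, b) > 0.50 and p_correct(typical[k - 1], a, b) < 0.50:
            return k
    return None

a, b = 1.4, -0.3  # made-up discrimination and difficulty parameters
k = anchor_level(a, b)
if k is not None:
    # Discrimination power: the gap in conditional probabilities between
    # typical test takers at the anchor level and one level below.
    delta = p_correct(typical[k], a, b) - p_correct(typical[k - 1], a, b)
    print(f"anchors at level {k}, discrimination power {delta:.2f}")  # level 7, 0.17

In the actual study, this rule was applied to all 341 items, and the five items with the largest discrimination power were retained at each level.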
The more levels there are, the closer the adjacent levels are to each other on the theta scale and the more difficult it is to achieve strong discrimination for all levels. Despite some variability in the discrimination power for CELPIP levels 4 through 10, anchor items were identified for every level. Item 20 (score level 9) had the lowest discrimination power among the selected items. The probability of answering this item correctly by typical test takers at level 9 was 0.58, which was 0.11 higher than the probability of a correct answer by typical level 8 test takers.

As shown in Table 1, the anchor items for the lower proficiency levels (CLB 4 and 5) were more likely to be items from CELPIP reading testlet types 1 and 2, and the anchor items for the higher proficiency levels (CLB 9 and 10) mostly belonged to CELPIP reading testlet types 3 and 4. This pattern is consistent with the specifications for the CELPIP reading testlets, which dictate the target contexts, language functions, and intended difficulty of the four testlet types. Table 2 provides a brief description of each of the four testlet types in the CELPIP reading test.

R1 Reading Correspondence: First, read a letter or email and answer a set of multiple-choice questions; then, read a response and complete the blanks with the provided options.
R2 Reading to Apply a Diagram: First, read a general text with accompanying graphics and complete an email with the provided options; then, answer a set of multiple-choice questions.
R3 Reading for Information: First, read a general text and a set of statements; then, decide which paragraph (if any) supports each statement.
R4 Reading for Viewpoints: First, read an opinion article and answer a set of multiple-choice questions; then, read a response and complete the blanks with the provided options.

Note. To view a sample of the test content, follow the link below: https://secure.paragontesting.ca/InstructionalProducts/FreeOnlineSampleTest/FOST/View/1ba67a01-763a-487c-9efe-5000023fe7b4

To aggregate panellists' judgements, we computed both the mean and the median of their ratings. While the mean accounts for all panellists' judgements, it can be distorted by outliers. In contrast, the median is more robust in the presence of outliers but does not account for everyone's opinion. As shown in Table 3, the mean and median ratings are very similar to each other for each level. That said, using means, rather than medians, makes it easier to interpret the variability of the ratings (i.e., standard deviations in Table 3) and the mean differences (i.e., the last two columns of Table 3). Thus, we used the mean ratings to represent the panellists' collective judgments for each item, and for consistency, we also used the mean to summarize the panellists' judgments for all the items within a level. The difference between the panellists' aggregate judgement and the model-suggested anchoring level was calculated in both overall and absolute terms. Overall, the panel-estimated CLB levels increase along with the increase in the items' anchoring levels suggested by the model. However, there were some minor discrepancies. The panellists believed that the five anchor items at Level 4 tapped higher-level knowledge and skills. For the items anchoring at Level 10, the panellists considered them slightly easier than what the CLB 10 descriptors describe.
For levels 5, 6, 8, and 9, the average difference within each level was small, indicating that the item content and their corresponding performance levels (as judged by experts) were in line with the anchor levels identified by the statistical analysis of test taker responses. Compared to their judgements at other levels, the panellists showed higher variability in their views at levels 7 and 8, suggesting a potentially inconsistent interpretation of the criteria for CLB 7 and 8 and/or the correspondence between the item content and those criteria.

Discussion

Using a modified scale-anchoring method, this study evaluates the content validity of the CELPIP-General reading test by mapping a selected set of items to their corresponding CLB levels. The CLB provide a nationally recognized set of language proficiency indicators that essentially describe the CELPIP target language use (TLU) domain, namely English as a second language use in work, study, and social contexts in Canada. A close alignment between the CELPIP test items and the criteria of the CLB is one critical piece of evidence in support of the claim that the test reflects its target language use domain and measures its intended construct. As discussed in the introduction, the link between test scores and external proficiency frameworks should not be taken for granted; rather, it is a claim that must be supported by empirical evidence. One way to assert such a claim is to conduct independent validation studies that collect evidence from various sources, including evidence related to test content.

In this study, we presented a method that may be useful in the planning and implementation of content validation studies for large-scale tests. In the context of large-scale tests, content validation studies often rely on experts evaluating a portion of the test content. Having a strategy that systematically selects items for expert review enhances the transparency and replicability of such validation studies. Future studies could apply our method to other tests and content areas. Future research could also examine other criteria for selecting anchor items. For example, instead of focusing on typical or average test takers, it is possible to conceptualize a minimally competent test taker for each level and identify anchor items accordingly. In the literature concerning scale anchoring methods, cut-offs other than 0.50 have also been suggested for the conditional probability $P(X_i = 1 \mid \theta_K)$ when deciding the anchor level of an item (e.g., 0.65 and 0.80; Beaton & Allen, 1992). Over time, with evidence accumulated from a wider application of this method, researchers and practitioners will develop a better understanding of how to optimize these parameters for content validation studies.

Admittedly, as a relatively small-scale study (i.e., 35 items and four panellists), the present study alone serves as just one piece of validity evidence in support of the alignment between the CELPIP-General reading test content and the CLB proficiency indicators. Ongoing efforts to continue this line of research will further strengthen the link between CELPIP test scores and the criteria of the CLB. Working with a small group of experts with rich experience allowed us to create a proof of concept to test the modified scale anchoring method for collecting content-based validity evidence.
According to the general principles of qualitative studies, an adequate sample is achieved when researchers observe sufficient variability and an indication of convergence (i.e., saturation) in the study (Fusch & Ness, 2015; Guest et al., 2006). The degree of variability and agreement among panellists' evaluations of the items (as shown in Table 3) lends some support to the credibility of the results. That said, when resources allow, it could be beneficial to repeat the study and involve more experts in the review panel to represent a broader range of views.

Both the linking of test scores to proficiency frameworks such as the CLB and the validation of such alignment help strengthen the validity of the scores. In doing so, researchers and test developers often rely on expert judgements. In the present study, we recruited experts who had been working with the CLB in teaching and assessment settings for many years; however, we observed some differences in their interpretations of the CLB when reviewing the anchor items. From our perspective, these differences may be attributable to four main issues concerning the CLB descriptors: (1) variation in word choice or synonymy across the benchmarks, (2) under-defined terminology, (3) limitations in the operationalization of key features (e.g., reading text length as specified by the CLB: "moderate length" at CLB 8 means "up to about 5 pages" (CCLB, 2012, p. 96), which may be unattainable in some assessment contexts), and (4) concerns about the cultural context. Some of these issues mirror those reported by Alderson and colleagues (2006) during a project to develop a CEFR-based reading and listening assessment tool. Future content validation studies could consider adding a training session at the beginning of the panel meeting to address these issues and ensure a shared understanding among the panellists.

The challenges involved in applying a general language proficiency framework to the development and/or validation of a standardized test are at least partly due to the conflicting nature of the two. While language proficiency frameworks are often designed to account for a wide range of contexts and uses, a test often serves more specific purposes within more limited contexts. In our case, the commonalities between the target domains of the CELPIP test and the CLB permit the use of the CLB for the purpose of test score interpretation and content validation, and even to inform some aspects of ongoing test development; however, we must not ignore the differences between the two. The CLB are neither a test nor a test blueprint; they are a set of general proficiency indicators that describe the progression in the features of language and contexts of use across the proficiency spectrum. It is expected that teachers, test developers, and researchers will make the necessary judgements, modifications, and adaptations when applying the CLB to more specific instructional and/or assessment contexts.

Conclusion

The relationship between test scores and the proficiency frameworks or standards to which they are aligned is not directly observable, nor is it a constant connection. It is an indirect relationship that test developers must continuously provide evidence to support. In addition, it is important to consider that high-stakes language proficiency tests can affect the lives of many individuals, such as students and immigrants, as critical decisions are made based on the results of these tests.
Therefore, it is crucial for test organizations to actively evaluate and maintain their tests in order to ensure consistent alignment with the chosen proficiency standards over time. In this paper, we describe one approach to test validation focusing on content validation and score interpretation using a process of scale anchoring and item mapping. In doing so, we share some of the benefits and challenges of working with the CLB in a standardized testing context. Although we have primarily worked with the descriptors of ability to help infuse detail and substance into the interpretation of the CELPIP score levels, the CLB offer much more to language practitioners. We encourage those engaged in the instruction or assessment of English as a second language in Canada to consider how the CLB might support their work. Correspondence should be addressed to Michelle Chen. Email: mchen@paragontesting.ca
SECURE AND EFFICIENT MULTIPARTY PRIVATE SET INTERSECTION CARDINALITY

In the field of privacy preserving protocols, Private Set Intersection (PSI) plays an important role. In most cases, PSI allows two parties to securely determine the intersection of their private input sets, and no other information. In this paper, employing a Bloom filter, we propose a Multiparty Private Set Intersection Cardinality (MPSI-CA), where the number of participants in PSI is not limited to two. The security of our scheme is achieved in the standard model under the Decisional Diffie-Hellman (DDH) assumption against semi-honest adversaries. Our scheme is flexible in the sense that the set size of one participant is independent of those of the others. We count the number of modular exponentiations in order to determine computational complexity. In our construction, the communication and computation overhead of each participant is O(v_max k), except that the complexity of the designated party is O(v_1), where v_max is the maximum set size, v_1 denotes the set size of the designated party and k is a security parameter. Particularly, our MPSI-CA is the first that incurs linear complexity in terms of set size, namely O(n v_max k), where n is the number of participants. Further, we extend our MPSI-CA to MPSI, retaining all the security attributes and other properties. As far as we are aware, there is no other MPSI so far where the individual computational cost of each participant is independent of the number of participants. Unlike MPSI-CA, our MPSI does not require any kind of broadcast channel, as it uses a star network topology in the sense that a designated party communicates individually with every other party.

Introduction

The widespread use of the Internet greatly facilitates the distribution and exchange of information. Immediate access to content with low-cost delivery is one of the main benefits Internet-based distribution brings, and it has the potential to open up new markets. However, this raises privacy issues regarding intellectual property and copyright, due to the vulnerability of digital contents to unauthorized distribution and use. With the advent of the Internet and distributed computing, the necessity of privacy-preserving data sharing increases rapidly. In this field, one interesting problem arises when the participants wish to learn the intersection of their data sets secretly, but not more than that. PSI is ideal for solving this problem. It is mostly executed between two parties, but it can be extended to a multiparty environment. This multiparty private set intersection is referred to as MPSI and has several applications. For instance, a central investigative agency (e.g., CBI) wants to compare its list of suspects with the lists of local investigative agencies (e.g., local police, military, BSF, etc.). In this case, none of the agencies will reveal its whole list of suspects to the others. Privacy and correctness are the two most important properties for an MPSI, where privacy ensures that none of the parties learns beyond the intersection and correctness means that each of the participants learns the correct output. Apart from privacy and correctness, flexibility is another desirable feature in the context of MPSI. If an MPSI is flexible, that implies that the choice of input set of a party is independent of the others. In several practical scenarios, the participants want to jointly determine the cardinality of the intersection rather than its contents.
For example, suppose n (≥ 2) different health organizations are doing a survey on a particular disease in a village and they wish to determine the number of common villagers who are suffering from that disease. However, none of them will disclose its list of patients to the others. Note that revealing the names of the patients may create an impact on a patient's mind. In such scenarios, we need the cardinality version of the MPSI, known as MPSI-CA. Designing efficient and flexible MPSI-CA is a challenging task.

Related works.

• Two-party Private Set Intersection. We now give an overview of prior works on two-party PSI protocols by classifying them into four groups based on their constructions, as follows:

(i) Oblivious Polynomial Evaluation (OPE) Based PSI: The concept of PSI relying on OPE was introduced by Freedman et al. [32], where the basic idea is to represent a set as a polynomial. Utilizing OPE and additively homomorphic encryption (AHE), Kissner and Song [46] designed a PSI protocol. Following this work, Camenisch and Zaverucha [7] proposed a PSI based on OPE, where the inputs need to be certified by a trusted party. The work of [32] was further improved by Hazay and Nissim [38]. While the constructions of [32, 38] are one-way in the sense that at the end of the protocol only one of the participants learns the intersection, the constructions of [46, 7] are two-way, meaning that at the end both parties receive the intersection. None of the constructions from [32, 38, 46, 7] achieves linear computation complexity. Recently, Dong et al. [25] employed an OPE technique to construct the first fair two-way PSI protocol in the standard model against malicious entities with the help of a semi-trusted third party. Fairness ensures that either both the involved parties receive, or none of them receives, the intersection of their private input sets at the completion of the protocol.

(ii) Pseudorandom Function (PRF) Based PSI: Hazay and Lindell [37] demonstrated how to obtain a PSI relying on an Oblivious Pseudorandom Function (OPRF), which is a two-party protocol that enables a sender with private key k and a receiver with private input x to securely compute a pseudorandom function (PRF) f_k(x). Later, Jarecki and Liu [42] adopted AHE to extend the work of [37] in the standard model against malicious adversaries. In the following year, Jarecki and Liu [43] introduced the idea of the unpredictable function (UPF) based PSI protocol, where the UPF works similarly to an OPRF. Recently, Hazay [36] gave a construction of an efficient PSI based on an algebraic PRF. All these constructions [37, 42, 43, 36] are one-way, achieving linear complexity. More recently, the authors of [23] proposed a two-way fair PSI protocol relying on a two-way OPRF with linear complexity over a composite order group.

(iii) Decisional Diffie-Hellman (DDH) Based PSI: A sequence of one-way PSI protocols [16, 15, 17] was proposed by De Cristofaro et al. using random hash functions and zero-knowledge proofs. All these constructions attain linear complexity. The work of Huang et al. [41] showed how to employ a garbled circuit (GC) in designing a PSI protocol. The scheme is secure under the Decisional Diffie-Hellman (DDH) assumption in the ROM against semi-honest adversaries and achieves linear communication and Θ(v log v) computational complexity. Recently, Debnath and Dutta [21] designed a fair optimistic two-way PSI over a prime order group. The scheme is optimistic in the sense that it uses an off-line semi-trusted third party.
The security of this scheme is achieved in a malicious environment without random oracles.

(iv) Bloom Filter (BF) Based PSI: A Bloom filter [2] is a data structure that represents a set by an array with entries 0 or 1. It is a useful tool for scaling to large data sets. The first Bloom filter based protocol was proposed by Many et al. [48], where the participants jointly compute the AND of their Bloom filters to get the intersection. However, this protocol is not secure, as it reveals information about the other party's set. Following [48], Kerschbaum [44] gave a construction of a Bloom filter based PSI by incorporating Goldwasser-Micali encryption [35]. The security of this protocol is achieved in the semi-honest environment with linear complexity. Later, Dong et al. [26] combined an oblivious transfer together with a Bloom filter to construct two PSI protocols. One of the constructions of [26] is secure in the semi-honest adversarial model, while the other is secure in the malicious adversarial model under the Computational Diffie-Hellman (CDH) assumption. In the subsequent years, Debnath and Dutta proposed a sequence of PSI protocols in [18, 19, 20] employing a Bloom filter while retaining linear complexity. In [45], Kiss et al. transformed four existing PSI protocols into a precomputation form such that in the setup phase the communication is linear only in the size of the larger input set, while in the online phase the communication is linear in the size of the smaller input set.

(v) Other Paradigm Based PSI: Utilizing fully homomorphic encryption, Chen et al. [9] built a PSI in the honest-but-curious setting. Later, Rindal and Rosulek [50] proposed a PSI employing dual execution. Subsequently, the concept of Reactive PSI was introduced by Cerulli et al. [8]. In [11], Ciampi and Orlandi presented a PSI protocol based on special purpose oblivious transfer (OT). Later, Falk et al. [29] came up with an improved hashing-based generic PSI in the semi-honest environment.

• Multiparty Private Set Intersection. In the last few years, although there has been a lot of research work in the direction of two-party PSI, there are only a few constructions of MPSI in the existing literature. Kissner and Song [46] designed the first secure MPSI protocol employing OPE and AHE. Their construction achieves quadratic complexity. Later, Sang and Shen [51] implemented a new MPSI protocol incurring quadratic overhead in the size of the input sets. Following that, some work on MPSI was presented in [52] in the honest majority setting, using bilinear groups in the construction. These constructions were further improved by Cheon et al. [10], where the dependency on the input sets is reduced from quadratic to quasilinear. However, the communication and computation overhead per player grow quadratically with the number of participants. In [12], Dachman-Soled et al. built a multivariate polynomials based MPSI protocol. Their construction attains O(n · v_max + v_max · log² v_max) communication and O(n · v²_max) computation complexity, respectively, where n is the number of participants and v_max is the maximum over all input set sizes. Later, a Bloom filter based approach to MPSI was proposed by Miyaji and Nishida [49], where the security is achieved in a semi-honest environment. Their construction attains O(n · v_max) communication and O(n · v_max) computation overhead for the designated party.
Hazay and Venkitasubramaniam [39] proposed an MPSI protocol utilizing the two-party PSI protocol of Freedman et al. [32], and very recently, Kolesnikov et al. [47] presented a new paradigm for MPSI in a semi-honest setting based on symmetric key techniques.

• Private Set Intersection Cardinality. Agrawal et al. [1] introduced the concept of two-party PSI-CA in a semi-honest setting under the DDH assumption. Utilizing OPE, Hohenberger and Weis [40] constructed an efficient two-party PSI-CA that offers better performance than the PSI-CA obtained by extending the two-party PSI scheme of Freedman et al. [32]. Later, Kissner and Song [46] came up with a construction of MPSI-CA relying on OPE. Following this work, Camenisch and Zaverucha [7] constructed a fair two-party PSI-CA protocol for certified sets based on OPE. De Cristofaro et al. [14] designed a two-party PSI-CA with linear complexity. A sequence of two-party PSI-CA protocols [18, 19, 21, 22], all having linear complexity, was presented by Debnath and Dutta. Recently, Freedman et al. [31] modified their work of [32] to construct a two-party PSI-CA achieving security in the semi-honest environment without random oracles. This scheme also has linear complexity. Employing quantum computation, Shi et al. [53] designed a two-party PSI-CA protocol attaining linear complexity. More recently, Dong and Loukides [27] developed an approximate PSI-CA protocol based on the Flajolet-Martin (FM) sketch [30] with logarithmic complexity.

1.2. Our contribution. In this paper, our main focus is to design an efficient MPSI-CA and extend it to MPSI.

• We first give a construction of MPSI-CA employing a space-efficient probabilistic data structure (a Bloom filter) along with ElGamal encryption and threshold ElGamal encryption. The security of our MPSI-CA is achieved in the semi-honest environment without random oracles under the Decisional Diffie-Hellman (DDH) assumption. The communication complexity of our protocol is linear in the input sizes, i.e., O((Σ_{i=1}^{n} v_i) k), k being a security parameter, while the computation cost of each participant is O(v_max k), except for the designated party, for which the cost is O(v_1). Here v_max is the maximum set size of the participants and v_1 denotes the set size of the designated party. Our scheme is flexible, as each party's input size is independent of the others. To the best of our knowledge, the only other existing MPSI-CA is due to Kissner and Song [46]. In [46], the authors proposed an MPSI-CA with O(n² v_max) communication and O(n² v²_max) computation overhead. Compared to [46], our MPSI-CA is more efficient in terms of both communication and computation complexity. In particular, our MPSI-CA is the first to achieve linear complexity in the input set sizes.

• We next extend our MPSI-CA to an MPSI protocol without changing the security attributes. Similar to [39], we use a star network topology instead of a point-to-point fully connected network. In this setting, a single designated party communicates individually with every other party via a variant of the two-party PSI of [13]. The crucial point of this topology is that all parties need not be online at the same time. Our MPSI does not require any broadcast channel during its execution, as all the communication is performed only between the designated party and each other party at a point-to-point level. In contrast to [51, 52, 10, 12, 46], the communication complexity of our protocol is linear in the input sizes, i.e., O((Σ_{i=1}^{n} v_i) k).
The computation cost of each participant is O(v_max k), except the designated party, for which the cost is O(v_1). Unlike the existing protocols [47, 39, 49, 51, 52, 10, 12, 46], the individual computation complexity of each participant does not depend on the number of participants n in our scheme. Similar to [49], our scheme is flexible, as each party's input set size is independent of the others.

1.3. Organization. The rest of our paper is organized as follows. In Section 2, we give preliminaries. The constructions of our MPSI-CA and MPSI are described in Section 3. Security proofs and efficiency analysis of our designs are given in Section 4 and Section 5, respectively. Finally, we conclude the paper in Section 6.

Preliminaries

Throughout the paper, the notations κ, ⊥, x ←$ X, a ← A and {X_t}_{t∈N} ≡_c {Y_t}_{t∈N} are, respectively, used to represent "security parameter", "null string", "variable x is chosen uniformly at random from set X", "a is the output of the procedure A" and "the distribution ensembles {X_t}_{t∈N} and {Y_t}_{t∈N} are computationally indistinguishable". Recall that a function $\epsilon : \mathbb{N} \to \mathbb{R}$ is said to be a negligible function of κ if for each constant c > 0 we have $\epsilon(\kappa) = o(\kappa^{-c})$ for all sufficiently large κ.

• Decisional Diffie-Hellman (DDH) Assumption [3]: An algorithm A for solving the DDH problem takes as input a tuple (g^a, g^b, g^c) and decides whether g^c = g^{ab}, where G = ⟨g⟩ is a cyclic group of order n and a, b, c ←$ Z_n. The advantage of A in solving the DDH problem is denoted by Adv_A^{DDH} and is defined as $\mathsf{Adv}_A^{DDH} = |\Pr[A(g^a, g^b, g^{ab}) = 1] - \Pr[A(g^a, g^b, g^c) = 1]|$. The DDH assumption states that this advantage is negligible for every probabilistic polynomial-time algorithm A.

2.1. Additively homomorphic encryption [5]. We describe below two additively homomorphic encryption schemes: the ElGamal encryption [28] and the threshold ElGamal encryption [24], both of which are semantically secure provided the DDH problem is hard in the underlying group.

ElGamal encryption: The ElGamal encryption is an additively homomorphic encryption EL = (EL.Setup, EL.KGen, EL.Enc, EL.Dec), defined as follows:

• EL.Setup(1^κ) → (par). On input 1^κ, a trusted authority outputs a public parameter par = (p, q, g), where p, q are primes such that q divides p − 1 and g is a generator of the unique cyclic subgroup G of Z*_p of order q.

• EL.KGen(par, P_i) → (epk_{P_i}, esk_{P_i}). User P_i chooses a_i ←$ Z_q, computes $y_{P_i} = g^{a_i}$, reveals epk_{P_i} = y_{P_i} as his public key and keeps esk_{P_i} = a_i secret to himself.

• EL.Enc(m, epk_{P_i}, par, r) → (eE_{epk_{P_i}}(m)). The encryptor encrypts a message m ∈ Z_q using the public key epk_{P_i} = y_{P_i} by computing the ciphertext tuple $eE_{epk_{P_i}}(m) = (\alpha, \beta) = (g^r, g^m y_{P_i}^r)$, where r ←$ Z_q.

• EL.Dec(eE_{epk_{P_i}}(m), esk_{P_i}) → (m). On receiving the ciphertext tuple $(\alpha, \beta) = (g^r, g^m y_{P_i}^r)$, the decryptor P_i decrypts it using the secret key esk_{P_i} = a_i by computing $\beta / \alpha^{a_i} = g^m (g^{a_i})^r / (g^r)^{a_i} = g^m$ and then finding m by running an exhaustive search.

The threshold ElGamal encryption TEL = (TEL.Setup, TEL.KGen, TEL.Enc, TEL.Dec) is executed among P_1, ..., P_n as follows:

• TEL.Setup(1^κ) → (par). It is the same as EL.Setup.

• TEL.KGen(par) → (pk, sk). Each participant P_i, i = 1, ..., n, selects a_i ←$ Z_q and publishes $y_{P_i} = g^{a_i}$. The public key of TEL is set to be $pk = h = \prod_{i=1}^{n} y_{P_i} = g^{\sum_{i=1}^{n} a_i}$. This implicitly sets the secret key to $sk = \sum_{i=1}^{n} a_i$. Note that sk is not known to anyone under the hardness of the DLP in G.

• TEL.Enc(m, pk, par, r) → (TEL.Enc_pk(m)). The encryptor encrypts a message m ∈ Z_q using the public key $pk = h = g^{\sum_{i=1}^{n} a_i}$ and computes the ciphertext $TEL.Enc_{pk}(m) = (\alpha, \beta) = (g^r, g^m h^r)$, where r ←$ Z_q.

• TEL.Dec(TEL.Enc_pk(m)) → (m). Given a ciphertext $TEL.Enc_{pk}(m) = (\alpha, \beta) = (g^r, g^m h^r)$, each participant P_i shares $\alpha_i = \alpha^{a_i}$. Then they recover $g^m$ as $\beta / \prod_{i=1}^{n} \alpha_i = g^m h^r / (g^r)^{\sum_{i=1}^{n} a_i} = g^m$. By running an exhaustive search, the message m can be extracted from $g^m$; if no such m is found in the search range, the algorithm outputs ⊥.
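To make the scheme concrete, here is a minimal Python sketch of the exponential (additively homomorphic) ElGamal and its threshold variant. The tiny parameters (p = 23, q = 11) are illustrative only and provide no security, and the helper names are ours, not the paper's.

```python
import random

# Toy public parameters par = (p, q, g): q divides p - 1 and g generates
# the order-q subgroup of Z*_p.
p, q, g = 23, 11, 2

def keygen():
    a = random.randrange(1, q)           # secret key a_i
    return a, pow(g, a, p)               # (esk, epk = g^a)

def enc(m, y):
    """Exponential ElGamal: (g^r, g^m * y^r).  Additively homomorphic in m."""
    r = random.randrange(1, q)
    return pow(g, r, p), (pow(g, m, p) * pow(y, r, p)) % p

def add(c1, c2):
    """Component-wise product of two ciphertexts encrypts m1 + m2."""
    return (c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p

def dec(c, a):
    """Recover g^m = beta / alpha^a, then find m by exhaustive search."""
    alpha, beta = c
    gm = (beta * pow(alpha, q - a, p)) % p   # alpha^(-a), since alpha^q = 1
    for m in range(q):
        if pow(g, m, p) == gm:
            return m
    return None                              # plays the role of "bottom"

a1, y1 = keygen()
assert dec(add(enc(2, y1), enc(3, y1)), a1) == 5   # homomorphic addition

# Threshold flavour: joint key h = g^(a1 + a2 + a3); decryption combines
# the shares alpha^(a_i) contributed by all parties.
keys = [keygen() for _ in range(3)]
h = 1
for _, y in keys:
    h = (h * y) % p
alpha, beta = enc(4, h)
gm = beta
for a, _ in keys:
    share = pow(alpha, a, p)                 # each party reveals alpha^(a_i)
    gm = (gm * pow(share, p - 2, p)) % p     # divide by the share mod p
assert gm == pow(g, 4, p)                    # g^m recovered; m found by search
```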
Remark 1. Note that if the message m is 0 then $g^m = 1$. Thus, in order to check whether a ciphertext TEL.Enc_pk(m) or eE_epk(m) decrypts to 0, the decryptor computes $g^m$ and checks whether it is 1.

Bloom filter [2]. A Bloom filter (BF) is a data structure that represents a set $X = \{x_1, \ldots, x_v\}$ of v elements by an array of m bits and uses k independent uniform hash functions $h_i : \{0,1\}^* \to \{1, \ldots, m\}$ for i = 1, ..., k to insert elements or check the presence of elements in that array. Let $BF_X \in \{0,1\}^m$ represent a Bloom filter for the set X and $BF_X[i]$ denote its i-th bit, i = 1, ..., m. We describe below a variant of a Bloom filter [2] that performs three operations: Initialization, Add and Check.

• Initialization: Set 1 to all the bits of an m-bit array; this is an empty Bloom filter.

• Add(x): To insert an element x ∈ X, set 0 to the bit positions of the Bloom filter having indices $h_1(x), \ldots, h_k(x)$. Repeat the process for each x ∈ X to get $BF_X \in \{0,1\}^m$, the Bloom filter for the set X.

• Check($\hat{x}$): Given $BF_X$, to check whether an element $\hat{x}$ belongs to X without knowing X, $\hat{x}$ is hashed with the k hash functions; $\hat{x}$ passes the membership test if $BF_X[h_j(\hat{x})] = 0$ for all j = 1, ..., k.

Bloom filter parameters (optimal): A Bloom filter yields false positives, i.e., an element y ∉ X may pass the membership test. This is due to the fact that each of $BF_X[h_j(y)]$ could be 0 for j = 1, ..., k even if y ∉ X. The probability that a certain bit is not set to 0 by a certain hash function during insertion of an element is $1 - \frac{1}{m}$. Since there are k independent uniform hash functions, the probability that a certain bit is not set to 0 by any of the hash functions is $(1 - \frac{1}{m})^k$. If we insert all the v elements into the Bloom filter, then the probability that a certain bit is still 1 is $(1 - \frac{1}{m})^{kv}$. Thus the probability that a certain bit in the Bloom filter $BF_X$ is set to 0 is $z = 1 - (1 - \frac{1}{m})^{kv}$. If $\epsilon$ is the false positive rate of the Bloom filter $BF_X$, then according to [4], $\epsilon \leq z^k \cdot \left(1 + O\!\left(\frac{k}{z}\sqrt{\frac{\ln m - k \ln z}{m}}\right)\right)$, which is a negligible function in k. In practice, during the construction of a Bloom filter for a set of v elements, we choose the values of k and m such that $\epsilon$ is capped at a specific low value (e.g., $2^{-80}$). According to [26], performance optimality of the Bloom filter is attained if $k = \frac{m}{v} \ln 2$ and $m \geq v \log_2 e \cdot \log_2 \frac{1}{\epsilon}$, where e is, as usual, the base of the natural logarithm. Thus, by minimizing m, i.e., by choosing the optimal $m = v \log_2 e \cdot \log_2 \frac{1}{\epsilon}$, the optimal value of k is obtained as $k = \log_2 \frac{1}{\epsilon}$. In the rest of the paper, we will assume that the optimal parameters are chosen.
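The inverted variant above translates directly into code. Below is a minimal sketch; the salted-SHA-256 construction of the k hash functions is an illustrative assumption, not the paper's instantiation.

```python
import hashlib
import math

class InvertedBloomFilter:
    """Variant used above: an empty filter is all 1s; Add writes 0s;
    Check passes when all k probed positions are 0."""

    def __init__(self, v, eps):
        # Optimal parameters: m = v * log2(e) * log2(1/eps), k = log2(1/eps).
        self.m = math.ceil(v * math.log2(math.e) * math.log2(1 / eps))
        self.k = math.ceil(math.log2(1 / eps))
        self.bits = [1] * self.m

    def _positions(self, x):
        # k independent uniform hashes simulated by salting SHA-256.
        for j in range(self.k):
            d = hashlib.sha256(f"{j}|{x}".encode()).digest()
            yield int.from_bytes(d, "big") % self.m

    def add(self, x):
        for pos in self._positions(x):
            self.bits[pos] = 0

    def check(self, x):
        return all(self.bits[pos] == 0 for pos in self._positions(x))

bf = InvertedBloomFilter(v=100, eps=2**-20)
for word in ["alice", "bob", "carol"]:
    bf.add(word)
assert bf.check("bob") and not bf.check("dave")   # false positives: prob ~ eps
```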
Protocol

In this section, we describe the construction of MPSI-CA followed by MPSI.

3.1. Multiparty private set intersection cardinality (MPSI-CA). The MPSI-CA protocol is executed among n parties P_1, ..., P_n with the private input sets X_1, ..., X_n, respectively, with |X_i| = v_i for i = 1, ..., n, where one party, say P_1, is designated to determine the cardinality of the intersection of its private set with the others' private sets. The protocol completes in two phases: (I) Setup and (II) Set Intersection Cardinality. In the Setup phase, the parties jointly generate a public key pk for a threshold additively homomorphic encryption scheme, such as the threshold ElGamal encryption scheme, together with the Bloom filter parameters (m, k, H_Bloom). The Set Intersection Cardinality phase proceeds by invoking three algorithms: MPSI-CA.request, MPSI-CA.response, and MPSI-CA.computation. On MPSI-CA.request, each party P_i (i = 2, ..., n) generates a Bloom filter BF_{X_i} ∈ {0,1}^m of its private set X_i, encrypts BF_{X_i} entry-wise using pk and sends the encrypted Bloom filter E_pk(BF_{X_i}) to P_1.

The party P_1 then invokes MPSI-CA.response, where for each x_l ∈ X_1, l = 1, ..., v_1, the party P_1 extracts the k ciphertexts corresponding to h_1(x_l), ..., h_k(x_l) from E_pk(BF_{X_i}), which contains m ciphertexts, for each i = 2, ..., n, and multiplies all these k(n-1) ciphertexts. This yields a resulting ciphertext C_l corresponding to x_l ∈ X_1; by the additive homomorphism, C_l encrypts the sum of the k(n-1) probed Bloom filter bits, and hence C_l decrypts to 0 exactly when x_l passes the membership test of every BF_{X_i}. The party P_1 then publishes all these v_1 resulting ciphertexts C_1, ..., C_{v_1}. In MPSI-CA.computation, the set of ciphertexts {CT_l}_{l=1}^{v_1} is initialized to {C_1, ..., C_{v_1}}; each party in turn shuffles {CT_l}_{l=1}^{v_1} using a random permutation φ_i, keeps the permutation secret to itself and broadcasts the resulting set of ciphertexts. The parties finally run the threshold decryption jointly, and the output card is the number of ciphertexts in the final set {CT_l^{(n)}}_{l=1}^{v_1} decrypting to 0. We define the MPSI-CA functionality as F_{MPSI-CA} : (X_1, ..., X_n) → (|X_1 ∩ ... ∩ X_n|, ⊥, ..., ⊥). The Setup phase of our MPSI-CA is depicted in FIGURE 1: each P_i publishes pk_i = g^{a_i}, and the resulting (pk, sk) pair serves as the public-secret key pair for TEL. Note that the secret key sk for TEL is not known to anyone; however, the public key pk for TEL is publicly computable from pk_1, ..., pk_n. In MPSI-CA.request, each party P_i (i = 2, ..., n): (i) constructs the Bloom filter BF_{X_i} of X_i; (ii) encrypts each entry of BF_{X_i} using the public key pk to get E_pk(BF_{X_i}) = (C_{i1}, ..., C_{im}), where C_{ij} = (g^{r_{ij}}, g^{BF_{X_i}[j]} h^{r_{ij}}) with r_{ij} ←$ Z_q for j = 1, ..., m; (iii) sends E_pk(BF_{X_i}) to P_1. We refer to FIGURE 2 for the interaction among the parties in MPSI-CA.request.

Correctness: the final shuffled set {CT_l^{(n)}}_{l=1}^{v_1} is the same as {C_1, ..., C_{v_1}}, in some order. Assume that x_λ ∈ X_1 is associated with CT_l^{(n)} and that CT_l^{(n)} decrypts to 0. By the construction of C_l, this means that BF_{X_i}[j] = 0 for all i = 2, ..., n and all j ∈ {h_1(x_λ), ..., h_k(x_λ)}; in other words, x_λ ∈ X_1 passes the check step for each of the Bloom filters BF_{X_i} (i = 2, ..., n). Therefore, x_λ ∈ X_i for all i = 2, ..., n, except with negligible probability ε. This implies that x_λ ∈ ∩_{i=1}^{n} X_i, except with negligible probability ε. Hence, we can ensure that x_λ ∈ ∩_{i=1}^{n} X_i if and only if CT_l^{(n)} decrypts to 0, i.e., card is the cardinality of ∩_{i=1}^{n} X_i, except with negligible probability ε.

3.2. Multiparty private set intersection (MPSI). Similar to MPSI-CA, MPSI involves n parties P_1, ..., P_n with their respective private input sets X_1, ..., X_n, where |X_i| = v_i. We assume that P_1 is the designated party that communicates with the rest of the parties P_2, ..., P_n. Let us define the functionality for MPSI as F_{MPSI} : (X_1, ..., X_n) → (X_1 ∩ ... ∩ X_n, ⊥, ..., ⊥). The protocol completes in two phases: (I) Setup and (II) Set Intersection. The Setup is the same as that of MPSI-CA, while the Set Intersection phase completes in 3 rounds and invokes three algorithms: MPSI.request, MPSI.response, and MPSI.computation. We describe these algorithms below.

• MPSI.request: This algorithm is exactly the same as that of MPSI-CA.request.

• MPSI.response: On receiving E_pk(BF_{X_i}) = (C_{i1}, ..., C_{im}) from each P_i, i = 2, ..., n, the party P_1, for each x_l ∈ X_1, (a) extracts and multiplies the k(n-1) ciphertexts at the positions h_1(x_l), ..., h_k(x_l) to obtain C_l = (α_l, β_l); (b) obtains ρ_l = ∏_{i=1}^{n} α_l^{a_i} with the help of the decryption shares α_l^{a_i} returned by the parties in MPSI.computation; (c) evaluates µ_l = β_l / ρ_l and determines that C_l decrypts to 0 if µ_l = 1; (d) inserts x_l in W if µ_l = 1. Finally, the party P_1 outputs W as the intersection.

Correctness: if x_l ∈ ∩_{i=1}^{n} X_i, then every probed Bloom filter entry aggregated in C_l is 0, so C_l encrypts 0 and µ_l = β_l / ρ_l = g^0 = 1. Therefore, C_l decrypts to 0, i.e., µ_l = 1. On the other hand, if C_l decrypts to 0 then BF_{X_i}[j] = 0 for all i = 2, ..., n and j ∈ J = {h_1(x_l), ..., h_k(x_l)} by the construction of C_l. In other words, x_l ∈ X_1 passes the check step for each of the Bloom filters BF_{X_i} (i = 2, ..., n). Therefore, x_l ∈ X_i for all i = 2, ..., n, except with negligible probability ε. This implies that x_l ∈ ∩_{i=1}^{n} X_i except with negligible probability ε. Hence, we can ensure that x_l ∈ ∩_{i=1}^{n} X_i if and only if C_l decrypts to 0, i.e., the set W is ∩_{i=1}^{n} X_i except with negligible probability ε.
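A toy end-to-end sketch of the combine-and-test idea behind the protocol is given below. It collapses the interactive threshold decryption into one local step by pooling the secret keys (a real run keeps each a_i private and omits nothing), and it skips the shuffling; the hash construction is the same illustrative stand-in used in the earlier sketches.

```python
import hashlib
import random

p, q, g = 23, 11, 2                  # toy order-11 subgroup of Z*_23
m_bits, k = 30, 2                    # toy Bloom filter parameters

def hpos(x, j):
    # Illustrative stand-in for the hash family H_Bloom (salted SHA-256).
    d = hashlib.sha256(f"{j}|{x}".encode()).digest()
    return int.from_bytes(d, "big") % m_bits

def bloom(X):
    # Inverted Bloom filter: start from all 1s, write 0s on insertion.
    bf = [1] * m_bits
    for x in X:
        for j in range(k):
            bf[hpos(x, j)] = 0
    return bf

def enc(m, h):
    r = random.randrange(1, q)
    return pow(g, r, p), (pow(g, m, p) * pow(h, r, p)) % p

sk = [random.randrange(1, q) for _ in range(3)]   # a_1, a_2, a_3
h = pow(g, sum(sk) % q, p)                        # joint key h = g^(a1+a2+a3)

X1, X2, X3 = {"u", "v", "w"}, {"v", "w", "z"}, {"w", "v"}
ebf2 = [enc(b, h) for b in bloom(X2)]   # MPSI-CA.request: encrypted filters
ebf3 = [enc(b, h) for b in bloom(X3)]   # sent to the designated party P_1

card = 0
for x in X1:
    alpha, beta = 1, 1
    for ebf in (ebf2, ebf3):            # multiply the k(n-1) = 4 ciphertexts
        for j in range(k):
            c = ebf[hpos(x, j)]
            alpha, beta = alpha * c[0] % p, beta * c[1] % p
    # Joint threshold decryption, collapsed into one step for illustration:
    rho = pow(alpha, sum(sk) % q, p)
    # The sum of the 4 probed bits is < q, so beta/rho == 1 iff the sum is 0.
    if beta * pow(rho, p - 2, p) % p == 1:
        card += 1
print(card)   # 2, i.e. |X1 ∩ X2 ∩ X3|, up to Bloom filter false positives
```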
Security analysis

Proof. We prove the security of the MPSI-CA by considering two cases:

• Case I: a strict subset I_1 of {P_1, ..., P_n} is corrupted, and P_1 ∈ I_1.
• Case II: a strict subset I_2 of {P_1, ..., P_n} is corrupted, and P_1 ∉ I_2.

In each of the cases, we will show that a simulator SIM can be constructed who simulates the MPSI-CA protocol, the simulator having access to the corrupted parties' input and output, such that the simulated view is computationally indistinguishable from the real world view. Here, the view of an entity consists of the input message of the entity, the outcome of the entity's internal coin tosses and the messages received by the entity during the protocol execution.

Case I (a subset I_1 of {P_1, ..., P_n} is corrupted, and P_1 ∈ I_1). The view in the real protocol execution consists of the input sets {X_i}_{i∈I_1}, the random coins R, the ciphertexts {E_pk(BF_{X_i})}_{i∉I_1}, {CP^{(i)}}_{i∉I_1} and the messages in TEL.Dec. In the simulated view, the input sets {X_i}_{i∈I_1} are the same as in the view of the real execution, and the outcome of the internal random coins R is uniformly random; thus the distribution is the same as in the real execution. Since the threshold encryption scheme TEL is semantically secure, the encrypted Bloom filters produced by SIM are indistinguishable from {E_pk(BF_{X_i})}_{i∉I_1}. Moreover, the distribution of the view (χ; ξ) produced by SIM^{Dec}_1 is indistinguishable from the view in the real execution of TEL.Dec by the semantic security of TEL. As a consequence, the simulated view is indistinguishable from the real view.

Case II (a subset I_2 of {P_1, ..., P_n} is corrupted, and P_1 ∉ I_2). Let the simulator SIM be given access to the corrupted parties' input sets {X_i}_{i∈I_2} and output ⊥. The simulator SIM then proceeds as follows:

• generates a key pair (pk, sk) ← TEL.KGen(1^κ) and uniformly chooses its random coins R';
• chooses random sets for the honest parties and encrypts their Bloom filters using pk as {E_pk(BF_{X_i})}_{i∉I_2} in order to play the role of the honest parties;
• generates n − |I_2| many sets of v_1 random ciphertexts as {CP^{(i)}}_{i∉I_2};
• invokes the simulator SIM^{Dec}_2 that simulates the view of the corrupted parties, excluding P_1, in the threshold decryption TEL.Dec as (χ^{(1)}; ⊥), where χ^{(1)} = χ^{(n)} if P_n is not corrupted;
• outputs the simulated view as ({X_i}_{i∈I_2}; R'; {E_pk(BF_{X_i})}_{i∉I_2}, {CP^{(i)}}_{i∉I_2}, SIM^{Dec}_2(χ^{(1)}; ⊥)).

The view in the real protocol execution contains the input sets {X_i}_{i∈I_2}, the random coins R, the sets of ciphertexts {CP^{(i)}}_{i∉I_2} and the messages in TEL.Dec. In the simulated view, the input sets {X_i}_{i∈I_2} and the internal random coins R' are indistinguishable from the counterparts in the view of the real execution. Since the threshold encryption scheme TEL is semantically secure, the real and simulated ciphertext sets are indistinguishable. Consequently, the distribution of the view (χ^{(1)}; ⊥) produced by SIM^{Dec}_2 is indistinguishable from the view in a real execution of TEL.Dec. Hence, the simulated view is indistinguishable from the real world view.

Proof. In order to prove the security of the MPSI, we consider the following two cases:

• Case I: a strict subset I_1 of {P_1, ..., P_n} is corrupted, and P_1 ∈ I_1.
• Case II: a strict subset I_2 of {P_1, ..., P_n} is corrupted, and P_1 ∉ I_2.

In each of the cases, we will construct a simulator SIM who simulates the MPSI protocol, and the simulator is given access to the corrupted parties' input and output, such that the simulated view is computationally indistinguishable from the real world view. Here, the view of an entity consists of the input message of the entity, the outcome of the entity's internal coin tosses and the messages received by the entity during the protocol execution.

Case I (a subset I_1 of {P_1, ..., P_n} is corrupted, and P_1 ∈ I_1).
Let the simulator SIM be given access to the corrupted parties' input sets {X_i}_{i∈I_1} and the output ∩_{i=1}^{n} X_i. Then SIM does the following:

• generates (pk, sk) ← TEL.KGen(1^κ) and uniformly chooses its random coins R;
• plays the role of the honest parties by choosing random sets and encrypting their Bloom filters using pk to get {E_pk(BF_{X_i})}_{i∉I_1};
• generates a ciphertext C̃_l of the form TEL.Enc_pk(0) for each x_l ∈ ∩_{i=1}^{n} X_i and a ciphertext C̃_l of the form TEL.Enc_pk(r_l) for each x_l ∉ ∩_{i=1}^{n} X_i, where r_l is uniformly chosen from Z_q and X_1 = {x_1, ..., x_{v_1}}. Let us consider χ = {C̃_1, ..., C̃_{v_1}} and ξ as the collection of {r_1, ..., r_{v_1}}, where r_l is set as 0 if x_l ∈ ∩_{i=1}^{n} X_i, and otherwise r_l ←$ Z_q;
• invokes the simulator SIM^{Dec}_1 that simulates the view of the corrupted parties, including P_1, in TEL.Dec as (χ; ξ);
• outputs the simulated view as ({X_i}_{i∈I_1}; R; {E_pk(BF_{X_i})}_{i∉I_1}, SIM^{Dec}_1(χ; ξ)).

The view in the real protocol execution consists of the input sets {X_i}_{i∈I_1}, the random coins R, the ciphertexts {E_pk(BF_{X_i})}_{i∉I_1}, and the messages in TEL.Dec. In the simulated view, the input sets {X_i}_{i∈I_1} are the same as in the view of the real execution, and the outcome of the internal random coins R is uniformly random; thus the distribution is the same as in the real execution. Since the threshold encryption scheme TEL is semantically secure, the real and simulated {E_pk(BF_{X_i})}_{i∉I_1} are indistinguishable. Moreover, the distribution of the view (χ; ξ) produced by SIM^{Dec}_1 is indistinguishable from the view in the real execution of TEL.Dec by the semantic security of TEL. As a consequence, the simulated view is indistinguishable from the real view.

Case II (a subset I_2 of {P_1, ..., P_n} is corrupted, and P_1 ∉ I_2). Let the simulator SIM be given access to the corrupted parties' input sets {X_i}_{i∈I_2} and output ⊥. The simulator SIM then proceeds as follows:

• generates a key pair (pk, sk) ← TEL.KGen(1^κ) and uniformly chooses its random coins R';
• chooses random sets for the honest parties and encrypts their Bloom filters using pk as {E_pk(BF_{X_i})}_{i∉I_2} in order to play the role of the honest parties;
• generates v_1 random ciphertexts as χ = {C̃_1, ..., C̃_{v_1}};
• invokes the simulator SIM^{Dec}_2 that simulates the view of the corrupted parties, excluding P_1, in the threshold decryption TEL.Dec as (χ; ⊥);
• outputs the simulated view as ({X_i}_{i∈I_2}; R'; χ, SIM^{Dec}_2(χ; ⊥)).

The view in the real protocol execution contains the input sets {X_i}_{i∈I_2}, the random coins R, the ciphertexts {C_1, ..., C_{v_1}}, and the messages in TEL.Dec. In the simulated view, the input sets {X_i}_{i∈I_2} and the internal random coins R' are indistinguishable from the counterparts in the view of the real execution. Since the threshold encryption scheme TEL is semantically secure, {C_1, ..., C_{v_1}} and χ = {C̃_1, ..., C̃_{v_1}} are indistinguishable. Consequently, the distribution of the view (χ; ⊥) produced by SIM^{Dec}_2 is indistinguishable from the view in a real execution of TEL.Dec. Hence, the simulated view is indistinguishable from the real world view.

Remark 2. Both the schemes MPSI and MPSI-CA are secure in the semi-honest environment. However, both schemes can be proven to be secure when the designated party P_1 is semi-honest and the remaining participants P_2, ..., P_n are malicious, by employing zero-knowledge proofs for discrete logarithms [6] and a zero-knowledge argument for shuffle [33].
Efficiency

The computation cost of our constructions is measured by counting the number of modular exponentiations (Exp), hash function evaluations (Hash) and modular inversions (Inv). On the other hand, the number of group elements transmitted publicly by a user determines the communication overhead. We refer to TABLE 1 for the complexity of our protocols. Note that our MPSI does not use any kind of broadcast channel, in contrast to our MPSI-CA. In TABLE 2 and TABLE 3, we give a comparative summary of our constructions with the most efficient existing protocols.

Conclusion

In this paper, we have constructed an MPSI-CA protocol employing a Bloom filter in the semi-honest environment without random oracles. Its communication and computation overheads are linear in the input set sizes. Our MPSI-CA is more efficient than the only other existing MPSI-CA of [46]. We then extended our MPSI-CA to MPSI, retaining all its security attributes. In contrast to the existing MPSI protocols, the computation complexity of each party in our construction does not depend upon the total number of participants. However, our MPSI is less efficient than that of [47] in terms of set sizes.

Security Model for the Semi-honest Adversary [34]: A two-party protocol Π is a random process that computes a function f from a pair of inputs (one per party) to a pair of outputs, i.e., $f = (f_1, f_2) : \{0,1\}^* \times \{0,1\}^* \to \{0,1\}^* \times \{0,1\}^*$. Let x, y ∈ {0,1}* be the inputs of parties P_1, P_2, respectively. Then the outputs of the parties P_1, P_2 are f_1(x, y), f_2(x, y), respectively. A protocol Π is said to be secure in the semi-honest model if whatever can be computed by a party after participating in the protocol could be obtained from its input and output only. This is formalized using the simulation paradigm. On the input pair (x, y), the view of the party P_i during an execution of Π is denoted by $\mathrm{view}^{\Pi}_{P_i}(x, y) = (w, r^{(i)}, m_1^{(i)}, \ldots, m_t^{(i)})$, where w ∈ {x, y} represents the input of the party P_i, $r^{(i)}$ is the outcome of P_i's internal coin tosses, and $m_j^{(i)}$ (j = 1, 2, ..., t) represents the j-th message received by P_i during the execution of Π.

Definition A.1. Let f = (f_1, f_2) be a deterministic function. Then we say that the protocol Π securely computes f if there exist probabilistic polynomial-time algorithms, denoted by S_1 and S_2, for P_1 and P_2, respectively, such that $\{S_1(x, f_1(x,y))\}_{x,y} \equiv_c \{\mathrm{view}^{\Pi}_{P_1}(x,y)\}_{x,y}$ and $\{S_2(y, f_2(x,y))\}_{x,y} \equiv_c \{\mathrm{view}^{\Pi}_{P_2}(x,y)\}_{x,y}$.

In the case of a multiparty setting, the associated functionality is $f = (f_1, \ldots, f_n) : (\{0,1\}^*)^n \to (\{0,1\}^*)^n$. Let X_i ∈ {0,1}* be the input of party P_i, for i = 1, ..., n. Then the output of the party P_i is f_i(X_1, ..., X_n) for i = 1, ..., n.

A toy example: let G = {1, 2, 3, 4, 6, 8, 9, 12, 13, 16, 18} be the subgroup of Z*_23 of order 11, i.e., p = 23 and q = 11. Also let the secret keys of P_1, P_2, P_3 be 2, 3, 2, respectively, and H_Bloom = {h_1, h_2}, i.e., k = 2. Then the steps of MPSI are described below:

MPSI.request: 1.
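As a quick sanity check of the toy parameters above (a sketch only; the remaining protocol steps are not reproduced here), the order-11 subgroup of Z*_23 and the joint threshold public key for the secret keys 2, 3, 2 can be computed directly. The choice g = 2 is an assumption; any order-11 element works, and the subgroup itself is the same.

```python
# Order-11 subgroup of Z*_23, generated by g = 2 (2^11 = 2048 ≡ 1 mod 23).
p, q, g = 23, 11, 2
G = sorted({pow(g, i, p) for i in range(q)})
print(G)        # [1, 2, 3, 4, 6, 8, 9, 12, 13, 16, 18]

# Joint threshold ElGamal public key for secret keys a1, a2, a3 = 2, 3, 2:
secret_keys = [2, 3, 2]
h = pow(g, sum(secret_keys), p)   # h = g^(a1 + a2 + a3) mod p
print(h)        # 13
```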
Autonomous Learning under Study: An Annotated Bibliography from Two Studies with Different Approaches

Abstract: The concept of autonomous learning has received considerable attention in educational research and practice over the last three decades. It is regarded as a natural choice in today's modern education (Scharle & Szabo, 2000), which demands more active participation from learners and less dependence on the teacher. In the field of language learning, many researchers have examined this concept in order to gain a better understanding of the theory and practice of autonomous learning. The two approaches commonly used by researchers studying autonomous learning are the qualitative and quantitative approaches. Both approaches are discussed in this paper to compare the advantages and weaknesses of each approach for research on autonomous learning. The mixed-method approach, a combination of the qualitative and quantitative approaches, is also discussed in this paper to provide more ideas and insight into alternative ways of conducting research in this domain.

INTRODUCTION

In order to differentiate the quantitative and qualitative approaches to research, I draw on the explanation given by Johnson and Christensen (2004). According to them, there are several aspects that make these two approaches different in theory and practice. The quantitative approach is a deductive approach in which researchers begin their research process with theories or hypotheses that are going to be tested with the data collected. On the other hand, the qualitative approach is known as an inductive approach with a bottom-up process. Usually, new hypotheses or theories are generated after the data are collected and analysed.

The other major difference between these two approaches is in the nature of reality, or the epistemology behind them. Quantitative researchers usually hold an objective assumption for their research, while qualitative researchers tend to see reality as something constructed socially (Guba & Lincoln, 1989; cited in Johnson & Christensen, 2004). It appears that the quantitative approach comes from an Objectivist epistemology and the qualitative approach is derived from social constructivism or Subjectivism. This difference means a lot in the practice of research using each of these approaches.

Other aspects that differentiate the two approaches can be noted from the data collection, analysis, and reporting. The quantitative approach collects data based on precise measurement using structured and validated instruments such as close-ended questionnaires, rating scales and others, and puts them into categories or variables which will then be analysed to identify statistical relationships. The findings of a quantitative study are reported statistically. The qualitative approach, contrarily, collects data by using open-ended questions, in-depth interviews, observation, or field notes and analyses them to find patterns, themes or holistic features that can explain the research problems. Unlike the quantitative approach, the findings of qualitative research are reported in a narrative way with contextual description and direct quotations from research participants.
Those two approaches, as widely used in research on autonomous learning, are complemented by the emergence of the mixed-method approach, which combines the quantitative and qualitative approaches in research practice. The idea of using this approach is to reduce the limitations of the quantitative and qualitative approaches and to produce more comprehensive and valid research findings by applying a variety of data collection and analysis methods.

This paper discusses two articles on autonomous learning using two different approaches. The first article, in a qualitative approach, gives a clear idea of autonomous learning, including the theoretical perspectives behind the concept, while the second article, with a quantitative approach, will give me a comparative point of view for doing research with an experimental method. In addition, some mixed-method articles will also be discussed as alternatives for my research.

STUDIES OVERVIEW

The first article, entitled "Learner Autonomy", is written by Dimitrios Thanasoulas (2000) and published in eltnewsletter.com. The second article is an experimental study report published in Kastamonu Education Journal, entitled "Fostering Learner Autonomy in EFL Classrooms", written by Cem Balcikanli (2008).

The two studies concern the same topic, language autonomous learning, but with two different approaches: the first study was conducted qualitatively, while the second one quantitatively. The two approaches will be compared in order to find out the distinctions in their application, their similarities or differences, and their effectiveness in answering the research questions of the two studies. Since they are utilised for the same topic, it is hoped that the comparison will be clear.

The first study

This first article is obviously the author's point of view regarding the emergence of the autonomous learning concept in the educational field, made with reference to many theorists in autonomous and language learning. In the introduction section, the author shows his standpoint on this subject: that autonomous learning should be regarded as "a perennial dynamic process amenable to receive intervention in educational process rather than a static product, a state, which is reached once and for all" (Thanasoulas, 2000). The author also cites Holmes and Ramos's (1991) opinion that learners must be helped to assume greater control over their own learning so that they will become more aware of it and able to identify any potential learning strategies (cited in James & Garrett, 1991: 198).

In the second part, the author explains his conclusion about what autonomous learning really is. He mentions that there are several characteristics of autonomous learning that must be matched by any learning environment for it to be regarded as autonomous learning. Among others are learner needs, motivation, learning strategies, and language awareness.
The author also discusses three dominant theoretical perspectives regarding the development of autonomous learning. The first is Positivism, which assumes knowledge to be an objective reality; this is translated into the learning process as the traditional classroom, in which knowledge is transferred from the teacher, as the main source of knowledge, to students as the receivers. The author concludes that this approach runs counter to the development of autonomous learning, which requires active participation by students. The second is Constructivism, which regards knowledge as something to be constructed (Candy, 1991) rather than discovered, as Positivism proposed. This approach is regarded as an applicable perspective, since it can encourage and promote the self-directed study which is necessary for the autonomous learning concept. The third is Critical Theory, which holds almost the same point of view as Constructivism with regard to the idea that knowledge is constructed rather than discovered or taught. This approach considers knowledge a product of different social groups that bring their own interests and ideologies to the knowledge (Benson & Voller, 1997). The author mentions that this approach can also be applied in the study of autonomous learning, as it regards learner autonomy as a social character which must be aware of the social context bounding it, and which in the end will make learners become more independent in their learning.

This article is a comparative analysis in the form of library or literature research. It can easily be concluded that the author stands on a Social Constructionist epistemology with a Constructivist perspective.

[1]

This article is based on critical research, with a subjectivist epistemology, directed at the body of research on adult self-directed learning. The author argues that research in this field needs more self-critical scrutiny and suggests that researchers infuse such scrutiny into their studies. There are four criticisms discussed by the author in this paper: 1) the use of middle-class adults as the sampling frame, 2) the exclusive use of quantitative or quasi-quantitative measures, 3) the individual exclusiveness of the studies, without paying attention to social context, and 4) the absence of further and extended discussion of the implications raised in the studies regarding social and political change. These arguments are backed by the author with many research findings and theorists' opinions regarding self-directed study. The conclusions reached by the author relate to the four criticisms proposed. The author concludes that 1) self-directed study of adult learning should use a wider sampling frame and not just middle-class adults, 2) such studies should use other forms of measures, such as a qualitative approach applying structured and unstructured interviews, 3) studies of self-directed learning should also consider the participants' social context, and 4) implications raised in any studies in this field need to be given further discussion.

[2] Nordlund, J.
(1997). From Here to Autonomy: Autonomous Learning Modules (ALMS). Retrieved May 11th, 2010 from http://www6.gencat.cat/llengcat/publicacions/autoapren_actesVII/docs/VII_annex1.pdf

This article is a report of a study conducted by the author at the Helsinki University Language Centre using an action research methodology applied in case study form, because it relates to the implementation of a new program with a certain group of participants (Creswell, 2008: 476). The participants are Helsinki University students from various faculties who joined the language centre to improve their English. The author uses Autonomous Learning Modules (ALMS), with five main features (Learner awareness, Plans and contracts, Skill support groups, Counselling, and Record keeping and Evaluation), in the teaching and learning process at the centre. The study objective is to find out the effectiveness of ALMS in developing an autonomous learning attitude among students at the language centre. The author uses all authentic elements from the centre to collect the data for the study. The main source is counselling reports containing information about students' progress during the study, collected through interviews, email and videotape. The result shows that students became more autonomous in their learning after completing the study at the language centre.

[3]

Through the study, a diploma project conducted with an action research methodology and a case study approach, the author wants to prove the important value of autonomous learning for language learners. The author applies three lessons in her class of fifth-year English students. Those lessons are prepared and given with a view to promoting autonomous learning behaviour among the students. The author believes that autonomous learning attitudes can be gained by students when they are exposed to learning autonomously. This perspective shows the author's standpoint as Social Constructionist, expecting the students to change after several interactions. The study result shows that students became more autonomous upon finishing the class.

[4] Morris, M. Y. (2010). Jigsaw Reading to Promote Autonomous Learning. Retrieved on May 12th, 2010 from http://www.wfu.edu/eal/SEATJ2009/SEATJ09%20Yonezawa.pdf

This article discusses a project conducted with action research in a case study approach in a second-year Japanese language course at a liberal arts college. The project concerns the use of jigsaw reading as one of the steps leading to the development of reading proficiency and autonomous learning. The participants' progress was monitored during the learning process, and a questionnaire was distributed at the end of the session. The result shows that during the learning process, students studied by themselves, discussed the content collaboratively, and took the opportunity to monitor their performance and see models to aim for, while improving their reading skills. The questionnaire results, which were analysed qualitatively, show a similar finding: that the degree of autonomy grew among the students after finishing the session.

[5] Railton, D. & Watson, P.
(2005). Teaching autonomy: 'Reading groups' and the development of autonomous learning practices. Active Learning in Higher Education, 6(3), 182-193.

This article discusses one particular approach to designing 'structured autonomy' into a first-year core media studies module. The module is designed around reading groups that are expected to encourage learners' autonomy in study. This is a case study involving a class of university students who are assigned to work in groups of six from the beginning of the semester until the module is completed. By observing the participants' progress through the study, the authors conclude that the participants develop their autonomy in learning and shift from the traditional teacher-centred model to an autonomous learning model.

The second study

This study focuses on fostering autonomous learning through several designed activities in an EFL classroom. It is an experimental, and thus quantitative, study conducted at Gazi University, Turkey. There are two groups, functioning as experimental and control groups; the former is given the treatment while the latter is not (Creswell, 2008).

Forty students from various faculties of Gazi University participate in this study. The participants are divided into the two groups: twenty in the experimental group and the other twenty in the control group. The author uses a rigorous probability sampling strategy, simple random sampling. With this strategy, each participant has an equal probability of being selected from the population, so that the sample can be a fine representative of the population (Creswell, 2008).

This study uses adapted questionnaires to identify autonomy-related aspects as the variables and for data measurement. The same questionnaires are administered to both the control and experimental groups at the beginning and end of the study. Pre-tests and post-tests are also completed by both groups to see how the treatment affects the experimental group and to obtain a comparison with the control group.

The result shows that the development of learner autonomy can be seen from the statistical analysis of the pre- and post-tests compared across the two groups. In conclusion, the author states that autonomous learning can be fostered through certain class activities or treatments. It is also suggested that teachers apply similar activities in their own classes in order to make students autonomous and independent in their learning.

It is quite clear that the author bases his study on an objectivist epistemology under Positivism, because by conducting the study experimentally he tries to see whether or not autonomous learning behaviour can be fostered. The approach used is statistical analysis, interpreting the questionnaire and the pre- and post-test results with a statistical tool such as SPSS.
Annotated Bibliographies

[1] David Gardner. (2007). Understanding Autonomous Learning: Students' Perceptions. Paper presented at the Proceedings of the Independent Learning Association 2007 Japan Conference: Exploring theory, enhancing practice: Autonomy across the disciplines. Kanda University of International Studies, Chiba, Japan, October 2007. [Online] available at http://www.independentlearning.org

This paper reports research conducted by the author at the Centre for Applied English Studies of the University of Hong Kong. The participants are 30 students from engineering faculties studying at the centre to improve their ESP. The research aims to look for evidence of increasing comprehension in students' definitions of self-access learning as they became more familiar with it over a period of time, through exposure to explanations, peer discussion, and hands-on experience. The author uses an action research approach with open-ended questionnaires as the data collection tool. Three questionnaires are used in this study: before and after the class orientation (Q1 and Q2), and at the end of the course (Q3). A comparison of the responses to Q1 and Q2 is used to show the effect of the teacher's orientation session about self-access on students' perceptions. A comparison of the responses to Q2 and Q3 is used to show the impact on perceptions of the students' 10-week period of hands-on experience with self-access learning. The research concludes that there is no evidence of increasing understanding of autonomous learning among the students.

[2] Wu Shao-yue. (2009). A study of network-based multimedia college English autonomous teaching and learning model. 2009, Volume 7, No. 7.

This article reports an experimental study with a social constructivist approach, aiming to compare the teaching effectiveness of a network-based multimedia autonomous teaching model with the traditional model. The participants are 188 freshmen, non-English majors in the 2006 cohort of Guangdong University of Technology (157 male students and 31 female students). They were divided into two groups: an Experimental Group (EG), given the new teaching model, and a Control Group (CG), given the traditional model. The data collection instruments are a paper test, questionnaires, and interviews, the latter used to strengthen the findings. The result shows that the experimental group achieves better scores on the language test than the other group, and that the network-based multimedia autonomous learning and teaching model can successfully meet learners' needs to utilise their language learning strategies and learn more efficiently. [3] Guo, N & Willis, R.
(2004). An Investigation of an Optimizing Model of Autonomous Learning of TEFL using Multimedia and the Internet technologies (ICT). Retrieved on May 2, 2010 from: http://www.aare.edu.au/05pap/guo05086.pdf

This article reports a study conducted with a contrastive teaching experiment approach at Shanxi University of Finance and Economics (SUFE). The participants are students of the 2004 cohort, divided into two groups based on an English test conducted at the beginning of the study. The experimental group is placed in a situation designed to make them aware of the desirability of becoming autonomous learners and to believe that they can develop a high level of competence in listening and speaking as a result of their efforts. The control group is taught in the traditional way. The two-year study finds that most students can manage and take charge of their own learning. Students' motivation to study is aroused, and most of them volunteer to find appropriate sources or learning materials outside their class activities.

[4] Ponton, M.K., Derrick, M.G., & Carr, P.B. (2005). The Relationship between Resourcefulness and Persistence in Adult Autonomous Learning. Adult Education Quarterly, 55(116).

This article explains a study investigating the tenability of a proposed path-analytic model relating resourcefulness and persistence in the context of adult autonomous learning. The data are collected from a non-probability sample of 492 American adults and analysed with valid and reliable measures of resourcefulness and persistence. The authors use the ILR and ILP questionnaire models, designed specifically to investigate participants' autonomous learning attitudes. The study concludes that an adult's persistence in autonomous learning is related above all to the anticipation of future rewards from present learning.

[5] Murray, D. (2000). Autonomous Learning Behaviours: A fulcrum for course design, implementation and evaluation with larger classes. Retrieved on May 12, 2010 from: http://kuir.jm.kansai-u.ac.jp/dspace/bitstream/10112/1391/1/KU-1100GI-20080331-07.pdf

In this article, the author reports the findings of a study conducted to seek a better way of developing students' autonomy in larger classes. The author uses Task-Based Language Teaching, the Milestone and Swiss versions of the European Language Portfolio, and CALL/e-learning as the teaching approach. Participants are learners in three different year groups, from various universities and backgrounds, studying in a language program in several classes of ten to fifty students in the 2006-2007 academic year. To collect the data, the author uses questionnaires with a categorised set of questions on autonomous learning behaviours, one pre-test at the beginning of the program, and one post-test at the end. Data analysis indicates modest gains in the use of the target learning behaviours; however, the data are quantitative, context-dependent, and based on the learners' subjective impressions, which could limit their use in rigorous statistical analysis. From the student interaction assigned in this study, it can be seen that the study is applied research with a social constructivist approach.
DISCUSSION

Developments in language learning have forced teachers and learners to modify their roles in teaching and learning practice. Autonomous learning, as an emerging model in language learning, also contributes to this modification. The concept demands that learners be more active, independent, and fully responsible in their learning, while at the same time reducing the teacher's portion of the teaching and learning process.

Many experts have contributed to the development of autonomous learning, especially in language learning at the higher education level. Most of them agree that in the autonomous learning concept, learners are the centre and teachers should play their roles in a limited but meaningful way (Little, 1993; Dickenson, 1995; Benson, 1997; Littlewood, 1999). In addition, Van Lier (1996) states that learners must be fully responsible for their learning and for deciding what to learn in order to succeed in their learning. Meanwhile, Little (1995) also mentions that in an autonomous context there must be clear objectives, good initiative, and the ability to measure or evaluate the process and results of learning. Chan (2001, p. 285) supports these opinions and describes the autonomous learner as "being actively involved at all levels of learning, from goal-setting, defining content and working out mechanisms for assessing achievement and progress and points out that the locus of control for decision-making shifts from teacher to student". Dickenson (1995, p. 330) concludes on the matter of autonomy and describes autonomous learners as "those able to discover how to clearly identify the learning objectives of the course, formulate their own learning objectives, consciously select and implement appropriate learning strategies, identify strategies that are effective/inappropriate and substitute others, and develop a rich repertoire of effective strategies".

Given these opinions, becoming an autonomous learner seems difficult. Students need to be aware of their status as adult learners who must be autonomous in their learning. The problem that motivates my research is the awareness of polytechnic students, part of higher education in Indonesia, of autonomous learning. This problem seems to affect most students in this institution, since they are used to the traditional, teacher-centred model of education. These students have mostly been educated in traditional settings at the primary and secondary levels. They are trained to study according to whatever is designed and directed by the teacher, as happens in most Asian countries, so that most students can be categorized as reactive learners (Littlewood, 1999).
The general idea of the qualitative approach is to interpret the behavior and intentions of participants regarding the problem being investigated, in this case autonomous learning behavior. Most such studies try to portray the natural context of autonomous learning and sometimes search for larger patterns to reach a deeper understanding of the problem (Ary, Jacobs & Sorensen, 2010). The researchers and participants are involved in the research for a period of time to maintain interaction between them. By doing so, the researchers, as the primary instrument in the research, hope to gather enough data to derive proper analysis and valid findings. The report of a qualitative study is usually written in descriptive and holistic language, without statistical data. At this point, the qualitative approach arrives at its primary research concept, which considers reality to be socially constructed (Ary, Jacobs & Sorensen, 2010), as applies to autonomous learning.

Unlike the qualitative approach, the quantitative approach stands on an objectivist epistemology with a positivist perspective. This perspective assumes that knowledge should reflect objective reality. If teachers are considered the source of that objective reality, then learning can only occur as a transmission of knowledge from them to the learners (Benson & Voller, 1997). Congruent with this view, of course, is the maintenance and enhancement of the traditional classroom, where teachers are the purveyors of knowledge and wielders of power, and learners are seen as 'containers to be filled' with the knowledge held by teachers.

Regarding the purpose of study, the quantitative approach usually intends to generalize findings, predict behavior, or provide a causal explanation of the research problem. A quantitative study is grounded in theory. In practice, it is characterized by data manipulation and controlled variables, which are mostly reduced to numbers in order to find relationships or correlations among the variables as the study's conclusion. A quantitative report is written precisely, using abstract language (Ary, Jacobs & Sorensen, 2010). Quantitative researchers believe in an objective reality that the study must uncover. The analysis in this approach is based on logical empiricism; therefore, the inquiry is kept as value-free as possible (Ary, Jacobs & Sorensen, 2010). Applied to autonomous learning, this approach can be seen as an attempt to measure the degree of autonomy among student participants without any intervention.
2019-05-08T13:28:39.753Z
2012-09-01T00:00:00.000
{ "year": 2012, "sha1": "a1a703dd4ab9fcaf8fe5e38ac824e3e084197912", "oa_license": "CCBYNC", "oa_url": "http://ejournal.unp.ac.id/index.php/komposisi/article/download/3936/3169", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "a1a703dd4ab9fcaf8fe5e38ac824e3e084197912", "s2fieldsofstudy": [ "Education", "Linguistics" ], "extfieldsofstudy": [ "Psychology" ] }
118367885
pes2o/s2orc
v3-fos-license
BCS-BEC crossover and the disappearance of FFLO-correlations in a spin-imbalanced, one-dimensional Fermi gas

We present a numerical study of the one-dimensional BCS-BEC crossover of a spin-imbalanced Fermi gas. The crossover is described by the Bose-Fermi resonance model in a real space representation. Our main interest is in the behavior of the pair correlations, which, in the BCS limit, are of the Fulde-Ferrell-Larkin-Ovchinnikov type, while in the BEC limit, a superfluid of diatomic molecules forms that exhibits quasi-condensation at zero momentum. We use the density matrix renormalization group method to compute the phase diagram as a function of the detuning of the molecular level and the polarization. As a main result, we show that FFLO-like correlations disappear well below full polarization close to the resonance. The critical polarization depends on both the detuning and the filling.

I. INTRODUCTION

Ultracold atoms provide a unique opportunity to study basic many-body problems both in equilibrium and in non-equilibrium situations [1]. A particularly appealing feature of these systems is the possibility to change the interaction strength over a wide range via Feshbach resonances. In a two-component Fermi gas, this allows one to study the crossover from BCS-pairing to a Bose-Einstein condensate (BEC) of strongly bound molecules [1-3]. In a situation in which the two states involved in the pairing are equally populated, this is a smooth crossover. By contrast, in the case of an imbalanced gas, unconventional superfluid ground states such as the Fulde-Ferrell [4] or Larkin-Ovchinnikov [5] (FFLO) state with finite-momentum pairs, a Sarma phase with two Fermi surfaces [6], or a mixture consisting of a BEC of strongly bound pairs and a Fermi gas of unpaired atoms have been proposed [7-11]. Experimentally, spin-imbalanced two-component Fermi gases have first been realized at MIT [12-14] and Rice [15,16]. From the spin-resolved density profiles and, in particular, the existence of a lattice of quantized vortices in a rotating gas [13], it is possible to observe the disappearance of a conventional superfluid in the center of the cloud with increasing imbalance. Assuming that a local density approximation applies, this allows one to determine the breakdown of BCS-type pairing beyond a critical imbalance p_c^{3D} that is close to p_c^{3D} ∼ 0.4 for the uniform gas at unitarity in three dimensions [17,18]. Unfortunately, in the three-dimensional (3D) case and in the unitary regime, where the scattering length is much larger than the average interparticle spacing, it is difficult, both experimentally and theoretically, to establish unambiguously the existence of phases with unconventional pairing that are expected when the balanced (p = 0) superfluid becomes unstable. The experimentally observed density profiles [17] at the unitary point are consistent with the prediction of a first order transition from a balanced superfluid to a normal state, in which the two spin components each form a Fermi liquid [2]. This theoretical prediction is based on a variational ansatz for the ground state [18,19], which excludes unconventional superfluid phases. It is therefore of considerable interest to study models for which the phase diagram of the imbalanced gas along the BCS-BEC crossover is accessible by methods that are sensitive to states with complex order. In the case of one dimension, such powerful numerical and analytical tools are indeed available.
In fact, for both the attractive fermionic Hubbard model [20] and the associated continuum model [21,22], there is an exact solution that can be extended to the imbalanced case [23-27]. The ground state phase diagram consists of three phases: a balanced superfluid, a polarized intermediate phase and a fully polarized, normal Fermi gas [28]. In the weak coupling limit, both a solution of the Bogoliubov de Gennes equations [29] and bosonization [30] indicate that the polarized intermediate phase is an FFLO-like state at any finite imbalance. This prediction has been recently verified by density matrix renormalization group (DMRG) [31-35] and Quantum Monte Carlo (QMC) calculations [36,37]. It applies both to the continuum case and in the presence of an optical lattice, and the FFLO state exists in mass-imbalanced systems as well [38-41]. Moreover, the one-dimensional (1D) FFLO state is also stable in the inhomogeneous case that arises in the presence of a trapping potential [31,32,37]. It is important to point out that these methods give access to the regime of strong interactions as well, where the energy scale of the superfluid states is of the same order as the Fermi energy. In the context of cold atoms, this is the relevant regime because in weak coupling, nontrivial order only appears at unobservably low entropies of s ≃ T_c/T_F ≪ 1 per particle. As realized by both Fuchs et al. [42] and Tokatly [43], however, attractive fermion models are not sufficient to account for the full physics of the BCS-BEC crossover in one dimension. Indeed, in the strong coupling limit, they describe a Tonks-Girardeau gas of dimers. They are unable, therefore, to cover the regime of weakly interacting bosons that is reached when the size of the two-particle bound state is smaller than the oscillator length of the transverse confinement. In this limit, the hardcore constraint of the tightly bound dimers becomes irrelevant. Moreover, in models of attractively interacting fermions there is only one phase at a finite spin imbalance below saturation, namely the FFLO phase [23-25, 30-34, 36]. As we shall emphasize in this work, the generic phase diagram of a more general two-channel model is much richer, in particular, close to resonance. A description of the 1D BCS-BEC crossover that properly accounts for the coexistence of fermions and bound pairs in the imbalanced case can be achieved in the framework of the Bose-Fermi resonance model [44,45] in which two fermions in an open channel couple resonantly to a diatomic molecule in a closed channel. The associated amplitude due to the off-diagonal coupling between the open and closed channel determines the intrinsic width of the Feshbach resonance [1]. In a continuum description, the 1D Bose-Fermi resonance model has been studied by Recati et al. [46] for the special case of a vanishing imbalance, where a smooth BCS-BEC crossover occurs. Its BCS side is described by attractively interacting fermions while on the BEC side, one has a repulsive Bose gas of dimers. In the limit of a broad Feshbach resonance, the transition between the two regimes is sharp, yet continuous. In particular, the quasi-long range superfluid order of the ground state does not change along the full BCS-BEC crossover. As realized recently by Baur et al. [47] in a study of the associated three-body problem, however, the situation is more complex and interesting in the case of an imbalanced gas.
There, FFLO-physics with spatially modulated pair correlations that are present on the BCS-side of the crossover must disappear at a critical point, giving room to a Bose-Fermi mixture that is a conventional superfluid, where quasi-condensation appears at zero total momentum. At the three-body level, this critical point shows up as a change in the symmetry of the ground state wavefunction [47]. As for studies on the many-body physics of the 1D Bose-Fermi resonance model, we refer the reader to Refs. [46,48-51]. Bosonization has been applied to the balanced case in Refs. [48,49], and Bethe ansatz results for the imbalanced case have been presented in Refs. [50,51]. FFLO correlations, however, have not been discussed in either of these studies. Experimentally, the formation of molecules in Fermi gases that are tightly confined in two transverse directions has been demonstrated by the ETH group [52], using a balanced mixture. The binding energy of molecules is finite for an arbitrary sign of the 3D scattering length a, in contrast to the situation without confinement, where the two-particle binding energy vanishes on the BCS side of negative a. The objective of this work is to study a spin-imbalanced Fermi gas described by the Bose-Fermi resonance model Hamiltonian. We use a real-space representation with a finite, incommensurate filling and map out the zero temperature phase diagram by computing pair correlations as a function of polarization and detuning. We find that FFLO correlations [4,5] dominate in a wide parameter range, and we clarify how the presence of molecules affects the stability of this phase. Qualitatively, the presence of molecules binds a certain fraction of minority fermions into molecules, reducing the overall number of pairs in the FFLO channel. As a main result, we determine the critical polarization in the crossover region at which FFLO correlations disappear, and its dependence on filling and detuning. Beyond this critical polarization and below saturation, the system is a superfluid of composite bosons in the molecular channel immersed into a gas of either fully or partially polarized fermions. As a numerical tool, we employ the density matrix renormalization group (DMRG) method [53-55]. This exposition is organized as follows. First, in Sec. II, we introduce the model Hamiltonian and discuss its limiting cases. Further, in Sec. II B, we analytically solve the two-body problem. In Sec. III, we present our DMRG results for the pair correlations, the momentum distribution, and the number of molecules as a function of filling, polarization and detuning. We close with a summary and discussion in Sec. IV.

A. Hamiltonian

We use a minimal Hamiltonian for the one-dimensional (1D) BCS-BEC crossover [46,47] in a real-space version, incorporating the kinetic energies of fermions and molecules, the detuning of the molecular level, as well as the coupling between the fermions and molecules:

H = -t Σ_{i,σ} (c†_{i,σ} c_{i+1,σ} + h.c.) - t_mol Σ_i (m†_i m_{i+1} + h.c.) - (ν + 3t) Σ_i m†_i m_i + g Σ_i (m†_i c_{i,↓} c_{i,↑} + h.c.).

Here, c_{i,σ} (c†_{i,σ}) is a fermionic annihilation (creation) operator acting on site i, while m†_i creates a composite boson on site i. The boson energy is shifted with respect to that of single fermions by an effective detuning ν + 3t. It is chosen such that the energies for adding two fermions or one boson, each at zero momentum, coincide at resonance ν = 0. The amplitude for the conversion of two fermions into a closed channel molecule and vice versa is given by the Feshbach coupling constant g.
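To make the structure of this Hamiltonian concrete, the following minimal Python sketch (illustrative code, not the method used for the many-body results below) diagonalizes Eq. (1) in the two-particle sector with total momentum zero, where the q = 0 molecule hybridizes with the pair states c†_{k,↑} c†_{-k,↓}|0⟩. The overall sign convention for ν (positive ν favoring the molecule, as in the text) and all parameter values are assumptions made for the example.

import numpy as np

def binding_energy(nu, g, t=1.0, L=600):
    """Two-body binding energy of Eq. (1) in the zero-total-momentum sector."""
    t_mol = t / 2.0                            # mass ratio 2:1 between molecules and fermions
    k = 2.0 * np.pi * np.arange(L) / L         # Brillouin-zone momenta
    H = np.zeros((L + 1, L + 1))
    H[0, 0] = -2.0 * t_mol - (nu + 3.0 * t)    # q = 0 molecule, with the level shift of Eq. (1)
    H[1:, 1:] = np.diag(-4.0 * t * np.cos(k))  # pair state (k up, -k down): -2t cos k per fermion
    H[0, 1:] = H[1:, 0] = g / np.sqrt(L)       # Feshbach term after Fourier transformation
    E0 = np.linalg.eigvalsh(H)[0]              # lowest eigenvalue of the sector
    return -4.0 * t - E0                       # binding relative to the two-fermion threshold

# at resonance and weak coupling this should approach eps* = 2t g'^{4/3}/2^{2/3} ~ 0.2 t for g = 0.5 t
print(binding_energy(nu=0.0, g=0.5))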
For a negative detuning ν < 0 of the molecular level, the Feshbach coupling gives rise to an attractive two-particle interaction g²/ν < 0 between the fermions [46]. Near resonance ν ≃ 0, this dominates any direct background interaction U_bg between the two fermionic species, which is therefore neglected from the outset. The hopping matrix elements for fermions and molecules are denoted by t and t_mol, respectively. We further set t_mol = t/2, which accounts for the mass ratio of 2:1 between molecules and fermions. L is the number of sites. Further, n_{i,σ} = c†_{i,σ} c_{i,σ}, yielding the number of fermions of each species as N_σ = Σ_i n_{i,σ}, with N_f = N_↑ + N_↓ and the pseudo-spin index σ = ↑, ↓. The only conserved particle number is N = N_f + 2N_mol, with N_mol = Σ_i m†_i m_i. We use n = N/L to denote the filling factor and p = (N_↑ - N_↓)/N as a measure of the polarization, which we shall also sometimes refer to as imbalance. Note that at maximum one molecule can sit on a single site, i.e., the molecules behave as hard-core bosons.

B. Two-body problem and spin gap

Scattering amplitude and bound state energy

In this section, we calculate the effective interaction between two fermions that is mediated by the molecules at the two-body level. Following the method outlined in [46], the bound state energy ǫ_b > 0 of two fermions is determined by the pole condition D_0^{-1}(k = 0, ω = -ǫ_b) - Π(k = 0, ω = -ǫ_b) = 0, where D_0(k, ω) is the bare molecular propagator and Π(k, ω) is the self-energy of the closed channel propagator (as usual, ω and k denote frequency and momentum, respectively). The resulting equation admits a unique, real solution ǫ_b > 0 irrespective of the sign of the detuning ν. Of particular interest is the binding energy ǫ* = ǫ_b(ν = 0) at resonance. Except for the scale 2t set by the bandwidth, it only depends on the dimensionless Feshbach coupling constant g′ = g/(2t). For small coupling strengths g′ ≪ 1, it is given by ǫ*/(2t) = g′^{4/3}/2^{2/3}, while ǫ*/(2t) = g′ for g′ ≫ 1. The ratio ǫ*/(2t) = 1/(r*)² defines the length r*, which is essentially the size of the bound state (in units of the lattice spacing) at resonance. In terms of this characteristic length, the condition for a broad Feshbach resonance is simply n r* ≪ 1 [46]. Taking ǫ*(g′) as a characteristic energy scale, the equation for the dimensionless binding energy Ω = ǫ_b/ǫ* for an arbitrary value of the dimensionless detuning ν′ = ν/ǫ* can be written in the form ν′ = Ω - 1/√Ω (up to corrections that are small as long as the binding energy is small compared to the bandwidth), which is easily solvable for the bound state energy Ω(ν′) as a function of the detuning. The definition of Ω guarantees that Ω ≡ 1 at resonance, irrespective of the value of the Feshbach coupling g′. In Figure 1, we show the dependence of the binding energy Ω(ν′) on the detuning for three values of g/t = 0.1, 0.5, 1. As suggested by the preceding discussion, the Ω = Ω(ν′)-curve is practically independent of g′. On the BCS side, where ν′ ≪ -1, one obtains a very small binding energy √Ω = √(4 + ǫ*/(2t))/(2|ν′|) ≪ 1, approaching √Ω = 1/|ν′| for small values g′ ≪ 1 of the Feshbach coupling. In the BEC regime of strongly positive detuning ν′ ≫ 1, the binding energy follows the detuning, i.e., the energy of the molecular state to leading order. As a result, the closed channel fraction is close to one, as expected in the BEC limit. The dimensionless binding energy Ω = (r*/r_b)² determines the size r_b of the bound state normalized to its value at resonance. For Ω ≫ 1, therefore, this size is much smaller than the lattice spacing unless g′ ≫ 1.
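As an illustration, the dimensionless equation above can be solved with a standard root finder. The sketch below assumes the form ν′ = Ω - 1/√Ω quoted above (valid for binding energies small compared to the bandwidth); since the left-hand side of the root equation is monotonic in Ω, the solution is unique for any detuning, consistent with the statement above.

import numpy as np
from scipy.optimize import brentq

def Omega(nu_prime):
    """Dimensionless binding energy Omega(nu') from nu' = Omega - 1/sqrt(Omega)."""
    f = lambda Om: Om - 1.0 / np.sqrt(Om) - nu_prime
    return brentq(f, 1e-12, abs(nu_prime) + 10.0)  # f is monotonic, so the bracketed root is unique

for nup in (-10.0, 0.0, 10.0):
    print(nup, Omega(nup))
# BCS side: Omega ~ 1/nu'^2; resonance: Omega = 1 exactly; BEC side: Omega ~ nu'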
Spin gap

In the previous section, we argued that the binding energy Ω and, in particular, ǫ* are important quantities to characterize the 1D BCS-BEC crossover on the two-body level. We next discuss the relation of Ω to the spin gap ∆, which we calculate with DMRG as a function of filling, detuning, and the Feshbach coupling. The connection between the binding energy Ω and the spin gap has previously been pointed out by Orso [23]. The spin gap is computed from ∆(L) = E_0(S_z = 1) - E_0(S_z = 0), where E_0(S_z) is the ground-state energy of a system of length L in the subspace with S_z = (N_↑ - N_↓)/2. We then extrapolate the finite-size data for ∆(L) in system size to the thermodynamic limit L → ∞. Figure 1 includes the DMRG data for the spin gap at a filling of n = 0.1 and for g = t (squares). Evidently, the spin gap coincides with the two-fermion binding energy Ω not only on the BEC side ν′ > 1, where this is expected, but also far into the BCS regime. Of course, for very weak coupling, this agreement must eventually be violated because the spin gap ∆ ≃ exp[-π²/(2|γ|)] depends on the filling n. In particular, it is exponentially small in the dimensionless coupling constant |γ| = 1/(2n|a_1|) ≪ 1 (a_1 is the effective scattering length in one dimension, see [42]), while the two-particle binding energy ǫ_b = ǫ*/ν′² is independent of n and vanishes algebraically with the detuning in this regime. Near resonance, the spin gap is identical with the two-particle binding energy in the low-density limit n r* ≪ 1, as shown by Fuchs et al. [42]. With increasing values of the filling, however, the spin gap increases, as is evident from Fig. 2. The many-body spin gap is therefore clearly distinct from the two-particle binding energy. To illustrate this behavior, we display ∆ as a function of filling at g = t in Fig. 2(a) and as a function of g at n = 0.6 in Fig. 2(b), both at resonance ν = 0. ∆ = ∆(g) at n = 0.6 also grows with the Feshbach coupling g.
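The extrapolation step can be sketched as follows, assuming a leading finite-size correction linear in 1/L, a common choice for gaps computed with open boundary conditions; the ∆(L) values below are placeholders, not data from this work.

import numpy as np

L_vals = np.array([40.0, 80.0, 120.0, 160.0])
gaps = np.array([0.262, 0.241, 0.234, 0.230])             # hypothetical Delta(L) in units of t
slope, intercept = np.polyfit(1.0 / L_vals, gaps, deg=1)  # fit Delta(L) = Delta_inf + a/L
print("Delta(L -> infinity) ~", intercept)                # intercept at 1/L = 0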
For smaller binding energies, on the BCS side, the effective atom-molecule interaction becomes attractive and also nonlocal, indicating that the picture of bosons that can coexist with unpaired fermions is no longer applicable [47,57]. It is instructive to compare the regime ν ′ ≫ 1 of the lattice model studied here to the corresponding continuum model studied in Ref. [46]. In the latter case, the relevant dimensionless interaction parameter γ B = g B /n B (n B denotes the density of molecules) can be tuned to values small compared to one even in the deep molecular limit because g B ∼ |ǫ b | −5/2 vanishes as the two-particle binding energy |ǫ b | becomes very large. As a result, the effective Luttinger exponent K(γ B ) is then much larger than one and one obtains a weakly interacting gas of molecules, whose one-particle density matrix ρ mol ij decays as |ρ mol ij | ∝ x −(1/2K) with an exponent 1/(2K) that is close to zero. In the continuum and for ν ′ ≫ 1, therefore, the weakly interacting molecule gas exhibits almost true long range order. This regime, however, is not reachable in the framework of the model Eq. (1), because even in the deep molecular limit ν ′ ≫ 1, where the size of the two-particle bound state r b (in units of the lattice spacing, see the definition of r b given above) is much smaller than one, we still keep only the eigenvalues 0 and 1 for the local molecule occupation number n mol i = m † i m i . In reality, however, more than one closed-channel molecule could sit on a lattice site in this limit because the lattice spacing is much larger than r b . We shall not further discuss or pursue this question in the present work. Consequently, while we will be able to see the suppression of FFLO physics due to molecule formation, which is the main focus of our present work, Eq. (1) does not describe the full BCS-BEC crossover at a finite imbalance that should feature a weakly interacting BEC in the limit ν ′ ≫ 1. (ii) The BCS limit, ν ′ ≪ −1 -Here, N mol ≈ 0. Virtual transitions into the molecular state give rise to a weak attractive on-site interaction U = g 2 /ν between fermions. At a finite polarization p > 0, we thus expect FFLO-like correlations with real-space oscillations in the modulus of the pair-pair correlations For small polarizations, these correlations are described by the sine-Gordon theory whose ground state is an array of domain walls, where the superfluid order parameter changes by π [24,29,30]. For larger polarizations, the domain walls merge and the order parameter acquires a purely sinusoidal form with a power law decay as a function of the separation |i−j| = x. The associated wave vector is fixed by the density imbalance via the difference of the Fermi-wave vectors k F,σ = πN σ /L of the majority(minority) spins. More precisely, as shown by Sachdev and Yang [58] from a generalized Luttinger theorem for Hamiltonians of the form (1), the difference k F,↑ − k F,↓ of the Fermi wave vectors of the interacting system is quite generally fixed by the imbalance p as in Eq. (9). While the N σ are not conserved separately in the case where the bosons are condensed, this theorem implies that the wave vector of superfluid order in the fermions is given by Eq. (9), independently of the detuning, i.e., the strength of the interaction. In the notation of Ref. [9], the associated FFLO state is thus commensurate. The exponent α(p) of the power-law decay has a quite interesting dependence on polarization and interaction strength, first discussed by Yang [30]. 
At vanishing polarization p = 0, it is fixed by the Luttinger parameter K_c > 1 of the attractive 1D Fermi gas in the charge sector via α(p = 0) = 1/K_c. In the limit of small polarizations, bosonization gives α(p > 0) = 1/K_c + 1/2 [30], i.e., a discontinuous jump of α(p) at p = 0^+. This dependence has recently been verified in Ref. [34], using the attractive 1D Hubbard model.

III. DMRG RESULTS FOR THE IMBALANCED CASE

In this section, we present our DMRG results for the number of molecules, the pair correlations, the momentum distribution function (MDF) of both fermionic components, as well as the MDF of the molecules, all as a function of polarization and detuning. As a main result we show that, while FFLO correlations are present in the BCS limit, as the number of molecules increases, the FFLO correlations disappear well below full polarization. Upon increasing the polarization at a fixed detuning and in the crossover regime, the system thus first has FFLO-like correlations, and then undergoes two phase transitions at polarizations p_1 and p_2. For p_1 < p < p_2, pairing at zero momentum coexists with FFLO correlations, while for p_2 < p < 1, the system behaves as a Bose-Fermi mixture with only one fermionic component, the majority spins. Therefore, the large-p phase is divided into a superfluid of molecules immersed into either a gas of partially polarized fermions or fully polarized fermions below saturation. We further establish that the molecular and pair correlations are identical for p < p_1 in the sense that, first, they feature instabilities at the same wave vector and, second, their highest occupied natural orbitals are identical. Our results are summarized in phase diagrams for g = t/2 and g = t that are presented and discussed in Sec. III C.

A. Number of molecules

To identify the crossover region characterized by a finite density of both fermions N_f/L > 0 and molecules N_mol/L > 0, we first calculate N_mol as a function of the detuning ν at both g = 0.1t and g = t. The results are depicted in Fig. 3. We see that in the balanced case, the crossover region is between -3t ≲ ν ≲ t for g = 0.1t and in the range -4t ≲ ν ≲ 4t for g = t. Moreover, the increase of N_mol as ν is moved from the BCS to the BEC side occurs over an increasingly wide range of detunings with increasing density n. This is consistent with the result that an abrupt change from a purely fermionic system (N_mol ≈ 0) to a purely molecular one (N_f ≈ 0) only exists in the low-density limit of a broad Feshbach resonance n r* ≪ 1, as discussed previously in Refs. [42,46]. An obvious, but important consequence of the off-diagonal Feshbach coupling g is that the filling n_f = N_f/L in the fermionic channel depends on the detuning and the Feshbach coupling, ranging from n_f = n in the ν′ ≪ -1 limit to n_f = 0 in the BEC limit ν′ ≫ 1. Therefore, the Fermi wave vectors k_{F,↑/↓} vary, too. This is consistent with our numerical observation from Fig. 2, Sec. II B, that the spin gap is a function of ν, n, and g. The effect of the imbalance at some generic density n [n = 0.6 in Figs. 3(b) and (d)] is to make the window in which molecules and both fermionic species coexist with comparable densities narrower. In the g = 0.1t case, the detuning at which 2N_mol ≈ N is shifted towards the BCS regime ν < 0 as the polarization increases. Figure 4(a) shows the number of molecules 2N_mol/N as a function of polarization and for several values of the detuning ν at g = t and n = 0.6.
As soon as the line N_↓ = 0 is reached at some polarization p_2, no pairing of fermions is possible anymore, and we are left with a BEC of molecules immersed into a fully polarized gas of fermions. This sets an upper limit, well below saturation N = N_↑, for the emergence of FFLO-like correlations. In fact, in Sec. III C, we shall see that the FFLO regime actually disappears well below p_2. It is further instructive to compare the polarization dependence of all particle densities, i.e., majority fermions N_↑/N, minority fermions N_↓/N, and molecules N_mol/N, in the crossover region and before resonance at ν = -t, shown in Fig. 4(b). The large-polarization region, in which N_↓/N ≈ 0, is consequently characterized by a linear dependence of N_mol and N_↑ on the polarization, with the slope being independent of the detuning ν. Note that from comparing the data for L = 40 and L = 120 sites, we conclude that finite-size effects are negligible for the parameters considered. To determine p_2, we compute the polarization curves p = p(h) for a given detuning and filling n, where h denotes an effective 'magnetic field', coupled to the Hamiltonian through a Zeeman-like term that favors a finite imbalance p > 0. The results for g = t and n = 0.6 are displayed for ν/t = -3, -1, 0, 1 in Fig. 5. For ν = -3t, the p(h) curve has no features, and indicates the presence of a very small spin gap. At small polarization, p = p(h) increases linearly with h, consistent with recent studies of the magnetization process of attractively interacting fermions [59,60]. At ν = -t, we first identify the presence of a large spin gap (identified by 2h_c), and two kink-like features at finite polarizations p_1 and p_2. Essentially, at p > 0, the system is a multi-component Luttinger liquid, and the presence of kinks indicates the disappearance or appearance of one component. It is thus easy to guess that the kink at larger polarizations, i.e., at p_2, is associated with the depletion of the minority fermions, i.e., N_↓ ≈ 0 for p > p_2. This is consistent with our results for the particle densities shown in Fig. 4(b) and will be further corroborated by the discussion of the momentum distribution functions (see Sec. III B 1). In view of the results for the BCS-BEC crossover of the imbalanced Fermi gas in 3D (see, e.g., Refs. [10,11]), one might speculate about the possibility that phase separation could appear also in one dimension. However, we stress that the critical fields h_1 and h_2 corresponding to p_1 and p_2 are well separated. In particular, a finite-size scaling analysis of the fields h_1 and h_2 for ν = -t shows that h_2 - h_1 > 0 remains finite in the limit of L → ∞. This rules out the possibility of a jump in p(h) and thus of phase separation in a uniform system. The nature of the first kink at p_1 in Fig. 5(b) will become obvious from the analysis of the pair correlations to be discussed in Sec. III B. As we shall see, below p_1, we have pairs at a finite momentum (i.e., the 1D FFLO state), molecules and the two fermionic components, while at p > p_1, additional pairs at zero momentum are formed. On resonance, i.e., at ν = 0, we still identify a kink at p_2, while on the BEC side (ν = t), the polarization curve is smooth, with p(h) ∝ √(h - h_c), where the critical field h_c for the onset of a finite polarization p ≠ 0 is in fact connected to the spin gap by the simple relation 2h_c = ∆ [23]. This behavior is characteristic for a band-filling transition of a single component, which in this case is the majority spins.
Note that the same square-root dependence in magnetization curves has been found for a 1D Bose-Fermi mixture [51].

Momentum distribution functions for pairs, molecules, and fermions

To address the key questions of (i) the existence of FFLO-like correlations and (ii) their stability against the presence of molecules, we compute the momentum distribution function of, first, the pairs (n_k^{pair}) and, second, the molecules (n_k^{mol}) by taking a Fourier transformation of the real-space data for Eq. (7) and of the one-particle density matrix of the molecules, ρ^mol_{ij} [compare Eq. (6)], respectively. In the following we focus on g = t, unless otherwise stated. The results for n_k^{pair} and n_k^{mol} at a filling of n = 0.6 are shown in Fig. 6 and Fig. 7, respectively. We choose three values of the detuning: ν = -3t [panels (a)], which is on the BCS side, ν = -t [panels (b)] in the crossover region, and finally ν = 0 [panels (c)] on resonance. It is instructive to contrast the behavior of these quantities with that of the momentum distribution functions of majority and minority spins, i.e., n_k^{↑,↓}, displayed in Fig. 8. n_k^σ is the Fourier transform of the one-particle density matrix ρ^σ_{ij} = ⟨c†_{i,σ} c_{j,σ}⟩. Starting with the Fourier transform of the pair correlations, we note that in the BCS limit and as the polarization is increased, we observe quasi-coherence peaks at a finite momentum Q > 0 [see Fig. 6(a)]. Yet, these peaks are weak and the pairs' MDF resembles that of a weakly interacting two-component Fermi gas described by the attractive Hubbard model [note that the finite-Q peak is more pronounced in the molecules' MDF, Fig. 7(a)]. The rather weak peaks are probably a consequence of the fact that the pair correlations differ from a pure cosine [as suggested by Eq. (8)]. This is certainly the case at small values p ≪ 1 of the polarization (see, e.g., Ref. [29] and the discussion in Sec. II C). The position Q of the maximum in n_k^{pair} follows k_{F,↑} - k_{F,↓}, as we illustrate in the insets of panels (a) and (b) in Fig. 6. This, as usual, is a defining feature of the 1D FFLO state. The quasi-coherence peaks are far more pronounced in the crossover region, i.e., ν = -t, which is of primary interest in this work [see Fig. 6(b)]. We observe the breakdown of FFLO-like correlations at a finite polarization 0 < p_c^{1D} < 1. This critical polarization p_c^{1D} is smaller than the upper limit p_2 discussed above. An emergent feature of the pairs' MDF in the crossover region ν ∼ -t is the coexistence of peaks at both Q = 0 and Q > 0 at intermediate polarization [see, e.g., the dotted line in Fig. 6(b)]; we find that this coincides with the first kink seen at p_1 in the polarization vs. magnetic field curves shown in Fig. 5(b). Therefore, we conclude that the first phase transition, and thus the boundary of the 1D FFLO phase in the crossover regime and at p > 0, is the one at p = p_1, where pairing at Q = 0 starts to contribute, effectively adding an additional quasi long-range order parameter to the system. We can further define a crossover polarization p* > p_1, beyond which the dominant instability is at Q = 0. In the example of ν = -t shown in Fig. 6(b), p* = 1/2. Note that slightly above p*, some modulation in the pairs' MDF survives, which shows up as a smaller maximum in n_k^{pair} at a finite momentum. Finally, we note that the FFLO correlations are typically enhanced at low densities (e.g., at n = 0.2; results not shown here).
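For illustration, the sketch below shows how such a momentum distribution function is obtained from a real-space correlation matrix on an open chain, where the full two-index Fourier transform n_k = (1/L) Σ_{ij} e^{ik(i-j)} ρ_{ij} is used because the boundaries break translation invariance. The matrix is filled with a synthetic FFLO-like cos(Q(i-j)) modulation under a power-law envelope rather than DMRG data, and the fillings are example values.

import numpy as np

L = 120
x = np.arange(1, L + 1)
Q = np.pi * (0.4 - 0.2)                           # Q = k_F,up - k_F,down for n_up = 0.4, n_down = 0.2
dx = x[:, None] - x[None, :]
rho = np.cos(Q * dx) / (np.abs(dx) + 1.0) ** 0.6  # synthetic pair correlation matrix rho_ij

k = 2.0 * np.pi * np.arange(L) / L
phase = np.exp(1j * np.outer(k, x))
n_pair_k = np.real(np.einsum('ki,ij,kj->k', phase, rho, phase.conj())) / L
print("peak at k =", k[np.argmax(n_pair_k)], "expected Q =", Q)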
To summarize, we identify p_c^{1D} with the upper boundary of the FFLO phase, i.e., p_c^{1D} = p_1. Right at resonance (ν = 0), no signatures of FFLO correlations are visible any more, and the momentum distribution functions of both the pairs and the molecules feature a maximum at zero momentum [see Figs. 6(c) and 7(c)]. We observe the same behavior on the BEC side, ν > 0. For illustration, the k = 0 weight in the pair and molecular MDFs is shown as a function of polarization in the insets of Figs. 6(c) and 7(c). Quite notably, n^mol_{k=0} exhibits features that can be related to the phase transitions the system undergoes as p increases. First, the weight discontinuously drops from its p = 0 value, as the critical field for breaking up molecules is overcome at p = 0^+. Second, n^mol_{k=0} takes a maximum at p_2, where the system enters into the Bose-Fermi mixture phase at p > p_2. A similar, yet less significant behavior can be seen in the number of molecules, N_mol(p)/N_mol(p = 0), which we have included in the inset of Fig. 7(c) for comparison (solid line) [see also Fig. 4(b)]. An important point that should be emphasized in this context is the fact that the respective quasi-condensates of molecules and fermions are locked into each other. Indeed, they qualitatively show the same behavior concerning the position of their maxima, as is evident from comparing Figs. 6 and 7. We next discuss the MDF of the two fermion components, shown in Fig. 8. In the BCS limit, the MDFs feature a sharp edge, reminiscent of a weakly interacting lattice gas and consistent with the features observed in Fig. 6(a). As ν moves the system into the BEC regime, the p = 0 MDFs become quite broad, as expected for a strongly interacting system and for the standard BCS-BEC crossover (see, e.g., Refs. [1,61]). Upon polarizing the system, n_k^↑ develops a sharper edge [see Figs. 8(a), (b), and (c), left panels], as eventually, only the majority fermions remain. This is particularly evident in the case of ν = -t shown in Fig. 8. Simultaneously, for p > 1/2, n_k^↑ changes from a smooth function seen at p ≤ 1/2 to a steep one, since for p > 1/2, there is a single fermionic component left. Thus the depletion of minority fermions characterizes the transition to the Bose-Fermi mixture phase at p ≥ p_2.

Natural orbitals

To render the analysis of the locking effect [46,48,49,62] between ρ^pair_{ij} and ρ^mol_{ij} more quantitative, we compute the eigenvalues and eigenvectors of the associated one-particle density matrices, ρ^pair_{ij} and ρ^mol_{ij} (the eigenvectors are sometimes called 'natural orbitals'). In particular, the orbital φ_0 that is connected with the largest eigenvalue according to the Penrose-Onsager decomposition [63] of the density matrix reveals the real-space structure of the quasi-condensates [64]. In the presence of FFLO-type order, φ_0 is therefore a nontrivial function even for a homogeneous system. The modulus of this quantity, i.e., |φ_0|, is plotted in Fig. 9(a) for n = 0.2 and in Fig. 9(b) for n = 0.6, in both cases for p = 0, 1/6 and values of the detuning such that the system is in the crossover regime. Both at p = 0 and in the FFLO phase, the natural orbitals of molecules and pairs are fully identical, as has been shown for the limit of vanishing polarization in previous studies [46,49]. Further, in the 1D FFLO phase, the spin density follows the real-space modulation of the natural orbital, with excess majority fermions residing in the nodes of the quasi-condensate (compare Refs. [29,31]
for the case of the 1D attractive Hubbard model). In contrast to the behavior of the spin density, the density of molecules follows the modulation of the quasi-condensate. In other words, the molecular density has its maxima and minima at the same positions as the natural orbital. We should stress here that the presence of features in the densities is due to the open boundary conditions used in our simulations. In the limit of L → ∞, the density and spin profiles will become flat, while the modulations can then be detected in the respective correlation functions (compare Refs. [33,68] for the attractive Hubbard model). In the experimentally relevant situation of harmonically trapped particles, however, the density profiles themselves should have properties similar to those discussed here for finite systems with open boundary conditions, at least in parts of the particle cloud. Note that in the regime p_1 < p < p_2, the molecular and the pair correlations still exhibit instabilities at the same wave vectors [see Fig. 9(c)], even though the natural orbitals differ in their amplitude. The locking effect (i.e., natural orbitals of pairs and molecules with the same amplitude) is re-encountered in the high-field region p_2 < p < 1. There, the molecular |φ_0| is smooth, while the corresponding natural orbital for the pairs exhibits small oscillations.

Spatial decay of pair correlations

To conclude our analysis of the pair correlations, we show that the pair correlations at n = 0.6 asymptotically decay as |ρ^pair_{ij}| ∝ |cos(Qx)|/x^α, x = |i - j|, in agreement with predictions from bosonization for the slowest decaying contribution to |ρ^pair_{ij}| [30]. To that end, we fit f(x) = a |cos(Qx + φ)|/x^α to our numerical data, measuring j away from the center of the system (i.e., i = L/2). Considering that the system sizes are not that large, the agreement between the DMRG results and the formula from bosonization is remarkable [see Fig. 10]. In the regime where FFLO correlations have completely disappeared, the pair correlations decay with a power law, as Fig. 10 suggests for the example of p = 5/6. Small oscillations are due to an inhomogeneous background density of pairs and molecules [compare the inset of Fig. 9(b)]. Finally, we have also verified that at p = 0 and in the BEC limit ν′ ≫ 1, our numerical data are consistent with a power-law decay of the one-particle density matrix of the molecules |ρ^mol_{ij}| ∝ 1/x^β with an exponent of β ≈ 1/2.

C. Phase diagram

Our results for the phase diagram of the 1D BCS-BEC crossover described by Eq. (1) are summarized in Fig. 11, for the cases of g = t [panel (a)] and g = t/2 [panel (b)]. The main panels contain the data for n = 0.6, and we present polarization p vs. dimensionless detuning ν′ phase diagrams. We identify three regions at p > 0: (i) the BEC limit, ν′ ≫ 1 and p_2 < p < 1. Here, molecules are immersed into a sea of fully polarized fermions. This phase is denoted as BEC+FP FG in the figures, where FP FG stands for fully polarized Fermi gas. (ii) The 1D FFLO phase at 0 < p < p_1. In the crossover regime, FFLO is suppressed as p is increased. We have determined the phase boundary p_1 (open squares) from both the position of the first kink in the polarization curves and from the pair correlations. In the latter case, at p_1, the peak at Q = 0 starts to build up in the MDF of the pairs. For instance, the 1D FFLO phase extends up to ν ≈ -0.3t at this filling and g = t.
This is slightly before resonance on the BCS side, where, nevertheless, the density of molecules is already finite, i.e., N_mol > 0 (compare Fig. 3). Lastly, there is a region (iii) p_1(n, ν) < p < p_2, in which we have a Q = 0 superfluid of molecules immersed into a partially polarized (PP) fermionic gas. This third phase, denoted by BEC+PP LL, is eventually replaced by the BEC+FP FG phase at p ≥ p_2, where we determine p_2 from the analysis of the p = p(h) curves (see Sec. III A). Note that the boundary of the 1D FFLO phase, p_1, depends on the filling n. From the insets of Figs. 11(a) and (b), we infer that the larger n, the wider the crossover region is, consistent with the discussion of the number of molecules (compare Sec. III B 1). As n → 0, the critical line p = p_1 becomes quite steep and approaches ν ≈ (0.048 ± 0.002)t, or ν′ ≈ 0.97, for g = t. The comparison of the g = t and the g = t/2 phase diagrams shows that the FFLO phase disappears much faster in the case of g = t/2, well before resonance. Qualitatively, one can ascribe this to the fact that with decreasing values of the Feshbach coupling, the number of molecules, or more precisely, the closed channel fraction [compare Eq. (4)], becomes larger. The presence of molecules tends to reduce the number of pairs with FFLO correlations. This can be expected to suppress FFLO physics the more efficiently the smaller g is, since the locking of molecules and pairs is then also weaker. These observations are consistent with our DMRG results for the number of molecules and their dependence on polarization and detuning presented in Fig. 3. In particular, the maximum number of molecules is reached at smaller values of ν the larger the polarization is. Figure 11(c) shows the data of panel (a) in the magnetic field vs. detuning plane, using the dimensionless detuning ν′ and field h′ = h/ǫ*. This yields additional information on the saturation field h_sat and the zero-field spin gap ∆ of the standard 1D BCS-BEC crossover of the balanced system, measured by h_c. In comparison with Fig. 1, where we have shown ∆ ≃ ǫ* for n = 0.1, we repeat that the spin gap ∆ = 2h_c is an increasing function of the filling n [compare also Fig. 2(a)]. In the limit of ν′ ≫ 1, ∆ = 2h_c behaves as ∆ ∝ ν′ since there, independently of filling, the ground state of the balanced system has N ≈ 2N_mol and N_f ≈ 0. In a previous work on the three-body problem in the continuum limit, Baur et al. [47] have shown that the change in correlations between an oscillating behavior on the BCS side due to FFLO physics and a smooth one on the BEC side is revealed in the symmetry of the three-body ground state wavefunction. The numerical value of the detuning where this change occurs is ν′_c ≈ 0.63 [47]. It is remarkable that a similar critical value for the disappearance of FFLO correlations is also found in our many-body calculation of the phase diagram. Indeed, in the low-density limit, where a comparison makes sense, the boundary of the 1D FFLO phase at small polarizations is typically close to resonance, yet on the BEC side of positive detuning ν > 0. For a quantitative comparison, we have determined the critical value ν′_c(n = 0.1) for the loss of FFLO correlations for several values of g from data taken with L = 120 sites and polarization p = 1/6, the smallest imbalance possible for this system size. The resulting values are in the range of 0.55 ≲ ν′_c ≲ 0.91, remarkably close to the value inferred from three-body physics in Ref. [47]. In conclusion, it is evident from Fig.
11 that the best regime for observing the 1D FFLO state is (i) low density and (ii) small polarizations. The low density will favor a large weight in the quasi-coherence peaks, while the polarization needs to be kept smaller than p_1. Moreover, the 1D FFLO phase is more stable at large Feshbach couplings g.

IV. SUMMARY AND DISCUSSION

In this work, we studied the Bose-Fermi resonance model in the imbalanced case as a simple model to describe the BCS-BEC crossover of a spin-imbalanced system in one dimension. Our main focus was on the existence and stability of the 1D FFLO phase. So far, many-body calculations of 1D FFLO physics were mostly concerned with models of attractively interacting fermions, which do not account for the existence of composite molecules in the closed channel, typically encountered in experiments. Using a numerically exact method, the density matrix renormalization group method, we computed several quantities to characterize the crossover, including the number of molecules, pair correlations, the momentum distribution function, as well as polarization curves. Most notably, we found that FFLO correlations are suppressed in the crossover region due to the presence of the diatomic molecules. In particular, the 1D FFLO phase gives room to a regime of molecules quasi-condensed at zero momentum. The latter is first immersed into partially polarized fermions, which is then replaced by a Bose-Fermi mixture with spinless fermions below saturation. Thus, the system undergoes two phase transitions in the crossover region at critical polarizations p_1 < p_2 < 1 as the polarization increases. While our work was concerned with the homogeneous system, in experiments, the particles typically experience a confining harmonic potential. The shell structure for attractively interacting fermion models in 1D has been intensely discussed. The emerging picture for the continuum case, based on numerically or analytically exact approaches (the latter typically combined with the local density approximation) [23,24,26,37], is that one finds either fully paired wings at small polarization or fully polarized wings, while the core is always partially polarized. In the case of lattice models, DMRG calculations that take the trap into account exactly report fully polarized wings with a partially polarized core [31,32] at intermediate and large polarizations, and the latter also remains true in coupled chains at sufficiently large polarizations [65]. While we expect the behavior of trapped, attractively interacting fermions to carry over to the BCS regime of the Bose-Fermi resonance model, a finite density of molecules may lead to qualitatively different shell structures. For instance, the heavier molecules should mostly reside in the center of the trap. On the one hand, one may expect this to destabilize the FFLO phase in the core, while on the other hand, as long as the Feshbach coupling g and hence the locking between pairs and molecules is sufficiently strong, the locking could protect the FFLO correlations. The clarification of the effect of a harmonic trap is left for future research. An important question is how the FFLO state can be detected in an experiment. Several proposals have been put forward, for instance, time-of-flight measurements [66], the analysis of noise correlations [34,67], or features in the spin density and correlations [68]. Regarding the spin correlations, one expects a peak at a nonzero momentum 2Q ≠ 0 in the presence of FFLO order [68].
In fact, the spin density follows the modulation of the natural orbitals, as has previously been demonstrated for the 1D attractive Hubbard model [31]. As we showed here, this behavior is also realized in the FFLO phase of the Bose-Fermi resonance model (compare Fig. 9). Even if the FFLO phase were present in a 3D system, the obstacle there is that the FFLO phase, if it exists at all, occupies only the wings of a 3D trapped Fermi gas (see, e.g., Ref. [69]). This constitutes another advantage of searching for FFLO physics in a 1D system: there, the core of a trapped gas will host this phase [23,31,37], and therefore the associated modulation in the spin density should exist in a large part of the cloud, contrary to the 3D case.
Low-Cost Fiducial-based 6-Axis Force-Torque Sensor

Commercial six-axis force-torque sensors suffer from being some combination of expensive, fragile, and hard-to-use. We propose a new fiducial-based design which addresses all three points. The sensor uses an inexpensive webcam and can be fabricated using a consumer-grade 3D printer. Open-source software is used to estimate the 3D pose of the fiducials on the sensor, which is then used to calculate the applied force-torque. A browser-based (installation-free) interface demonstrates ease of use. The sensor is very light and can be dropped or thrown with little concern. We characterize our prototype in dynamic conditions under compound loading, finding a mean R² of over 0.99 for the Fx, Fy, Mx, and My axes, and over 0.87 and 0.90 for the Fz and Mz axes respectively. The open-source design files allow the sensor to be adapted for diverse applications ranging from robot fingers to human-computer interfaces, while the design principle allows for quick changes with minimal technical expertise. This approach promises to bring six-axis force-torque sensing to new applications where the precision, cost, and fragility of traditional strain-gauge based sensors are not appropriate. The open-source sensor design can be viewed at http://sites.google.com/view/fiducialforcesensor.

A. Motivation

Force-torque sensors are used extensively in both industry and research. We focus here on the use of these sensors in two examples: robotic grasping, where they are used to provide tactile feedback (e.g. detecting when contact is made), and human-computer interaction. However, commercial six-axis force-torque sensors can be both expensive and fragile. This combination makes them tricky to use for grasping, where controlled contact is desired, but a small coding error could easily smash and overload the sensor. One of the most common types of sensors, the ATI force/torque sensor, costs tens of thousands of dollars and relies on strain gauges that are fragile and have to be surrounded by a bulky package. For these reasons, we are motivated to consider new sensor designs that could promote the use of tactile data in the robotics community by being a combination of cheaper, easier to use, and more robust.

B. Related Work

Multiple designs have emerged recently taking advantage of the rich information available from consumer webcams. Even low-end webcams will output 640x480 RGB images at 15 frames per second (fps). Webcam-based sensors are particularly easy to manufacture and wire. Notable examples include the GelSight [1], GelForce [2], TacTip [3], FingerVision [4], and others. These sensors rely on cameras facing markers embedded in transparent or semi-transparent elastomer (often with supplemental LED lighting). These can be used to estimate shear, slip, and force, but tend not to do well in cases where the object hits the side of the finger instead of dead on.

Fig. 1: Consumer webcams and printed fiducial markers can be used to create a six-axis force-torque sensor. We used four springs to build a platform free to move in all angular directions. We affixed two printed fiducials to the platform, and then aimed a consumer camera up at them. To the right, the camera view reveals the tag locations. The tags are glued to the light shield, which is removable, allowing for easy design changes. Note that cardstock, which was removed for picture clarity, was used to diffuse the LED and avoid overexposing the camera. Green bottle cap is for scale.
They also require casting elastomers. Several MEMS multi-axis force-torque sensors have been developed, which use the same principle of creating a device free to deflect along multiple axes, but measure the deflection using capacitive [5] or piezoresistive [6] means. In [7] the deflection is likewise measured with a camera (a CCD camera mounted on a microscope); however, that device only measures two directions of force. Prior work used MEMS barometers to create six-axis force-torque sensors with very low parts cost and good durability [8]. However, fabricating that sensor requires specialized lab equipment such as a degassing machine. Other work explored estimating fingertip force via video, but only for human fingers [9], [10]. Commercial sensors like the SpaceMouse and the OptoForce use similar ideas, but rely on custom circuit boards with a ranging sensor inside. In contrast, our work is straightforward to fabricate even for users unfamiliar with electronics.

C. Contributions

In this paper, we investigate novel combinations of readily accessible technologies to create six-axis force-torque sensors that are inexpensive, require minimal expertise to design and build, and are easily customized for diverse applications. The proposed sensor makes six-axis force-torque measurements by tracking position and orientation displacement using the 3D pose estimate from fiducial tags, and uses a linear fit between displacement and applied force-torque. Fiducials are markers used to help locate objects or serve as points of reference. They can be found in robotics and augmented reality applications, where they usually take the form of printed paper markers glued onto various objects of interest. Sensors employing these fiducials operate by detecting the sharp gradients created between black and white pixels, such as one might find on a checkerboard. An example of two fiducials can be found in the top right of the labelled diagram of our sensor in Fig. 1. Using the known geometry of the tag (e.g. perpendicular sides of a checkerboard), as well as the known tag size and a pre-determined camera calibration matrix, the 3D pose (location and orientation) of the object can be estimated. This calculation is known as solving the Perspective-n-Point (PnP) problem. We created prototypes utilizing two open-source tag protocols, AprilTags [11] and ArUco markers [12]; pictured in Fig. 1 are two ArUco markers.

In the following sections, we begin with the design and fabrication process for our sensor. We follow with a theoretical analysis of how the sensor design parameters affect resolution, sensitivity, measurement range, and bandwidth. We also present an analysis of data collected from a prototype sensor. We conclude with a discussion of the advantages and limitations of this sensor.

A. Sensor Design

At a high level, the sensor consists of two main parts: a base and a platform above the base. The platform is connected to the base with 4 springs and can move in all directions with respect to the base. Two fiducial tags were glued to the underside of the platform. Then, a webcam pointed up at the tags was installed at the base. As force or torque is applied to the platform, the tags translate and rotate accordingly. The camera is used to track the 3D pose of the tags. Should there be a suitably linear relationship between the displacement and the force-torque applied, a short calibration procedure using known weights can be used to collect datapoints for regression.
Given a known linear fit, the sensor can then output force and torque measurements. Fig. 2 shows the principle behind this fiducial-based force sensor.

B. Design Goals

When designing the sensor prototype, a few considerations were made. First and foremost, the sensor needs to be sensitive to all six degrees of freedom (displacement in x, y, z and rotation in yaw, pitch, roll). For illustrative purposes, the following analysis is performed in terms of specific specification values that are appropriate for a sample robot gripper. Alternate values for other use cases, such as human-computer interfaces, can easily be substituted. For grasping, a range of ±40 N is realistic, and a sensitivity of at least 1/10 N is desirable. Qualitatively, we want the sensor to be small (for grasping applications, roughly finger-sized), inexpensive, and robust. The sensor should allow for rapid prototyping and easy customization with minimal technical expertise. The sensor should be not only easy to fabricate, but also easy to use.

C. Fabrication

1) Physical Fabrication: The four pieces in Fig. 3 (figure includes dimensions) are 3D-printed in two to three hours on an inexpensive consumer-grade device (Select Mini V2, Monoprice). Epoxy is used to glue the springs into the camera cover and top plate. The tags are printed on paper and glued in. A small piece of white cardstock is used to diffuse the LED (in the future, this would be built into the 3D design). Conveniently, the pose estimate is relative to the camera frame, and the sensor relies only on relative measurements, so the tag placement can be imprecise. The LED is mounted in and connected to a 3.3 V power source. The heat-set thread inserts (for bolting the light shield to the platform) are melted in with a soldering iron. The camera is placed between the mounting plate and camera cover, and then everything is bolted together. The springs are steel compression springs available online as part of an assortment pack from Swordfish Tools. The springs are 2.54 cm long and 0.475 cm wide, with a wire width of 0.071 cm and a stiffness of approximately 0.7 N/mm. Fabrication can be completed in a day. The actual assembly, given a complete set of hardware and tools, can be completed in 30 minutes, depending on the epoxy setting time.

2) Usage and Software: The only data cable used is the USB from the webcam to the computer. On the computer, the OpenCV Python library [13] (version 4.1.2) is used to detect the ArUco markers in the video feed. We used a commercial force-torque sensor to characterize our sensor, for which we used another freely available Python library (see [14]). The data from the commercial sensor (Model HEX-58-RE-400N, OptoForce, Budapest, Hungary) and the markers are read in parallel threads and timestamped, then recorded to CSV. Python is used for further analysis. By using a consumer webcam, sensor reading is also possible without installing Python. To demonstrate this, we developed a simple interface using a JavaScript ArUco tag detector library (see [15]). Fig. 4 shows a graphical user interface (GUI), modified from the js-aruco library example [15], that plots the x, y, and z axes of the 3D pose estimate for a single tag. In this way, sensor data can be read just by loading a webpage. In theory, the sensor reading can be done on-the-go with a smartphone and a wireless or USB-C webcam (such as the inexpensive endoscope inspection cameras found online).
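To make the detection step concrete, below is a minimal sketch of how marker detection and pose estimation could look with the OpenCV ArUco module the authors cite. The dictionary choice, marker length, and camera intrinsics are placeholder assumptions of this sketch, not values from the paper, and the exact ArUco API varies slightly across OpenCV versions.

import cv2
import numpy as np

# Placeholder intrinsics: a real sensor would use a calibrated camera matrix.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)           # assume negligible lens distortion
MARKER_LEN = 0.0045          # tag side length in meters (4.5 mm, as in the paper)

aruco = cv2.aruco
dictionary = aruco.getPredefinedDictionary(aruco.DICT_4X4_50)
params = aruco.DetectorParameters_create()

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = aruco.detectMarkers(gray, dictionary, parameters=params)
    if ids is not None:
        # One rvec/tvec (rotation, translation) per detected tag.
        rvecs, tvecs, _ = aruco.estimatePoseSingleMarkers(
            corners, MARKER_LEN, K, dist)
        for tag_id, rvec, tvec in zip(ids.flatten(), rvecs, tvecs):
            # tvec is the tag origin in camera coordinates (meters); the
            # linear force model consumes these six pose numbers per tag.
            print(tag_id, tvec.ravel(), rvec.ravel())
cap.release()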
3) Calibration: Although we calibrated using a commercial force-torque sensor, the same can be achieved with a set of weights and careful clamping. The sensor can be clamped sideways to a sturdy surface to calibrate the x- and y-axes. A set of known weights is then attached to the center bolts on the light shield piece via a string. The same procedure can be applied to calibrate the z-axis, with the sensor clamped upside down to a tabletop. Finally, weights can be applied to the two side bolts to produce known torques while hanging upside down or sideways.

III. ANALYSIS

Considering the above design goals, there are a few primary concerns amenable to theoretical analysis: the sensor resolution, sensitivity, force range, and bandwidth. Here, sensor resolution is defined in bits (relative terms) and sensitivity in millimeters and degrees.

A. Resolution

Let us conservatively estimate the discernible resolution of the tag system to be d_R = 1/4 pixel, or C = 4 counts per pixel. This factor exists because we have more than just binary information (1 bit) for every pixel. For instance, if a black/white intersection is halfway between two pixels, the pixels will be gray. (Tag algorithms also use the known grid geometry to achieve subpixel resolution; see the cornerSubPix function in the OpenCV library.) In that case, we can determine the resolution of the sensor itself geometrically, by looking at the number of pixels. The fact that the tags must stay on-screen limits the sensor resolution. We can characterize an approximate y-axis resolution r_y of the camera by taking the number of pixels available, multiplying by C, and converting our counts into bits. For our sensor prototype, in the y-axis,

r_y = log2(4 · (480 − 240)) ≈ 10 bits.

The same calculation applies in the x-axis. In the z-axis, our limitation is the same as in the y-axis, so we have r_z = 11 bits.

B. Sensitivity

Let us now calculate the sensitivity of the sensor. We will start by looking at the minimum detectable travel in each of the x, y, and z axes.

1) Translational Sensitivity: In the x and y directions, we can measure the mm/px at rest (the sensor resolution varies a bit since the tag gets larger or smaller depending on the z distance). Roughly, the tag measures 4.5 mm and appears as w_tag = 150 pixels in the image. Assuming as above that we can discern 4 counts per pixel, the theoretical sensitivity is (4.5 mm / 150 px) / 4 ≈ 0.0075 mm. For the z-axis sensitivity, we consider that the tag will get smaller as it displaces in the +z direction. Using a simple geometrical model (see Fig. 5b), given that the smallest detectable change in the xy plane is 1/4 pixel, we can calculate the resulting change in z using similar triangles. To work in mm, we use the fact that the tag is 4.5 mm and appears as 150 px.

2) Rotational Sensitivity: For rotation about the z-axis, we can calculate the chord length in pixels traveled when a tag is rotated 45 degrees about its center, and use the same assumption of four counts per pixel to estimate our rotational sensitivity. Geometrically, the chord traced by a tag corner under a rotation by angle θ about the tag center has length l_chord = 2r·sin(θ/2), with corner radius r = √2·w_img/2. In our case, with w_img = 150 px, we see that

l_chord = 2 · (√2 · 150/2) · sin((π/4)/2) = 81.18 px.

For rotation about the x and y axes, the analysis becomes a matter of determining the z-axis change in mm, and using that to determine the pixels changed in the x-y plane. Consider a 45 degree rotation around the x- or y-axis of a tag that starts out flat (facing the camera), as shown in Fig. 6b.
Using w_img = 150 px as before, the corresponding z sensitivity follows from the same similar-triangles relation.

C. Notes on z-axis measurements

Intuitively, we expect the sensor to be much less reliable in the z displacement direction. For movement along the x and y axes, the camera sees the entire set of black/white intersections moving left or right. For the same reason, in the single-tag setup it would be easy to detect rotations about the z-axis, and difficult to detect rotations about the x and y axes. Data collected from this initial (single-tag) design exactly reflected the aforementioned issue. Consequently, the design was enhanced with two tags oriented at 45 degrees to the camera. This proved sufficient for recovering all six force/torque axes.

D. Force Range Versus Sensitivity

There is a clear trade-off between sensitivity (minimum detectable change in force) and the maximum force range. As an example, for a desired force range F_range = ±1 N = 2 N (close to the observed force range for our prototype), and a maximum displacement of y_range = h_frame − h_img, the y sensitivity s_y in Newtons is

s_y = F_range · d_R / y_range ≈ 2.1 mN

(given our assumption of d_R = 0.25). Similarly, for the x-axis we find a sensitivity s_x = 1.0 mN at this force range. Now consider instead the grasping use case, with a desired force range of ±40 N and a desired sensitivity of at least 0.1 N. If we scale the calculations in Eq. (22) by 40 to get a ±40 N force range while keeping the other parameters the same, the sensor has 0.04 N and 0.08 N sensitivities in the x and y directions respectively.

A. Linearity

In order to evaluate the linearity (and therefore usefulness) of the sensor, we used a commercial force-torque sensor (Model HEX-58-RE-400N, OptoForce, Budapest, Hungary) to provide ground-truth measurements. Although the OptoForce measures force and torque at a different origin than where the load is applied, the analysis of the linearity of the sensor holds. Data was collected with a Python script which used the OpenCV library to interface with the camera. The setup is shown in Fig. 7.

Fig. 7: Left, the data collection setup (with the LED off; out-of-frame, an Arduino supplies 3.3 V to the LED, while later designs used a 3.3 V coin cell battery to make the sensor standalone). Right, a method to calibrate the sensor without the commercial sensor: the sensor is mounted upside down and weights are hung by string from the sensor to apply force uniaxially along the +z axis.

Autocorrelation was used to determine the lag between our sensor and the OptoForce, which was roughly 40 milliseconds. Next, linear interpolation was used to match our sensor data with the OptoForce data, which were output at roughly 25 Hz and 125 Hz respectively. The sensor data was smoothed with an exponential filter with a weight of 0.2 to improve the autocorrelation results. For calibration, we take a dataset of displacements D and apply linear regression (with an affine term) against all six axes, where θ, φ, and γ refer to rotation around the x, y, and z axes respectively. K then forms a 6-by-6 matrix.
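As a concrete sketch of this calibration step: assuming pose samples and ground-truth wrenches have already been time-aligned, the affine six-axis fit could be computed with ordinary least squares as below. Array names, shapes, and the synthetic data are illustrative assumptions of this sketch, not values from the paper.

import numpy as np

def fit_calibration(D, W):
    """Fit an affine map from 6D pose displacement to 6D wrench.

    D : (n, 6) array of displacements [x, y, z, theta, phi, gamma]
    W : (n, 6) array of ground-truth wrenches [Fx, Fy, Fz, Mx, My, Mz]
    Returns K (6x6 gain matrix) and b (6-vector offset) with W ≈ D @ K.T + b.
    """
    n = D.shape[0]
    A = np.hstack([D, np.ones((n, 1))])   # extra column = affine term
    # Solve A @ X ≈ W in the least-squares sense, one column per axis.
    X, *_ = np.linalg.lstsq(A, W, rcond=None)
    K = X[:6].T                           # 6x6 gain matrix
    b = X[6]                              # per-axis offsets
    return K, b

def predict_wrench(K, b, pose_displacement):
    # pose_displacement: 6-vector from the tag pose tracker.
    return K @ pose_displacement + b

# Example with synthetic data standing in for a recorded calibration run:
rng = np.random.default_rng(0)
D = rng.normal(size=(500, 6))
K_true = rng.normal(size=(6, 6))
W = D @ K_true.T + 0.01 * rng.normal(size=(500, 6))
K, b = fit_calibration(D, W)
print(np.allclose(K, K_true, atol=0.01))

In this form each wrench axis gets its own row of K, matching the 6-by-6 matrix described above.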
B. Bandwidth

Sensor bandwidth is directly limited by the camera framerate. This must be physically measured, since the Python script will otherwise report an unrealistically high framerate: the OpenCV library reads from a buffer of stale images and will return a result even if the camera has not physically delivered a new frame. The webcam is pointed at a display with a high refresh rate. A script turns the screen black; as soon as the camera detects the black color, the screen changes to white, and so forth, and the number of frames displayed is compared to the system time to obtain the framerate of the webcam. Note that this gives our maximum sensor bandwidth; our actual sensor bandwidth is determined by the tag detection rate. If dynamic instead of quasi-static loading is assumed, then motion blur can lead to tag detection failure.

A. Linearity

In multiaxial loading, the sensor was manually moved around in all directions. As shown in Fig. 8, the fits had an R² of 0.991, 0.996, 0.875, 0.997, 0.997, and 0.902 for the Fx, Fy, Fz, Mx, My, and Mz axes respectively. The Fz axis fit is notably worse than the Fx and Fy fits, which was expected, as explained in Section III-C. For qualitative comparison, Fig. 8 shows an example of a reconstructed dataset, where the linear fits are plotted against the original signal. This diagram shows the relatively large deviations in Fz from the original signal, indicating noisiness in the tag measurements.

B. Bandwidth

Our maximum sensor bandwidth is experimentally determined to be 25 Hz. Additionally, the camera we used was one of three cameras bought by selecting for low cost, quick availability, and lack of an external camera case. We also measured the other two cameras which, despite advertising similar framerates, exhibited noticeable differences. Operating at 640x480, we measured 25 fps, 33 fps, and 15 fps for the three cameras, as listed in Table I.
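A minimal version of the screen-flip framerate test described above could look like the following; the intensity threshold, flip count, and window name are arbitrary choices of this sketch rather than the authors' script.

import time
import cv2
import numpy as np

WIN = "flip"                      # fullscreen test-pattern window
cv2.namedWindow(WIN, cv2.WINDOW_NORMAL)
cv2.setWindowProperty(WIN, cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

cap = cv2.VideoCapture(0)
white = np.full((480, 640), 255, np.uint8)
black = np.zeros((480, 640), np.uint8)

show_white = True
flips = 0
t0 = time.time()
while flips < 100:                # count 100 confirmed flips
    cv2.imshow(WIN, white if show_white else black)
    cv2.waitKey(1)
    ok, frame = cap.read()
    if not ok:
        break
    mean = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).mean()
    # Flip the pattern only once the camera has actually seen the change,
    # so the flip rate is bounded by the physically delivered frame rate.
    if (show_white and mean > 128) or (not show_white and mean < 128):
        show_white = not show_white
        flips += 1
print("approx. camera rate:", flips / (time.time() - t0), "fps")
cap.release()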
VI. DISCUSSION

Our prototype sensor showed mostly linear responses under dynamic loading. While the linearity is not precise, these results still validate the underlying hypothesis that with fiducials it is possible to collect data on all three axes of force and all three axes of torque. Further design iterations could improve on these results, although this approach is unlikely to achieve the precision of commercial sensors.

A. Design Goals

The sensor can now be evaluated against the goals specified in Section II-B. The sensor design is indeed responsive in all six axes (after our pivot from one tag to two tags, as well as using a much brighter LED). Additionally, for grasping applications, the calculations in Eq. (22) show that if a much stiffer spring were chosen so that 40 N of load could be applied without exceeding y_range, the sensor would still have better than 0.1 N of sensitivity. The qualitative design goals were also met. The sensor is small, measuring only 3.6 cm by 3.1 cm by 5.1 cm. The sensor is inexpensive, with the majority of the cost being a $20 webcam. The sensor is robust and has survived multiple plane trips and the occasional throw or drop. The sensor is also easy to modify: the light shield can easily be unbolted to change the fiducials, or re-printed in an hour to accommodate different designs (e.g. a single-tag vs. dual-tag design). Fabrication is easy and non-toxic, requiring no degassing machine (as with elastomer-based sensors) nor electrical discharge machining (as with custom strain-gauge based designs). The sensor by design does not suffer from thermal considerations (as in [8]) or electrical noise (as with designs based on strain gauges).

B. Error Sources

An important consideration is the coordinate origin around which measurements are made. As load must be applied to the spring platform on which the tags are glued, the origin around which measurements are collected may be different than desired, although a linear offset matrix should suffice to correct for this. Our six-axis measurement reflects a combination of camera pose estimation and mechanical coupling, each of which can introduce errors. In the following section on sensor improvements, we focus on camera sensor issues.

C. Sensor Improvements

1) Fiducial Changes: Unlike the standard use cases for ArUco markers, we do not care about distinguishing multiple objects, and care more about the quality of the pose estimate for a tag guaranteed to be in-frame. A custom fiducial (perhaps solely a checkerboard) could improve the force-torque measurements.

2) Noise in z-axis: The sensor is noisy in force and torque measurements along the z-axis. To address this, one possibility is to use a mirror and two tags which are laid flat on the xy plane and the yz plane respectively. The "sideways" tag (on the yz plane) has good sensitivity to z-axis displacements, while the flat xy-plane tag addresses rotations around the z-axis. A 45-degree mirror then allows the camera to also observe the "sideways" tag on the yz plane. On the downside, the small mirror could make assembly difficult.

3) Sensor Size: Closer placement of the tag, to minimize the size of the sensor, may also be desired; this would necessitate a custom lens for the camera to allow for closer focus (e.g. a macro lens). Miniaturization could also be accomplished with a smaller camera, as in [3].

4) Replacing Springs: The use of springs means that the sensor may behave poorly in high-frequency domains. Replacing the springs with another mechanism, such as a Stewart platform, could allow custom tuning of the response. Another possibility would be to fill the gap between the camera and the tag with optically clear material that would be resistant to high-frequency inputs. [16] used a similar idea with a magnet and Hall effect sensor for a three-axis force sensor. However, such a design would complicate fabrication and potentially make camera calibration difficult due to image warping.

VII. CONCLUSION

We present a novel type of six-axis force-torque sensor using fiducial tags and a webcam. The design is fast to fabricate and simple to use, and is also strong enough to survive the drops and crashes common in contact-rich tasks such as robotic grasping. With only 3D-printed custom components, the design needs minimal technical expertise to adapt to applications ranging from manipulation to human-computer interaction research. The open-source design also allows for direct integration in designs for tasks such as grasping where sensor size is important. This fiducial-based sensor is less accurate than commercial force-torque sensors, but is also orders of magnitude less expensive: commercial sensors can cost thousands of dollars, while the parts cost of our sensor is under $50 (see Table II). The combined advantages of our prototype sensor validate the general design principle of using 3D pose estimates from printed fiducials to create a six-axis force-torque sensor. Future work on improving the Fz and Mz axes could allow for an inexpensive, user-friendly, and robust alternative to current commercial sensors, opening up a new range of use cases for six-axis force-torque sensors.
Periodicity disruption of a model quasi-biennial oscillation

The quasi-biennial oscillation (QBO) of equatorial winds on Earth is the clearest example of the spontaneous emergence of a periodic phenomenon in geophysical fluids. In recent years, observations have revealed intriguing disruptions of this regular behaviour, and different QBO-like regimes have been reported in a variety of systems. Here we show that part of the variability in mean flow reversals can be attributed to the intrinsic dynamics of wave-mean flow interactions in stratified fluids. Using a constant-in-time monochromatic wave forcing, bifurcation diagrams are mapped for a hierarchy of simplified models of the QBO, ranging from a quasilinear model to fully nonlinear simulations. The existence of new bifurcations associated with faster and shallower flow reversals, as well as a quasiperiodic route to chaos, are reported in these models. The possibility of periodicity disruptions is investigated by probing the resilience of regular wind reversals to external perturbations.

Earth's equatorial stratospheric winds oscillate between westerly and easterly mean flow every 28 months. These low-frequency reversals, known as quasi-biennial oscillations, are driven by high-frequency waves emitted in the lower part of the atmosphere and supported by the presence of stable density stratification [1]. It is an iconic example of the spontaneous emergence of a periodic phenomenon in a turbulent geophysical flow [2], with analogues in other planetary stratospheres [3], in laboratory experiments [4,5], as well as in idealized numerical simulations [6,7]. In recent years, increasing attention has been given to the robustness of these regular reversals to external wave forcing and perturbations. Disruptions of this type of oscillation have been observed both in the Earth's atmosphere [8,9] and in Saturn's atmosphere [10]. In addition, a variety of oscillatory regimes, including non-periodic ones, have been reported in direct numerical simulations of a stratified fluid forced by an oscillating boundary [6] or driven by an explicitly resolved turbulent convective layer [7]. Non-periodic oscillations have also been reported in global circulation model simulations of the solar interior and of giant planets [11,12]. Until now, the non-periodic nature of the reversals was interpreted as the system's response to transient external variations.
For example, the non-periodic disruptions of the Earth's QBO and Saturn's QBO-like oscillation have been attributed to the response of equatorial stratospheric dynamics to extratropical perturbations [8-10]. Also, the existence of non-periodic regimes in direct numerical simulations of stratified flows has been related to the time variability of the underlying turbulent convective layer [7,12]. Here, we show that the non-periodic nature of the reversals is a fundamental characteristic of stratified fluids by revealing the existence of a vast diversity of oscillatory regimes obtained using a simple steady monochromatic forcing. We further demonstrate that this rich intrinsic variability effectively controls part of the system's response to a transient external variation. Periodicity disruptions are more easily triggered, and are increasingly lengthened, as the system approaches a bifurcation point.

Model. The simplest configuration capturing the dynamics of the quasi-biennial oscillation (QBO, Fig. 1a) is a vertical 2D section of a stably stratified Boussinesq fluid, periodic in the zonal (longitudinal) direction, and forced by upward-propagating internal gravity waves. This wave forcing is typically generated by an oscillating bottom boundary meant to represent the effect of tropopause height variations on the stratosphere. The evolution of the horizontally averaged zonal velocity u is governed by a simplified version of the momentum equation,

∂u/∂t = −∂(u′w′)/∂z + ν ∂²u/∂z²,    (1)

where ν is the kinematic viscosity, z is the upward direction, and u′w′ is the Reynolds stress due to velocity fluctuations around the zonal average. In weakly nonlinear regimes, this stress is carried by internal gravity waves, and any process damping the wave amplitude leads to a transfer of momentum from the waves to the mean flow through the Reynolds stress divergence. Wave properties are in turn affected by the mean flow, and this interplay results in a complex coupled system. To close the dynamical system, one needs to compute the Reynolds stress in (1). In this study, the wave field is simulated either by taking into account all nonlinear interactions between waves and mean flow (hereafter the "nonlinear 2D model") or by considering a simplified closure that neglects wave-wave interactions, together with a WKB approach [13] (hereafter the "quasilinear 1D model"). The latter approach has proven successful in explaining the spontaneous emergence of low-frequency periodic flow reversals [2,13] and synchronisation with an external forcing [14]. By assuming horizontally averaged dynamics, this quasilinear model is much simpler than the original flow equations but nevertheless has a large number of degrees of freedom, since an infinite number of vertical oscillatory modes are possible. The 1D model is thus a natural starting point to investigate how periodic reversals are destabilized when the forcing strength is increased. We consider a standing wave pattern with wavenumber k and frequency ω, forcing a stratified fluid with buoyancy frequency N, for which the background stratification is maintained by Newtonian cooling with damping rate γ. Together, the Newtonian cooling γ and the viscosity ν damp the wave amplitude over a characteristic e-folding length Λ = αkc⁴/(νN³), where c = ω/k is the zonal phase speed, and where α = νN²/(νN² + γc²) is the ratio of viscosity to Newtonian cooling in wave damping. Another essential parameter of the problem is the effective Reynolds number Re = F₀Λ/(cν), where F₀ = (u′w′)_r.m.s.
is the wave forcing strength at the bottom boundary. This wave forcing strength further sets a characteristic timescale of the low-frequency flow reversals, T = cΛ/F₀ [2]. The Supplemental Material provides details on the simulations, together with estimates of the key parameters of the quasilinear and nonlinear models, as well as for the Earth's stratosphere. The parameter range used in the simulations is close to that used in the pioneering work on the subject [13,15], and presented in standard textbooks on geophysical fluid dynamics [2]. Notice that the effective Reynolds number is based on an eddy viscosity, meant to represent the turbulent eddy motion at scales smaller than the internal gravity waves and used as a subgrid-scale parameterization for turbulence in coarse-grained climate models. The actual Reynolds number of the atmosphere, based on the kinematic viscosity of air, is higher by many orders of magnitude than the effective Reynolds number. A self-consistent theory for the QBO would require inferring the eddy viscosity from the knowledge of the actual Reynolds number and other problem parameters, but this conundrum has up to now been out of reach. Here we follow a common practice in geophysical fluid dynamics that amounts to (i) using an eddy viscosity to describe bifurcations occurring under an increase in forcing amplitude, and (ii) testing the robustness of these bifurcations in more complex members of the hierarchy of geophysical flow models [16].

Bifurcation diagrams. To map the bifurcation diagram of the quasilinear 1D model, we performed a large number of simulations spanning effective Reynolds numbers between Re = 2 and 330, covering roughly the relevant range for the Earth's stratosphere (Table 1). For sufficiently low values of Re, the system has only one attractor: a stable point at u = 0. A first bifurcation occurs above the critical value Re_c1 ≈ 4.25/(1+α) [17], above which the zonally averaged velocities are attracted towards a limit cycle [13,17] corresponding to horizontal mean-flow reversals and downward phase propagation (Fig. 1b). This period-1 cycle arguably reproduces the salient features of the observed QBO before the disruption event of 2016 (Fig. 1a). Figure 1d shows a bifurcation diagram plotted for increasing Reynolds numbers. A second bifurcation, from periodic to quasi-periodic regimes, occurs above the critical value Re_c2. Additional bifurcations occur at higher Reynolds numbers, with transitions to frequency-locked regimes and to chaotic regimes. The term 'frequency locking' is often used where a nonlinear oscillator forced at some frequency exhibits, as a dominant response, an oscillation at the forcing frequency. By extension, we use this term here to describe synchronisation between oscillating modes of the dynamical system. As Re increases, new oscillating modes appear in the vertical structure of the mean flow. For example, a unique frequency is observed at all heights for the period-1 limit cycle shown in Fig. 1b, while faster reversals are observed in the lower levels for the frequency-locked regime shown in Fig. 1c. Ultimately, in chaotic regimes, the superposition of these modes yields a fractal-like structure of nested flow reversals (Fig. S1 in the Supplemental Material). Such regimes, with faster reversals in the lower layers, have also been reported in direct numerical simulations driven by a convective boundary layer [7,12]. The quasiperiodic regime occurring at Re > Re_c2 is embedded with a complicated set of frequency-locked regimes (Fig. 1d).
The global structure of the bifurcation diagrams is better appreciated by considering the full (α, Re⁻¹) parameter plane. A transition to chaos in a similar 1D quasilinear model was reported in Ref. [26], which focused only on the purely viscous case, α = 1, with other boundary conditions relevant for the solar tachocline. In fact, this behavior occurs generically in nonlinear systems, with numerous examples in hydrodynamics [19]. In the case of internal gravity wave streaming, frequency-locked states organized into Arnold tongues were found when the 1D quasilinear model is coupled to an external low-frequency forcing mimicking seasonal forcing [20], which is reminiscent of synchronisation phenomena in models of the El Niño Southern Oscillation [21,22]. Here, by considering a simple monochromatic forcing, and by covering the full parameter space Re − α, we bring to light an unforeseen intrinsic dynamical structure of the underlying quasilinear model.

The 1D quasilinear model is a highly truncated version of the original flow equations. It is thus crucial to see whether the aforementioned bifurcations occur in Navier-Stokes simulations of the fully nonlinear dynamics, including both wave-mean and wave-wave interactions. We performed two-dimensional numerical simulations to build a diagram similar to the one obtained with the 1D quasilinear model. These simulations show that the route to chaos is robust to the presence of nonlinear interactions between waves and mean flow, with transitions from periodic solutions (Fig. 2b) to quasiperiodicity (Fig. 2c), to frequency locking (Fig. 2d), and eventually to chaos. However, significant differences from the quasilinear case are observed in the nonlinear simulations, where bifurcations occur at different effective Reynolds numbers and where new dynamical regimes emerge. For instance, the large region of period-3 frequency locking obtained in the 1D quasilinear model (Fig. 1c) is replaced by a thin region of period-2 frequency locking for which the symmetry U → −U is broken (Figs. 2a and 2d).

Fig. 1: a. Hovmöller diagram of monthly averaged zonal winds measured by radiosonde in the lower stratosphere above Singapore (1.4°N) [27]. b. Hovmöller diagram of the mean flow u(z, t) in the stationary regime for α = 0.6 and Re⁻¹ = 0.06, using the 1D quasilinear model. c. Same as b, but using Re⁻¹ = 0.025. See Supplemental Material for similar time-height plots in other regimes. d. Bifurcation diagram for α = 0.6, obtained for each value of Re⁻¹ by considering the value of u at two different heights z₁ and z₂, and plotting u(z₂) at times when u(z₁) = 0. The dashed red line corresponds to the first bifurcation (from rest to period-1) occurring at Re_c1. The dashed blue line corresponds to the second bifurcation (from period-1 to quasiperiodic) occurring at Re_c2. e. Bifurcation diagram in the parameter space (α, Re⁻¹). The coloured field is an empirical estimate of the area of the attractor projected in panel d. Low values (in white) correspond to QBO-like regions (period-1) and frequency-locked regions (rational numbers). The vertical black line at α = 0.6 corresponds to the bifurcation diagram plotted in panel d.

Response to external perturbations. By considering a fixed monochromatic wave forcing, we showed above that quasi-periodicity arises naturally at steady state in the stratified fluid. This fixed forcing contrasts, however, with the actual QBO signal, which is driven by time-varying wave forcing and extra-tropical perturbations.
In the following, we investigate how the presence of a bifurcation point influences the resilience of a given period-1 QBO-like oscillation to external variability, by considering the effect of a time-dependent perturbation superimposed on its reference monochromatic wave forcing. We first consider the effect of a time-dependent pulse in wave forcing strength, F₀, mimicking the reported sudden increase in wave activity at the equator in the winter preceding the observed periodicity disruption of 2016 (see Supplemental Material for details on the perturbation). From a dynamical point of view, this perturbation suddenly drives the system out of its limit cycle, until it eventually relaxes back to its original period-1 oscillation over a characteristic time τ. Figures 3a and 3b show examples of transient recovery periods for two values of the effective Reynolds number using the nonlinear model. In each case, the time evolution of the mean flow displays short eastward-flow structures sandwiched between broader westward wind patterns. These higher vertical modes of oscillation, frequently excited in transient disrupted regimes, share qualitative similarities with the periodicity disruption observed in 2016. Figure 3d shows that the characteristic timescale for recovery diverges as the system approaches the bifurcation point Re_c2. We found similar responses to a pulse in zonal mean momentum, and for the spin-up of the system from a state of rest. The divergence of the recovery time is observed both in the quasilinear model and in the nonlinear model. The swift increase in recovery timescale observed as the system approaches a bifurcation point is a generic feature of dynamical systems, often referred to as "critical slowing down" [23,24]. In the context of the climate system, critical slowing down has proven useful for detecting early warnings of a bifurcation point [25].

Conclusions and perspectives. Our study demonstrates that erratic mean flow reversals are recovered with a simple monochromatic wave forcing, provided that the forcing strength is sufficiently large. This suggests that similar states previously observed with more complex forcing [7,12] can partly be attributed to the intrinsic dynamics of stably stratified fluids, rather than to fluctuations of the forcing itself. The quasiperiodic route to chaos found in both our quasilinear and fully nonlinear simulations reveals that increasing the forcing strength leads to the excitation of fast and shallow bottom-trapped modes nested in deeper and slower vertical modes. These fast bottom-trapped reversals are also excited during the transient response to an external perturbation. Most importantly, our results have crucial implications for the interpretation of the variability of a QBO-like oscillation: (i) the existence of a second bifurcation is robust in a hierarchy of models (suggesting that it may exist for the atmosphere), (ii) the proximity to this second bifurcation has a strong effect on the response of the oscillation to external perturbations, and consequently (iii) the intrinsic variability of a given oscillation is key to interpreting its response to external perturbations. Several aspects of actual planetary flows, such as seasonal forcing, rotation, meridional circulation and two-way coupling between stratospheric and tropospheric dynamics, are omitted in the simplified flow models considered in this letter.
The interplay between intrinsic modes of variability and these additional features will need to be addressed in future work, but we expect that the existence of a second bifurcation, as well as the critical slowing down approaching this bifurcation, will be robust throughout the whole hierarchy of geophysical flow models with stable stratification. Exploratory 3D simulations with rotation, presented in the supplementary materials, support our 2D results.

* antoine.venaille@ens-lyon.fr

METHODS AND DETAILS ON NUMERICAL SIMULATIONS

1D quasilinear simulations. Using a static Wentzel-Kramers-Brillouin (WKB) approximation to compute the wave field for a given mean flow [S4], the wave-induced Reynolds stress u′w′ is parametrised by the formula given in Eq. (S1). This formula is derived under the hydrostatic balance assumption (valid in the limit k|c ± u|/N → 0) and the weak damping assumption (valid in the limits γ/(k|c ± u|) ≪ 1 and νN²/(k|c ± u|³) ≪ 1). Assuming that the characteristic vertical length for u is Λ, the small parameter needed in the WKB approach is the Froude number Fr = c/(ΛN) → 0. In practice, the different assumptions are most certainly violated. However, this set of equations has long been recognized as a useful model to probe the salient features of QBO reversals. We solve numerically using a centered second-order finite difference method with grid size δz = H/60, and a second-order Adams-Bashforth scheme with time step δt = 0.005T, where T = cΛ/F₀. A no-slip condition is used at the bottom boundary, z = 0, and a free-slip condition is used at the upper boundary, z = H. Singularities in Eq. (S1) appear when u = c (critical layers). These singularities are treated as follows: at a given height z = z_c, if the absolute value of u locally reaches a value higher than c, then the corresponding exponential is set to zero for all z ≥ z_c. The definition and value of each of the model's dimensionless numbers are given in Table I.
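For orientation, here is a minimal Python sketch of this kind of quasilinear update, in the spirit of the Holton-Lindzen-Plumb closure. The explicit flux formula below is the standard two-wave WKB attenuation law written from the definitions above, not a copy of the paper's Eq. (S1); the parameters are illustrative, and forward Euler stands in for the Adams-Bashforth scheme described above.

import numpy as np

# Illustrative, roughly nondimensional parameters (not the paper's Table I values).
H, nz = 3.5, 60
z = np.linspace(0.0, H, nz)
dz = z[1] - z[0]
nu, gamma, N, k, c, F0 = 0.02, 0.0, 1.0, 1.0, 1.0, 0.05   # gamma = 0: alpha = 1
dt = 0.001

def reynolds_stress(u):
    """Momentum flux of two counter-propagating waves, damped along the ray.

    Attenuation rate per unit height: (gamma + nu*m**2) / w_g, with vertical
    wavenumber m = N/|c -+ u| and vertical group velocity w_g = k*(c -+ u)**2 / N.
    """
    F = np.zeros_like(u)
    for s in (+1.0, -1.0):                          # eastward / westward wave
        cs = s * c - u                              # intrinsic phase speed
        cs = np.where(np.abs(cs) < 1e-3,            # crude critical-layer guard
                      1e-3 * np.sign(cs + 1e-12), cs)
        rate = (gamma + nu * (N / cs) ** 2) * N / (k * cs ** 2)
        F += s * F0 * np.exp(-np.cumsum(np.abs(rate)) * dz)
    return F

u = 0.01 * np.sin(np.pi * z / H)                    # small seed flow
for _ in range(200000):
    F = reynolds_stress(u)
    dFdz = np.gradient(F, dz)
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dz**2
    u += dt * (-dFdz + nu * lap)                    # mean-flow equation (1)
    u[0] = 0.0                                      # no-slip bottom
    u[-1] = u[-2]                                   # free-slip top

For suitable forcing strengths, plotting u(z, t) from such a loop produces the familiar downward-propagating reversals of the period-1 regime.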
2D nonlinear simulations. The fully nonlinear simulations are conducted using the MIT general circulation model [S1], solving the 2D Navier-Stokes equations under the Boussinesq and hydrostatic approximations, where u = u î + w k̂ is the velocity field; u_h is its projection on the horizontal plane (x, y); b = g(ρ₀ − ρ)/ρ₀ is the buoyancy; ρ is the density and ρ₀ a reference density; g is the gravitational acceleration; φ = P/ρ₀ + gz, with P the pressure; ν is the viscosity coefficient; κ is the buoyancy diffusion coefficient; and γ_u and γ are the rates at which the momentum and buoyancy are linearly restored to the reference profiles u₀ and b₀, respectively. The domain is a Cartesian grid, periodic in the zonal direction, with zonal length L_x = 2π/k and height H. The horizontal and vertical resolutions are respectively δx = L/26 and δz = H/200. A free-slip condition is used at the bottom boundary, while a free-surface condition is used at the top. The zonal momentum equation is forced at the bottom boundary using a linear velocity relaxation γ_u = δ_b/τ_u, where τ_u is a relaxation timescale and δ_b is a delta function equal to 1 at the bottom grid point and 0 at all other vertical levels. In this last grid point, velocity is relaxed to a zonally periodic standing wave pattern, where F₀ controls the wave momentum flux amplitude at the bottom. This forcing is intended to generate a standing internal gravity wave field while enforcing an effective no-slip condition for the mean flow u. Buoyancy is relaxed to the linear profile b₀ = N²z. To avoid any wave reflection at the upper free surface, the vertical grid spacing and the Newtonian cooling are both increased in the 20 upper grid layers.

Table I (parameter estimates): For the stratospheric values, we considered F₀ = 3−10 × 10⁻³ m²s⁻², c = 25 m s⁻¹, γ = 0.5−1.5 × 10⁻⁶ s⁻¹ and N = 2.2 × 10⁻² s⁻¹ based on [S3]. We considered Λ ~ 10 km based on the QBO observations (see Fig. 1a in the letter). We consider ν = 0.01−0.3 m²s⁻¹, corresponding to the turbulent vertical eddy diffusivity measured in the lower stratosphere (see e.g. [S2, S5]). The closure used in the 1D quasilinear model reduces the number of dimensionless parameters down to three by assuming ω/N → 0 (hydrostatic approximation) and a low Froude number limit Fr → 0 (WKB approximation).

Poincaré sections. For each combination of parameters (Re, α), experiments are first spun up over a time t_e = 1500T, where T = cΛ/F₀. This time is sufficient for the system to reach its attractor. To combine the information of more than 10⁶ simulations into a single bifurcation diagram, we first select two vertical levels: z₁ near the surface and z₂ aloft. Resuming the simulation at statistical equilibrium (t > t_e), we store the values of u(z₂) at which u(z₁) crosses zero in the set (S4). The simulations are stopped once 200 values are stored (i.e. after 200 reversals of the lower-level mean flow u(z₁)). For each simulation associated with a couple of parameters (Re, α), we build a histogram of the values stored in (S4), using 1000 bins in the range [−c, c]. Histograms corresponding to all values of Re for a fixed α = 0.6 are drawn horizontally in Figure 1c using a binary colour map. To collapse the information of the Poincaré sections into a 2D bifurcation diagram (α, Re⁻¹), we compute, for each histogram of the set (S4), the ratio of populated bins to the total number of reversals. This ratio, with values in [0, 1], provides an empirical estimate of the spread of each histogram's distribution and allows for an extensive classification of the different dynamical regimes (see Fig. 1d in the letter).

Recovery from a perturbation. We consider perturbations to a given period-1 QBO-like oscillation. Three types of external perturbation are applied at t = t_p: (i) a pulse in wave amplitude (representing a sudden increase of the underlying tropical convection); (ii) a body force acting directly on the mean flow (representing a reorganization of the mean flow due to extratropical perturbations); (iii) a reboot of the oscillations from a state of rest (the recovery time is then equivalent to the spin-up time). Perturbation (i) is modeled using a time-dependent momentum flux amplitude F₀ in Eq. (S1) for the quasilinear model and in Eq. (S3) for the nonlinear simulations, where F₀,p is a constant forcing amplitude corresponding to a periodic regime with period T_qbo. Perturbation (ii) is represented in the quasilinear model by an additional body forcing term F_bulk on the r.h.s. of the mean-flow equation (1), where z_p = 0.2 z_max sets the height of the perturbation. Results are insensitive to the specific choice of z_p. For all three types of external perturbation, the system is driven away from its steady-state period-1 limit cycle and then freely recovers back to the cycle. To estimate the recovery timescale, we first introduce the running mean square of the flow over one period T_qbo of the limit cycle. At steady-state equilibrium, this running mean square has a constant value u²_∞.
Assuming a pulse shorter than the period of the limit cycle (see Fig. 3c in the letter), occurring at time t_p, the recovery timescale is then defined from the relaxation of this running mean square back to u²_∞. We reproduced Figure 3d of the letter in logarithmic scale in order to exhibit the power-law-like scaling of the recovery timescale as the system approaches Re_c2. It proved very difficult to deduce a precise value for the critical exponent, as the uncertainty on the value of Re_c2 propagates into it. However, the critical exponent remains close to −1.

VERTICAL FLOW STRUCTURE IN DIFFERENT REGIMES AND EFFECT OF RESOLUTION

Bifurcations in the quasilinear model. In order to develop intuition for the underlying dynamics of the bifurcation diagrams of Fig. 1d, we show in Fig. S2 phase-space trajectories (panels b-e) and Hovmöller diagrams of the mean flow (panels f-i) for four selected values of the Reynolds number. Shown are examples of a period-1 limit cycle (panels b and f), a quasiperiodic oscillation (panels c and g), a frequency-locked oscillation with frequency ratio 1/3 (panels d and h), and a chaotic oscillation (panels e and i). Additional bifurcation diagrams obtained for different values of α are shown in Fig. S3. Although they share a common qualitative structure, the bifurcation diagrams show distinct interesting features. For example, Fig. S3b shows that the upper quasiperiodic region vanishes almost entirely when α approaches 1/3. Fig. S3d (α = 1) shows a spontaneous breaking of the symmetry U ↔ −U occurring in one of the frequency-locked states (Re⁻¹ ~ 0.045), while all the frequency-locked regimes preserve this symmetry at lower values of α in panels a, b and c.

Fig. S2: Bifurcations in the 1D quasilinear model. a. A Poincaré section is shown for varying values of Re⁻¹ and α = 0.6 (see Methods). b. Projection of the phase-space trajectory in the 3D space (u₁, u₂, u₃) = (u(z = 0.1Λ, t), u(z = 1.5Λ, t), u(z = 3Λ, t)) for Re⁻¹ = 0.059. c. Same for Re⁻¹ = 0.045. d. Same for Re⁻¹ = 0.025. e. Same for Re⁻¹ = 0.0004. f. Hovmöller diagram of the mean flow u(z, t) for Re⁻¹ = 0.059. Time is rescaled by T = cΛ/F₀. The velocity u ranges from −c (blue) to +c (red). The horizontal dotted lines highlight the heights z = 0.1Λ, z = 1.5Λ and z = 3Λ associated with the 3D projections plotted in panels b to e. g. Same for Re⁻¹ = 0.045. h. Same for Re⁻¹ = 0.025. i. Same for Re⁻¹ = 0.0004.

Effect of the resolution in the quasilinear model. In order to test the robustness of the quasilinear model results to resolution, we show in Fig. S4 five bifurcation diagrams for which the vertical resolution has been successively doubled from δz = 3.5/15 to 3.5/240. Panel c corresponds to the reference resolution used in Fig. 1d. Results show a strong dependence on vertical resolution, in particular for the structure of the embedded frequency-locked regimes. However, the essential feature relevant to the periodicity disruption is the second bifurcation point Re_c2, marking the transition from periodic to quasiperiodic oscillations. The results of Fig. S4 show that the value of Re_c2 has converged at a resolution of δz = 3.5/60, corresponding to the reference resolution used in this work. As far as critical slowing down is concerned, the response of the system approaching the bifurcation from the periodic to the quasi-periodic state will be robust to higher resolutions, as the threshold Re_c2 and the nature of the bifurcation remain the same. However, the details of the response, including the vertical structure of transient oscillations and the prefactor of the recovery-time power law, may be affected by a change in resolution.
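To make the recovery-timescale diagnostic defined in the Methods above concrete, a minimal estimator consistent with the running-mean-square definition might look like this; the 5% convergence threshold is an arbitrary choice of this sketch, not a value from the paper.

import numpy as np

def recovery_time(t, u, T_qbo, t_p, tol=0.05):
    """Estimate the recovery timescale after a perturbation at time t_p.

    t : (n,) uniformly spaced sample times; u : (n,) mean flow at one height.
    Computes the running mean square of u over one QBO period and returns
    the last time it deviates from its asymptotic value by more than tol.
    """
    dt = t[1] - t[0]
    win = max(1, int(round(T_qbo / dt)))
    kernel = np.ones(win) / win
    msq = np.convolve(u**2, kernel, mode="same")   # running <u^2>(t)
    msq_inf = msq[-win:].mean()                    # asymptotic value u^2_inf
    deviating = np.abs(msq - msq_inf) > tol * msq_inf
    idx = np.where(deviating & (t > t_p))[0]
    return (t[idx[-1]] - t_p) if idx.size else 0.0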
3D NONLINEAR SIMULATIONS WITH ROTATION

In this section, we solve the 3D Navier-Stokes equations with rotation, approximated by an equatorial beta-plane. The horizontal momentum equation in (S2) then acquires the additional Coriolis term βy k̂ × u_h, where the velocity vector field is now 3D, with u = u î + v ĵ + w k̂, and β is the Rossby parameter. Let L_y denote the length of the added meridional dimension and δy the associated resolution. We consider a horizontal aspect ratio L_y/L_x = 1 and a resolution ratio δy/δx = 1, with free-slip lateral boundary conditions at y = ±L_y/2. We explore a weak-rotation case, for which the equatorial radius of deformation, L_d = (NΛ/β)^(1/2), is much larger than the meridional extent of the domain: L_d/L_y = 96. All other parameters are identical to the 2D nonlinear simulations, including the forcing, which is constant along the y direction. The chosen initial condition breaks the meridional invariance.
Hemagglutination Inhibition (HAI) antibody landscapes after vaccination with H7Nx virus-like particles

Background: A systematic evaluation of the antigenic differences of the H7 influenza hemagglutinin (HA) proteins, especially for the viruses isolated after 2016, is limited. The purpose of this study was to investigate the antigenic differences of major H7 strains, with an ultimate aim to discover H7 HA proteins that can elicit protective receptor-binding antibodies against co-circulating H7 influenza strains.

Method: A panel of eight H7 influenza strains was selected from 3,633 H7 HA amino acid sequences identified over the past two decades (2000-2018). The sequences were expressed on the surface of virus-like particles (VLPs) and used to vaccinate C57BL/6 mice. Serum samples were collected and tested for hemagglutination-inhibition (HAI) activity. The vaccinated mice were challenged with a lethal dose of H7N9 virus, A/Anhui/1/2013.

Results: VLPs expressing the H7 HA antigens elicited broadly reactive antibodies against each of the selected H7 HAs, except the A/Turkey/Italy/589/2000 (Italy/00) H7 HA. A putative glycosylation due to an A169T substitution in antigenic site B was identified as a unique antigenic feature of Italy/00. Introduction of the putative glycosylation site (H7 HA-A169T) significantly altered the antigenic profile of the HA of the A/Anhui/1/2013 (H7N9) strain.

Conclusion: This study identified key amino acid mutations that result in severe vaccine mismatches for future H7 epidemics. Future universal influenza vaccine candidates will need to focus on viral variants with these key mutations.

Introduction

Avian-origin influenza A hemagglutinin subtype 7 viruses (H7 AI viruses) circulate primarily in avian hosts. Humans are dead-end hosts for these virus infections, and H7 epidemics rarely persist among humans. However, some H7 influenza viruses may mutate in the human respiratory tract and cause severe recurring epidemics [1]. There were six epidemics caused by Asian H7N9 influenza viruses between 2013 and 2018, raising concern that this subtype may have the potential to cause an influenza virus pandemic [2-4]. H7N2 influenza viruses caused epidemics in 2002 and 2003 and silently circulated among feline species and/or unknown reservoirs for fourteen years [5]. In the northeastern U.S., H7N2 influenza viruses have high affinity for the mammalian respiratory tract and are highly adapted to mammalian species, with increased affinity toward α2-6-linked sialic acid [6]. In 2016, feline H7N2 influenza viruses transmitted from shelter cats to an attending veterinarian [7]. Even without adaptation, H7 influenza virus strains have caused at least five human epidemics since 2000: 1) H7N1 influenza viruses infected people in Italy, 2) H7N2 influenza viruses infected people in the northeastern U.S., 3) two distinct H7N3 influenza viruses infected people in North American and Eurasian countries, 4) one case of H7N4 infection occurred in China in 2018, and 5) people in Europe were infected with H7N7 influenza viruses [8]. These epidemics warn that another avian influenza virus of the H7 subtype may infect and begin transmitting between humans to initiate the next H7 influenza virus pandemic. For prompt production and distribution of vaccines during a pandemic emergency, the World Health Organization (WHO) has stockpiled candidate vaccine viruses (CVVs) for all H7 influenza viruses [9].
However, the antigenic differences of stockpiled CVVs have not been investigated, especially for the H7N9 viruses isolated after 2016 [10]. To prepare for the next H7 influenza virus epidemics, it is imperative to identify the antigenic differences of co-circulating H7 HA proteins and clarify the target coverage of each antigen. Only a small number of studies have investigated the antigenic differences of multiple H7 strains. Vaccination with divergent H7 HA immunogens isolated in 2009 from North American or Eurasian H7Nx viruses elicits immune responses that protect against Asian H7N9 influenza viruses [11]. Anti-H7 HA antiserum recovered from humans vaccinated with A/Anhui/1/13 H7 HA recombinant protein has broad binding activity to diverse H7 strains, including A/feline/New York/16-040082-1/2016 (H7N2) and the H7 HA from the A/turkey/Indiana/16-001403-1/2016 (H7N8) virus [12]. There was strong two-way cross-reactivity among H7N9, H7N2, H7N3 and H7N7 influenza viruses [13]. However, it is difficult to draw conclusions about the overall antigenic differences of co-circulating H7 influenza strains since each study used different representative reference strains and used antigens in different formats. In addition, these H7 HA antigens were isolated prior to 2016 and do not represent the current H7 HA variants. In this study, we aimed to investigate the antigenic differences of H7 influenza HA proteins that co-circulated in humans over the last two decades. Study design The overall study design is summarized in Fig 1. Briefly, genetic analysis was performed to select representative H7 strains between 2000 and 2020. Selected H7 HA sequences were expressed as virus like particles (VLPs) and subjected to antigenic landscape analysis. Since it was not plausible to conduct cross-challenge studies across all eight viruses, a cross-HAI assay was chosen for the antigenic landscaping. The HAI cut-off for protection was determined based on a mouse challenge study, which is described prior to the cross-HAI titer analysis. A mutagenesis study followed to identify the critical mutation responsible for major antigenic changes. Alignment of HA amino acid sequences and virus like particle preparation The H7 HA amino acid sequences uploaded to the Global Initiative on Sharing All Influenza Data (GISAID) from 2000 to 2020 were downloaded. The sequences were aligned using Geneious software (Auckland, New Zealand). The amino acid 20-300 (HA1) region was extracted, and partial or duplicate sequences were eliminated. The sequences were divided into three time periods/searches (2000-2012, 2013-2020, and 2013-2020 non-H7N9 sequences). The trimmed HA1 sequences of each group were aligned using the MUSCLE algorithm and clustered by 97% identity. Each cluster was illustrated as a pie chart using PRISM GraphPad Software (San Diego, CA, USA), and a panel of nine H7 strains representing the clusters was selected. A total of nine H7 HA sequences were expressed on the surface of virus like particles (VLPs), as previously described [14]. Briefly, the full-length H7 HA amino acid sequences were subjected to codon optimization for expression in a human cell line (Genewiz, Washington, DC, USA) and inserted into the pTR600 expression vector.
HEK 293T cells were transiently cotransfected (Lipofectamine™ 3000, Thermo Fisher Scientific, Waltham, MA, USA) with plasmids expressing H7 HAs, HIV-1 Gag (optimized for expression in mammalian cells; Genewiz, Washington, DC, USA), and NA (A/Thailand/1(KAN-1)/2004 H5N1) (optimized for expression in mammalian cells; Genewiz, Washington, DC, USA). The cells were incubated for 72 h at 37˚C (Medigen Inc., Rockville, MD, USA). Supernatant was centrifuged at low speed and filtered through a 0.22-μm sterile filter. Filtered supernatant was purified via ultracentrifugation (100,000 g through 20% glycerol, weight per volume) for 4 h at 4˚C. The pellets were subsequently resuspended in PBS (pH 7.2) and stored in single-use aliquots at 4˚C until use. The HA content of H7 VLPs was determined as previously described, with slight modification [15]. Briefly, a high-affinity, 96-well flat-bottom enzyme-linked immunosorbent assay (ELISA) plate was coated with 5 to 10 μg of total protein of VLPs and serial dilutions of a recombinant H7 antigen (A/Anhui/1/2013 HA generated in house as previously described {Jang, 2020 #487}) in ELISA carbonate buffer (50 mM carbonate buffer, pH 9.5), and the plate was incubated overnight at 4˚C. The next morning, plates were washed in PBS with 0.05% Tween 20 (PBST), and then nonspecific epitopes were blocked with 1% bovine serum albumin (BSA) in PBST solution for 1 h at room temperature (RT). Buffer was removed, and then a stalk-specific group 2 antibody (CR8020 {Tan, 2014 #488}) was added to the plate and incubated for two hours at 37˚C. Plates were washed and probed with goat anti-human IgG horseradish peroxidase-conjugated secondary antibody at a 1:3000 dilution and incubated for 2 h at 37˚C. Plates were washed 7 times with the wash buffer prior to development with 100 μL of 0.1% 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulphonic acid; ABTS) solution with 0.05% H2O2 for 40 min at 37˚C. The reaction was terminated with 1% (w/v) sodium dodecyl sulfate (SDS). Colorimetric absorbance at 414 nm was measured using a PowerWaveXS (Biotek, Winooski, VT, USA) plate reader. Background was subtracted from negative wells. Linear regression standard curve analysis was performed using the known concentrations of recombinant standard antigen to estimate the HA content in VLP lots. Mouse study C57BL/6 mice (Mus musculus, females, 6 to 8 weeks old) were purchased from Jackson Laboratory (Bar Harbor, ME, USA) and housed in microisolator units. The mice were allowed free access to food and water and were cared for under USDA guidelines for laboratory animals. All procedures were reviewed and approved by the Institutional Animal Care and Use Committee (IACUC). Mice (8 mice per group) were intramuscularly injected twice at four-week intervals with each VLP (HA content = 3 μg) with AddaVax™ adjuvant (Invivogen, San Diego, CA, USA). Mice were bled at week 8. Mice were transferred to a biosafety level 3 (BSL-3) facility at the earliest availability (week 12). For viral challenge, mice were briefly anesthetized and infected with a 100 LD50 dose of A/Anhui/1/2013 H7N9 via the intranasal route (1×10^3 PFU/0.05 mL). At 4 days post-challenge, three mice in each group were randomly selected and sacrificed to harvest lung tissue. Remaining mice were monitored for weight loss and euthanized at 14 days post-challenge. Weight loss of more than 25% was used as the primary measurement for determination of the humane endpoint.
Also, dyspnea, lethargy, response to external stimuli, and other signs of respiratory distress were closely monitored for determination of the humane endpoint. All procedures were in accordance with the NRC Guide for Care and Use of Laboratory Animals, the Animal Welfare Act, and the CDC/NIH Biosafety in Microbiological and Biomedical Laboratories (IACUC number A2017 11-021-Y3-A11). Hemagglutination-Inhibition (HAI) assay To evaluate the humoral response to each vaccination, blood was collected via submandibular bleeding using a lancet and transferred to a microfuge tube. Tubes were incubated at room temperature for at least 30 min prior to centrifugation; sera were collected and frozen at −20˚C ± 5˚C. A hemagglutination inhibition (HAI) assay was used to assess receptor-binding antibodies to the HA protein that inhibit agglutination of turkey red blood cells (TRBCs). The protocol was taken from the CDC laboratory influenza surveillance manual. To inactivate nonspecific inhibitors, mouse sera were treated with receptor-destroying enzyme (RDE, Denka Seiken, Co., Japan) prior to being tested. Three parts of RDE were added to one part sera and incubated overnight at 37˚C. The RDE was inactivated at 56˚C for 30 min; when cooled, 6 parts of sterile PBS were added to the sera, which were kept at 4˚C until use. RDE-treated sera were two-fold serially diluted in v-bottom microtiter plates. Twenty-five μL of VLPs or virus at 8 HAU/50 μL was added to each well (4 HAU/25 μL). Plates were covered and incubated with virus for 20 min at room temperature before adding 0.8% TRBCs in PBS. The plates were mixed by agitation and covered; the RBCs were then allowed to settle for 30 min at room temperature. The HAI titer was determined as the reciprocal dilution of the last well that contained non-agglutinated RBCs. Negative (serum from a naïve mouse) and positive serum controls (serum from an H7 VLP-vaccinated mouse from a previous study) were included on each plate. All mice were negative (HAI < 1:10) for pre-existing antibodies to currently circulating human influenza viruses prior to study onset. Plaque Forming Assay (PFA) Viral titers were determined using a plaque forming assay with 1×10^6 Madin-Darby Canine Kidney (MDCK) cells, as previously described [16]. Briefly, lung samples collected at 4 days post-challenge were snap frozen and kept at −80˚C until processing. Lungs were serially diluted (10^0 to 10^5) with sterilized phosphate buffered saline (PBS) and overlaid onto confluent MDCK cell layers for 1 h in 200 μL of DMEM supplemented with penicillin-streptomycin. Cells were washed after the 1-hour incubation, and the DMEM was replaced with 3 mL of 1.2% Avicel (FMC BioPolymer; Philadelphia, PA)-MEM media supplemented with 1 μg/mL TPCK-treated trypsin. After 48 h incubation at 37˚C with 5% CO2, the overlay was removed, the cells were washed twice with sterile PBS, fixed with 10% buffered formalin, and stained for 15 min with 1% crystal violet. Cells were washed with tap water and allowed to dry. Plaques were counted and the plaque forming units calculated (PFU/mL). Determination of HAI cut-off to predict protection against challenge Receiver operating characteristic (ROC) curve analysis was performed between HAI titer and protection against Anhui/13 challenge, as previously described [17]. The ROC curve illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied.
To define protection in a binary format, we considered an individual mouse protected if it maintained between 90-100% of its original body weight during the entire challenge study. The sensitivity and specificity of four cut-off values (VLP HAI titer = 40, 80, 160, and 320) were analyzed for each body weight cut-off. The sensitivity was calculated as "the number of mice which showed a hemagglutination inhibition (HAI) titer ≥ the cut-off and were protected in the challenge study / the number of all protected mice". The specificity was calculated as "the number of mice which showed an HAI titer < the cut-off and were unprotected in the challenge study / the number of all unprotected mice". The ROC curve was generated by connecting plots of sensitivity% versus 100-specificity% (false positive rate). The area under the curve (AUC) and Youden's index (Sensitivity + Specificity − 1) were calculated with Prism (GraphPad Software). The optimal cut-off was determined based on the highest AUC or Youden's index, to be used as a surrogate of protection. Site directed mutagenesis The H7 HA numbering was based on a previous report [18]. The amino acids at residues 167 to 170 were changed from NAAF to NATF in the putative antigenic site B of the A/Anhui/1/2013 H7N9 HA. The NATF amino acids are located at this position in the A/Turkey/Italy/589/2000 H7N1 HA molecule. This single amino acid substitution is expected to introduce an N-glycosylation site into antigenic site B, located near the receptor binding site. The site directed mutagenesis was conducted with the QuikChange II Site-Directed Mutagenesis Kit (Agilent, Santa Clara, CA, United States) in accordance with the manufacturer's instructions. The Primer3 program (v. 0.4.0) was used to design mutagenesis primers. The plasmid was expressed as VLPs as described above. Expressed mutant VLPs were electrophoresed on a 10% Bis-Tris sodium dodecyl sulfate-polyacrylamide gel (SDS-PAGE) and stained with Coomassie blue (Bio-Rad, CA, USA). The molecular weights for HA0 and HA1 were estimated based on a previous report {Alvarado-Facundo, 2016 #432} and the online Peptide and Protein Molecular Weight Calculator (https://www.aatbio.com/tools/calculate-peptide-and-protein-molecular-weight-mw). The Anhui/13 A169T H7 VLP was used to immunize eight C57BL/6 mice at day 0 and week 4. We measured the antigenic breadth of the antisera collected at week 8. At week 8, all mice were challenged with Anhui/13 H7N9 wild type virus, as described above, and monitored for weight loss, survival, and lung viral titer at 4 days post-challenge. Statistical analysis The difference in serum HAI titer and lung viral titer among groups was analyzed by ordinary one-way ANOVA, followed by Tukey's multiple comparison test. The difference in body weight loss at each time point was tested by repeated measures one-way ANOVA followed by Tukey's multiple comparison test. All statistical analysis was performed using Prism GraphPad Software. Prior to the Asian H7N9 influenza virus outbreaks, the Eurasian and North American lineages represented the majority of H7 HA sequences in the database (53.14% and 45.95%, respectively) (Fig 2A). Interestingly, most of the Eurasian H7Nx influenza viruses isolated between 2000 and 2020 had high HA amino acid similarity (95% or more) to the oldest strain in our panel, A/Mallard/Netherland/12/2000 H7N3 (Table 1).
Instead of a slow drift of HA1 amino acid sequences, genetic diversification of the H7Nx influenza viruses was driven by genetic reassortment that resulted in each cluster sharing unique neuraminidase subtypes (N1, N3, N7, N9). The North American lineage influenza viruses isolated between 2000-2012 were further subdivided into two distinct clusters that shared 92.5% amino acid similarity with each other (green and yellow segments in Fig 2A). During this 12-year period, the North American H7N3 influenza viruses had little genetic drift (<3%) and did not evolve into divergent subtypes (teal pie in Fig 2A and 2B). The North American H7N2 influenza viruses appeared only in epidemics in the early 2000s (2000-2003) and were not detected thereafter (yellow pie in Fig 2A). The majority of viral sequences isolated from 2013-2020 were Anhui/13-like H7N9 influenza viruses (Fig 2B). Approximately 5.12% of the HA1 sequences had a 3-5% difference in amino acid sequence and were represented as separate clusters from the Anhui/13-like HA sequences (Fig 2B). This small cluster of HA sequences consisted of the A/Guangdong/17SF003/2016 H7N9 (Guangdong/16)-like viruses, which evolved from Anhui/13 and clustered into a separate lineage in 2016-2017. Another separate phylogenetic cluster of Asian H7N9 viruses was the A/Shanghai/1/13 H7N9 (Shanghai/13)-like viruses. Shanghai/13 was one of the earliest human H7N9 isolates in spring 2013, and it evolved into a separate phylogenetic cluster from the Anhui/13-like viruses [19,20]. In this sequence analysis, the Shanghai/13 virus itself belonged to the Anhui/13-like viruses due to the high homology of its HA amino acid sequence (98.39%; 9 AA difference in HA1). However, the derivatives of Shanghai/13 had divergent sequences (<96% AA homology, >17 AA difference) and formed a separate cluster that occupies ~1% of the overall HA sequences (Fig 2B). The majority of non-Asian H7N9 influenza strain sequences uploaded to the GISAID database between 2013 and 2020 were North American H7N3 influenza virus derivatives, which represented ~26% of the HA amino acid sequences prior to the 2013 Asian H7N9 influenza virus outbreaks (Fig 2C). Most of the North American H7 influenza viruses were H7N3 viruses designated into four distinct HA sequence clusters. The A/American green-winged teal/CA/2015 H7N3 virus, which is the representative strain of the second largest cluster, is most likely derived from the H7N3 A/Blue-winged teal/Ohio/658/2004 (Ohio/04) isolate. Interestingly, the northeastern U.S. H7N2 strains have rarely been detected since 2004, except for one incident at an animal shelter in 2016 [7]. There are only 10 isolates that belong to the Eurasian lineage, but this is most likely due to sampling bias toward Asian H7N9 isolates in most Asian countries during that time period. All ten isolates had high homology to the NL/00 (H7N3) influenza virus. Selection of H7 panel strains The panel of H7 influenza strains was selected to represent the antigenic diversity of H7Nx viruses during the last two decades. Asian H7N9 strains that are known to be antigenically distinct from each other were selected [9]. For non-Asian H7N9 strains, three Eurasian strains and two North American strains were selected based upon remoteness in geography and time of isolation (Table 1 and Fig 3). The amino acid difference among Eurasian strains ranged between 1.61-5.14%, despite dispersed isolation locations and collection time points. The North American strains shared ~81-86% amino acid homology with the Eurasian strains.
Even though the Ohio/04 and New York/03 strains were isolated within a year of each other from geographically similar regions, they shared only 92.5% of the same HA amino acids. It was interesting that only a few mutations were observed in the putative antigenic sites of the nine strains isolated over two decades (Table 2). Of note, the hallmark mutation that causes N-linked glycosylation in antigenic site B was observed in Italy/00 (Table 2, blue color-coded and asterisk). Determination of HAI cut-off for protection Mice were vaccinated with virus-like particles expressing the panel H7 HA sequences and challenged with Anhui/13 H7N9 virus. This challenge study was conducted to determine the HAI cut-off for protection. All vaccinated mice had high-titer antibodies with HAI activity to the Anhui/13 H7N9 virus except those vaccinated with the NY/02 VLPs (Fig 4A). The HAI titers against live Anhui/13 virus showed a similar pattern, albeit with lower titers (Fig 4B). The level of cross-HAI reactivity did not directly correlate with antigenic similarity (Table 1 and Fig 4). Following challenge with Anhui/13, mice were observed for clinical signs and mortality (Fig 5). To determine protection, average body weight loss of 5% or less was considered minimal body weight loss (dotted line in Fig 5A). Mock-vaccinated mice lost greater than 15% body weight by day 7 post-infection, similar to mice vaccinated with NY/02 VLPs (Fig 5A), with 60% of those mice reaching clinical endpoints and being sacrificed (Fig 5B). Mice vaccinated with Jiangxi/09 or Guangdong/16 lost 12% body weight. Mice vaccinated with the other VLPs lost between 5-8% body weight, except for mice vaccinated with Hunan/16, which maintained their average body weight for the entire challenge period. Most mice survived challenge (Fig 5B). One mouse died in the Jiangxi/09 group and 2 mice died in the Guangdong/16 group. Little to no virus was detectable in the lungs of mice vaccinated with Anhui/13 or Shanghai/13, and only one mouse in the Hunan/16 group had detectable virus (Fig 5C). The ROC curve analysis was conducted between HAI titer and protection data following the Anhui/13 challenge study (S1-S4 Tables). Protection against the Anhui/13 H7N9 challenge was determined if an individual maintained body weight between 90% and 100% of baseline. The selection of the cut-off was determined by two criteria: maximizing sensitivity (AUC of the curve) and maximizing the sum of sensitivity and specificity (Youden's index) [21]. As representative data for the ROC analysis, S1 Fig illustrates the ROC curve of each HAI cut-off for predicting protection, defined as 5% or less body weight loss. The highest sensitivity of the prediction, observed as the maximum area under the curve, occurred when the VLP HAI cut-off was 1:80 (S1B Fig). Youden's index (specificity + sensitivity − 1) was highest when the HAI cut-off was 1:160 (S1C Fig). Thus, we used 1:80 as the HAI titer cut-off expected to provide protection against a stringent challenge by each H7 influenza virus in the panel. Absolute protection is expected if the VLP HAI titer is higher than 160, while an HAI titer between 80-160 is expected to provide marginal protection. When applying the cut-offs determined by the ROC analyses, the pre-challenge HAI titer appears to correctly predict the level of protection from weight loss (Figs 4A and 5A) in a stringent Anhui/13 challenge. Cross-reactiveness amongst all H7 panel strains For comparison of cross-reactive HAI activity, the cut-off of 1:80 was also applied.
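As an illustration of how these cut-offs translate into protection calls, the following minimal Python sketch (with hypothetical titer values for illustration only; the 1:80 and 1:160 thresholds are the ones derived from the ROC analysis above) classifies VLP HAI titers into expected protection levels:

def classify_protection(hai_titer):
    # Thresholds from the ROC analysis above: a VLP HAI titer above 160
    # predicts absolute protection, 80-160 marginal protection, <80 none.
    if hai_titer > 160:
        return "absolute"
    if hai_titer >= 80:
        return "marginal"
    return "none"

# Hypothetical reciprocal HAI titers, for illustration only
for strain, titer in [("Anhui/13", 320), ("Italy/00", 40), ("Ohio/04", 80)]:
    print(strain, classify_protection(titer))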
The HAI antibodies elicited by each of the H7N9 VLPs were broadly cross-reactive (Fig 6). The cross-reactivity of each antiserum did not correlate with the amino acid sequence similarity of the HA (Table 1 and Fig 6). Mice vaccinated with the four Asian H7N9 strains (Anhui/13, Shanghai/13, Guangdong/16, and Hunan/16) had cross-reactivity to each other (Fig 6A-6D) but did not recognize Jiangxi/09, Italy/00, or Ohio/04 (Fig 6E-6G). Antisera to Jiangxi/09 or Ohio/04 showed broad cross-reactive HAI activity against all the H7 viruses in the panel, except Italy/00 (Fig 6). In contrast, anti-Italy/00 sera had broad HAI activity against all the viruses in the panel, except against Jiangxi/09 and Ohio/04 (Fig 6). Mice vaccinated with NY/02 VLPs elicited antibodies with HAI activity against the homologous NY/02 virus but did not recognize any of the other H7 viruses (Fig 6). Influence of glycosylation site With regard to the unique antigenic profile of Italy/00, we found a putative glycosylation site at HA 169 (H7 numbering from our own sequence alignment) (Table 2). Since the putative N-linked glycosylation was located in antigenic site B, we hypothesized that glycosylation at this location may be responsible for the unique antigenic profile of Italy/00. To test this hypothesis, we introduced a mutation into the HA nucleotide sequence of Anhui/13 (HA A169T) (Fig 7A) and looked for changes in the reactivity of the antisera elicited by each VLP vaccine (Fig 7B). Interestingly, the VLP expressing the Anhui/13 HA A169T mutation showed a significant decrease in HAI reactivity with antisera against Anhui/13 and Hunan/16, but no change with antisera against the other six viruses (Fig 7B). According to the predicted trimeric structure (Protein Data Bank ID 4N5J), the glycosylation site appears to be located on antigenic site B, next to the receptor binding site (Fig 7C). The VLPs expressing WT- and A169T-Anhui/13 H7 HAs were characterized by Coomassie blue staining. In the presence of PNGaseF, which removes N-linked glycans, the HA0 and HA1 bands of both WT and A169T VLPs were observed at similar levels (left two lanes). However, without PNGaseF treatment, the HA0 band for the A169T VLPs (red arrowhead in S3 Fig) showed a slightly higher molecular weight than that of the WT VLPs, which suggests the addition of glycosylation to the mutant VLPs. We also immunized C57BL/6 mice with the Anhui/13 A169T VLPs and examined the antigenic breadth of the antisera and the protective efficacy against Anhui/13 WT H7N9 challenge (Fig 8). Interestingly, the HAI titer to the Anhui/13 A169T VLPs (the homologous antigen) was significantly lower, with a larger standard deviation, than the HAI titer to the Anhui/13 WT (Figs 7C and 8A). The HAI activity to the Shanghai/13 VLPs was similar to the titer to the Anhui/13 A169T VLPs (Fig 8A). There was high reactivity to the New York/02 VLPs (Fig 8C), which was also observed with the other antisera for all 8 panel strains (Fig 6). The HAI reactivity to the Hunan/16, Guangdong/16, Jiangxi/09, Italy/00, and Ohio/04 H7 VLPs was significantly lower than the titer to the Anhui/13 WT and New York/02 H7 VLPs. Consistent with the high HAI titer to the Anhui/13 WT H7 VLPs, the mice were completely protected from weight loss and the onset of any clinical symptoms by the lethal challenge with the Anhui/13 WT H7N9 virus (Fig 8C and 8D).
There was no detectable infectious viral titer in the lungs collected at day 4 post-challenge, in clear contrast to the naïve control mice (Fig 8B). Discussion This study investigated the antigenic differences of the selected H7 panel influenza HA proteins. Since most available H7 HA sequences originated from major human infections, the selected H7 panel strains were similar to the list of candidate vaccine viruses (CVVs) from the WHO [10]. There was high similarity of the amino acid sequences in the putative HA antigenic sites (Table 2). In addition, antibodies elicited by these HA antigens had HAI activity to most of these H7 viruses (Fig 6). This was consistent with previous findings showing broad cross-reactivity among H7 influenza viruses isolated from both North American and Eurasian countries [12,22]. Before this study, Joseph et al. conducted a similar study with ten H7 influenza viruses isolated between 1971 and 2004 [23]. Their selection of panel strains was based on phylogenetic relations and geographic locations. The cross-reactive neutralizing antibody responses observed were similar to those in our study. For example, despite phylogenetic heterogeneity, the antisera for two H7N3 viruses isolated from American and Eurasian countries (A/chicken/Chile/4322/02 (H7N3) and A/turkey/England/63 (H7N3), respectively) were cross-reactive with each other. The antisera for A/turkey/VA/55/02 (H7N2) were poorly cross-reactive to other H7 viruses, while the H7N2 antigen could be recognized by other antisera. Our study extended the analyses to more recent H7 strains and identified a major mutation that could significantly alter the antigenic profile. In both our study and the work of Joseph et al., the H7N2 viruses isolated from the northeastern U.S. in the early 2000s showed a unique antigenic profile. In the phylogenetic analysis, the H7N2 viruses clustered separately from the other H7 viruses due to a large truncation at the putative receptor binding site of the H7 HA (S2 Fig). The unique structure of the HA appears to ease the binding of antibodies from other antisera, while the antisera for the H7N2 lacked a major epitope. Meanwhile, the HAI titers against the Italy/00 and Ohio/04 VLPs were low for all antisera, even the homologous antisera. Only anti-Italy/00 antibodies against the Italy/00 VLP were above the cut-off, and only anti-Ohio/04 antibodies against the Ohio/04 VLPs were above the cut-off. It seemed that, in comparison to the other VLPs, antibody access to these two VLPs was much more restricted. The presence of glycosylation at the receptor binding site also significantly impaired reactivity with the homologous antisera; even the antisera collected from mice vaccinated with the Anhui/13 A169T H7 VLPs detected the Anhui/13 WT H7 VLPs better (Fig 8A). This can explain why Italy/00, which has a glycosylation site near the receptor binding site, showed relatively low reactivity even with homologous antisera. We could not find a plausible explanation for the Ohio/04 VLPs but suspect that the structure of the Ohio/04-expressing VLP might have hindered antibody access. The level of cross-HAI activity among H7 HA proteins did not follow phylogenetic similarity or geographic origin. Instead, mutations that altered the glycosylation pattern around the receptor binding site (RBS) played a critical role in shaping the antigenic profile. A single amino acid substitution (HA A169T) caused a significant reduction in reactivity to antisera specific for Asian H7N9 strains.
The mutation did not significantly influence reactivity to the other antisera, which suggests that this antigenic site is not a dominant recognition site for those antibodies. The mutation was based on the distinctive antigenic profile of the Italy/00 H7 HA. This protein has an N-linked glycosylation site (NATF) at residues 167-170 of the HA molecule (Table 2). The putative location of the N-glycosylation is adjacent to the receptor binding site of the trimeric form of the HAs (Fig 6C). Spontaneous occurrence of N-linked glycosylation sites at the same location in H7 HA proteins was previously reported during the H7N1 epidemics in Italy in the early 2000s [24]. That study used reverse genetics to generate a virus with the corresponding mutation A149T (A169T by our numbering) and showed by electrophoresis that the single mutation alone resulted in glycosylation [24]. Also, the mutation was spontaneous and stable during passage of the H7N1 viruses in turkeys, which suggests that the mutation can naturally occur during circulation in poultry species [24]. There was no significant influence of the glycosylation site on host tropism; however, the potential change in antigenicity was not investigated [24]. The latest study, published in 2020, also verified that the corresponding mutation A151T (A169T by our numbering) occurred in one of the escape mutants and proved that the mutation results in glycosylation [25]. However, neither study investigated its influence on cross-reactivity to other H7 strains. The closest finding to our study was a study conducted by Zost demonstrating that a lysine-to-threonine mutation at residue 170 of the H3 HA (corresponding to H7 HA 169) resulted in a significant change in the glycosylation pattern at antigenic site B and an antigenic mismatch to the parental virus [26]. This was not limited to residue 169; glycosylation at a separate location (H7 HA 141T), which also occurs naturally, hindered the access of neutralizing antibodies to the epitope [18]. This motif was initially found seven amino acids upstream of antigenic site A in the A/Netherlands/219/2003 H7 HA [18]. Similar to this study, introduction of the corresponding mutation into the A/Shanghai/2/2013 H7 HA (identical to the HA sequence of Anhui/13) decreased the binding of specific monoclonal antibodies and facilitated HA-mediated entry of the virus [18]. Our study identified that a single amino acid mutation could significantly reduce reactivity to homologous strains, and there may be more signature mutations in H7 HAs that can result in vaccine mismatch. H7 HA vaccine strategies should aim to identify more such mutations and to cover such variants to prevent severe vaccine mismatches. The serum HAI assay has been known to be the best surrogate for protection {Dunning, 2016 #484}. Since the human challenge study conducted in the 1970s, the 1:40 HAI titer has been used to predict vaccine effectiveness when an appropriate challenge study is not plausible, such as in the annual flu vaccine approval process [27][28][29]. While the 1:40 HAI titer cut-off is sufficient to provide a rough prediction, the specificity of this prediction can be improved by increasing the HAI titer cut-off [28,30]. This is particularly true for subjects with higher revaccination risks, such as the elderly population [28,30]. Also, the cut-off should be optimized based on the format of the testing antigen.
The VLPs express the same HA amino acid sequences as the wild type viruses, but their three-dimensional structure and the surface distribution of the HA cannot be identical to those of the wild type virus {McCraw, 2018 #485}. Thus, HAI titers determined on the VLP platform can differ from HAI titers determined with wild type viruses. In our study in particular, we used VLPs as the immunization antigens, so the HAI titers were higher when the same platform (VLPs) was used for the assay than when live virus was used (Fig 3). Thus, we applied ROC analysis to optimize the H7 VLP HAI titer cut-off for predicting protection by the antibodies elicited by H7 HA vaccination [30]. The adjusted cut-off, 1:80 HAI units, was more useful for predicting protection against weight loss following Anhui/13 challenge than the 1:40 HAI titer. Our analysis, based on an optimized HAI cut-off for VLPs, can be applied to predict the protective efficacy of vaccines against multiple avian influenza variants that could be difficult to obtain or propagate for animal challenge studies. The serum HAI titer reflects only the protection mediated by receptor-binding antibodies. Influenza virus vaccines confer protection via diverse mechanisms, such as non-HAI antibodies or CD8+ cytotoxic T cells [12,31]. Lung viral clearance may require multiple immune mechanisms, including antibodies, cytokines, dendritic cells, and different T cell populations [32]. Blocking viral infection is known to be mediated by diverse mechanisms, such as neutralizing antibodies targeting non-receptor binding sites [33]. Until clear correlates of protection by non-HAI neutralizing antibodies or cell-mediated immune responses become available, the serum HAI titer will remain the most reliable indicator for evaluating influenza vaccine effectiveness. One inherent limitation of this study was that the mouse model was used to extrapolate the human antibody response to H7 HA immunization. Recent studies have used ferrets as an alternative, considering their high susceptibility to influenza virus and their lung physiology and sialic acid binding patterns, which are similar to those of humans [34,35]. Still, for antibody research, the ferret model might not be as useful, considering that ferret immunology has not been well characterized and there is no evidence that ferret antibodies can emulate epitope recognition by human antibodies. Rather, the mouse model has advantages in antibody research, such as better availability, genetic homogeneity (inbred strains), and the availability of diverse immunologic assay tools. Future studies on broadly reactive H7 HAs as vaccine candidates should evaluate their efficacy in a ferret challenge model. In conclusion, the data presented in this study demonstrate that cross-reactive antibodies are elicited among H7 HA proteins, but cross-reactivity does not correlate with the phylogenetic proximity or geographic origin of the influenza HA antigens. Key amino acid mutations at putative antigenic sites in the H7 HA proteins are important for the elicitation of broadly H7-reactive antibodies. Future studies will focus on developing vaccines to cover all known H7Nx influenza virus strains and future variants with key mutations. Supporting information S1 Fig. Determination of the HAI cutoff using Receiver Operating Characteristic (ROC) curve analysis. The plots of sensitivity% versus false positive rate (100-specificity%) at each cut-off were connected to form the ROC curve.
Sensitivity = (number of mice with a hemagglutination inhibition (HAI) titer ≥ the cut-off that were protected in the challenge study) / (number of all protected mice); Specificity = (number of mice with an HAI titer < the cut-off that were unprotected in the challenge study) / (number of all unprotected mice); Youden's index = Sensitivity + Specificity − 1.
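These definitions can be computed directly. The short Python sketch below (using hypothetical per-mouse titers and protection outcomes, not the study data) evaluates sensitivity, specificity, and Youden's index at each candidate HAI cut-off, mirroring the S1 Fig analysis:

# Hypothetical (HAI titer, protected) pairs for illustration only; the real
# inputs are per-mouse VLP HAI titers and weight-loss-based protection calls.
mice = [(320, True), (160, True), (160, True), (80, True),
        (80, False), (40, True), (40, False), (20, False)]

for cutoff in (40, 80, 160, 320):
    tp = sum(1 for t, p in mice if t >= cutoff and p)      # titer >= cut-off, protected
    fn = sum(1 for t, p in mice if t < cutoff and p)       # titer < cut-off, protected
    tn = sum(1 for t, p in mice if t < cutoff and not p)   # titer < cut-off, unprotected
    fp = sum(1 for t, p in mice if t >= cutoff and not p)  # titer >= cut-off, unprotected
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    print(f"cut-off 1:{cutoff}  sensitivity={sens:.2f}  "
          f"specificity={spec:.2f}  Youden={sens + spec - 1:.2f}")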
2021-03-22T17:19:20.898Z
2021-03-18T00:00:00.000
{ "year": 2021, "sha1": "233580aea3cb6565a269b65746b0ed386fdb913c", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0246613&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "70d09503abd637a0cd17e55af608170374d3d94a", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
261193609
pes2o/s2orc
v3-fos-license
Systematic Review for Risks of Pressure Injury and Prediction Models Using Machine Learning Algorithms Pressure injuries are increasing worldwide, and there has been no significant improvement in preventing them. This study is aimed at reviewing and evaluating the studies related to prediction models that identify the risks of pressure injuries in adult hospitalized patients using machine learning algorithms. In addition, it provides evidence that prediction models can identify the risks of pressure injuries earlier. A systematic review was utilized to review the articles that discussed constructing a prediction model of pressure injuries using machine learning in hospitalized adult patients. The search was conducted in the databases Cumulative Index to Nursing and Allied Health Literature (CINAHL), PubMed, Science Direct, the Institute of Electrical and Electronics Engineers (IEEE), Cochrane, and Google Scholar. The inclusion criteria included studies constructing a prediction model for adult hospitalized patients. Twenty-seven articles were included in the study. The defects in the current method of identifying risks of pressure injury led health scientists and nursing leaders to look for a new methodology that helps identify all risk factors and predict pressure injury earlier, before the skin changes or the patient is harmed. The paper critically analyzes the current prediction models and guides future directions and motivations. Introduction "A pressure injury (PI) can range from skin erythema to injured muscle and underlying bone, depending on the impacted tissue layer's size and degree" [1]. It is also known as a pressure ulcer, decubitus ulcer, or bedsore [1]. A pressure injury is a significant issue in providing healthcare and maintaining patient safety, with a global prevalence of 12.8% and a hospital-acquired pressure injury (HAPI) prevalence of 8.4% [2]. Moreover, 2.5 million patients in the United States of America (USA) develop pressure injuries annually in acute care settings [3]. 95% of pressure injuries are preventable, and the expenditures for measures to prevent pressure injuries are lower than the treatment expenditures. This has made pressure injuries a vital quality indicator in healthcare organizations [4]. Pressure injuries impact patients' quality of life, morbidity, and mortality and increase the burden on healthcare expenditures [1]. In addition to harming the patient who seeks help and care, a pressure injury negatively affects patient safety and extends the hospitalization period [5]. Many factors are associated with PIs, such as age, gender, hospital length of stay, limited mobility, disease severity, skin condition, medications, anesthesia, type of surgery, diagnosis, and nursing workload [4,6,7]. In addition, patients who suffer from PI complain of its consequences.
Figure 1. Stages of pressure sores [18], classified into four main stages: stage I, where the pressure injury affects tissue perfusion or circulation or causes skin erythema (a); stage II, where the pressure injury affects the thickness of the tissue and causes loss of the dermis (b); stage III, where the pressure injury causes necrosis of the tissue or loss of the deep layer of tissue (c); and stage IV, where the pressure injury affects the full thickness of the tissue and destroys the tissue layer and subcutaneous fat (d).
This paper presents an Introduction in Section 1; Materials and Methods (research design and protocols, search strategy, and inclusion and exclusion criteria) in Section 2; Results (the risk factors and biomarkers, predictive risk factors, the prediction models of pressure injury with their features, and summaries of the studies that discussed the prediction models) in Section 3; and a discussion of the findings in Section 4. Finally, conclusions and motivations for future directions are in Section 5. Materials and Methods A systematic review was utilized to review the articles that discussed constructing a prediction model of pressure injuries using machine learning in hospitalized adult patients. Machine learning assists in predicting the risk of pressure injury by utilizing vast amounts of data embedded in the electronic medical record, and it may also help nurses identify pressure injuries earlier and promote patient safety. The methodology used in this paper is divided into five sections, namely: (1) research design and protocols; (2) search strategy; (3) study selection method; (4) inclusion and exclusion criteria; and (5) quality assessment. Research Design and Protocols A systematic review was conducted on pressure injury risk factors and prediction models and reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [19]. Search Strategy We conducted a systematic review of five different health science databases, plus a Google Scholar search: the Cumulative Index to Nursing and Allied Health Literature (CINAHL), PubMed, Science Direct, the Institute of Electrical and Electronics Engineers (IEEE), and Cochrane. The keywords utilized in this research were pressure ulcer, pressure injury, pressure sore, decubitus ulcer, decubitus sore, bedsore, machine learning, and adult hospitalized patients.
Two Boolean operators were used (OR and AND), and the search period covered studies relevant to the topic and research purpose published between 2017 and 2023. Study Selection Method Two independent researchers used the eligibility criteria to evaluate the titles and abstracts. The full texts of all potentially eligible publications were then obtained and independently examined. Any disagreement regarding a study's inclusion was resolved through discussion with a third researcher. Inclusion and Exclusion Criteria This review includes the studies that met the inclusion criteria for this search, namely those using machine learning to predict pressure injuries in adult inpatients. Only English-language literature was included. In addition, the review excluded patients younger than 14 years, patients with pressure injuries acquired outside the hospital, and papers that did not recruit machine learning algorithms to predict pressure injuries. Quality Assessment The quality assessment was performed according to the Joanna Briggs Institute's (JBI) critical appraisal checklist by two independent reviewers (E.D.B. and A.Y.O.) to assess the risk of bias in the included studies [20]. Any disagreement regarding the judgment was resolved through discussion with a third researcher. The JBI checklist consists of 11 items; each item was scored as yes, no, unclear, or not applicable, and the overall score was assessed for each study and sorted by risk of bias (high, moderate, low) as per the JBI checklist. The score was categorized as high risk if the total was less than 50, moderate if it was between 51 and 80, and low if it was between 81 and 100. Results The existing literature has focused on different aspects of pressure injury and the prediction model of pressure injury. Overall, 494 studies appeared in the literature search (485 from the databases and 8 from the Google search), and 19 were removed as duplicate records. From those, 426 studies were removed because the articles were not related to the prediction model of pressure injury, and 48 studies were reviewed against the inclusion criteria; out of the 48 studies, two were excluded due to the lack of available reports, and 46 articles were screened to assess whether the studies matched the inclusion criteria. Of the 46 studies, 2 did not develop models but were only protocols for review; 7 were for pediatric patients; and 10 were for community-acquired pressure injuries. Finally, 27 studies were included in the systematic review due to the availability of free full texts and complete matching of the inclusion criteria. The findings of the search and screening method are explained in the PRISMA diagram for the systematic review, as shown in Figure 2; PRISMA was utilized to improve the accuracy and usefulness of the review. Figure 2. PRISMA for the systematic review conducted in this research [19]. Characteristics of Included Studies The utilization of machine learning to predict pressure injuries was discussed in many studies described in Table 1, and they concluded that machine learning has a promising future in detecting pressure injuries.
Table 1 summarizes the different study designs and sample sizes of the studies included in this review that discussed the prediction models for pressure injuries. Those studies used different designs, such as prospective (five studies), retrospective (sixteen studies), experimental (one study), case study (one study), prospective and retrospective (one study), and systematic review and meta-analysis (three studies). Also, those studies utilized various types of data sources, such as databases for electronic medical records, patient observation, and reviews of medical records. The dataset size of the patient medical records included in those studies ranged from 149 to 237,397. The sample sizes and datasets utilized in the reviewed studies ranged from small to large; this variation is considered one of the challenges in machine learning. Furthermore, data imbalance problems affect the training of the proposed models, as reported in [21]. The reviewed studies utilized different approaches to data balancing (Random Oversampling, Synthetic Minority Oversampling, and Undersampling), as illustrated in Table 1. Most of the reviewed studies included in this paper used Random Oversampling (34%), followed by Synthetic Minority Oversampling (14%) and Undersampling (7%). Finally, about 45% did not report the balancing method or stated that it was not applicable. Different studies discussed the use of machine learning in constructing a prediction model for pressure injury; 27 studies were reviewed in the literature in terms of using machine learning to predict pressure injury [4,9,11,13,14,17,[20][21][22][23]26,27,[29][30][31][32][34][35][36]43]. Those studies focused on different aspects of pressure injury and the department or specialty in which the patient developed the pressure injury. For example, Ji-Yu et al. [32] developed a prediction model for patients undergoing cardiovascular operations, with the model predicting pressure injury based on clinical data, and Walther et al. [36] studied the power of risk factors related to pressure injury by utilizing machine learning technology, with data collected retrospectively from 2014 to 2018. Most studies were conducted to track intensive care unit patients (14 out of 27).
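The balancing approaches summarized above are commonly applied before model training. A minimal Python sketch using the imbalanced-learn package is shown below; the data are synthetic and the class ratio is invented for illustration, not taken from any reviewed study:

import numpy as np
from imblearn.over_sampling import RandomOverSampler, SMOTE
from imblearn.under_sampling import RandomUnderSampler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))          # five numeric risk factors (synthetic)
y = np.array([0] * 950 + [1] * 50)      # 1 = developed a pressure injury

# Each sampler rebalances the classes in a different way
for sampler in (RandomOverSampler(random_state=0),
                SMOTE(random_state=0),
                RandomUnderSampler(random_state=0)):
    X_res, y_res = sampler.fit_resample(X, y)
    print(type(sampler).__name__, np.bincount(y_res))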
Most of the data sets utilized in the reviewed papers were generated from the electronic medical records of the hospital where the study was conducted (20 studies), followed by a national database (3 studies), an international database (1 study), and systematic reviews (3 studies). The sampling of the reviewed studies included in this paper is free of limitations related to the ethnic background, socioeconomic status, and gender of the participating patients. For the age group, most of the studies restricted participation to adults (70% of the included studies), followed by no age limitation (adult and pediatric) at 19%, age not reported in the manuscript at 11%, and elderly patients above 65 years at 4%. Finally, most reviewed papers determined the pressure injury rate for patients admitted to intensive care units to be 52%. Risk of Bias Assessment One item in the JBI checklist was judged low risk; others ranged from high to low risk. However, some items were either not included in the manuscripts or had inadequate information, and for certain items the risk of bias was unclear. Figure 3 illustrates the risk of bias across all items on the JBI checklist. Risk Factors and Biomarkers of Pressure Injury The studies conducted to identify the risk factors and biomarkers of pressure injury comprised six articles [8,14,[43][44][45][46]], summarized in Table 2. The incidence and prevalence of pressure injuries have been discussed worldwide in many studies, such as [47][48][49]. A study was conducted in 2019 by Qaddumi et al. [48] to assess the incidence rate of pressure injury and its related variables through a prospective design for 140 adult patients admitted to the ICU, assessing them with the Braden scale to identify the risk of pressure injuries during the ICU stay. The findings of the study were that 30% of patients developed pressure injuries; among the other variables, the frequency of bed repositioning and the Foley catheter were not significant but were protective factors for pressure injury in ICU patients. The limitations faced in this study were the small sample size and the dependence of data collection on the nurses in those hospitals. Various risk factors affect pressure injuries, some of which are predictor variables [43].
Those factors may include, but are not limited to, age, gender, body mass index, length of stay, medications, vital signs, anesthesia, the Braden scale, the Braden subscales (sensory perception, moisture, activity, mobility, nutrition, and friction and shear), and diagnoses such as cancer, cardiovascular disease, diabetes mellitus, renal failure, and respiratory disease [14,43]. Table 2 summarizes the risk factors and biomarkers identified in those articles. Visual skin assessment (VSA) to predict pressure injury relies on assessment tools that are not reliable prediction methods [8], and these methods are limited and problematic because a pressure injury develops from the deep tissue and cannot be noticed until it reaches the skin layer [44]. Objective measures to predict pressure injury, called biomarkers and defined as the normal reaction to physiological skin irritation [8], have significant potential to identify the risks of pressure injury by detecting inflammation, signaled by inflammatory biomarkers such as those released by keratinocytes, before the skin changes and skin injury occur [46]. The research of Schwartz et al. [45] identified the correlation between pressure injury and biomarkers in patients after spinal cord injury. It showed that circulatory and muscle-based biomarkers could identify patients at high risk for recurrent pressure injury, finding that muscle quality is an effective biomarker and that the circulating inflammatory factor Fatty Acid-Binding Protein 4 (FABP4) was at a significant level for recurrent pressure injury after spinal cord injury. The potential of biomarkers for the early detection of pressure injuries was assessed by [44,46]. Those studies focus on inflammatory biomarkers. The first study [44] investigates IL-1a (total protein) together with sub-epidermal moisture (SEM) and finds a weak correlation between Interleukin-1 Alpha (IL-1a) and SEM [44]. In contrast, the other study [46] explores creatine kinase (CK), heart-type fatty acid binding protein (H-FABP), and myoglobin (Mb) in control and spinal cord injury (SCI) groups. It concludes that the two groups (control and SCI) show a positive relationship between CK and H-FABP and between Mb and H-FABP. Only H-FABP and CRP had higher concentrations than in other subjects [46]. A systematic review conducted by Wang et al. [8] discusses the biomarkers that may detect pressure injury and their role in early detection, which involved Alb, the Waterlow score, hemoglobin (Hb), C-Reactive Protein (CRP), age, gender, H-FABP, granulocyte-macrophage colony-stimulating factor (GM-CSF), IL-15, TNF-α, and Interferon-Alpha (IFN-a) in urine. The study [8] concludes that the combination of gender, age, Hb, albumin (Alb), and CRP is the most significant biomarker combination [8]. Predictive Risk Factors and Biomarkers of Pressure Injury The pressure injury risk factors are vast, and staff cannot predict all cases due to unique patient differences [30]. In addition, pressure injury harms patients, affecting outcomes and treatment plans, and may cause significant harm in severe cases before staff detect the pressure injury [14]. So, prediction models of pressure injury have been studied to assess the applicability and benefits of identifying pressure injury earlier and alerting the system to the risk of pressure injury for admitted patients based on certain factors and biomarkers [30].
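The biomarker correlation analyses summarized above can be reproduced in outline with a standard Pearson test. The sketch below uses entirely hypothetical paired measurements (not the published values) to show the kind of CK versus H-FABP correlation reported in [46]:

from scipy.stats import pearsonr

# Hypothetical paired biomarker measurements, for illustration only
ck = [55, 80, 120, 150, 210, 260, 300]          # creatine kinase (U/L)
h_fabp = [1.2, 1.8, 2.5, 2.9, 4.1, 4.8, 5.5]    # heart-type FABP (ng/mL)

r, p = pearsonr(ck, h_fabp)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")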
According to Sir William Osler [50], "Medicine is a science of uncertainty and an art of probability". This captures the evolution of a new approach to medicine, indicating the importance of machine learning in the healthcare industry and the promising future of artificial intelligence [50]. Studies that utilize machine learning to construct a prediction model for pressure injury differ in the predictive risk factors that result from their models; the following studies present those predictive risk factors. The Xu et al. study [29] found that the predictive risk factors were the reason for admission, clinical laboratory results, patients' demographics, medical history, and the Braden scale. Another study, by Shui et al. [31], found that the predictive risk factors were patients' demographics, medications, diagnosis, ventilation, and incidence of HAPI. Also, the Cramer et al. study [26] found that the predictive risk factors were age, gender, weight, mean arterial pressure, consciousness, medications, diagnoses, laboratory results, and incidence of PI. Another study, by Alderden et al. [14], found that the predictive risk factors were vasopressors, temperature, blood pressure, sedation, severity of illness, oxygenation, and confusion level. Moreover, a study by Tang et al. [23] found that the predictive risk factors were age, gender, weight, body mass index, albumin, Hb, and comorbidities. Another study, by Choi et al. [33], found that the predictive risk factors were the oral mucosa, endotracheal tube (ETT), vasopressors, albumin, hematocrit (HCT), and steroids. A study by Anderson et al. [39] examined age, gender, diagnoses, length of stay, comorbidities, the severity of illness, and the Braden scale. Furthermore, a study by Nakagami et al. [30] found that the predictive risk factors were age, gender, diagnoses, diet, pain, paralysis, level of consciousness, skin condition, comorbidities, the severity of illness, and department type. Another study, by Sun et al. [24], found that the predictive risk factors were age, gender, diagnosis, cancer, anti-cancer therapy, the Waterlow score, laboratory results, medications, length of stay, mechanical ventilation, the acute physiology and chronic health evaluation (APACHE) II score, and blood purification. A study by Ladios-Martin [13] found that the predictive risk factors were gender, age, place of birth, hospital, diagnosis, and the APACHE II score. Deschepper et al. [32] found that the predictive risk factors were age, gender, diagnosis, Braden score, body mass index (BMI), heart rate, mean arterial pressure, temperature, laboratory results, and immunocompromised status. Table 3 summarizes the predictive risk factors identified in those articles. The common risk factors investigated in most of the studies were diseases and comorbidities, laboratory results, the Braden scale, use of medications, age, vital signs, gender, body mass index, length of stay, duration of surgery, and critical conditions. For example, the following factors correlate with pressure injuries: age > 74 years, female gender, ASA ≥ 3, BMI < 23, Braden score, anemia, respiratory disease, and HTN were studied by Aloweni et al. [22]; high FBS, vasoactive drugs, and the duration of surgeries were studied by Tang, Li, and Xu [23]; critical condition and a high Braden score were studied by Sun et al. [24]; length of the patient's stay was studied by Šín et al. [28]; and prolonged length of stay in the ICU, DM, male gender, BMI, and maximum lactate were studied by Deschepper et al.
Table 3. Predictive risk factors for pressure injury.
[22] Age, female sex, ASA score, body mass index, Braden score, anemia, respiratory disease, and hypertension.
[23] Braden score, preoperative fasting blood glucose level, emergency surgery, and types of vasoactive drugs.
[15] Age. (Three hundred twenty-four features were discussed among the 25 studies.)
[35] Duration of surgery, patient weight, duration of the cardiopulmonary bypass procedure, patient age, and disease category.
The reviewed studies employed many types of machine learning algorithms; some used a single kind of algorithm, while others used more than one, as described in Table 4. Different approaches were used to evaluate the machine learning prediction models of pressure injury, and most models were evaluated with the Area Under the Curve (AUC) in addition to other performance metrics such as accuracy, sensitivity, specificity, precision, and recall. Finally, not all studies report all of these performance metrics.
Discussion
This section discusses the results obtained from all reviewed papers (risk factors and biomarkers of pressure injury, characteristics of the included studies, and prediction models of pressure injury), as well as the research implications, limitations, and recommendations for future directions.
Discussion of Results
We examined 494 journal articles and selected 27 that provided information about machine learning prediction models of pressure injury used in hospital settings to identify pressure injury earlier. The included studies discussed the use of machine learning to predict pressure injuries in adult inpatients. In total, 27 articles were included in the review under the following themes: risk factors and biomarkers of pressure injury; characteristics of the included studies; and prediction models of pressure injury. We note that no study examined all risk factors and biomarkers of pressure injury, and the reviewed studies used different design approaches to predict pressure injuries. The prediction models provide clear evidence that machine learning algorithms can assist healthcare providers by identifying pressure injury earlier and with a high accuracy rate. We offer a complete overview of the reviewed articles on prediction models of pressure injury used in hospital settings. Using class-balancing methods improves prediction model results [51]; the studies in [28,36] showed excellent performance of their proposed predictive models owing to the use of balancing methods. Moreover, approaches that address overfitting are highly recommended when developing new models, especially models with low performance.
Characteristics of Included Studies
In this review, we focus on studies that built machine learning prediction models of pressure injury using different research designs. Most of those studies (14 of 27) were retrospective, which enabled the researchers to obtain data from data warehouses. It is worth mentioning that one study used an experimental design to construct its prediction model of pressure injury.
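The balancing and evaluation points above can be illustrated in a few lines; this sketch reuses the synthetic split from the previous example, uses class weighting as one of several possible balancing strategies, and reports AUC alongside sensitivity and specificity.

```python
# Sketch of the evaluation pattern the reviewed studies report: handle the
# strong class imbalance (pressure injury is a rare outcome), then report
# AUC together with sensitivity and specificity. X_tr, X_te, y_tr, y_te
# come from the previous sketch; the balancing choice is illustrative.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, confusion_matrix

# class_weight="balanced" reweights the minority class; oversamplers such
# as SMOTE (imbalanced-learn) are an alternative some studies use.
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
prob = clf.predict_proba(X_te)[:, 1]
pred = prob >= 0.5

tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
print(f"AUC         {roc_auc_score(y_te, prob):.3f}")
print(f"sensitivity {tp / (tp + fn):.3f}")   # recall on the injured class
print(f"specificity {tn / (tn + fp):.3f}")
```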
This matches a content analysis of research methodology in machine learning by Kamiri and Mariga [52], which found that all studies included in their analysis used machine learning within experimental designs.
Risk Factors and Biomarkers of Pressure Injury
Pressure injury (also called pressure ulcer) has many risk factors and biomarkers that may affect patients and the incidence of pressure injury, and these factors are not standardized across all patient categories. The factors identified in the prediction model studies included in this review, and particularly those identified as predictive factors, are significant and correlated with pressure injury.
Machine Learning Prediction Models of Pressure Injury
The main objective of this review is to identify the studies that used machine learning to build prediction models of pressure injury and to summarize the approaches and algorithms used, the performance metrics and evaluation methods, and the results obtained from the prediction models. All studies relied on data available in the data warehouses of electronic medical records. Most studies did not describe data preprocessing or cleaning, and most did not explain how algorithms were selected. LR was the most frequently used machine learning algorithm, followed by RF, SVM, NN, and DT; these results match the study by Kamiri and Mariga [52].
Research Implication
The prevalence of pressure injuries in hospitals is still high, and health systems and policymakers may need to adopt new methods for identifying pressure injury risk. Furthermore, prediction models of pressure injury should be implemented at different levels and provided to healthcare facilities so that they help healthcare providers identify patients at risk of developing pressure injuries during the hospital stay.
Limitations of the Research
This systematic review includes articles from five of the most reputable databases and a Google Scholar search, which may not capture all relevant articles across all databases. In addition, only articles in English were reviewed.
Recommendation
The findings of this review suggest that nurses, physicians, physiotherapists, and dieticians may benefit from models that predict pressure injuries in hospital settings; such models provide a valid tool alongside evidence-based practices for mitigating and preventing pressure injuries. Hospital management should equip their hospitals with prediction models that assist staff in detecting pressure injuries earlier.
Finally, the literature on pressure injury prediction shows that existing models predict which patients may develop pressure injury based on patient risk factors, but not when a patient may acquire it. We therefore recommend tracking changes in the patient's status, condition, or biomarkers associated with pressure injury in order to identify which patients may acquire a pressure injury during hospitalization. Another gap in the previous work is that no study has investigated or even mentioned accreditation status as a variable or feature in its prediction model. Accreditation status means that an accreditation body or agency acknowledges the hospital's implementation of the accreditation standards [55].
Conclusions
Twenty-one ML approaches were used across the reviewed studies; the top five were logistic regression, random forest, decision tree, support vector machine, and neural network. Logistic regression was the dominant approach, accounting for 28% of all model uses, followed by random forest with 20%, decision tree with 11%, support vector machine with 9%, and neural network with 7%. The top five models thus account for 75% of all model uses, with other ML models making up the remaining 25%. It is worth mentioning that, according to the findings, logistic regression and random forest were the best models for predicting pressure injury. Across the studies, the common risk factors were diseases and comorbidities (15% of the predictive risk factors), laboratory results (12%), Braden scale (11%), use of medications (10%), age (8%), vital signs (7%), gender (6%), body mass index (3%), length of stay (3%), duration of surgery (3%), and critical condition (3%), with other factors accounting for the remaining 19%. The reviewed papers covered several domains related to pressure injury prediction models, including nursing care and its impact on pressure injury, pressure injury itself, its risk factors and biomarkers, machine learning algorithms, and the prediction models themselves. However, although the results obtained from these studies are promising, none of them utilized a fused multi-channel prediction model of pressure injury. We recommend including all pressure injury biomarkers, risk factors, and organizational-related factors in future studies.
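As a compact illustration of the five model families named in the conclusions, the following sketch scores each with cross-validated AUC on the synthetic data from the earlier examples; default hyperparameters are used, so the numbers illustrate the workflow rather than the reviewed studies' tuned results.

```python
# Compare the five most frequent algorithm families by cross-validated AUC.
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

models = {
    "LR":  make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "RF":  RandomForestClassifier(n_estimators=300, random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "NN":  make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
    "DT":  DecisionTreeClassifier(random_state=0),
}
for name, est in models.items():
    auc = cross_val_score(est, X, y, cv=5, scoring="roc_auc")
    print(f"{name:3s} AUC {auc.mean():.3f} +/- {auc.std():.3f}")
```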
Rankin-Selberg periods for spherical principal series
By the unfolding method, Rankin-Selberg L-functions for ${\rm GL}(n)\times{\rm GL}(m)$ can be expressed in terms of period integrals. These period integrals actually define invariant forms on tensor products of the relevant automorphic representations. By the multiplicity-one theorems due to Sun-Zhu and Chen-Sun such invariant forms are unique up to scalar multiples and can therefore be related to invariant forms on equivalent principal series representations. We construct meromorphic families of such invariant forms for spherical principal series representations of ${\rm GL}(n,\mathbb{R})$ and conjecture that their special values at the spherical vectors agree in absolute value with the archimedean local L-factors of the corresponding L-functions. We verify this conjecture in several cases. This work can be viewed as the first of two steps in a technique due to Bernstein-Reznikov for estimating L-functions using their period integral expressions.
Introduction
To a pair of automorphic forms f on GL(n) and g on GL(n′) one can associate the Rankin-Selberg L-function L(s, f × g), which is given by a Dirichlet series involving the Fourier-Whittaker coefficients of f and g. Rankin-Selberg L-functions generalize the standard Godement-Jacquet L-function and are holomorphic/meromorphic functions of s ∈ C satisfying an explicit functional equation. One of the fundamental problems concerning the analytic aspects of L-functions is to bound L(s, f × g) along the critical line 1/2 + iR in terms of s and/or the Langlands parameters of f and g. Using the functional equation and the Phragmén-Lindelöf convexity principle one obtains the so-called convexity bound for L(s, f × g) on the critical line. Any bound improving the convexity bound is called a subconvexity bound. Subconvexity bounds for Rankin-Selberg L-functions have been obtained in various aspects (see e.g. [6,7,23,24,25,28,36,37] and references therein), but so far mostly for n, n′ ≤ 3. The methods used involve deep techniques from analytic number theory such as trace formulas, the circle method or the delta method, and often require delicate analytical tools such as stationary phase approximation. After this paper was finished, certain subconvexity bounds for the case n′ = n − 1 were obtained by P. Nelson [29] using different methods. There is another way of approaching Rankin-Selberg L-functions, that is through period integrals. Integrating the automorphic forms over a certain locally symmetric space, involving an Eisenstein series in the case n′ = n, defines a period integral Λ(s, f × g). By the unfolding method, this period integral Λ(s, f × g) essentially reduces to the product of an archimedean local L-factor G_{λ,ν}(s), which only depends on the Langlands parameters λ of f and ν of g, and the Rankin-Selberg L-function L(s, f × g). This makes it possible to bound Rankin-Selberg L-functions by bounding the corresponding period integrals Λ(s, f × g). In a series of seminal papers [2,3,4,5,30], Bernstein and Reznikov were able to obtain subconvexity bounds for triple product L-functions of PGL(2) by bounding the corresponding period integrals. Their method involves representation theory of the group PGL(2, R); more precisely they study in detail invariant trilinear functionals on products of the corresponding automorphic representations.
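For orientation, the convexity baseline mentioned earlier in this introduction can be stated schematically in terms of the analytic conductor; this formulation is standard background (in the spirit of Iwaniec-Sarnak) and is not a formula from this paper:
$$L\big(\tfrac12+it,\, f\times g\big)\;\ll_{\varepsilon}\;C\big(\tfrac12+it,\, f\times g\big)^{1/4+\varepsilon},$$
where $C(s, f\times g)$ denotes the analytic conductor, and a subconvexity bound replaces the exponent $1/4$ by $1/4-\delta$ for some fixed $\delta>0$.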
The key ingredient is the multiplicity one property, which asserts that invariant trilinear functionals on products of irreducible representations of PGL(2, R) are unique up to scalar multiples. They proceed essentially in two steps:
(I) Explicit invariant trilinear functionals for principal series representations are constructed and evaluated at the spherical vectors. These explicit functionals are, by the multiplicity one property, proportional to the functionals given by the period integrals.
(II) The proportionality scalar is bounded above by different methods (analytic continuation of representations, bounds for L^4-norms of K-types, estimates for Hermitian forms on automorphic representations).
In this paper we study step (I) in the framework of Rankin-Selberg L-functions. In the same way as for PGL(2), the period integrals Λ(s, f × g) related to Rankin-Selberg L-functions for GL(n) × GL(n′) give rise to invariant functionals on the product of the automorphic representations corresponding to f and g. The multiplicity one property in this framework was established by Sun-Zhu [35] for n′ = n − 1 and by Chen-Sun [9] for general 1 ≤ n′ ≤ n, and it allows one to relate these invariant functionals to explicit invariant functionals on principal series representations. Our main results are:
• The construction of explicit invariant functionals on tensor products of spherical principal series representations in terms of their integral kernels (see Proposition 3.2 for the case n′ = n and Proposition 3.3 for the case 1 ≤ n′ ≤ n − 1).
• The conjecture that the special value of our invariant functionals at the spherical vectors equals in absolute value the local L-factor G_{λ,ν}(s) up to a constant (see Conjecture 3.6).
• Verification of our conjecture for (n, n′) = (2, 2), (2, 1), (3, 2) and (3, 1) (see Sections 4 and 5).
The construction and study of invariant functionals on principal series is itself an interesting topic in representation theory, which has recently received much attention in the framework of branching problems (see e.g. [20,21,12] and references therein). More precisely, the Gan-Gross-Prasad conjectures can be viewed as a generalization of Rankin-Selberg theory. For instance, the corresponding model periods for rank one orthogonal groups have recently been studied by Kobayashi-Speh [22]. To use our results in order to obtain bounds for the corresponding Rankin-Selberg L-functions, one needs to carry out step (II) as well. We hope to return to this in a subsequent paper. At this moment it is not clear to us which of the different methods in [2,3,5] to estimate the proportionality scalars generalizes to our situation. The most straightforward technique seems to be estimating Hermitian forms on automorphic representations (see e.g. [3]), but it requires that the quotient Γ\G is compact, which is not the case for Γ = SL(n, Z). We remark that there is a much simpler way of constructing explicit invariant functionals on tensor products of generic representations of general linear groups using Whittaker models. More precisely, a generic representation π of GL(n, R) admits a realization on a subspace W(π, ψ) of smooth functions on the group, where ψ is a non-degenerate character of the subgroup N_n ⊆ GL(n, R) of unipotent upper triangular matrices.
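For context, the Whittaker realization just mentioned has the following standard shape (a sketch of the usual conventions, as in Jacquet-Piatetski-Shapiro-Shalika; the precise normalizations are not taken from this paper):
$$\mathcal{W}(\pi,\psi)\;\subseteq\;\{\,W\in C^\infty(\mathrm{GL}(n,\mathbb{R})) : W(ng)=\psi(n)\,W(g)\ \text{for all } n\in N_n\,\},$$
and for $n'<n$ the invariant functional described in the next sentences is usually written as the local zeta integral
$$\Psi(s,\,W\otimes W') \;=\; \int_{N_{n'}\backslash \mathrm{GL}(n',\mathbb{R})} W\!\begin{pmatrix} h & \\ & 1_{n-n'} \end{pmatrix} W'(h)\,\lvert\det h\rvert^{\,s-\frac{n-n'}{2}}\,dh,\qquad W\in\mathcal{W}(\pi,\psi),\ W'\in\mathcal{W}(\tau,\overline{\psi}).$$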
If W(τ, ψ) denotes the corresponding model for a generic representation τ of GL(n ′ , R), embedded in GL(n, R) as the upper left corner, where ψ is the restriction of the complex conjugate character to the maximal unipotent subgroup N n ′ = N n ∩ GL(n ′ , R), then defines an invariant functional in the case n ′ < n, and a similar construction can be carried out for n = n ′ . (Here ⊗ denotes the completed projective tensor product.) The method of Bernstein-Reznikov relies on an explicit description of the group action, the representation space and the invariant inner product in order to construct test vectors and estimate their invariant norms when acted upon by group elements close to the identity element. In the Whittaker model W(π, ψ) the group action is the right regular representation and hence very explicit. However, the invariant inner product and the explicit description of smooth vectors in the representation space are only accessible when restricting W ∈ W(π, ψ) to GL(n − 1, R), also referred to as the Kirillov model. In the Kirillov model functions with compact support modulo N n−1 are smooth vectors and the invariant inner product is given by integration over N n−1 \ GL(n − 1, R), so both the representation space and the invariant inner product are somehow explicit. But it is non-trivial to recover W from its restriction to GL(n − 1, R) and hence find an expression for the group action. This is the reason we are interested in invariant functionals on tensor products of principal series whose representation space is explicitly given as a space of sections of a vector bundle over the flag variety, the group action being given by the right regular representation and the invariant inner product simply being an L 2 -inner product. Structure of the paper. In Section 1 we recall the definition of Rankin-Selberg L-functions and convolutions, including known results about the archimedean local L-factors. Section 2 is about interpreting the period integrals as special values of invariant forms on automorphic representations. In Section 3 we construct explicit invariant forms on tensor products of principal series and conjecture their values at the spherical vectors. These values are explicitly computed for (n, n ′ ) = (2, 2) and (2,1) in Section 4 and for (n, n ′ ) = (3, 2) and (3,1) in Section 5. Finally, in the Appendix A we collect the integral formulas needed in Section 4 and 5 to evaluate the relevant integrals. 1.1. Maass forms on GL(n, R). Let G = GL(n, R) (n ≥ 2), Z(G) = R × its center and K G = O(n) the standard maximal compact subgroup. We fix the lattice Γ G = SL(n, Z). Further, let g = gl(n, R) denote the Lie algebra of G, U (g) the universal enveloping algebra of its complexification g C and Z(g) the center of U (g). Then G acts unitarily on L 2 (Γ G ·Z(G)\G) by right-translation and this induces an action of U (g) on C ∞ (Γ G · Z(G)\G) by differential operators. is called a Maass form if it has the following properties: A Maass form f is furthermore called cusp form if it decays rapidly at the cusp of the locally symmetric space Γ G · Z(G)\G/K G . In view of property (1), Maass forms can also be viewed as Γ G -invariant smooth functions on the semisimple Riemannian symmetric space G/K G ·Z(G), which can be identified with the generalized upper half plane. For this let A G denote the subgroup of diagonal matrices with positive diagonal entries and N G the unipotent subgroup of upper triangular matrices. 
In view of the Iwasawa decomposition G = N G A G K G , the quotient G/K G · Z(G) = GL(n, R)/ O(n) · R × can be identified with the generalized upper half plane h n , which is defined to be the set of all products z = xy with where x ij ∈ R and y i ∈ R + . For α = (α 1 , . . . , α n−1 ) ∈ C n−1 , the function I α on h n given by is a joint eigenfunction of all differential operators in Z(g). A Maass form f is said to be of type α if its eigenvalues λ D , D ∈ Z(g), as defined in (2), coincide with the eigenvalues of I α (z), i.e. Note that if f is a Maass form of type α, then its complex conjugate f is a Maass form of type α. For m = (1, . . . , 1) we will write ψ = ψ 1,...,1 for short. Then Jacquet's Whittaker function of parameter α ∈ C n−1 is defined for Re α i > 1 n (i = 1, . . . , n − 1) by the convergent integral These sums converge for sufficiently large Re(s) and L(s, f ×g) has a holomorphic continuation to s ∈ C except in the case n ′ = n where there might occur a simple pole at s = 1 (see [14,Theorem 12.1.4] for details). In the special case n ′ = 1 the Rankin-Selberg L-function reduces to the standard Godement-Jacquet L-function of f , which is defined by 1.4. The period integral. The relation between the Rankin-Selberg L-function L(s, f × g) and a period integral of f and g differs in the cases n ′ = n, n ′ = n − 1 and 1 ≤ n ′ ≤ n − 2, so we treat these cases separately. · · · y s n−1 . Following [14,Chapter 10.4], we define the degenerate Eisenstein series attached to the standard maximal parabolic subgroup P G,max of G corresponding to the partition n = (n − 1) + 1 for sufficiently large Re s by the absolutely convergent sum The degenerate Eisenstein series E s (z) has a meromorphic continuation to s ∈ C. The Rankin-Selberg convolution of two Maass forms f and g on GL(n, R) is for s ∈ C with sufficiently large Re s defined by the period integral where d * z denotes a (suitably normalized) measure on Γ G \h n , which is locally given by an SL(n, R)-invariant measure on h n (see [14,Chapter 1.5] for details). As in [14,Chapter 12.1] one shows that if f is of type α ∈ C n−1 and g is of type β ∈ C n−1 the Rankin-Selberg convolution can be expressed in terms of the Rankin-Selberg L-function as follows: with y = diag(y 1 · · · y n−1 , . . . , y 1 y 2 , y 1 , 1) and d * y = n−1 j=1 y −1−j(n−j) j dy j . 1.4.2. The case n ′ = n − 1. Let f be a cuspidal Maass form on GL(n, R). Then its restriction f | GL(n ′ ,R) to GL(n ′ , R) ⊆ GL(n, R), embedded as a block in the upper left corner of GL(n, R), decays rapidly on GL(n ′ , R) ∩ A G and therefore the following integral converges for all Maass forms g on GL(n ′ , R) and all s ∈ C: where d * z denotes a (suitably normalized) GL(n ′ , R)-invariant measure on SL(n ′ , Z)\ GL(n ′ , R). Following [14,Chapter 12.3] we consider for a Maass form f on GL(n, R) the projection The Rankin-Selberg convolution of a cusp form f for SL(n, Z) with a Maass form g for SL(n ′ , Z) is for s ∈ C defined by the period integral where d * z is a (suitably normalized) GL(n ′ , R)-invariant measure on SL(n ′ , Z)\ GL(n ′ , R). As in [14,Chapter 12.3] one shows that if f is of type α ∈ C n−1 and g is of type β ∈ C n ′ −1 the Rankin-Selberg convolution can be expressed in terms of the Rankin-Selberg L-function as follows: with y = diag(y 1 · · · y n ′ , . . . , y 1 y 2 , y 1 ) and In general, no explicit formula for G α,β (s) is known. 
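Before the known special cases are recalled, it may help to record the schematic shape of these identities; the displays below are standard background, except for the last, which is a hedged reconstruction:
$$\Lambda(s, f\times g) \;=\; G_{\alpha,\beta}(s)\, L(s, f\times g), \qquad \Gamma_{\mathbb{R}}(s) := \pi^{-s/2}\,\Gamma(s/2).$$
In the classical case n = n′ = 2 the archimedean factor reduces to a Mellin transform of a product of two K-Bessel functions, which the classical Barnes-type evaluation computes as
$$\int_0^\infty K_{\mu}(y)\,K_{\nu}(y)\,y^{s}\,\frac{dy}{y} \;=\; \frac{2^{\,s-3}}{\Gamma(s)}\,\Gamma\!\Big(\frac{s+\mu+\nu}{2}\Big)\Gamma\!\Big(\frac{s+\mu-\nu}{2}\Big)\Gamma\!\Big(\frac{s-\mu+\nu}{2}\Big)\Gamma\!\Big(\frac{s-\mu-\nu}{2}\Big),$$
valid for Re s > |Re μ| + |Re ν|. Consistent with the correction factors described in the next passage, for n′ = n the higher-rank evaluation should take the shape
$$G_{\lambda,\nu}(s) \;=\; \frac{\prod_{j=1}^{n}\prod_{k=1}^{n}\Gamma_{\mathbb{R}}(s+\lambda_j+\nu_k)}{2^{\,n-1}\,\Gamma_{\mathbb{R}}(ns)\,\prod_{j<k}\Gamma_{\mathbb{R}}(\lambda_j-\lambda_k+1)\,\Gamma_{\mathbb{R}}(\nu_j-\nu_k+1)},$$
a reconstruction from the factors listed below whose precise normalization should be checked against Stade [33,34].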
However, for the special cases n ′ = n and n ′ = n − 1 the integral was evaluated explicitly by Stade as a product of gamma factors (see [33,34]). Moreover, for n ′ = n − 2 the integral can be simplified to a one-dimensional Barnes integral (see Ishii-Stade [17]). We also refer to Jacquet [18] for the case (n, n ′ ) = (2, 2), Stade [32] for the case (n, n ′ ) = (3, 3), Bump [8] for the case (n, n ′ ) = (3, 2) and Hoffstein-Murty [16] for the case (n, n ′ ) = (3, 1). where γ is a contour from −i∞ to +i∞ such that all poles of the integrand are on its left. We remark that the correction factor 2 n−1 appears in the denominator since integration is over R + rather than R × . The correction factor Γ R (ns) appears in the denominator when n ′ = n, because the integral is over A G modulo the center instead of A G . The correction factors Γ R (λ j − λ k + 1) and Γ R (ν j − ν k + 1) appear in the denominator, because the Whittaker functions W J have been L 2 -normalized (in contrast to Stade's W (n,α) , see below). In this sense, the relevant terms for the completion of the L-function are the factors Γ R (s + λ j + ν k ). Automorphic Rankin-Selberg periods In this section we explain how the Rankin-Selberg period integrals define invariant forms on tensor products of automorphic representations. Automorphic representations. Under the unitary action of any Maass form f on G generates an irreducible subrepresentation of L 2 (Γ G · Z(G)\G) that is spherical, the function f being the unique (up to scalar multiples) K G -spherical vector. We write V f ⊆ L 2 (Γ G · Z(G)\G) for the subspace of smooth vectors and π f for the corresponding . If further f is a cusp form, it was shown in [26] that all functions in V f decay rapidly along A G . Similarly, we denote by (τ g , W g ) the smooth vectors of the irreducible unitary subrepre- that is isomorphic to a subrepresentation of a degenerate principal series of G (see Section 3.3 for a more detailed description). Here E s is viewed as a K G -invariant and Z(G)-invariant function on G. We write σ s for the corresponding G-action on U s . Note that for s ∈ C outside a certain discrete set, the degenerate principal series is irreducible, so that σ s is isomorphic to the full degenerate principal series. 2.2. Automorphic Rankin-Selberg periods. Let f be a Maass cusp form for SL(n, Z) and g a Maass form for SL(n ′ , Z). The period integral Λ(s, f × g) can be extended to an invariant linear form on the tensor products of the corresponding automorphic representations. For this we again distinguish between the cases n ′ = n, n ′ = n − 1 and 1 ≤ n ′ ≤ n − 2. 2.2.1. The case n ′ = n. We have where E s is viewed as a K G -invariant function on Γ G · Z(G)\G. This integral makes sense if we replace f and g by arbitrary functions in V f and W g and E s by an arbitrary function in U s and defines a G-invariant linear form i.e. ℓ aut f,g,s ∈ Hom G (π f ⊗τ g ⊗σ s , C). The period Λ(s, f × g) can be recovered from ℓ aut f,g,s as the special value at the tensor product of the spherical vectors: Λ(s, f × g) = ℓ aut f,g,s (f ⊗ g ⊗ E s ). It follows from [9, Theorem B] that the space Hom G (π f ⊗τ g ⊗σ s , C) is at most one-dimensional, so that ℓ aut f,g,s is proportional to any other non-zero period in Hom G (π f ⊗τ g ⊗σ s , C). To keep the notation uniform we put H = G in this case. 2.2.2. The case n ′ = n − 1. Let H = GL(n − 1, R) and Γ H = SL(n − 1, Z). 
Similar to the case n ′ = n we can write Λ(s, By [35,Theorem B] this space is at most one-dimensional, so ℓ aut f,g,s is proportional to any other non-zero period in Hom H (π f | H ⊗τ g ⊗χ s , C). where M (n ′ ×(n−n ′ ), R) denotes the space of real n ′ ×(n−n ′ )-matrices, M (n ′ ×(n−n ′ ), R) 0 the subspace of those matrices with first column equal to zero, and N n−n ′ the group of unipotent upper triangular matrices of size n − n ′ . We note that Here we extend g trivially to H by putting g(h) := g(h 1 ). Moreover, dh denotes the rightinvariant measure on Γ H \H given by (Note that H is not unimodular.) The integral defining Λ(s, f × g) makes sense even if we replace f by an arbitrary function in V f and g by an arbitrary function in W g , which leads to the map which is defined due to the rapid decay of u ∈ V f . It respects the action of the subgroup H in the sense that ℓ aut f,g,s intertwines the representation π f | H ⊗τ g ⊗χ s on V f ⊗W g and the trivial representation on C, i.e. ℓ aut f,g,s ∈ Hom H (π f | H ⊗τ g ⊗χ s , C). In [9, Theorem A] it was shown that dim Hom H (π f | H ⊗τ g ⊗χ s , C) ≤ 1 for all s ∈ C, so that the automorphic period ℓ aut f,g,s is proportional to any other period in Hom H (π f | H ⊗τ g ⊗χ s , C). Model Rankin-Selberg periods on spherical principal series In this section we explicitly construct Rankin-Selberg periods on tensor products of spherical principal series representations. 3.1. Parabolic subgroups. Let P G denote the standard minimal parabolic subgroup of G consisting of all upper triangular matrices. The parabolic subgroup P G has a Langlands decomposition P G = M G A G N G with M G the subgroup of diagonal matrices with entries in {±1} and A G and N G as in Section 1.1. In the case n ′ = n − 1 the group P H = P G ∩ H is the standard minimal parabolic subgroup of H and For n ′ = n − 2 we also put P H = P G ∩ H, which has a similar decomposition 3.2. Principal series representations. For λ ∈ C n and a = diag(a 1 , . . . , a n ) ∈ G let a λ = a λ 1 1 · · · a λn n . This defines a character e λ : A G → C × , a → a λ . Let , where we use smooth normalized parabolic induction, i.e. π λ can be realized as the rightregular representation of G on . . , 1−n 2 ) corresponding to the half sum of all roots of a G in n G . In particular, π λ is unitary for λ ∈ (iR) n , the invariant inner product being The representation π λ is K G -spherical and we normalize the spherical vector φ λ ∈ I λ such that φ λ (e) = 1. To give an explicit formula for φ λ we define for 1 ≤ k ≤ n and 1 ≤ i 1 , . . . , i k ≤ n a polynomial p i 1 ,...,i k on M (n × n, R) by and note that they satisfy Φ k (gxh) = g n,n · · · g n−k+1,n−k+1 h 1, We remark that Φ k also can be expressed as the determinant of a matrix product: where w represents the longest Weyl group element (see Section 1.2). Lemma 3.1. φ λ can be expressed as 4) where w is a representative of the longest Weyl group element (see Section 1.2). Proof. φ λ is the unique smooth function on G such that φ λ (nak) = a λ+ρ G , so it suffices to show that the right hand side of (3.4) has the same properties. Since gg ⊤ is positive definite, its principal minors are strictly positive. This implies that g → Φ k (gg ⊤ w) is a smooth nowhere vanishing function on G. Hence, the right hand side of (3.4) is smooth. We further have Φ k (nak) = Φ k (nakk ⊤ a ⊤ n ⊤ w) = Φ k (na 2 w(w −1 n ⊤ w)) = a n,n · · · a n−k+1,n−k+1 Φ k (w) (3.5) for all k ∈ K G , a ∈ A G and n ∈ N G by (3.3). 
Note that Φ k (w) = ±1, depending on the choice of the representative w of the longest Weyl group element. (One possible choice is w i,j = δ i,n−j+1 , then Φ k (w) = 1 for all k = 1, . . . , n.) Applying (3.5) to every factor in (3.4) shows the claim. Similarly, for ν ∈ C n ′ we write τ ν for the corresponding spherical principal series representation of GL(n ′ , R) and realize τ ν on a subspace J ν ⊆ C ∞ (GL(n ′ , R)). For n ′ = n and n ′ = n − 1 this defines a representation of H = GL(n ′ , R), and in the case 1 ≤ n ′ ≤ n − 2 we extend τ ν trivially to Degenerate principal series representations. Let P G,max ⊆ G be the standard maximal parabolic subgroup of G corresponding to the partition n = (n − 1) + 1, i.e. For r ∈ C let ξ r A b 0 d := | det A| r |d| −(n−1)r and induce this character of P G,max to G (smooth normalized parabolic induction): ς r := Ind G P G,max (ξ r ), realized on the space Note that ς r is unitary for r ∈ iR. Write f r for the unique K G -spherical vector in L r with f r (e) = 1. We claim that f r = I s for s = r + 1 2 . In fact, let g = xyk ∈ N G A G K G , then On x ∈ N G the character ξ r+ 1 2 is obviously trivial, and on y ∈ A G it is given by It follows that for s = r + 1 2 the G-intertwining operator ζ : maps the spherical vector f r to the degenerate Eisenstein series E s . An explicit expression for f r is given by 3.4. Invariant forms on principal series. We construct invariant forms on tensor products of principal series representations. 3.4.1. The case n ′ = n. An invariant form ℓ ∈ Hom G (π λ ⊗τ ν ⊗ς r , C) is a G-invariant continuous linear operator I λ ⊗I ν ⊗L r → C and hence given by a distribution kernel K ∈ D ′ (G×G×G) that satisfies the following equivariance conditions (see e.g. [20, Sections 3.2 and 3.5] or [12, Section 3.2] for details on this matter): (1) K(g 1 g, g 2 g, g 3 g) = K(g 1 , g 2 , g 3 ) for all g ∈ G, If K is a locally integrable function, then the corresponding invariant form is given by otherwise the integral has to be understood in the sense of generalized functions. Using the integral formula [19, formula (5.25)] this integral can be rewritten as where N G resp. N G,max denotes the nilradical of the parabolic subgroup of G opposite to P G resp. P G,max . Non-zero invariant forms on π λ ⊗τ ν ⊗ς r can only exist if the center of G acts trivially, which implies λ 1 + · · · + λ n + ν 1 + · · · + ν n = 0. 3.4.2. The case 1 ≤ n ′ ≤ n−1. As in the case n ′ = n invariant forms ℓ ∈ Hom H (π λ | H ⊗τ ν ⊗χ s , C) correspond to distribution kernels K ∈ D ′ (G × H) such that (1) K(gk, hk) = K(g, h) for all k ∈ H, Recall the polynomials Φ k (x) on M (n × n, R) from (3.2). We further define where p i 1 ,...,i k (x) denote the polynomials from (3.1). Note that where w i,j denotes the permutation matrix associated to the transposition (i j). The following equivariance properties for g ∈ P G and h ∈ P H are easy to verify: Ψ k (gxh) = g n,n · · · g n−k+1,n−k+1 h 1,1 · · · h k−1,k−1 Ψ k (x), Ξ k (gxh) = g n,n · · · g n−k+1,n−k+1 h 1, For λ ∈ C n , ν ∈ C n ′ and s ∈ C we define the following kernel function: . By the above equivariance properties for Φ k , Ψ k and Ξ k it is easy to verify that the function K λ,ν,s satisfies the desired equivariance properties. Further, for Re(s i ), Re(t j ) ≥ 0 it is is locally integrable and hence defines a distribution in D ′ (G × H). defines a meromorphic family of invariant forms ℓ mod λ,ν,s ∈ Hom H (π λ | H ⊗τ ν ⊗χ s , C). Proof. 
Using resolution of singularities the meromorphic continuation can be reduced to that of the distributions for fixed k 1 , . . . , k n ∈ Z, which is discussed in [11, Section 4.6 and 4.10]. 3.5. Relation between automorphic and model periods. Let f be a cusp form on GL(n, R) and g a Maass form on GL(n ′ , R). Then ℓ aut f,g,s ∈ Hom G (π f ⊗τ g ⊗σ s , C) for n ′ = n and ℓ aut f,g,s ∈ Hom H (π f | H ⊗τ g ⊗χ s , C) for 1 ≤ n ′ ≤ n − 1. By the Multiplicity One Theorems [9, Theorems A and B] these space are at most one-dimensional. Now π f ≃ π λ and τ g ≃ τ ν , where λ ∈ C n and ν ∈ C n ′ are the Langlands parameters of f and g. Let θ : I λ → V f and η : J ν → W g be equivariant unitary isomorphisms and recall the equivariant isomorphism ζ : and for 1 ≤ n ′ ≤ n − 1 ℓ aut f,g,s • (θ ⊗η) ∈ Hom H (π λ | H ⊗τ ν ⊗χ s , C). Using the Multiplicity One Theorems we can therefore relate the automorphic periods to the model periods ℓ mod λ,ν,s constructed in the previous section. There exists a proportionality constant b f,g,s ∈ C such that for n ′ = n and for 1 ≤ n ′ ≤ n − 1 By the equivariance of θ and η, the spherical vectors φ λ ∈ I λ and ψ ν ∈ J ν are mapped to scalar multiples of the spherical vectors f ∈ V f and g ∈ W g . If we assume that f and g are normalized to have L 2 -norm one, then the respective scalars are of modulus one. Using the fact that ζ maps the spherical vector f s− 1 2 to the Eisenstein series E s , it follows that for n ′ = n: and for 1 ≤ n ′ ≤ n − 1: |Λ(s, f × g)| = |b f,g,s | · |ℓ mod λ,ν,s (φ λ ⊗ ψ ν )|. To estimate Λ(s, f × g) it therefore suffices to estimate the special values of the model periods at the spherical vectors and the proportionality constants b f,g,s . In this work we focus on the special values of the model periods and hope to come back to the proportionality scalars in a subsequent paper. Special values of invariant forms. The special values of the model periods are given by integrating the previously constructed distribution kernels against the spherical vectors. In this section we explain how to reduce the number of variables in the integrals and simplify the distribution kernels. 3.6.1. The case n ′ = n. We have the following expression for the special value of ℓ mod λ,ν,s : Lemma 3.4. For all λ, ν ∈ C n and r ∈ C we have Proof. Let r = s − 1 2 for short. Since Hom G (π λ ⊗τ ν ⊗ς r , C) ≃ Hom G (π λ ⊗ς r , τ −ν ) we can write the invariant form ℓ mod λ,ν,s as where A λ,−ν,r ∈ Hom G (π λ ⊗ς r , τ −ν ) is given by K λ,ν,s (n 1 , g, n 3 )v(n 1 )u(n 3 ) d(n 1 , n 3 ). Now, since φ λ ∈ I λ and f r ∈ L r are both K G -invariant, their tensor product φ λ ⊗f r ∈ I λ ⊗L r is also K G -invariant and is therefore mapped to a K G -invariant vector in J −ν by the equivariant map A λ,−ν,r . The space of K G -invariant vectors in J −ν is one-dimensional and spanned by φ −ν , so that for any k 0 ∈ K G since φ −ν | K G = 1. It follows that Since φ ν | K G = φ −ν | K G = 1 the latter integral is equal to 1 and we have To have a simple expression for K λ,ν,s (n 1 , k 0 , n 3 ) we choose k 0 = w, a representative of the longest Weyl group element. It is easy to see that for n 1 ∈ N G we have Further, a short computation reveals that for n 3 = 1 n−1 3.6.2. The case 1 ≤ n ′ ≤ n − 1. Making use of the isomorphism Hom H (π λ | H ⊗τ ν ⊗χ s , C) ≃ Hom H (π λ | H ⊗χ s , τ −ν ) and proceeding as in the previous section shows: Lemma 3.5. For all λ ∈ C n , ν ∈ C n ′ and s ∈ C we have p n,...,n−k+2,n−k (n) dn. Corollary 3.7. 
If Conjecture 3.6 holds for a pair (n, n ′ ), 1 ≤ n ′ ≤ n, then there exists a constant C = C n,n ′ > 0 such that for all Maass forms f and g and all s ∈ 1 2 + iR: |L(s, f × g)| = C · |b f,g,s |. Remark 3.8. In a similar situation with Γ H ⊆ H cocompact, Bernstein-Reznikov [3] apply (3.6) resp. (3.7) to test functions in order to estimate the proportionality constants. This method was also applied in [13,27]. In our case SL(n ′ , Z) is not cocompact in SL(n ′ , R), so that this method does not easily generalize. However, we do believe that a more detailed analysis of the geometry of the locally symmetric subspace Γ H \H/K H ⊆ Γ G \G/K G does provide a way to modify the ideas of . Special values of model periods for GL (2) In this section we verify Conjecture 3.6 for (n, n ′ ) = (2, 2) and (n, n ′ ) = (2, 1). By (3.4) we have: The formula for f r is the same with λ = (r, −r). We compare this expression to and observe that they agree in absolute value up to a constant for s = r + 1 2 ∈ 1 2 + iR and λ, ν ∈ (iR) 2 . We compare this expression to and observe that they agree in absolute value up to a constant for λ ∈ (iR) 2 and ν ∈ iR with λ 1 + λ 2 = ν = 0 and s ∈ 1 2 + iR. Matching of the model period for GL(3) × GL(1). By Lemma 3.4 we have We first note that the integrand is invariant under the transformation (x, y, z) → (x, −y, −z), so that we may replace R dy by 2 ∞ 0 dy. We then substitute z → z + xy, write 1 + y 2 + (z + xy The inner integral can be evaluated using (A.3). Note that the second summand does not contribute to the integral, because it is an odd function of z whereas the remaining terms are even functions of z. We therefore obtain By the integral representation (A.4) for the hypergeometric function this equals Rearranging terms this can be written as The integral over z can be computed using (A.7): and the integral over t can be computed using (A.8): −λ 1 +λ 2 +λ 3 +ν+s−1 2 Reducing the hypergeometric function 3 F 2 to 2 F 1 , applying the transformation formula (A.6) and replacing R e −2π √ −1x dx by 2 ∞ 0 cos(2πx) dx we finally get The integral can be evaluated in terms of a G-function by (A.11): Writing the G-function as a Mellin-Barnes type integral and shifting the contour shows π −z dz. Appendix A. Integral formulas We collect some integral formulas for the hypergeometric function and Meijer's G-function. By (A.11) and Euler's reflection formula this equals the claimed formula.
Comparison of health-related quality of life in rheumatoid arthritis, psoriatic arthritis and psoriasis and effects of etanercept treatment
Objectives To compare health-related quality of life (HRQoL) before and after treatment with etanercept in patients with moderate to severe rheumatoid arthritis (RA), psoriatic arthritis (PsA) and psoriasis using spydergram representations. Methods Data from randomised, controlled trials of etanercept in patients with RA, PsA and psoriasis were analysed. HRQoL was assessed by the medical outcomes survey short form 36 (SF-36) physical (PCS) and mental (MCS) component summary and domain scores. Baseline comparisons with age and gender-matched norms and treatment-associated changes in domain scores were quantified using spydergrams and the health utility SF-6D measure. Results Mean baseline PCS scores were lower than age and gender-matched norms in patients with RA and PsA, but near normative values in patients with psoriasis; MCS scores at baseline were near normal in PsA and psoriasis but low in RA. Treatment with etanercept resulted in improvements in PCS and MCS scores as well as individual SF-36 domains across all indications. Mean baseline SF-6D scores were higher in psoriasis than in RA or PsA; clinically meaningful improvements in SF-6D were observed in all three patient populations following treatment with etanercept. Conclusions Patients with RA, PsA and psoriasis demonstrated unique HRQoL profiles at baseline. Treatment with etanercept was associated with improvements in PCS and MCS scores as well as individual domain scores in patients with RA, PsA and psoriasis.
▶ An additional supplementary figure is published online only. To view this file please visit the journal online (http://ard.bmj.com/content/71/7.toc).
Health-related quality of life (HRQoL) has been shown to be profoundly impaired in patients with bone and joint diseases, including rheumatoid arthritis (RA) 1 and psoriatic arthritis (PsA). [2][3][4][5] HRQoL is also impaired in psoriasis but to a different degree, as reflected by changes in the dermatology life quality index and the Euro-QoL. [6][7][8][9][10][11][12] A prominent benefit of treatment with tumour necrosis factor (TNF) antagonists in patients with these diseases has been an improvement in patient-reported outcomes, including HRQoL. 1 12-22 The medical outcomes survey short form 36 (SF-36) is a generic patient-reported measure of HRQoL that has been validated for use in most rheumatic diseases, including RA 1 23 and PsA 24 25 as well as psoriasis. 26 27 It includes 36 questions combined into eight domains, which are summarised into physical component (PCS) and mental component (MCS) summary scores. 1 SF-6D is a health utility score based on mean scores across all eight domains of the SF-36, which has been demonstrated to be sensitive to change in rheumatic diseases. 1 28-30 Importantly, SF-6D facilitates comparisons of baseline values and post-treatment changes in HRQoL. The presentation and interpretation of HRQoL data from SF-36 is complex, and the impact of patterns of disease and treatment-associated effects can be difficult to evaluate.
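A small sketch may help fix the scoring pipeline just described. The normative means/SDs, factor weights, and the utility map below are placeholders chosen only to show the structure (0-100 domain scores, z-scores against norms, weighted summaries on the 50/10 scale, plus a mean-domain utility in the spirit of the group-level SF-6D); they are not the published SF-36 or SF-6D coefficients.

```python
import numpy as np

DOMAINS   = ["PF", "RP", "BP", "GH", "VT", "SF", "RE", "MH"]
NORM_MEAN = np.array([85, 81, 75, 72, 61, 83, 81, 75], float)   # placeholders
NORM_SD   = np.array([23, 34, 24, 20, 21, 23, 33, 18], float)   # placeholders
# Placeholder weights reproducing only the sign pattern described in the
# text: PCS weights PF/RP/BP/GH/VT positively; MCS weights VT/SF/RE/MH
# positively.
PCS_W = np.array([.42, .35, .32, .25, .03, -.01, -.19, -.22])
MCS_W = np.array([-.23, -.12, -.10, -.02, .24, .27, .43, .49])

def summary_scores(domain_scores):
    z = (np.asarray(domain_scores, float) - NORM_MEAN) / NORM_SD
    return 50 + 10 * z @ PCS_W, 50 + 10 * z @ MCS_W   # (PCS-like, MCS-like)

def sf6d_like(domain_scores):
    # Group-level stand-in: a linear map of the mean domain score onto [0, 1].
    return 0.30 + 0.60 * np.mean(domain_scores) / 100

baseline = [30, 20, 35, 45, 40, 55, 50, 60]   # invented domain scores
print(summary_scores(baseline), sf6d_like(baseline))
```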
'Spydergrams' provide an intuitive visual method to examine multiple domains of HRQoL simultaneously in a single figure. 30 The objectives of this study were to use spydergrams to compare the impact on HRQoL of three different immune-mediated diseases, specifically in terms of how each disease differentially affects aspects of mental and physical wellbeing, and to use spydergrams to determine how etanercept therapy impacts changes in domains of the SF-36 across these diseases.
Methods
Data for these analyses were obtained from randomised controlled trials that have been published previously on patients with early RA (combination of methotrexate and etanercept in early RA; COMET), [31][32][33] PsA (Study 160030) 34 and psoriasis (Study 160042). 17 Patients in COMET with early moderate to severe RA for a mean of 9 months were randomly assigned to receive etanercept (50 mg a week) in combination with methotrexate or methotrexate alone for 52 weeks. 31 Patients with PsA for a mean of approximately 9 years were randomly assigned to receive etanercept (25 mg twice a week) or placebo for 24 weeks in Study 160030. 34 Patients with active, clinically stable plaque psoriasis for a mean of 20.5 years in Study 160042 were randomly assigned to treatment with etanercept 25 mg twice a week or 50 mg twice a week or placebo for 12 weeks (randomised controlled trial portion of the trial). 17 Patients in all arms of the psoriasis study received etanercept 25 mg twice a week open label for weeks 13-24. In all studies, patients completed SF-36 questionnaires at baseline (before treatment) and at various protocol-specified times during treatment. 29 The eight domains (listed in the supplementary figure, available online only) include: limitations in physical activities because of health problems (physical function (PF)); limitations in usual role activities because of physical health problems (role physical (RP)); bodily pain (BP); general health perceptions (GH); vitality (VT); limitations in social activities because of physical or emotional problems (social functioning (SF)); limitations in usual role activities because of emotional problems (role emotional (RE)); and psychological distress and wellbeing (mental health index (MH)), scored from 0 (worst) to 100 (best). 35 Domain scores were normalised and z-transformed into PCS and MCS summary scores. PCS positively weights five domains (PF, RP, BP, GH and VT) and negatively weights the remaining three domains (SF, RE and MH); MCS positively weights the four mental domains (VT, SF, RE and MH) and negatively weights the four physical domains (PF, RP, BP and GH). The normative value for the PCS or MCS summary score is 50 with a SD of 10. 1 SF-6D estimates health utilities from SF-36 data to derive a single index score that ranges from 0 (death) to 1 (full health). SF-6D was initially based on individual patient data using answers to 11 items from the SF-36 questionnaire. 36 SF-6D has recently been calculated based on group data using mean changes in each of the eight domains. 28 29 This SF-6D calculation was used in the present analyses. The PCS and MCS component scores of SF-36 were initially assessed in each of the clinical trials and, if the results of either were statistically significant, mean changes in domains were assessed for statistical significance without p value corrections, as customary, and for improvements meeting or exceeding the minimum clinically important difference (MCID) of 5-10 points for domain scores and 2.5-5 points for PCS and MCS scores. 1 These MCID values (which were established for RA patients) apply similarly to PsA and psoriasis. 37 38 Similarly, MCID values have been estimated in psoriasis for PCS (0.51-3.91, best estimate at 2.5) and MCS (3.89-6.61), which are in the range of the defined MCID we utilised. 39 SF-6D MID was calculated based on the derivation by Ara and Brazier; 28 29 recent data also indicate that MID for RA and PsA are similar for the SF-6D. 40 Baseline and treatment-associated improvements were quantified across all eight domains using spydergrams and the health utility measure SF-6D. To generate the spydergrams, domain scores were plotted from 0 (worst) at the centre to 100 (best) at the outer edge. Demarcations along each axis/domain represent changes of 10 points, an estimated one to two times the MCID. 30 Baseline and endpoint domain scores in each study were compared with age and gender-matched normative SF-36 data from the USA. 41
Results
Data for these analyses were obtained from 528 patients with RA, 205 patients with PsA and 583 patients with psoriasis (table 1). The majority of patients in all trials were white, most patients with RA were women, and most patients with psoriasis were men. All patients in the PsA trial also had psoriasis. In RA patients enrolled in COMET, baseline PCS scores were low, approaching 2 SD below normative values of 50; MCS scores were more than 0.5 SD less than norms (table 2).
After 52 weeks of treatment, improvement in the PCS score was greater with the etanercept plus methotrexate combination therapy than with methotrexate alone (p=0.0031). Improvements in MCS with both treatments were large, exceeded the MCID, and approached normative values. Large reductions in domain scores were reported by RA patients at baseline and were largest in RP and BP (figure 1A,B). After 52 weeks of treatment, statistically significant (p<0.0001) and clinically meaningful improvements (≥MCID) across all domains were evident with both treatments. Greatest improvements were seen in both treatment groups in domains with the lowest scores at baseline: RP (improvements of 46.5 vs 40.8 points in the etanercept plus methotrexate vs methotrexate arms), BP (37.2 vs 29.6 points) and RE (32.7 vs 25.9 points). Etanercept plus methotrexate therapy was associated with greater improvements in PF, BP and VT domains compared with methotrexate therapy alone (figure 1C). Mean SF-6D scores in patients with RA at baseline (0.529) were considerably lower than age and gender-matched norms (0.822). Clinically meaningful improvements (≥MID) were observed in SF-6D scores in patients receiving etanercept plus methotrexate (mean score 0.658) and methotrexate alone (mean score 0.635). Improvements in SF-6D scores were significantly greater in patients receiving etanercept plus methotrexate compared with methotrexate alone (p=0.05). In patients with PsA enrolled in Study 160030, baseline PCS scores were low in all patients, approaching 1.5 SD below the normative value of 50 (table 2); MCS scores approximated normative values. Mean SF-6D scores in patients with PsA (0.651) were considerably lower than age and gender-matched norms (0.848). After 24 weeks of treatment, clinically meaningful improvements in SF-6D scores were reported by patients receiving etanercept (mean score 0.767) but not placebo (mean score 0.659). In patients with psoriasis enrolled in Study 160042, baseline PCS scores were high in all patients and approximated normative values of 50; MCS scores were within less than 0.5 SD of norms (table 2). 19 42 The mean baseline SF-6D score was higher in patients with psoriasis (0.739) than in patients with RA or PsA, but was still less than age and gender-matched norms (0.854). Clinically meaningful improvements (≥MID) were observed in patients receiving active treatment but not in patients receiving placebo; in addition, during the open-label phase of the trial, improvements were also reported in patients initially treated with placebo (table 2).
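Before turning to the discussion, the following sketch shows how a spydergram of the kind described in Methods can be drawn: eight domain axes with 0 (worst) at the centre, 100 (best) at the rim, and gridlines every 10 points. The three score sets are invented for illustration and are not trial data.

```python
import numpy as np
import matplotlib.pyplot as plt

domains = ["PF", "RP", "BP", "GH", "VT", "SF", "RE", "MH"]
angles = np.linspace(0, 2 * np.pi, len(domains), endpoint=False)
angles = np.concatenate([angles, angles[:1]])          # close the polygon

def close(scores):
    return np.concatenate([scores, scores[:1]])

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for label, scores in {
    "baseline": [35, 20, 30, 45, 40, 55, 45, 60],      # invented values
    "endpoint": [65, 70, 70, 60, 60, 80, 80, 75],
    "norms":    [85, 80, 75, 70, 60, 85, 85, 78],
}.items():
    ax.plot(angles, close(np.array(scores)), label=label)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(domains)
ax.set_ylim(0, 100)
ax.set_yticks(range(0, 101, 10))                       # 10-point demarcations
ax.legend(loc="lower right")
plt.show()
```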
Discussion Our data confi rm that HRQoL is much diminished in patients with PsA and RA compared with age and gender-matched norms. A very interesting observation from these data is that the baseline patterns of HRQoL reductions appear to be different in each disease, as illustrated by the spydergrams. Patients Mean SF-6D scores in patients with PsA (0.651) were considerably lower than age and gender-matched norms (0.848). After 24 weeks of treatment, clinically meaningful improvements in SF-6D scores were reported by patients receiving etanercept (mean score 0.767) but not placebo (mean score 0.659). In patients with psoriasis enrolled in Study 160042, baseline PCS scores were high in all patients and approximated normative values of 50; MCS scores were within less than 0.5 SD of norms (table 2) 19 42 In addition, comparison of all diseases by spydergrams reveals characteristic 'notches' in domains for a given disease. These domain defi cits unique to a particular disease should focus future research on the variables associated with these changes and how these might be more effectively addressed. A limitation in comparing HRQoL across diseases includes the duration of disease in the populations in these analyses; the mean duration of disease at baseline was 9 months, 9 years and approximately 20 years for patients social functioning (RE and SF), and to a lesser degree on physical function (RP) and pain (BP) domains. This is consistent with data obtained from large clinical trials in psoriasis. This suggests that the impact of physical wellbeing on mental health might well depend on the nature of the physical impairment (eg, skin versus joint), as skin disease may have a disproportionally large effect on mental function and ensuing quality of life. Another explanation for the apparently small diminution in physical domains in psoriasis is that the SF-36 is not a particularly sensitive measure of skin disease in psoriasis-although changes in the physical domains correlate with changes in differ from some earlier studies in which the control population was not appropriately matched to the study group. For some domains (eg, BP), post-treatment values actually exceeded those of the age-matched controls, suggesting that some aspects of with RA, PsA and psoriasis, respectively, with broad ranges. Clearly, this variability may have a signifi cant impact on perceptions of disease and therefore HRQoL reported before and after treatment. Because the age and gender-matched norms were very similar to one another in all three diseases, we were able to compare all baseline SF-36 domain scores in one spydergram (fi gure 4). The magnitude of reduction in disease compared with age and gender-matched normative values in domain scores was greatest in RA followed by PsA, and of overall lesser magnitude in psoriasis. While the physical health domain scores were much lower in PsA than in psoriasis, mental health domain scores were very similar in the two diseases except for VT, which was substantially lower in PsA. This is consistent with the theory that much of the negative mental impact in PsA is caused by skin disease. 43 Given that reports in the literature support the connection between educational status and negative mental impact of disease in arthritis, this might have implications for educational interventions to prevent the same negative mental impacts in psoriasis. 
While it is easy to understand the diminished social and emotional functioning in arthritis (RA and PsA) caused by physical limitation and pain, it may be diffi cult immediately to rationalise why patients with psoriasis have diminished physical health. However, pain and itching from skin disease are known to result in inability to function in the workplace, use one's hands (as palm involvement is common in psoriasis), and can result in sleep disturbances that characterise psoriasis and result in inability to participate in various physical activities related to work or leisure. In addition, it is known that psychological comorbidities such as depression and anxiety are associated with psoriasis. Our data suggest that the physical and mental domains of HRQoL are connected. Diminished mental health in psoriasis may result from concern over the appearance of the psoriatic skin, leading to anxiety, a sense of stigmatisation and embarrassment, reduction in participation in work and leisure activities and diminishment of a patient's psychological wellbeing. The mental and physical domains in psoriasis might also be uniquely connected by bodily pain. BP is consistently found to be among the most affected physical domains in psoriasis, which is likely to be due to skin pain from psoriatic plaques, as objective measures of skin disease tend to correlate well with the BP subscale. 4 Pain and itching of the skin can cause both emotional and physical distress. Physical health may be affected by the itching and burning sensation of the skin and by joint pain in those with concomitant arthritis. 9 12 Interestingly, depression also impacts BP in psoriasis. 21 In addition, skin pain (and itching) probably contributes to sleep disturbance, which is an important component of mental HRQoL in psoriasis. Sleep disturbance could directly contribute to fatigue and social function. BP has often been seen to be prone to greatest improvements of all SF-36 domains following therapy in psoriasis patients, and this is consistent with our data. 20 It is of note that this improvement has been seen with a number of therapies, including anti-TNF therapies, ustekinumab (which targets interleukins 12 and 23) 42 and efalizumab (which targets CD11a). 44 It has also recently been shown that depression impacts heavily on the BP domain in psoriasis, 21 and this may be more uniquely associated with psoriasis than with RA or PsA. Following treatment with etanercept, the largest improvements were seen in physical domains as well as RE in RA patients. In patients with psoriasis, despite potential ceiling effects, treatment-associated changes could still be demonstrated. Our fi ndings were similar to those of Revicki et al 12 but chronic skin disease/pain might extinguish other pain stimuli to some degree so that when treated, patients actually feel less pain than norms. A unique aspect of the SF-36 is that lower domain scores represent worse disease. This paper demonstrates that these lower domain scores are often associated with the largest treatmentassociated changes towards improvement. The use of spydergrams made this evident, as treatment was associated with loss of 'notching' and more even 'rounding' of the spydergram patterns for both RA and PsA (compare fi gure 4A,B). In contrast, improvements are less easily demonstrated using the health assessment questionnaire disability index that also assesses physical function probably because higher scores indicate worse outcome. 
In addition, SF-36 data are normalised; this scoring method inherently leads to fewer floor (or ceiling) effects, as is evident in both the RA and PsA datasets. SF-6D scores are a means for quantifying the changes illustrated in the spydergrams. As in patients with RA and PsA, the SF-36 is a useful tool for assessing HRQoL in patients with psoriasis. Questions in the SF-36 may be less sensitive to the impact of skin disease in the absence of arthritis; however, statistically significant and clinically meaningful treatment-associated improvements were demonstrated in psoriasis patients. Furthermore, our results illustrated less potential for 'ceiling effects' with this instrument, as reflected in improvements that exceeded normative values even with high domain scores at baseline, as in psoriasis. Several domains from the SF-36, a generic HRQoL assessment tool, have been shown to correspond with items from disease-specific tools, such as the dermatology life quality index for psoriasis 26 38 39 and the health assessment questionnaire for RA 45 46 and PsA. 37 In both research and clinical settings, visual representation of SF-36 data in spydergram format would be most useful for measuring improvements or changes in response to treatment in a cohort of patients. In clinical practice, spydergrams would also be useful for routine monitoring of the mental domains of HRQoL. In addition to normal physical examinations of joints and/or skin, patient-reported information from the SF-36 would complement physical findings to provide the clinician with a more comprehensive evaluation of each patient.

Conclusions

In this study we directly compared HRQoL in patients with RA, PsA and psoriasis and showed that each disease had a unique profile among physical and mental health domains. There were differences in the magnitude of change between the three diseases, but not in the ability of RA, PsA and psoriasis patients to achieve HRQoL improvements. Similar to other studies using anti-TNF agents, we showed that improvements are achieved in response to therapy in both physical and mental health components of the SF-36 in all three diseases.
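For readers who wish to reproduce the spydergram format used throughout this paper, the following minimal sketch renders SF-36 domain scores as a radar plot with matplotlib. All domain scores below are hypothetical placeholder values, not data from this study.

```python
# Minimal spydergram (radar chart) for SF-36 domain scores.
# The eight domains are standard SF-36 labels; all score values
# below are illustrative placeholders, not data from this study.
import numpy as np
import matplotlib.pyplot as plt

domains = ["PF", "RP", "BP", "GH", "VT", "SF", "RE", "MH"]
baseline = [45, 38, 40, 44, 41, 52, 48, 55]   # hypothetical 0-100 scores
week24   = [62, 58, 63, 55, 54, 68, 66, 67]   # hypothetical post-treatment
norms    = [80, 78, 72, 70, 61, 84, 82, 76]   # hypothetical matched norms

# Close the polygon by repeating the first point at the end.
angles = np.linspace(0, 2 * np.pi, len(domains), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for label, scores in [("Baseline", baseline), ("Week 24", week24), ("Norms", norms)]:
    values = scores + scores[:1]
    ax.plot(angles, values, label=label)
    ax.fill(angles, values, alpha=0.1)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(domains)
ax.set_ylim(0, 100)
ax.legend(loc="lower right")
plt.title("SF-36 spydergram (illustrative data)")
plt.show()
```

Domains with deep baseline 'notches' and post-treatment 'rounding' become immediately visible in this layout, which is the point made above about spydergrams versus tabulated scores.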
Manganese Activation of Superoxide Dismutase 2 in the Mitochondria of Saccharomyces cerevisiae

Manganese-dependent superoxide dismutase 2 (SOD2) in the mitochondria plays a key role in protection against oxidative stress. Here we probed the pathway by which SOD2 acquires its manganese catalytic cofactor. We found that a mitochondrial localization is essential. A cytosolic version of Saccharomyces cerevisiae Sod2p is largely apo for manganese and is only efficiently activated when cells accumulate toxic levels of manganese. Furthermore, Candida albicans naturally produces a cytosolic manganese SOD (Ca SOD3), yet when expressed in the cytosol of S. cerevisiae, a large fraction of Ca SOD3 also remained manganese-deficient. The cytosol of S. cerevisiae cannot readily support activation of Mn-SOD molecules. By monitoring the kinetics for metalation of S. cerevisiae Sod2p in vivo, we found that prefolded Sod2p in the mitochondria cannot be activated by manganese. Manganese insertion is only possible with a newly synthesized polypeptide. Furthermore, Sod2p synthesis appears closely coupled to Sod2p import. By reversibly blocking mitochondrial import in vivo, we noted that newly synthesized Sod2p can enter mitochondria but not a Sod2p polypeptide that was allowed to accumulate in the cytosol. We propose a model in which the insertion of manganese into eukaryotic SOD2 molecules is driven by the protein unfolding process associated with mitochondrial import.

Superoxide dismutase (SOD) enzymes represent a family of metalloproteins that have evolved to catalytically remove toxic superoxide anions. Most eukaryotes express two distinct forms: a copper- and zinc-containing enzyme (SOD1) that largely resides in the cytosol (1) but is also found in the intermembrane space of mitochondria (2)(3)(4), and a second SOD that contains manganese (SOD2) and is typically localized in the mitochondrial matrix (3,5). In both cases, enzymatic activity is reliant on the redox cycling of the bound copper or manganese ion cofactor. Hence, the post-translational insertion of the metal represents a key step in controlling enzymatic activity in vivo.
Much is known about the mechanism by which SOD1 acquires copper in vivo. Copper is transported and trafficked to the site of SOD1 by the concerted action of cell surface and intracellular copper transporters and a copper chaperone known as CCS (6-12). CCS can insert copper into a pre-existing apopool of SOD1 with no need for new protein synthesis (13,14). The copper chaperone can also act on newly synthesized molecules of SOD1 (14). In either case, oxygen is required for CCS activity, providing a means for regulating SOD1 activity in response to oxygen status (15). The delivery of manganese to SOD2 should also involve a carefully controlled trafficking system. Using yeast genetics, we have identified two membrane proteins that help deliver manganese to the enzyme. One is the divalent metal transporter Smf2p that localizes in intracellular vesicles (16,17). Saccharomyces cerevisiae cells lacking Smf2p accumulate very low levels of manganese and show defects in manganese-requiring enzymes of the Golgi and in Sod2p of mitochondria (18). A second transporter that affects yeast Sod2p is Mtm1p, a member of the mitochondrial carrier family of proteins (19). Although the precise substrate for transport by Mtm1p is not known, Mtm1p is needed for proper insertion of manganese into mitochondrial Sod2p (19). Despite the identification of these components, the mechanistic details of the post-translational events associated with activation of eukaryotic SOD2 are still poorly understood. The protein is encoded in the nucleus and transported across two mitochondrial membranes, and once inside the mitochondrial matrix, the polypeptide folds into a tetrameric enzyme. The stage at which manganese is introduced is not known. For example, can manganese be inserted into a pre-existing pool of apoSOD2, as is the case with copper-containing SOD1? In addition, it is not clear whether SOD2 requires a mitochondrial location to acquire its metal cofactor. In certain organisms, Mn-SODs can be activated outside the mitochondria. The pathogenic fungus Candida albicans (20) and the blue crab Callinectes sapidus (21) both express Mn-SODs in the cytosol. When the Mn-SODs from either C. albicans or from the bacterium Bacillus stearothermophilus were targeted to the cytosol of S. cerevisiae, they exhibited some activity (20,22,23). These results alone would suggest that a mitochondrial location may not be essential for manganese activation of SOD2. In this study, we explored the pathway for inserting manganese into mitochondrial SOD2 using S. cerevisiae Sod2p as a model. We found that efficient metalation of Sod2p requires a mitochondrial localization of the protein; a cytosolic version of Sod2p is poorly activated with the metal. Furthermore, only newly synthesized molecules of Sod2p that are freshly imported into mitochondria can acquire the metal in vivo. Manganese cannot be readily inserted into a pool of Sod2p that is apo for manganese. We provide a model in which manganese insertion into Sod2p is driven by the protein unfolding process associated with mitochondrial import.

Plasmids

The pEL111 vector was constructed by subcloning the BamHI-SalI fragment of pEL101 (18) containing −557 to +894 of the S. cerevisiae SOD2 gene into the pRS415 vector (25) digested with the same enzymes.
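Subcloning steps like the one just described depend on where the relevant restriction sites fall, and such sites are easy to sanity-check in silico. The sketch below uses Biopython's Restriction module to locate sites in a fragment; the sequence is a short synthetic placeholder, not the actual SOD2 fragment from this study.

```python
# Locating restriction sites in a cloning fragment with Biopython.
# The sequence below is a short synthetic placeholder, not the real
# S. cerevisiae SOD2 fragment described in the text.
from Bio.Seq import Seq
from Bio.Restriction import BamHI, SalI, SacI, XhoI

insert = Seq(
    "GGATCC"                  # BamHI site (G^GATCC)
    "ATGTTCGCGAAAACAGCAGCT"   # placeholder coding sequence
    "GAGCTC"                  # SacI site (GAGCT^C)
    "CTCGAG"                  # XhoI site (C^TCGAG)
    "GTCGAC"                  # SalI site (G^TCGAC)
)

# search() returns the 1-based cut positions on the given strand.
for enzyme in (BamHI, SacI, XhoI, SalI):
    print(enzyme, "cuts at:", enzyme.search(insert))

# A fragment is only usable for directional cloning if each flanking
# enzyme cuts exactly once; that check is a one-liner:
unique = all(len(enz.search(insert)) == 1 for enz in (BamHI, SalI))
print("BamHI-SalI directional cloning possible:", unique)
```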
To construct the vector pEL1G1 bearing a GAL1-SOD2 fusion, a SacI site (GAGCTC) was first introduced in the SOD2-expressing vector pEL101 (18), replacing the sequence TAAAAA 15 bp upstream of the SOD2 start codon by site-directed mutagenesis. This plasmid was subsequently digested with SacI and XhoI, and the resulting 910-bp fragment containing the SOD2 open reading frame and its transcriptional terminator was subcloned downstream of a GAL1 promoter in the pYES2/CT vector (Invitrogen) using the same restriction sites. Sequence integrity was confirmed by DNA sequencing analysis (Core Facility, Johns Hopkins Medical Institutions). The multicopy expression vector for cytosolic Sod2p (amino acids 27 to the stop codon), pEL124, was created by subcloning the BamHI-SalI fragment of pEL104 (18) into pRS425 (25) at the same restriction enzyme sites. The resulting construct for cytosolic Sod2p contains the SOD2 gene promoter (−558 to −1) and terminator (+703 to +889) but lacks the mitochondrial presequence. The C. albicans SOD3 expression vector pVTSOD3 was described previously (20).

Biochemical Assays

For preparation of cell lysates, S. cerevisiae strains were inoculated in 50 ml of YPD at a starting A600 of ~0.05 and allowed to grow without shaking at 30 °C for ~15 h. In general, whole cell lysates were prepared by glass-bead agitation as described previously (18). Where mitochondrial fractionation was required, yeast cells were converted to spheroplasts, lysed with a Dounce homogenizer, and fractionated into mitochondrial and post-mitochondrial cell fractions by differential centrifugation as previously described (26). Whole cell lysates or cellular fractions were analyzed for SOD activity by native gel electrophoresis and staining with nitroblue tetrazolium (27). S. cerevisiae Sod2p and C. albicans SOD3 polypeptide levels were monitored by subjecting whole cell lysates or cell fractions to denaturing gel electrophoresis and immunoblotting with an antibody directed against S. cerevisiae Sod2p (18) that cross-reacts with C. albicans SOD3. Where needed, antibodies directed against Mas2p (gift from Dr. Rob Jensen, Johns Hopkins University) and cytosolic Pgk1p (Molecular Probes, Eugene, OR) were used as described (18). S. cerevisiae Sod2p (containing the N-terminal mitochondrial targeting sequence) and C. albicans SOD3 were purified as recombinant proteins as described previously (19,20,28). Molar concentrations of these SOD molecules were determined by amino acid hydrolysis analysis (Protein Chemistry Laboratory, Texas A&M University). To monitor the rate of manganese activation of Sod2p, smf2Δ mutants deficient in manganese were grown in YPD medium for ~15 h to an A600 of ~3.0. 10 μM MnSO4 was then added and, after various time intervals, 25-ml culture samples were removed for preparation of whole cell lysates by glass bead homogenization and for monitoring Sod2p activity and polypeptide levels as above. Where needed, 100 μg/ml cycloheximide was added to cultures just prior to manganese supplementation. In a duplicate set of cultures, lysates were prepared from spheroplasts, and crude mitochondria were isolated (26). The mitochondria fractions were analyzed for manganese content as described in Ref. 19 by atomic absorption spectroscopy using a PerkinElmer AAnalyst 600 graphite furnace atomic absorption spectrometer. To follow in vivo mitochondrial import of Sod2p, sod2Δ yeast mutant cells were transformed with the GAL1-SOD2 vector pEL1G1 and cultured for 17 h in YPR medium to an A600 of 1.1-1.5.
2% galactose was then added to induce SOD2 expression. Where needed, 20 μM of the proton uncoupler carbonyl cyanide m-chlorophenylhydrazone (CCCP) (Sigma) was added to block mitochondrial import (29). Addition of 0.05% (v/v) β-mercaptoethanol (β-ME) served to neutralize CCCP as described previously (29). Cell lysates were prepared by spheroplast homogenization, and mitochondrial and post-mitochondrial supernatant fractions were prepared as described above.

Efficient Metalation of Sod2p Requires Mitochondrial Localization

We tested whether a mitochondrial localization of Sod2p was needed for manganese insertion into the enzyme. A cytosolic version of S. cerevisiae Sod2p was created by removing the N-terminal mitochondrial presequence (Fig. 1A) (30). The resulting Sod2p molecule (CytSod2p) was expressed in a sod2Δ mutant of S. cerevisiae lacking the endogenous mitochondrial Sod2p. As seen in Fig. 1B, CytSod2p co-localizes with the cytosolic marker Pgk1p and is largely excluded from the mitochondria marked by the mitochondrial matrix protein Mas2p. By comparison, expression of native Sod2p harboring the N-terminal presequence (MitoSod2p) resulted in a mitochondrial localization of the enzyme as expected (Fig. 1B). To test for enzymatic activity, lysates from cells expressing CytSod2p or MitoSod2p were applied to a native gel and analyzed for SOD activity by nitroblue tetrazolium staining. As seen in Fig. 1C, the cytosolic Sod2p was largely inactive compared with the mitochondrial enzyme (compare lanes 1 and 6). The activity of CytSod2p was restored by manganese supplementation in vivo, indicating that the lack of CytSod2p activity under physiological conditions results from a manganese deficiency in the enzyme. It is noteworthy that the amount of manganese required to activate CytSod2p is quite high (≥100 μM). This is a concentration that is somewhat toxic to the yeast, as indicated by slowed growth ((19) and (not shown)). The cytosolic form of Sod2p is only active when cells hyperaccumulate manganese.

[Fig. 1 legend (beginning truncated): A, ...(30), and this region was removed from S. cerevisiae Sod2p to create CytSod2p. B, cell lysates were prepared from strain BY4741 expressing native mitochondrial Sod2p (MitoSod2p) and from the isogenic sod2Δ mutant transformed with pEL124 expressing cytosolic Sod2p (CytSod2p). 60 μg of total cell lysates (T) were separated by differential centrifugation into a post-mitochondrial supernatant fraction that is largely cytosolic (C) and a crude mitochondria fraction (M). The entire sample of each fraction, along with 60 μg of total cell lysate (to achieve identical cell equivalents), were subjected to denaturing gel electrophoresis and immunoblot analysis with antibodies directed against S. cerevisiae Sod2p, the cytosolic phosphoglycerate kinase (Pgk1p), and the mitochondrial processing protease (Mas2p). C, the sod2Δ strain expressing either native mitochondrial Sod2p (MitoSod2p) on plasmid pEL111 or cytosolic Sod2p on plasmid pEL124 was grown overnight in YPD medium supplemented with the indicated concentrations of MnSO4. Whole cell lysates were subjected to either native gel electrophoresis and staining with nitroblue tetrazolium for Sod2p activity (top) or denaturing gel electrophoresis and immunoblotting with anti-Sod2p (bottom). Prior to denaturing gel electrophoresis, Sod2p-containing samples were heated in SDS at ~40 °C, rather than the standard 100 °C, to prevent precipitation of Sod2p.]
Under physiological conditions, Sod2p needs to be inside the mitochondria to be efficiently activated.

Cytosolic SOD3 from C. albicans Is Only Partially Active When Expressed in S. cerevisiae

The pathogenic fungus C. albicans expresses a manganese-containing SOD in the cytosol (Ca SOD3) that is reported to be active when expressed in the cytosol of S. cerevisiae (20). We therefore addressed whether Ca SOD3 has a unique ability to acquire manganese in the cytosol. Consistent with earlier studies (19,20), Ca SOD3 expressed in S. cerevisiae exhibits some activity under physiological conditions (Fig. 2A, lanes 2 and 8). Expression was observed in both a sod2Δ strain (lane 8) and a strain expressing the endogenous mitochondrial Sod2p of S. cerevisiae (lane 2). Activity of Ca SOD3 expressed in S. cerevisiae is limited by manganese bioavailability. Decreasing intracellular manganese through a deletion of the Smf2p manganese transporter (18) abolished Ca SOD3 activity (Fig. 2A, lane 5), and activity was rescued by growing cells in the presence of 100 μM manganese (lane 6). In fact, such manganese supplementation also had a dramatic effect on Ca SOD3 activity in SMF2 wild-type cells (lanes 3 and 9). Hence, there appears to be a large inactive pool of Ca SOD3 that is manganese-deficient and can be activated at high intracellular manganese, reminiscent of the scenario seen with S. cerevisiae CytSod2p (Fig. 1C). The expression of Ca SOD3 in S. cerevisiae is driven by a high copy vector and the strong constitutive ADH1 promoter (20). To estimate how much Ca SOD3 is being produced relative to endogenous Sc Sod2p, purified Sc Sod2p and Ca SOD3 proteins of known concentrations were used as standards in a semiquantitative immunoblot against lysates from cells expressing Sc Sod2p and Ca SOD3. As seen in Fig. 2B, Ca SOD3 is expressed in S. cerevisiae on a per mole basis at levels that are roughly 10-fold higher than the endogenous Sc Sod2p. This overexpression of Ca SOD3 protein may explain why activity can be detected in S. cerevisiae, despite the fact that a large fraction of the protein lacks manganese. Overall, the findings obtained with Ca SOD3 and with cytosolic Sod2p demonstrated that, in S. cerevisiae, efficient activation of Sod2p requires a mitochondrial localization. There is clearly a component absent from the cytosol that is required for efficient activation of Sod2p.

Insertion of Manganese into Sod2p Requires New Protein Synthesis

How is mitochondrial Sod2p activated with manganese? We know that, in the case of copper-containing Sod1p, a pre-existing apopool of the enzyme is rapidly activated with copper in the absence of new protein synthesis (13,14). We tested whether the same was true for manganese-containing Sod2p of the mitochondria. To monitor activation of a pool of Sod2p that is largely apo for manganese, we utilized the manganese-deficient smf2Δ mutant. In these cells, the Sod2p polypeptide still accumulates in the mitochondria but is largely inactive because of low mitochondrial manganese (18). Sod2p activity is fully restored in this mutant by culturing cells in the presence of 10 μM manganese (Fig. 3, A and C, lanes 3). We monitored the time required to activate Sod2p following the addition of manganese to the growth medium. As shown in Fig. 3, A and C, Sod2p was activated very slowly by manganese and required at least 2-3 h of treatment with the metal. By comparison, activation of cytosolic Sod1p by copper in S. cerevisiae cells is complete in <5 min (13).
The slow activation of mitochondrial Sod2p is not a result of slow trafficking of the metal to the mitochondria, as mitochondrial manganese was restored to near wild-type levels after 15 min of treatment with manganese (Fig. 3B). Such a delay in metalation of the enzyme suggests that new protein synthesis may be required. To address the requirement for protein synthesis, the time course for Sod2p activation was monitored under conditions in which in vivo protein translation was blocked by cycloheximide.

[Fig. 2 legend: SOD3 from C. albicans is only partially active when expressed in the cytosol of S. cerevisiae. A, wild-type strain BY4741 and the isogenic smf2Δ and sod2Δ mutants, transformed where indicated (SOD3, +) with pVTSOD3 (20) expressing C. albicans (Ca) SOD3, were grown in YPD medium that was supplemented where indicated (Mn2+, +) with 100 μM MnSO4. Total yeast cell lysates were analyzed for SOD activity by the native gel assay as described in the legend to Fig. 1C. The positions of C. albicans SOD3 and the endogenous Sod2p and Sod1p enzymes from S. cerevisiae are indicated. B, the specified amounts of whole cell lysate protein from either the sod2Δ cell expressing C. albicans SOD3 (left) or from wild-type BY4741 expressing endogenous Sod2p (right) were subjected to immunoblot analysis and compared with known amounts of the corresponding recombinant Mn-SOD molecule, which was purified to homogeneity as described previously (19,20). The purified recombinant Ca SOD3 and S. cerevisiae (Sc) Sod2p contain an N-terminal His6 tag (20) and mitochondrial targeting sequence (18), respectively, that account for the slightly higher molecular weights on the immunoblot.]

Fig. 3C shows that cycloheximide treatment (lanes 5, 7, and 9) completely abolished the increase in Sod2p activity with manganese treatment. Trafficking of manganese into the mitochondria, however, was unaffected by cycloheximide, as indicated by atomic absorption spectroscopy (Fig. 3B). Thus, manganese insertion requires new protein synthesis. The metal is not readily inserted into a pre-existing pool of apoSod2p, and only a freshly synthesized Sod2p molecule appears competent for metalation.

Mitochondrial Import and Synthesis of Sod2p Appear Closely Coupled

Synthesis of the Sod2p polypeptide occurs outside the mitochondria, whereas metalation of Sod2p takes place within mitochondria. Based on these distinct cellular locations, why would polypeptide synthesis be required for manganese insertion? As a likely explanation, the synthesis of Sod2p may be closely coupled with mitochondrial import, and it is the import process that facilitates manganese insertion. In fact, it has been suggested that certain mitochondrial proteins are imported co-translationally (31)(32)(33)(34), because folding of the polypeptide in the cytosol would prohibit mitochondrial uptake. We tested whether this was the case for Sod2p. An experiment was designed in which Sod2p synthesis was controlled via the S. cerevisiae GAL1 promoter. A sod2Δ deletion strain was transformed with an inducible Sod2p expression vector (Gal-Sod2p), and following 15-30 min of treatment with galactose, newly synthesized Sod2p became apparent (Fig. 4A). All of the newly synthesized Sod2p migrated as a single species (Fig. 4A), representing the mitochondrial processed ("P") form. Unprocessed, precursor Sod2p ("U") is completely absent in the induced samples, suggesting that the mitochondrial import of Sod2p occurs immediately following, or concomitant with, Sod2p synthesis.
To address this further, we uncoupled mitochondrial import and protein synthesis. This was achieved through use of the proton ionophore CCCP, which blocks import by disrupting the mitochondrial membrane potential (29). Fig. 4B, lanes 4-6, shows that CCCP completely blocked import of Sod2p into the mitochondria. As a result, the unprocessed Sod2p precursor (U) accumulated in the cytosolic fraction (Fig. 4B, lane 5). The effect of CCCP can be neutralized by β-ME (29), and when β-ME is added shortly following CCCP, there is no inhibition of import, and newly synthesized Sod2p was taken into mitochondria and processed (P) (Fig. 4B, lanes 7-9). Using this system, we tested whether pre-existing cytosolic Sod2p can be chased into mitochondria. In the experiment of Fig. 4C, Sod2p synthesis was induced for 3 h in the presence of CCCP to allow accumulation of unprocessed cytosolic Sod2p (U) (lane 1). Where indicated, β-ME was then added for an additional 45 min. β-ME clearly reversed the effects of CCCP during this time frame, because the shorter form of Sod2p representing processed mitochondrial Sod2p (P) became apparent (Fig. 4C, lane 3). However, the unprocessed (U) Sod2p that accumulated prior to β-ME treatment was unchanged when mitochondrial import function was restored with β-ME (compare U in Fig. 4C, lanes 1 and 3). Furthermore, the appearance of processed mitochondrial Sod2p required new protein synthesis, as cycloheximide specifically prevented formation of processed (P) Sod2p in β-ME-treated cells (lane 4). Together, these findings are consistent with the notion that mitochondrial import requires freshly translated Sod2p. Overall, the synthesis, mitochondrial import, and manganese insertion steps for Sod2p are closely coordinated in time.

DISCUSSION

The mitochondrial SOD2 enzyme is well known for its role in eukaryote survival and fitness (35)(36)(37)(38)(39)(40)(41)(42). Yet despite this widespread importance, virtually nothing is known about the maturation of the SOD2 polypeptide in vivo. How is the inactive protein encoded by the nucleus converted into an active manganese-containing enzyme in the mitochondrial matrix? We have shown here that activation of S. cerevisiae Sod2p through insertion of the manganese cofactor must occur within the mitochondria. When expressed in the cytosol of S. cerevisiae, Mn-SOD molecules are poorly activated. Efficient manganese activation also requires new protein synthesis and mitochondrial import. Our data are consistent with a model in which the translation, mitochondrial import, and manganese activation of Sod2p are closely coupled in time. Although Sod2p is largely inactive when expressed in the cytosol of S. cerevisiae, activity could be restored by exposing cells to high, toxic concentrations of manganese. Under normal physiological conditions, the bioavailability of manganese in the cytoplasm appears too low to activate newly synthesized Sod2p. This may be a universal phenomenon, because most eukaryotes do not express a cytosolic Mn-SOD. However, there are rare exceptions, as in the case of the cytosolic Mn-SOD of C. albicans (20) and of decapod crustaceans (21). Our studies here show that C. albicans SOD3 does not possess an inherent ability to acquire cytosolic manganese, as a large fraction of the protein remained inactive when expressed in the cytoplasm of S. cerevisiae. Instead, C. albicans, as well as the crustaceans, may have evolved novel methods for delivering manganese to Mn-SOD in the cytosol, e.g.
mechanisms that involve a manganese chaperone or elevated bioavailability of the metal. Our studies show that import of S. cerevisiae Sod2p into mitochondria requires a freshly synthesized Sod2p polypeptide. If allowed to accumulate and fold in the cytosol, Sod2p is refractory to mitochondrial uptake. When folded, Sod2p is a notoriously stable molecule. Human SOD2 is stable at 60 °C (43), and the S. cerevisiae enzyme can be purified following treatment at 70 °C with little loss in activity (5). Studies with homologous manganese SODs from bacteria have shown that even the apoform of the enzyme forms a tight, stable structure resistant to thermal denaturation (44,45). As such, it is not surprising that import of SOD2 into mitochondria must occur before the protein has a chance to fold in the cytosol. In this regard, it is noteworthy that the mRNA for SOD2 in mammalian cells and the mRNA/ribosomes for Sod2p in S. cerevisiae are both found associated with the outer membrane of mitochondria (46,47). With other mitochondrial proteins, the 3′-UTR was found to mediate association with the mitochondrial outer membrane (47), and the same may be true for Sod2p. In any case, translation of S. cerevisiae Sod2p appears to occur at the site of the mitochondria to facilitate co-translational import of the protein into mitochondria (31,32,34,47).

[Fig. 4 legend: Sod2p accumulated in the cytosol cannot be imported into mitochondria. sod2Δ mutants harboring the pEL1G1 vector for galactose-inducible expression of SOD2 were grown in YPR medium to mid log before galactose was added to induce Sod2p synthesis. The Sod2p polypeptide in whole cell lysates (panel A, T) or in mitochondrial (M) and post-mitochondrial supernatant/cytosolic (C) fractions was analyzed by immunoblot as described in the legend to Fig. 1. The U position of unprocessed Sod2p containing the mitochondrial leader sequence and the P position of processed or mature mitochondrial Sod2p are shown. A, galactose induction proceeded for the indicated time points (in min) prior to preparation of cell lysates. Lane 1 contains unprocessed Sod2p (U) used as a molecular weight control. B, galactose induction proceeded for 3 h. Where indicated, 20 μM CCCP and 0.05% (v/v) β-ME were added at t = 15 and t = 30, respectively. C, galactose induction in the presence of 20 μM CCCP proceeded for 3 h. Cells were either harvested (t = 0) or treated for 45 min with 0.05% (v/v) β-ME in the presence or absence of 100 μg/ml cycloheximide (to block protein synthesis) as indicated. Cycloheximide was added ~15 min prior to the addition of β-ME.]

Once imported into mitochondria, Sod2p needs to rapidly acquire its metal, because a pre-existing inactive pool of mitochondrial Sod2p failed to acquire manganese in vivo. This is consistent with in vitro studies performed on the homologous bacterial Mn-SOD enzymes (44,45,48). When folded, these enzymes cannot acquire the metal. Activation with manganese is only possible when the proteins were thermally denatured and the metal was present during, but not after, refolding of the polypeptide (44,45,48). With Mn-SOD molecules, metal access to the active site is guarded by a large transition state barrier that only becomes accessible when the polypeptide is unfolded (44,45,48). We propose that, in the case of eukaryotic manganese SOD2, the requisite protein unfolding step is achieved by mitochondrial import.
Passage of polypeptides through the inner mitochondrial membrane requires extensive protein unfolding, followed by refolding once in the matrix (49,50). But in the case of eukaryotic SOD2, manganese insertion must take place prior to refolding. SOD2 has four amino acid ligands for manganese, at positions 52, 107, 194, and 198 in the S. cerevisiae enzyme. Perhaps the metal begins to insert at the N-terminal ligands as the polypeptide emerges from the inner membrane (see schematic, Fig. 5). An accessory protein may also be involved to prevent protein folding prior to metal insertion. Currently, the only mitochondrial protein known to facilitate manganese activation of SOD2 is S. cerevisiae Mtm1p (19). Located in the inner membrane of mitochondria, Mtm1p is in a perfect position to assist in Sod2p metalation as the polypeptide enters the mitochondrial matrix. The precise activity of Mtm1p is not known but may involve direct insertion of the manganese cofactor or maintaining Sod2p in a conformation that is competent for metal activation. These possibilities are under current investigation. Overall, our studies have provided a more detailed mechanistic picture for the post-translational activation of SOD2 with manganese. As shown in our model of Fig. 5, the ribosomes for S. cerevisiae Sod2p synthesis are juxtaposed to the outer mitochondrial membrane (46,47). This allows for the coupling of Sod2p synthesis and mitochondrial import. As the polypeptide emerges from the inner membrane, manganese ions are inserted through a process that is facilitated by Mtm1p in the inner membrane, and perhaps by other accessory proteins as well. Last of all, the manganese-containing protein is folded in the mitochondrial matrix into a stable quaternary tetramer. With the clear importance of SOD2 in eukaryotic survival and fitness (35)(36)(37)(38)(39)(40)(41)(42), these ordered steps must be carefully controlled.

[Fig. 5 legend (beginning truncated): ...(47), which would facilitate co-translational import of Sod2p into mitochondria. As Sod2p emerges into the mitochondrial matrix, the protein immediately acquires its manganese cofactor prior to folding of the polypeptide. Manganese binding may begin prior to complete translocation of the Sod2p polypeptide. Manganese insertion requires the action of Mtm1p, a member of the mitochondrial carrier family of transporters in the inner membrane (19). The precise substrate for transport by Mtm1p is not known (circled X), but possibilities include manganese itself or a solute that facilitates manganese insertion. Following insertion of manganese, the Sod2p polypeptide folds and monomers associate to form the active tetrameric enzyme. H and D, manganese binding ligands H52, H107, D194, and H198.]
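The ordered synthesis, import, and metalation scheme proposed above can be caricatured with a toy mass-action kinetic model. The sketch below is a deliberate simplification: it captures only the point that precursor which folds in the cytosol is lost to activation while imported, still-unfolded protein can be metalated. All rate constants and species names are arbitrary illustrative values, not measured parameters from this study.

```python
# Toy kinetics for the synthesis -> import -> metalation model.
# All rate constants are arbitrary illustrative values.
import numpy as np
from scipy.integrate import odeint

def model(y, t, k_syn, k_imp, k_fold_cyt, k_metal):
    pre, folded_cyt, matrix_apo, holo = y
    d_pre = k_syn - (k_imp + k_fold_cyt) * pre   # nascent cytosolic precursor
    d_fold = k_fold_cyt * pre                    # folded in cytosol: dead end
    d_apo = k_imp * pre - k_metal * matrix_apo   # unfolded protein in the matrix
    d_holo = k_metal * matrix_apo                # metalated, active enzyme
    return [d_pre, d_fold, d_apo, d_holo]

t = np.linspace(0, 60, 300)   # minutes
y0 = [0.0, 0.0, 0.0, 0.0]

# Import intact: most protein ends up as active holo-enzyme.
sol = odeint(model, y0, t, args=(1.0, 0.5, 0.05, 1.0))
# Import blocked (k_imp = 0, mimicking CCCP): the precursor folds in
# the cytosol and is lost to a metalation-incompetent pool.
sol_cccp = odeint(model, y0, t, args=(1.0, 0.0, 0.05, 1.0))

print("holo fraction, import intact: ", sol[-1, 3] / sol[-1].sum())
print("holo fraction, import blocked:", sol_cccp[-1, 3] / sol_cccp[-1].sum())
```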
Quality indicators for the evaluation of end-of-life care in Germany – a retrospective cross-sectional analysis of statutory health insurance data

Background: The provision and quality of end-of-life care (EoLC) in Germany are inconsistent. Therefore, an evaluation of current EoLC based on quality indicators is needed. This study aims to evaluate EoLC in Germany on the basis of quality indicators pertaining to curative overtreatment, palliative undertreatment and delayed palliative care (PC). Results were compared with previous findings.

Methods: Data from a statutory health insurance provider (AOK Lower Saxony) pertaining to deceased members in the years 2016 and 2017 were used to evaluate EoLC. The main indicators were: chemotherapy for cancer patients in the last month of life, first-time percutaneous endoscopic gastrostomy (PEG) for patients with dementia in the last 3 months of life, number of hospitalisations and days spent in inpatient treatment in the last 6 months of life, and provision of generalist and specialist outpatient PC in the last year of life. Data were analysed descriptively.

Results: Data for 64,275 deceased members (54.3% female; 35.1% cancer patients) were analysed. With respect to curative overtreatment, 10.4% of the deceased with cancer underwent chemotherapy in the last month and 0.9% with dementia had a new PEG insertion in the last 3 months of life. The mean number of hospitalisations and inpatient treatment days per deceased member was 1.6 and 16.5, respectively, in the last 6 months of life. Concerning palliative undertreatment, generalist outpatient PC was provided for 28.0% and specialist outpatient PC was provided for 9.0% of the deceased. Regarding indicators for delayed PC, the median onset of generalist and specialist outpatient PC was 47.0 and 24.0 days before death, respectively.

Conclusion: Compared to data from 2010 to 2014, the data analysed in the present study suggest an ongoing curative overtreatment in terms of chemotherapy and hospitalisation, a reduction in new PEG insertions and an increase in specialist PC. The number of patients receiving generalist PC remained low, with delayed onset. Greater awareness of generalist PC and the early integration of PC are recommended.

Trial registration: The study was registered in the German Clinical Trials Register (DRKS00015108; 22 January 2019).

Supplementary Information: The online version contains supplementary material available at 10.1186/s12904-020-00679-x.

Background

In 2018, approximately 955,000 people died in Germany [1]. It is assumed that roughly 75% of all people at the end of life require palliative care (PC) [2][3][4]. Given estimates that the number of patients with PC needs will increase in the coming decades, health care systems are expected to face significant challenges [5]. PC is generally provided for patients with oncologic diseases, while patients with non-oncologic chronic progressive diseases often receive PC at only a late stage in their disease trajectory [6][7][8]. Therefore, the World Health Organization has emphasised the importance of improving access to PC, especially for patients with non-oncologic diseases [9]. In Germany, outpatient PC includes both generalist and specialist PC. Generalist PC for patients in the community is mostly initiated and provided by primary care professionals (most frequently general practitioners). It is intended for patients at an early stage in their disease trajectory with overall low symptom intensity [10].
Since 2013, generalist outpatient PC in Germany has been available for statutory health insurance billing [11]. In contrast, specialist outpatient PC is typically provided by interdisciplinary teams comprised of trained specialists in PC for patients with complex problems and symptoms. Specialist outpatient PC is governed by the 2007 German Act to Strengthen Competition in statutory health insurance, and can be prescribed by both outpatient and inpatient physicians [12]. In 2015, Germany introduced legislation to improve hospice and PC (HPG) [13,14]. Specifically, the new act aimed at developing generalist outpatient PC and regulating specialist outpatient PC [13]. This act states that "palliative care is part of health care" (e.g. §27 social security statutes (SGB) V) and comprises concrete implications for clinicians who provide specialist outpatient PC (e.g. §132d SGB V) [13,14]. It promotes new forms of cooperation between interdisciplinary specialist outpatient PC teams and aims to improve care especially in regions with poor access to specialist PC services [13,15]. It is unclear, however, whether the resulting structural and political developments led to significant changes in the provision and quality of end-of-life care (EoLC). Radbruch et al. evaluated EoLC in Germany in the years 2010 to 2014 according to three categories of quality indicators [7]: 1. curative overtreatment (e.g. chemotherapy in the last month of life); 2. palliative undertreatment (e.g. generalist outpatient PC in the last year of life); and 3. delayed PC (e.g. onset of specialist outpatient PC before death). Other relevant analyses have focused on regional disparities, as well as the structures and utilisation of PC throughout Germany [7,15]. Radbruch et al. identified different PC patterns across the federal states, but an overall focus on curative care, over and above caring and accompanying approaches. The researchers recognised overtreatment with curative approaches at the end of life in most German regions, even when medical indications of the utility of such approaches were lacking [7]. In this context, it is reasonable to assume that the potential damage of curative treatment may outweigh the benefits [16]. At the same time, palliative treatment approaches may fall short, indicating palliative undertreatment. Future actions recommended by Radbruch et al. involved improving access to PC and raising awareness of the need for PC amongst health care professionals [7]. The aim of the present study was to evaluate current EoLC on the basis of quality indicators similar to those used by Radbruch et al. [7] Statutory health insurance data from the years 2016 and 2017 pertaining to deceased members' last year of life were analysed and compared with Radbruch et al.'s findings from 2010 to 2014 [7]. Specifically, the questions addressed concerned the three categories of quality indicators described above: curative overtreatment, palliative undertreatment and delayed PC.

Methods

Study design

A retrospective secondary analysis of statutory health insurance data was performed through a cross-sectional study following the RECORD Statement (Reporting of studies Conducted using Observational Routinely-collected Data) [17]. The study was part of the research project entitled "Optimal care at the end of life" (OPAL) [18], which aims at improving EoLC in selected rural regions in Lower Saxony, Germany.

Study population

AOK (Allgemeine Ortskrankenkasse) is one of the largest statutory health insurance providers in Germany.
With more than 2.8 million insured members in Lower Saxony, AOK holds reliable data on approximately 36% of state residents [19]. Specifically, AOK collects demographic and sociodemographic data, as well as outpatient and inpatient diagnoses and treatments, for accounting purposes. For the present study, we used data pertaining to AOK Lower Saxony (AOK-N) members who died in 2016 or 2017, as these were the most recent available data. We included insured members with residence in Lower Saxony who were at least 18 years old at the time of death and were continuously insured in the year of death and the preceding calendar year. An additional inclusion criterion was the presence of a valid diagnosis for at least one chronic progressive oncologic or non-oncologic disease (Table 1) in the last year of life. We accepted diagnoses in the outpatient setting as valid if the associated codes in the International Statistical Classification of Diseases and Related Health Problems – 10th Revision (ICD-10) were documented in at least two of the five quarters preceding death (i.e., the quarter of death and the four preceding quarters). For inpatient diagnoses, a single diagnosis was considered sufficient for inclusion. Non-chronic conditions and suspected diagnoses were excluded [20,21]. Diagnoses of interest were predefined according to the ICD-10 and based on data from Murtagh et al. [22] and Rosenwax et al. [2] The ICD-10 code list was adjusted by an interdisciplinary expert council comprised of two physicians (a specialist and a trainee in family medicine and PC), a nursing scientist, a sociologist, a health scientist and a physiotherapist. Acute diagnoses, risk factors, conditions leading to chronic diseases without an immediate need for PC and diseases that do not require PC (from a clinical perspective) were excluded. In contrast to Radbruch et al., we focused our analyses on chronic diseases and diseases that potentially cause PC needs.

Outcomes

Data from the deceased were analysed on the basis of approved quality indicators for the evaluation of EoLC, as described by Radbruch et al. [7] The published findings of Radbruch et al. on these quality indicators were used as a baseline from which to compare the EoLC findings of the present study [7]. Most of the relevant quality indicators are well established and described in the international literature [23,24]. Of note, AOK-N was unable to provide complete data on chemotherapy treatments for deceased members in 2016, which is why the results for this indicator refer only to 2017. Table 2 shows the quality indicators examined in the present study.

Data analysis

Data were analysed descriptively using the software IBM Statistical Package for Social Sciences version 26 (SPSS Inc., Chicago, IL, USA).

Data protection

The present study followed the data security procedure described in the study protocol of the main research project (OPAL) [18]. AOK-N edited and anonymised data from the deceased before transferring them to the study team. Both project partners discussed and agreed on the anonymisation procedure in advance. As an example of this procedure, age groups were defined as broadly as possible to ensure data security and to prevent the backtracking of individuals. All data were saved and stored on a secure and password-protected institutional server. Data processing was conducted exclusively by the study team.
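Although the study's analyses were run in SPSS, the indicator logic itself is simple to express programmatically. The sketch below derives two of the quality indicators from claims-style tables with pandas; all column names and toy records are hypothetical and do not reflect the structure of the actual AOK-N extract.

```python
# Deriving two EoLC quality indicators from claims-style data with pandas.
# All column names and example records are hypothetical placeholders.
import pandas as pd

deceased = pd.DataFrame({
    "member_id": [1, 2, 3],
    "death_date": pd.to_datetime(["2017-03-10", "2017-06-01", "2016-11-20"]),
    "cancer": [True, True, False],
})

treatments = pd.DataFrame({
    "member_id": [1, 1, 2, 3],
    "service":   ["chemotherapy", "hospital_stay", "chemotherapy", "generalist_pc"],
    "date": pd.to_datetime(["2017-02-20", "2016-12-01", "2017-01-05", "2016-10-01"]),
})

df = treatments.merge(deceased, on="member_id")
df["days_before_death"] = (df["death_date"] - df["date"]).dt.days

# Indicator: share of cancer patients with chemotherapy in the last 30 days of life.
chemo_last_month = df[
    (df["service"] == "chemotherapy") & df["cancer"] & (df["days_before_death"] <= 30)
]["member_id"].nunique()
print("chemo in last month:", chemo_last_month / deceased["cancer"].sum())

# Indicator: median onset of generalist outpatient PC before death (days).
pc_onset = (
    df[df["service"] == "generalist_pc"]
    .groupby("member_id")["days_before_death"].max()  # earliest PC contact
)
print("median PC onset (days before death):", pc_onset.median())
```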
Results

Description of the study sample

The present analysis used data pertaining to 64,275 deceased members (2016: 32,442; 2017: 31,833). The mean age at death was 80.0 years (SD 11.9): 82.9 years (SD 11.2) for females and 76.6 years (SD 11.9) for males. Figure 1 shows the inclusion and exclusion decisions for the deceased members. The final sample contained a slightly higher proportion of women. Table 3 presents the demographic characteristics of the study population.

EoLC quality indicators

The descriptive analyses of the evaluation of EoLC on the basis of quality indicators are presented in Table 4.

Curative overtreatment

In total, 10.4% of the deceased members with cancer (in 2017 only) received chemotherapy in the last month of life. The incidence of chemotherapy decreased with age (18-50 years old: 23.2%; 51-60: 16.9%; 61-70: 12.2%; Table 4). More than three-quarters of the deceased had at least one hospitalisation in the last 6 months of life (Table 4), while the mean number of hospitalisations per deceased member was 1.6 (SD 1.5). Simultaneously, the mean number of days spent in inpatient treatment was 16.5 (SD 20.8).

Palliative undertreatment

In the last year of life, 28.0% of the deceased received generalist outpatient PC. Specialist outpatient PC was provided for 9.0% of the deceased (Table 4).

Delayed PC

The median onset of generalist and specialist outpatient PC was 47.0 and 24.0 days before death, respectively.

Discussion

The main findings of this study were: (1) an increase and slightly earlier initiation of specialist outpatient PC, (2) a constant frequency and ongoing late initiation of generalist outpatient PC, (3) a reduction in the number of new PEG insertions in the last 3 months of life for patients with dementia and (4) a lower number of inpatient treatment days though an unchanged number of hospitalisations. In the following, we discuss these results in comparison with earlier results and particularly with the published findings of Radbruch et al., who investigated EoLC on the basis of similar quality indicators in Germany for the years 2010 to 2014 (supplementary Table S1) [7].

Curative overtreatment

It has been demonstrated that tube feeding does not improve clinically important outcomes, and it should therefore not be used, especially for patients with dementia [25]. For these patients, van der Steen et al. recommend intensified hand feeding rather than permanent enteral tube nutrition [26]. Furthermore, insertion of a PEG tube is often perceived as burdensome by the general public and some health care professionals [7]. The decrease in new PEG insertions found in the present study may indicate an increase in the use of intensified hand feeding, as well as a higher awareness amongst health care professionals of the clinical limitations of PEG tubes at the end of life. The decrease may have also been affected by recent political initiatives and legal regulations in Germany, which may have improved PC awareness amongst health care professionals. Additionally, the new legislation to improve PC [14] may have encouraged the realisation of advance care planning concepts [13]. Therefore, undesirable overtreatments such as PEG tube insertions might continue to be reduced, especially within nursing homes, where they are often used for dementia patients at the end of life [13]. Compared to the findings of Radbruch et al. [7], the present results showed a slight increase in the number of cancer patients receiving chemotherapy in the last month of life (2010 to 2014: 9.6%).
It would be incorrect to assume that all chemotherapy administered in the last month of life is inappropriate, as such treatment may be reasonable for patients with a fast disease progression or when aimed at improving quality of life [7]. However, exceedingly aggressive treatments (e.g. chemotherapy) at the end of life are indicative of poor EoLC, and they may negatively impact on patients' quality of life [24,27,28]. There are many reasons why chemotherapy may still be administered in the last month of life. Clinicians may overestimate the prognosis, applying inappropriate treatment and delaying PC [29,30]. Decisions on treatment intensity at the end of life may also be influenced by patient preferences. However, most patients at an older age prefer palliative treatment over life-extending treatment [31]. Early end-of-life conversations about patients' preferences and the timely initiation of PC may reduce the administration of chemotherapy, thereby improving patients' quality of life and care [32,33]. Further PC education amongst health care professionals may encourage the provision of PC and reduce curative overtreatment [34,35]. Compared to the results of Radbruch et al. [7], the present findings showed a consistent mean number of hospitalisations in the last 6 months of life, but a slightly lower (by approximately 2 days) number of inpatient treatment days (2010 to 2014: 1.7/18.6). The hospitalisation of patients with PC needs can sometimes be useful. However, hospital admissions with no medical indication may be deemed aggressive and burdensome by patients with PC needs at the end of life [7,36,37]. In Germany, the number of days spent in inpatient treatment has decreased over recent years, mainly due to changes in the health care system [38,39]. Therefore, the lower number of hospital treatment days found in the present study cannot necessarily be interpreted as an indication of a reduction in curative overtreatment. Indeed, hospital admissions and treatment days may be influenced by a variety of factors, including the tendency for patients to feel safe in a hospital and general patient characteristics (e.g. age, ethnicity) [40]. Furthermore, it is often difficult for physicians to determine the clinical need for hospital admissions [37], and this may be one reason for the overall high number of hospitalisations at the end of life. Training in caregiving for terminally ill patients might improve this situation. Also, changes in the health care system to expand outpatient care alternatives for critically ill patients may be useful [37]. Further studies should investigate the effects of various approaches to reduce unnecessary end-of-life hospital admissions, such as PC training for ambulance staff [41].

Palliative undertreatment

Compared to the results of Radbruch et al., the present findings showed a reduction in palliative undertreatment for specialist outpatient PC, but a consistent level of generalist outpatient PC, and therefore ongoing palliative undertreatment [7]. This consistency (2014: 28.0%) is highly remarkable, given the introduction of billing codes for generalist outpatient PC in Germany in 2013, which was expected to significantly increase the provision of this service. In fact, recent legal changes in Germany appear to have failed to achieve their intended goals, for a variety of reasons. As recently described, generalist outpatient PC requires great effort, especially from general practitioners [42].
Thus, there may be a need for further legislation around health care structures and financial models [42,43]. A reform of payment models and funding approaches may widely improve access to PC, ensure best practice and prevent inverted incentives [43,44]. Additionally, general practice has taken on greater importance in recent years and, in line with this, the requirements and qualifications for general practitioners have become increasingly complex [45]. However, the increased demand for primary care services has not been accompanied by an equivalent growth in the workforce; thus, time constraints on general practitioners might reduce their quality of care and lower their job satisfaction [46,47]. Overall, time-consuming bureaucratic procedures, personal commitments and inadequate qualifications may prevent general practitioners from initiating PC in a timely manner, and this needs to be addressed [35]. Nevertheless, the present results do not enable any conclusions to be drawn relating to the daily care routines of general practitioners, since only billed health care services were included in the analysis. Finally, the present results indicated a considerable increase in specialist outpatient PC relative to Radbruch et al.'s findings (2010 to 2014: 5.3%) [7]. It has been estimated that, in recent years, approximately 10% of the deceased required specialist outpatient PC prior to their death [48], but were unable to access this service [7,49]. One important reason for the increase in specialist outpatient PC found in the present study might be the wider availability of specialist outpatient PC following its regional implementation in the community [50,51]. In fact, the present findings indicate that the capacity for specialist outpatient PC has increased, and it can be assumed that the estimated population need for specialist PC is met. Presumably, the legal changes and initiatives to raise awareness of palliative needs have contributed to this increase since 2014. This shows that structural and legal changes can be an important driver of further development in health care systems. Existing structures need to be improved and expanded from the top down and cannot develop solely at the regional level. Differences in the regional structures and processes of the specialist outpatient PC teams might play a key role. There might also be certain regional disparities between the counties in Lower Saxony. While the potential population need might be met in some regions, it is potentially missed in others. However, our data cannot distinguish whether those patients with the greatest needs are actually the ones provided with specialist outpatient PC.

Delayed PC

The present findings underlined the ongoing late initiation of generalist outpatient PC. In contrast, specialist PC was initiated slightly earlier, relative to the findings of Radbruch et al. (2010 to 2014: median of 22.0 days) [7]. While the slightly earlier initiation of specialist outpatient PC found in the present study may suggest a step in the right direction, the number of days between the onset of this treatment and death, especially with regard to generalist outpatient PC, indicates an unchanged focus on the last months of life. It is well known that the early initiation of PC improves many important outcomes, such as quality of life and the burden of symptoms [3,[52][53][54]. PC must not be reserved solely for patients whose life-prolonging treatment options have been exhausted; rather, it should be considered shortly after diagnosis [55].
Many physicians find it difficult to determine the appropriate time in the disease progression to initiate PC [56,57]. Prognostic uncertainties form a major barrier to the early identification of patients with PC needs, and the estimation of disease progression is especially difficult for patients with non-oncologic diseases [6,58]. Internationally, there are several instruments that support the identification of patients with potential PC needs [59,60]. One such instrument is the Supportive and Palliative Care Indicators Tool (SPICT-DE), which is available for use in the German context [61,62]. Its application in primary care is currently being evaluated [18]. Nonetheless, identification instruments such as the SPICT-DE are not implemented widely and consistently throughout Germany [63]. For this reason, further PC training for physicians and other health care professionals might represent an important step in supporting the identification of patients with potential PC needs and promoting the early initiation of PC [64][65][66].

Methodological strengths and limitations

AOK-N is the largest statutory health insurance provider in Lower Saxony [19], and thus a reliable data source for the present analysis. The population of AOK-N members is comparable to the general population in Germany and Lower Saxony regarding gender and age [67]. However, differences exist with respect to education and occupation, which is why lower socioeconomic groups may have been overrepresented in the current study [67]. To counteract this possible bias, the present study did not focus on socioeconomic differences between groups. Furthermore, the results were based on a large sample of AOK-N members who died in 2016 or 2017, enabling robust analyses to be conducted. One difficulty with all secondary analyses of health insurance data pertains to their billing purpose. In the present study, conclusions regarding PC timing may have been unreliable in some cases. Data on inpatient stays and outpatient services (e.g. generalist outpatient PC) were highly reliable, as they contained the dates of service provision. However, data on specialist outpatient PC only contained the date of prescription, while the actual treatment by a specialised PC team may have been delayed. Furthermore, specialist outpatient PC may have been initially prescribed by hospital doctors, and such prescriptions were not observable in the current dataset. Nonetheless, all follow-up prescriptions in the outpatient sector were observed. Finally, the use of routinely collected data involves low expenditure for data collection and can be highly beneficial in reflecting the care situation [68]. However, it has to be taken into account that the actual care situation cannot be completely represented by routinely collected data.

Content-related strengths and limitations

Although the data enabled us to evaluate the quality of EoLC on the basis of documented procedures of care, they did not allow us to analyse potential consequences of the analysed indicators, such as the effects on patients' quality of life. Further limitations pertain to diagnostic accuracy. Criteria for the validity of diagnoses cannot prove whether the diagnoses were correct and whether patients were treated accurately [20]. Particularly in the outpatient sector, ICD-10 codes are often used imprecisely, due to variations in coding methods [69]. Data on diagnoses can be affected by an individual coder as well as by financial incentives in the German health care system.
Additionally, statutory health insurance data do not record the cause of death. While the comparison with the results of Radbruch et al. [7] was reasonable for contextualising our data, considerable differences existed between the study samples. In contrast to Radbruch et al., the present study predefined chronic diseases with potential PC needs. Nonetheless, the utilisation of criteria for the validity of diagnoses was an important strength of our study. Only data from patients with valid chronic diagnoses were included in the analysis. Furthermore, the ICD-10 code list was based on the current literature [2,22] and compiled by an interdisciplinary panel of experts.

Conclusions

In addition to finding a decrease in new PEG insertions and an increase in specialist outpatient PC at the end of life, the present study also showed an ongoing pattern of curative overtreatment, palliative undertreatment and delayed provision of generalist PC. Particularly with regards to generalist outpatient PC, the findings suggest room for improvement. The legal amendments led to crucial changes in the provision of EoLC in Germany, but the need, especially for generalist outpatient PC, is still unmet. In conclusion, there is a need for early end-of-life discussions, more timely initiation of PC and further PC training among health care providers. With regards to this latter point, increased awareness of PC needs is especially necessary in primary care. The wide and consistent use of standardised instruments to systematically identify patients with potential PC needs may improve EoLC by supporting the transition from curative overtreatment and palliative undertreatment to early integrated PC. Additionally, there is a need for further legislation concerning health care structures and financial models, including strategies to strengthen the role of general practitioners in providing EoLC. Existing structures need to be expanded. Our results are based on the most recent available data and form the groundwork for a regular evaluation of EoLC.

Additional file 1: Table S1. Comparison of EoLC quality indicators in Lower Saxony.
Speeding up adiabatic passage with an optimal modified Roland-Cerf protocol

In this article we propose a novel method to accelerate adiabatic passage in a two-level system with only longitudinal field (detuning) control, while the transverse field is kept constant. The suggested method is a modification of the Roland-Cerf protocol, during which the parameter quantifying local adiabaticity is held constant. Here, we show that with a simple "on-off" modulation of this local adiabaticity parameter, a perfect adiabatic passage can be obtained for every duration larger than the lower bound $\pi/\Omega$, where $\Omega$ is the constant transverse field. For a fixed maximum amplitude of the local adiabaticity parameter, the timings of the "on-off" pulse-sequence which achieves perfect fidelity in minimum time are obtained using optimal control theory. The corresponding detuning control is continuous and monotonic, a significant advantage compared to the detuning variation at the quantum speed limit, which includes non-monotonic jumps. The proposed methodology can be applied to several important core tasks in quantum computing, for example the design of a high-fidelity controlled-phase gate, which can be mapped to the adiabatic quantum control of such a qubit. Additionally, it is expected to find applications across all Physics disciplines which exploit the adiabatic control of such a two-level system.

Introduction

Efficiently controlling the fundamental quantum unit, the two-level quantum system, lies at the heart of many modern quantum technology applications [1,2]. One of the most effective methods to address this problem is adiabatic passage (AP) [3,4]. The system starts from an eigenstate of the initial Hamiltonian, then some parameter varies slowly with time and, if the change is slow enough, the system ends up in an eigenstate of the final Hamiltonian. The traditional setup for AP is a two-level system where only the longitudinal z-field is time-dependent, while the transverse x-field is constant. This framework not only describes the setting of some classical applications, for example nuclear magnetic resonance, but is also pertinent to some modern applications, such as several important core tasks in quantum computing [5,6,7,8,9,10]. As a concrete example we mention the design of a high-fidelity controlled-phase gate [5], which can be mapped to the adiabatic quantum control of such a qubit [6]. In the traditional AP, the slow change in the control parameter, the z-field, is linear, and the process is called a Landau-Zener (LZ) sweep [11,12]. The method has been proven to be robust to moderate variations of the system parameters. Its major limitation is, as with every adiabatic method, the long operation time required, which may lead to degraded performance in the presence of decoherence and dissipation. In order to speed up the evolution, several methods have been suggested. For example, it has been shown that certain nonlinear LZ sweeps can achieve perfect fidelity for specific durations [13]. In a related work [6], the error probability of the final state with respect to the adiabatic evolution is minimized for durations larger than a certain threshold. A high fidelity is achieved, at levels appropriate for fault-tolerant quantum computation, even for durations as short as a few times the system timescale.
Optimal control theory has also been exploited to find the quantum speed limit for the desired transfer [14,15], but it requires infinite values of the control field in order to implement instantaneous rotations around the z-axis. More realistic speed limits have been obtained for bounded control [16], but their implementation also requires discontinuous and non-monotonic changes of the z-field. Finally, we mention the methods developed under the umbrella of Shortcuts to Adiabaticity [17,18,19,20,21,22,23], where the quantum system is driven to the same final state as with a slow adiabatic process, but without necessarily following the instantaneous adiabatic eigenstates at intermediate times. The common characteristic of these techniques when applied to two-level quantum systems [24,25,26,27,28,29,30,31,32] is that both the longitudinal (z) and transverse (x) fields are exploited in order to speed up the adiabatic evolution, while here we focus on the restrictive framework where only the z-field is time-dependent. The Roland-Cerf (RC) protocol was originally developed in order to accelerate quantum search in adiabatic quantum computation [33]. It relies on the fulfilment of a local (in time) adiabaticity condition, instead of a global one valid during the whole process. In the present work, we first apply the RC protocol with only detuning (z-field) control, as in Refs. [26,27], and show that it can achieve perfect fidelity for specific durations, like the nonlinear LZ sweeps. During the application of this protocol, the parameter quantifying local adiabaticity is held constant. Next, we suggest a modified RC protocol, with "on-off" modulation of the local adiabaticity parameter, which can achieve perfect fidelity for every duration larger than a lower bound. Compared to our recent related work [34], here we use optimal control theory to obtain an extra optimality condition, see Sec. 4, which allows us to determine the timings of the "on-off" optimal control by solving a single transcendental equation. This is a significant improvement compared to Ref. [34], where the optimal timings are obtained through a numerical optimization with respect to the control amplitude. The suggested method exploits the advantages of composite pulses [35,36,37,38], while the corresponding control z-field varies continuously and monotonically in time. These characteristics make the proposed method appealing for practical applications.

For large initial negative detuning it is θ_i ≈ π and |φ_+⟩ ≈ (0 1)^T, |φ_−⟩ ≈ (1 0)^T. In the traditional AP [11,12], the detuning is increased linearly with time, until the angle obtains the final value θ_f < π/2, see Fig. 1. If the change is slow enough, i.e. for a sufficiently long duration, the system remains in the same eigenstate of the instantaneous Hamiltonian. For large final positive detuning it is θ_f ≈ 0 and |φ_+⟩ ≈ (1 0)^T, |φ_−⟩ ≈ (0 −1)^T. As at initial and final times each adiabatic state becomes uniquely identified with one of the original states of the system, AP achieves complete population transfer from state |0⟩ to |1⟩ and vice versa. The advantage of the method is its robustness to moderate variation of the system parameters, while its drawback is the long necessary time, which may render it impractical in the presence of decoherence and dissipation. In this article we derive controls, ∆(t) and θ(t), which drive the system to the same final eigenstate without following the intermediate adiabatic path. As a warm-up example we present the Roland-Cerf protocol for the two-level system under consideration [26,27].
In this protocol, the matrix element of the rate of change dH/dt between the eigenstates |φ_±(t)⟩ is taken to be proportional to the square of the instantaneous energy gap, i.e. ⟨+|Ḣ|−⟩ = u g²(t) (9), where u is a constant parameter. For u ≪ 1 the local adiabaticity condition ⟨+|Ḣ|−⟩/g² ≪ 1 is satisfied. For the two-level system, Eq. (9) becomes Eq. (10), which can be easily integrated to give Eq. (11). The corresponding detuning can then be obtained from Eq. (2). The performance of the RC protocol was evaluated numerically in Refs. [26,27]. In order to evaluate the performance analytically, and for arbitrarily large u, it is more convenient to work in the adiabatic frame. By expressing the state of the system in both the original and the adiabatic frames, we obtain the transformation (13) between the probability amplitudes of the two pictures. From Eqs. (4), (13) we find the corresponding equation for the probability amplitudes in the adiabatic frame, with the Hamiltonian of that frame. The above equations are simplified if, inspired by Eq. (10), we use a dimensionless rescaled time τ defined as dτ = Ω dt/sin θ (16). For 0 < θ < π, which we consider here, it is sin θ > 0 and the rescaling (16) is well defined. The equation for b becomes Eq. (17), with the constant Hamiltonian H′_ad of Eq. (18), where b′ = db/dτ and θ′ = dθ/dτ = −u are the derivatives with respect to the rescaled time. Since the Hamiltonian H′_ad is constant, from Eqs. (17), (18) we obtain at the final (rescaled) time τ = T that b(T) = Ub(0), where the unitary transformation U is given by Eqs. (19), (20). If the system starts in the |φ_+⟩ state, then b(0) = (1 0)^T. For a perfect AP the system should end up in the same state at the final time τ = T; thus it is sufficient that b_2(T) = 0. From Eq. (19) we obtain the condition sin(ωT/2) = 0, such that U = ±I, which leads to ωT = 2kπ (21). During the time T the angle should change from θ_i to θ_f, thus uT = θ_i − θ_f (22). Combining Eqs. (21) and (22) we find the solution pairs (u_k, T_k) for k = 1, 2, . . . The corresponding durations T̃_k in the original time t can be found from Eq. (11). At this point it is worth mentioning that shortcuts to adiabaticity working for specific durations, like the above, have been obtained for quantum teleportation [39], with two control fields playing the role of Stokes and pump pulses in the familiar STIRAP terminology [40,41], as well as for the quantum parametric oscillator [42,43].

Modified Roland-Cerf protocol as an optimal control problem in the adiabatic reference frame

In the previous section we showed that the classical RC protocol, with constant control u = −dθ/dτ in the rescaled time, achieves perfect AP for specific durations T_k and amplitudes u_k. In the present section we explain how we can generalize this procedure and obtain perfect fidelity for arbitrary durations larger than the lower bound T_0 = π in the rescaled time, which we derive below. The main idea is to apply a modified RC protocol with a time-dependent bounded control 0 ≤ u(τ) ≤ v, and then use optimal control theory to obtain the minimum-time pulse-sequence which satisfies all the desired conditions, for a specific maximum amplitude v. Note that the nonnegativity of u(τ) assures that the magnetic field angle θ decreases monotonically from θ_i to θ_f. On the other hand, as the upper bound v increases, the duration of the optimal pulse-sequence decreases, approaching the limit T_0 = π. In order to formulate the corresponding optimal control problem in the adiabatic frame, we will use the Bloch equations corresponding to the two-level system (17).
If we define the new state variables s_x, s_y, s_z, it is not hard to verify that they satisfy the equations (26) or, in the more compact form of Eq. (27), where s = (s_x, s_y, s_z)^T and the matrices appearing there are given in Eq. (28). Since the matrices in Eq. (28) are antisymmetric, the system equation (27) can take the form of Eq. (29), where × denotes the vector cross product and x̂, ŷ, ẑ are the axes unit vectors. We can now formulate the optimal control problem for the system (27) or (29). Starting from the north pole s = (0, 0, 1)^T, we would like to find the bounded control 0 ≤ u(τ) ≤ v which minimizes the duration T = ∫_0^T 1 dτ needed to return to the starting point. In the following section we analyze the solutions to this problem using optimal control theory. Before doing so, we explain how the lower bound T_0 = π (in the rescaled time) of the pulse-sequence duration is obtained. In the original reference frame (not the adiabatic one), we consider an instantaneous change in the total field from θ = θ_i to θ = θ̄ = (θ_i + θ_f)/2, i.e. to the middle of the arc connecting the initial and target states. The corresponding detuning is ∆ = Ω cot θ̄ and the total field is √(∆² + Ω²) = Ω/sin θ̄. Under the influence of this constant field for a duration T̃_0 = π sin θ̄/Ω (31), the Bloch vector is rotated from (φ = 0, θ_i) to (φ = 0, θ_f). After the completion of this half circle, the total field is changed again instantaneously from θ = θ̄ to θ = θ_f. Since θ = θ̄ during this evolution, except at the (measure zero) initial and final instants, Eq. (16) becomes dτ = Ω dt/sin θ̄, and the corresponding duration in the rescaled time is thus T_0 = Ω T̃_0/sin θ̄ = π (32). We finally point out that the corresponding quantum speed limit (in the original time) is T̃_qsl = (θ_i − θ_f)/Ω, as obtained in Ref. [14] and formally proved in Ref. [15], see also Refs. [26,27,44], but it is derived using infinite values of the detuning, which implement instantaneous rotations around the z-axis, while the angle θ changes non-monotonically. More realistic speed limits have been obtained for bounded detuning [16], but their implementation also requires discontinuous and non-monotonic changes of the magnetic field angle. On the contrary, the bounds in Eqs. (31), (32) are obtained with finite detuning values and a monotonic change of θ (a decrease for θ_i > θ_f). For θ_i ≈ π and θ_f ≈ 0, it is T̃_qsl ≈ T̃_0 ≈ π/Ω, as derived in [45].

Analysis of the optimal solution

Let λ = (λ_x, λ_y, λ_z) be the time-dependent row vector of Lagrange multipliers corresponding to the system equations, and µ the constant multiplier corresponding to the integral condition for the pulse area. The control Hamiltonian H_c for the previously formulated problem incorporates, aside from the cost (time), both the integral condition and the system equation. Using Hamilton's equations λ̇_α = −∂H_c/∂s_α, α = x, y, z, we find the corresponding equations for the adjoint variables. Note that the multiplier µ is constant, since the corresponding coordinate, the angle θ, is cyclic. According to the Pontryagin Maximum Principle [46], the optimal control 0 ≤ u(τ) ≤ v is chosen to minimize H_c. If we define the appropriate switching functions, the control Hamiltonian can be expressed in terms of them.

Figure 2. Candidate optimal pulse-sequences u(τ) in the rescaled time τ. The initial and final "on" pulses have the same duration τ_1, all the intermediate "off" pulses have the same duration τ_2, while all the intermediate "on" pulses have the same duration τ_3. The middle pulse can be "off", as in this figure, or "on". The total duration of the sequence is T.

The minimization of H_c selects u = v wherever φ_y < 0 and u = 0 wherever φ_y > 0. If φ_y = 0 for some finite time interval, then u takes some intermediate value which cannot be found from the Maximum Principle.
However, if φ_y(τ) = 0 and φ̇_y(τ) ≠ 0, then at time τ the control switches between its boundary values, and we call this a bang-bang switch. In the present article we concentrate on bang-bang solutions, i.e. pulse-sequences of the form "on-off-on-...-on-off-on", where u(τ) alternates between 0 and its maximum value v, as displayed in Fig. 2. For each value of the parameter v we will find the timings of the corresponding optimal pulse-sequence. We start by showing geometrically that in the optimal bang-bang pulse-sequence all the "off" pulses have the same duration, say τ_2, and all the intermediate "on" pulses (i.e. aside from the first and the last) have the same duration, say τ_3. Using the equations for the state and adjoint variables, we can show that the vector φ = (φ_1, φ_2, φ_3) obeys an analogous equation of motion if we use vectors instead of antisymmetric matrices. From the last equation it is obvious that the motion of φ is restricted to a sphere, Eq. (38). Now suppose that at time τ there is a switching from u = v to u = 0. This means that φ_y(τ) = 0, which also implies φ_2(τ) = −µ; thus the switching point P(φ̄_1, −µ, φ̄_3) lies on the plane φ_2 = −µ, shown with green color in Fig. 3, where φ̄_1, φ̄_3 denote the other two coordinates of P. The control u = 0 is applied for a duration τ_2, and φ is rotated around the z-axis along the horizontal black arc displayed in Fig. 3. Note that during this interval it is φ_2 > −µ ⇒ φ_y > 0, thus u = 0 indeed minimizes the control Hamiltonian. At time τ + τ_2 the trajectory intersects the switching plane φ_2 = −µ at the point Q(−φ̄_1, −µ, φ̄_3), the mirror image of P with respect to the φ_2φ_3-plane. Since we consider bang-bang pulse-sequences, the control switches from u = 0 to u = v. The vector φ is now rotated around the (red) axis n = ẑ + uŷ for a duration τ_3, along the inclined red arc shown in Fig. 3. During this time interval it is φ_2 < −µ ⇒ φ_y < 0, thus u = v indeed minimizes the control Hamiltonian. At time τ + τ_2 + τ_3 the trajectory meets the switching plane φ_2 = −µ again; we will show that this intersection takes place at the point P. During the rotation around the axis n = ẑ + uŷ, the inner product φ · n = φ_3 + vφ_2 is constant. But this inner product has the same value at P and Q, since both points share the coordinates φ_2 = −µ and φ_3 = φ̄_3; the trajectory therefore returns to the switching plane with φ_3 = φ̄_3. Since the motion is restricted to the sphere (38), we easily deduce that φ_1(τ + τ_2 + τ_3) = φ̄_1. The trajectory thus intersects the switching plane at the point P(φ̄_1, −µ, φ̄_3), and the evolution is repeated for all the subsequent "off" and intermediate "on" pulses. The conclusion is that all the "off" pulses have the same duration τ_2, and all the intermediate "on" pulses have the same duration τ_3. The initial and final "on" pulses can have different durations than τ_3, corresponding to incomplete traversals of the red arc shown in Fig. 3. Since the system (27) starts from and returns to the same point, the north pole, for symmetry reasons we take the initial and final "on" pulses to have the same duration τ_1. Thus, we consider candidate optimal pulse-sequences of the form shown in Fig. 2, and the optimization takes place within this subset. In the following we use geometric optimal control [47] to derive a relation between the pulse durations τ_1, τ_2, τ_3. This relation will be exploited in the next section, along with the integral condition for the pulse area and the condition that the system should return to the north pole at the final time, in order to obtain these durations when the maximum control amplitude v is given. In the rest of this section we particularly use the theory developed in Ref. [48], as specified for the two-level quantum system in Refs.
[49,50], while we adapt it to incorporate the pulse area condition.

Optimal pulse-sequences

In this section we use the optimality condition (54), along with the pulse area condition ∫_0^T u(τ) dτ = θ_i − θ_f and the final condition that the system returns to the north pole, in order to obtain the timings τ_1, τ_2, τ_3 of the optimal pulse-sequences. Let us consider a pulse-sequence u(τ) containing m "off" pulses, where m = 1, 2, . . . is a positive integer. Since the "on" pulses have the constant amplitude v, the total change in the angle θ is v[2τ_1 + (m − 1)τ_3] = θ_i − θ_f (56). Next, observe that Eq. (54) can be solved with respect to τ_2, giving Eq. (57), where we note from Eqs. (55a), (55b) that A, B are functions of τ_1, τ_3 only. Since τ_1 is expressed as a function of τ_3 through Eq. (56), obviously τ_2 can also be expressed as a function of τ_3 only. The last relation that we need is derived from the requirement that the system should return to the north pole. Instead of the Bloch system (29), it is more convenient to use the system (17), for which the corresponding final condition is that it should return to the adiabatic state |φ_+⟩ at the final time τ = T. Under the piecewise constant pulse-sequence u(τ), the propagator U connecting the initial and final states, b(T) = Ub(0), can be expressed as the product of Eq. (58), where U_j, j = 1, 3, is given by Eq. (59) and W_2 by Eq. (60). The propagator in the middle of (58) is W_2 or U_3, depending on the corresponding middle pulse. Using the expressions for U_1, W_2, U_3 and the following property of the Pauli matrices, σ_a σ_b = δ_ab I + i ε_abc σ_c (61), where a, b, c can be any of x, y, z, δ_ab is the Kronecker delta and ε_abc is the Levi-Civita symbol, we can express the propagator U as a linear combination of the σ_a and the identity I, Eq. (62). The coefficients of the matrices in this expression are functions of the pulse-sequence parameters. In the appendix we show that a_x = 0. Now observe that I, σ_z are diagonal. Since a_x = 0 in Eq. (62), if we set a_y = 0 then U is also diagonal. In this case, starting from b(0) = (1 0)^T we find for the final state b(T) = Ub(0) that b_2(T) = 0, and the system returns to the initial adiabatic state. The relation a_y,m(τ_1, τ_2, τ_3, v) = 0 (63), along with Eqs. (56), (57), will be used for the determination of the pulse-sequence timing parameters τ_1, τ_2, τ_3. The subscript m denotes that a_y has a different functional form for pulse-sequences with a different number m of "off" pulses. Following the procedure described in the appendix, we have found a_y,m for m = 1, 2, 3. For each solution we find the total duration T of the corresponding pulse-sequence and compare the results. The pulse-sequence with the minimum T is the optimal one for the specific value of v. As an example, we consider a change in the detuning from ∆_i = −10Ω to ∆_f = 10Ω, the same as in [6], corresponding to θ_f = tan⁻¹(1/10), θ_i = π − θ_f. In Fig. 4 we plot the duration of the optimal pulse-sequence for a range of v values, both in the rescaled time, Fig. 4(a), and in the original time, Fig. 4(b). Note that the duration in the rescaled time is larger than the corresponding duration in the original time, due to the sine factor in Eq. (16). The diagrams display a stairway-like form, where the circles separating the steps are the points (u_k, T_k) obtained in Sec. 2, where the original RC protocol, with constant control u(τ) = u_k, is optimal. We have obtained similar diagrams in our other works on the optimal control of quantum systems [51,52]. On the first step from the right (larger values of v), the optimal pulse-sequence has the simple "on-off-on" form, with m = 1.
Figure 4. Duration of the optimal pulse-sequence, for θ_f = tan⁻¹(1/10) and θ_i = π − θ_f. The diagrams display a stairway-like form, where the circles separating the steps are the points where the original Roland-Cerf protocol, with constant control u(τ) = v, is optimal. On the first step from the right (larger values of v), the optimal pulse-sequence has the simple "on-off-on" form. Note that for large values of v the optimal duration tends to the limiting value π. On the second step, the optimal pulse-sequence changes to "on-off-on-off-on", on the third step it becomes "on-off-on-off-on-off-on", and so forth.

Note that the solutions lying on this step are faster than the first resonance of the original RC protocol (first circle from the right). For large values of v the duration of these solutions tends to the limit T_0 = π. On the second step, the optimal pulse-sequence changes to "on-off-on-off-on", with m = 2; on the third step it becomes "on-off-on-off-on-off-on", with m = 3, and so forth. Note that these solutions with more switchings may require longer times, but the corresponding maximum control amplitude v is smaller and thus the change in the total field angle θ is less abrupt, a property which might be useful when designing a pulse-sequence. In Fig. 5 we present a specific example of the optimal pulse-sequence for maximum control amplitude v = 0.35, the case highlighted with a red star in Fig. 4. Since this point lies on the second step of the stairway-like diagram, the corresponding optimal pulse-sequence has the "on-off-on-off-on" form. In Fig. 5(a) we display the logarithmic error log₁₀(1 − F) = log₁₀|b_2(T)|² = log₁₀|a_y,2|² (67) as a function of the duration τ_3; the "resonance" indicates the solution of the transcendental equation a_y,2 = 0. Having found the duration τ_3 of the intermediate "on" pulse, we find the durations τ_1 (of the initial and final "on" pulses) and τ_2 (of the "off" pulses) using Eqs. (56), (57), respectively. In Fig. 5(b) we plot the optimal pulse-sequence u(τ) in the rescaled time τ. In Fig. 5(c) we show the detuning ∆(t), while in Fig. 5(d) we show the corresponding evolution of the total field angle θ(t), both in the original time t. Note that the total duration in the rescaled time is larger than the corresponding duration in the original time, due to the sine factor in Eq. (16). In Fig. 5(e) we plot with a red solid line the state trajectory on the Bloch sphere, in the original reference frame. The blue solid line on the meridian indicates the change in the total field angle θ. Finally, in Fig. 5(f) we plot the same trajectory (red solid line) but in the adiabatic frame. Note that in this frame the system starts from the adiabatic state at the north pole and returns there at the final time, while the total field points constantly in the ẑ-direction (blue solid line). Also, observe that the trajectory in this frame contains a loop, which might look surprising at first sight for the solution of a minimum-time optimal control problem. The catch here is that there is actually an extra state variable not shown in this frame, the angle θ, which evolves from θ_i to θ_f. If the trajectory is displayed in the higher-dimensional space of all the state variables, the loop disappears. We close this section by clarifying the advantage of the present approach compared to our previous related work [34]. There, we fix the total duration T = 2τ_1 + mτ_2 + (m − 1)τ_3 of the pulse-sequence in the rescaled time, while we take the amplitude v as an unknown parameter.
This relation, along with the pulse area condition (56) and the final condition (63), forms a system of three equations with four unknowns, τ_1, τ_2, τ_3, v. In order to tackle this problem, we find numerically the minimum value of the amplitude v such that this system has a solution for τ_1, τ_2, τ_3. In the present article we follow a dual approach, where we fix the amplitude v and seek the pulse-sequence with minimum duration which satisfies the area and final conditions. The use of optimal control theory leads to the optimality condition (54) which, along with Eqs. (56) and (63), forms a system of three equations for the three unknowns τ_1, τ_2, τ_3.

Conclusion

In this article, we presented a new method for speeding up adiabatic passage in a two-level system with only detuning (z-field) control. This technique is actually a modification of the Roland-Cerf protocol, where now the local adiabaticity parameter is not held constant but has a simple "on-off" modulation. Using optimal control theory, we found composite pulses which achieve perfect fidelity for every duration larger than the limit π/Ω, where Ω is the constant transverse x-field. The corresponding detuning control is a continuous and monotonic function of time. The present work is expected to find applications in various tasks in quantum information processing, for example the design of high-fidelity controlled-phase gates, but also in other research areas where adiabatic passage is exploited.

Appendix

We first show that a_x = 0 in Eq. (62). From Eqs. (58), (62), and a well-known identity regarding the trace of a matrix product, we have a_x = (1/2) Tr(σ_x U) (68). Using the explicit expressions (59), (60) for U_1, W_2, U_3 and the identity (61), it is not hard to verify the corresponding relations. Using these relations repeatedly in Eq. (68), it is not difficult to see that the calculation of a_x is reduced to the calculation of Tr(σ_x W_2) or Tr(σ_x U_3), depending on whether the middle pulse is "off" or "on", respectively. But Tr(σ_x W_2) = Tr(σ_x U_3) = 0, thus a_x = 0 as well. We next explain how to find the coefficient a_y in Eq. (62). It is obtained from a relation similar to Eq. (68), using repeatedly the equations

U_1 σ_y U_1 = i n_y sin(ωτ_1) I + (n_z² + n_y² cos ωτ_1) σ_y + n_y n_z (1 − cos ωτ_1) σ_z, (71a)
U_3 σ_y U_3 = i n_y sin(ωτ_3) I + (n_z² + n_y² cos ωτ_3) σ_y + n_y n_z (1 − cos ωτ_3) σ_z, (71d)
U_3 σ_z U_3 = −i n_z sin(ωτ_3) I + n_y n_z (1 − cos ωτ_3) σ_y + (n_y² + n_z² cos ωτ_3) σ_z,

which can be derived from the expressions (59) for U_1, U_3 and (60) for W_2, as well as the property (61).
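As a numerical cross-check of the constructions in Secs. 2 and 5, the following minimal Python sketch (ours, not the authors' code) composes the segment propagators and verifies both the constant-u resonances and a simple "on-off-on" sequence. The normalization is an assumption: each constant-u segment is taken to be generated by H′ = (σ_z + uσ_y)/2 in the rescaled time, which reproduces the rotation axis ẑ + uŷ and a frequency ω = √(1 + u²) consistent with the geometry above, but the prefactors may differ from the paper's Eqs. (17)-(20).

import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize_scalar

SZ = np.array([[1, 0], [0, -1]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)

def seg(u, tau):
    # Propagator of one constant-u segment in the rescaled time tau,
    # under the assumed normalization H' = (sigma_z + u*sigma_y)/2.
    return expm(-0.5j * (SZ + u * SY) * tau)

theta_f = np.arctan(0.1)   # worked example: Delta from -10*Omega to +10*Omega
theta_i = np.pi - theta_f
dtheta = theta_i - theta_f

# Constant-u RC resonances: sin(w*T/2) = 0 with w = sqrt(1 + u^2), together
# with the area condition u*T = dtheta, give T_k = sqrt((2*pi*k)^2 - dtheta^2).
for k in (1, 2, 3):
    T_k = np.sqrt((2 * np.pi * k) ** 2 - dtheta ** 2)
    u_k = dtheta / T_k
    b = seg(u_k, T_k) @ np.array([1, 0], dtype=complex)
    print(f"k={k}: u_k={u_k:.3f}, T_k={T_k:.3f}, |b2(T)|^2={abs(b[1]) ** 2:.1e}")

# "on-off-on" sequence (m = 1): the area condition fixes tau1 = dtheta/(2v);
# tau2 is then tuned so that the system returns to the initial adiabatic state.
v = 2.0                    # a maximum amplitude lying on the first step
tau1 = dtheta / (2 * v)

def infidelity(tau2):
    U = seg(v, tau1) @ seg(0.0, tau2) @ seg(v, tau1)
    return abs((U @ np.array([1, 0], dtype=complex))[1]) ** 2

res = minimize_scalar(infidelity, bounds=(0.01, 2 * np.pi), method="bounded")
print(f"on-off-on: tau2={res.x:.3f}, infidelity={res.fun:.1e}")

Under this assumed normalization the resonance pairs come out as T_k = √((2πk)² − (θ_i − θ_f)²) and u_k = (θ_i − θ_f)/T_k, giving u_1 ≈ 0.53 and u_2 ≈ 0.24 for the worked example, which is consistent with the red-star case v = 0.35 lying on the second step of Fig. 4.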
Patient experience – the ingredient missing from cost-effectiveness calculations

Standard cost-effectiveness calculations as used by the UK National Institute of Clinical Excellence compare the net benefit of an intervention with the financial costs to the health service. Debates about public health interventions also focus on these factors. The subjective experience of the patient, including financial costs and also transient pain, distress, and indignity, is routinely ignored. I carried out an Internet survey which showed that members of the public assign a high financial cost to routine medical interventions such as taking a tablet regularly or attending a clinic for an injection. It is wrong to ignore such costs when attempting to obtain an overall evaluation of the benefit of medical interventions.

In a recent heated debate about the pros and cons of mammography, combatants on both sides brought to the argument disputed numbers regarding deaths prevented against harm caused by overdiagnosis and needless treatment. [1][2][3] The guidelines produced by the UK National Institute of Clinical Excellence (NICE) on assessing the cost-effectiveness of an intervention instruct that the costs of treatment options should be considered relative to their health benefits. 4 In order to evaluate whether it is worth treating a section of the population with antiplatelet agents, one would expect to take account of the health benefits from preventing thrombotic events weighed against additional morbidity and mortality from bleeding and the cost to the UK National Health Service (NHS) of delivering the treatment. 5 What is missing from all these scenarios is any consideration of the cost to the individual patient of the intervention. These costs are not trivial, and it does not make sense to omit them when deciding whether or not to promote an intervention. Typically, the NHS will go about offering an intervention to a patient in two stages. Firstly, a cost-effectiveness calculation will be carried out to see whether it is worthwhile in terms of the net benefit divided by the financial cost. If the intervention is seen as cost-effective then it will be promoted to the patient as being of net benefit. The patient will be encouraged to accept the intervention because it is "good for them", even if it may cause them some degree of pain, distress, indignity, or financial loss. From this account, it can be seen that the "net benefit" is in fact counted twice: once against the financial cost to the health service and once against any costs to the patient. A recent example of how this process was followed is provided by the introduction of human papilloma virus (HPV) vaccination for 13-year-old females in the UK. First, a cost-effectiveness study, which considered financial costs and economic outcomes, was carried out. 6 Subsequently, a vaccination program was introduced, and the patient information leaflet explains the clinical benefits and side effects (http://www.cks.nhs.uk/patient_information_leaflet/hpv_vaccination). These side effects were not considered in the cost-effectiveness study. Although a typical cost-effectiveness calculation will take account of major negative health outcomes for the patient, such as increased mortality, morbidity, or disability, a variety of other effects on the patient will be partially or completely ignored: time taken off work or away from childcare; transient pain, nausea, or distress; and indignity. 6
In many scenarios these personal and financial costs, borne by the individual and considered trivial enough to be regarded as irrelevant, will apply to far larger numbers of patients than those for whom the intervention has any effect. This applies especially to population-based screening and to primary prevention. In both of these situations the intervention will be applied to very large numbers of patients compared with the few who are expected to derive any real benefit. Thus, for every death averted through a mammography screening program there will be hundreds of patients who undergo screening without it having any clinical outcome for them personally, but who nevertheless need to travel to the clinic, wait around until they are seen, and then undergo the usually mild anxiety, discomfort, and inconvenience involved in the procedure. 3 For every patient in whom primary prevention with aspirin prevents a serious vascular event there will be over a thousand who have to take a tablet every day for a year without any benefit. 5 Likewise, almost a thousand people need to be vaccinated against HPV for every death avoided. 6

In order to assess how patients perceive the cost to themselves of what are regarded as routine medical interventions, I carried out an Internet-based survey in which participants were invited to put a financial cost on such activities as taking a tablet, receiving an injection, and being admitted to hospital. The survey sought to isolate the element of cost to the patient by asking people to imagine what they would charge if the procedure was of no benefit to them nor to medical science in general, but was purely for commercial purposes. They were asked what they would charge to take a tablet daily with no side effects, with sedative effects, and with the effect of impaired sexual function. They were asked what they would charge to be given one injection, an injection on a regular basis and, if already in hospital, an injection which was painful and which produced stinging afterwards. They were asked what they would charge to be admitted to a general medical ward and a psychiatric ward. Invitations to participate in this survey were placed on a variety of websites, including an in-house journal for employees of a mental health trust, a website for users of mental health services, health-related Facebook groups, Twitter, and the researcher's own website. All these sites were UK-based, and the invitation on the researcher's website requested that only potential users of the NHS complete it. Hence, it was expected that participants would consist largely of UK residents who would be a mixture of health service staff, service users, caregivers, and members of the public. The results of the survey are summarized in Table 1. The interquartile ranges demonstrate that people produced strikingly varied responses, which is important in itself. The median figures tended to be high and easily comparable with what the financial cost of the treatment might be for the health provider: £5 daily for a tablet with no side effects, £200 for a single injection, £300 per day for a medical admission, and £400 per day for a psychiatric admission. Of course these responses are based on a small number of self-selected subjects, but there is no reason to suppose that they are especially unrepresentative.
The implication is that incorporating subjective cost to the patient in cost-effectiveness calculations may have a substantial effect and could easily lead to some interventions moving from being supported to unsupported. Before leaving the survey, it is perhaps worth remarking on some other features. It was unsurprising that for most interventions the declared cost correlated with the income of the subject; that is, somebody who is better off will tend to charge more. The exception was for taking a tablet with no side effects, which was not correlated with income but which was weakly correlated with age. The cost of admission to hospital was correlated very highly with income, suggesting that loss of earnings or how much people valued their own time might be important. Interestingly, the cost to take medication with sexual side effects was highly correlated with income but not with age. The weak correlation of some other costs with age may be a result of the strong correlation between age and income which was present in this sample. A related issue to consider is that, in the context of a publicly funded health system such as the NHS, a useful evaluation of any intervention should consider its cost to society as a whole, not just to the health provider. Thus, if an intervention impacts on the economic activity of the patient then it should not be possible simply to ignore this. Taking the example of mammography again, it might be that the economic cost to the patient and/or their employer could be minimized by setting up a mobile screening unit which went out to the patient's place of work and/or operated outside normal working hours. But then one could reduce the cost to the health service by providing a centralized service within working hours, meaning that the patient had to take time off work and to travel. This could be seen as artificially shifting the cost from the health service to the patient (whose costs are invisible) in order to end up with a screening program which would then be judged by conventional criteria as "cost-effective". Yet the overall costs to society for the centralized service might be the same as or higher than for one which was more user-friendly. Using real-world data, it has been proposed that practical barriers such as accessibility form an important reason for women failing to attend for cervical screening. 7 However, developing a service which attempted to address this problem would be likely to fail if conventional cost-effectiveness criteria were applied. The suggestion that the subjective effects on a patient of undergoing a medical intervention should be routinely considered when weighing up its overall value may prove unwelcome. Since these effects may include pain, inconvenience, and financial costs, the overall effect could only be to tip the balance towards some interventions being considered not worthwhile. However, it is intellectually dishonest and morally indefensible to simply ignore the patient experience. I argue that standard cost-effectiveness evaluations do indeed ignore the patient experience and that it is time to review the appropriateness of the methodology.
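To see how large the omitted term can be, the following toy Python calculation folds the survey's median patient cost into the aspirin example. Only the £5/day median and the roughly 1,000-patients-per-event ratio come from the text above; the NHS-side cost is a purely hypothetical placeholder.

# Toy numbers only: the GBP 5/day figure is the survey median for taking a
# daily tablet reported above; the NHS-side cost per patient-year is a
# hypothetical placeholder; ~1,000 treated per event prevented follows the
# aspirin primary-prevention example cited in the text.
nhs_cost_per_patient_year = 20.0
patient_cost_per_day = 5.0
patients_per_event_prevented = 1000
days_per_year = 365

provider_only = nhs_cost_per_patient_year * patients_per_event_prevented
with_patient_costs = provider_only + (patient_cost_per_day * days_per_year
                                      * patients_per_event_prevented)

print(f"Cost per event prevented (provider only):       GBP {provider_only:,.0f}")
print(f"Cost per event prevented (incl. patient costs): GBP {with_patient_costs:,.0f}")
# 20,000 vs 1,845,000: the patient-borne term dominates, which is precisely
# why ignoring it can flip an intervention from "cost-effective" to not.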
Effects of sorbitol-mediated curing on the physicochemical properties and bacterial community composition of loin ham during fermentation and ripening stages

Highlights

• Mediated curing is a new method for salt reduction in meat products.
• Sorbitol affects salt diffusion and water migration.
• Sorbitol can reduce the salt content and water activity (aw) of loin ham.
• Sorbitol promotes an even distribution of Lactobacillus and Staphylococcus in loin ham.

Introduction

Traditional dry-cured fermented meat is generally prepared by rubbing high amounts of salt, spices, and sugar onto the surface of the meat to cure it, and then allowing it to ferment naturally for a long time under the action of endogenous enzymes and beneficial microorganisms, forming a meat product with a unique flavor, color, and texture, as well as a long shelf life (Vidal, Bernardinelli, Paglarini, Sabadini, & Pollonio, 2019). The addition of large amounts of salt and moderate concentrations of starter cultures is essential not only for the texture and flavor of dry-cured meat but also to inhibit the growth of spoilage microorganisms, extending its shelf life and maintaining its taste (Zhou, Pan, Cao, & Zhou, 2021; Zhou et al., 2022). However, this high salt content may induce oxidative stress in the body, resulting in several chronic diseases (Mariutti & Bragagnolo, 2017), whilst excessive sodium intake directly increases the risk of cardiovascular disease (He, Tan, Ma, & MacGregor, 2020), which is extremely detrimental to human health. Mediated curing (MC) is a new method for salt reduction in dry-cured meat products. MC refers to the systematic construction of an exogenous food additive as a medium to achieve a low-sodium curing strategy for meat products by influencing the osmotic diffusion pathway of salt and the water migration in the matrix (Gong et al., 2022). Changing the osmotic diffusion of salt by mechanical means such as ultra-high pressure, ultrasound, tumbling, and electrical stimulation is termed physical mediation, while changing the osmotic rate of salt and water migration by adding a chemical substance during the curing process is referred to as exogenous substance mediation, i.e. chemical mediation. These differ from the traditional curing methods, as the salt does not diffuse freely. Polyhydroxy alcohols can be applied as a curing medium in chemical MC due to their structure with multiple hydroxyl groups. These can bind to proteins in meat products and increase the polarity of certain groups in muscle proteins, converting some of the free water in myogenic fibers into bound water. This contributes to a change in the water distribution of the product and reduces water activity (aw). Furthermore, research suggests that polyhydroxy alcohols have an antibacterial effect, effectively inhibiting the growth of harmful microorganisms (Syafiq, Sapuan, Zuhri, Othman, & Ilyas, 2022). Sorbitol is a polyhydroxy alcohol containing six hydroxyl groups that can hydrogen-bond with the hydroxyl groups of water, increasing water holding capacity (WHC), improving texture, decreasing aw, and prolonging the storage life of meat products (Martins, Sentanin, & De Souza, 2019). Additionally, it can influence microbial and enzymatic activity in the product, affecting protein and fat degradation and flavor formation, while also having antibacterial activity (Chai, Chen, He, Jiao, Cai, Dong, Liu, & Ren, 2022).
There are few reports on the application of polyhydroxy alcohols for salt reduction in dry-cured fermented meat products. In this article, sorbitol, one of the polyhydroxy alcohols, was used as a medium for mediated curing. Moderate salt reduction seems to affect microbial growth and lead to changes in the physicochemical properties of fermented meat products. For instance, Gan, Zhao, Li, Tu, and Wang (2021) reported that during low-salt Chinese bacon processing the pH gradually decreased and Lactobacillus became the dominant genus, and that the higher the KCl ratio, the more rapid this process was. A study by Chen et al. (2019b) used KCl and selected amino acids to replace 30 % NaCl (w/w) in Harbin dry-cured sausages and analyzed the quality and microbial diversity of the sausages. The results showed that the replacement salt did not negatively influence the physical properties of the sausages, and that the microbial diversity decreased during the fermentation of the low-salt sausages. Staphylococcus and Lactobacillus became the dominant genera, with Staphylococcus showing the highest relative abundance at the end of fermentation. To the best of our knowledge, there is a lack of reports on the effects of salt reduction with sorbitol on the bacterial community of dry-cured fermented meat products. Hence, the aim of this research was to investigate the influence of sorbitol-mediated curing on the bacterial community of loin ham using a high-throughput sequencing technique. Moreover, the correlation between bacterial communities and physicochemical properties in loin ham was assessed to further explain how the quality of loin ham is affected by sorbitol in mediated curing.

Material

The pork loin was purchased from Huimin Fresh Supermarket in Huaxi District, Guiyang. The main components of the pigs' feed were corn, sorghum, and soybeans. After consuming about 250 kg of feed and being reared for more than 365 days, the pigs were slaughtered (Tainong Xingwang Food Co., Ltd.). Food-grade NaCl, spices, sugar, and glucose were purchased from the Wal-Mart supermarket in Huaxi District, Guiyang. Food-grade sorbitol was purchased from Shandong Tianli Pharmaceutical Co., and the casings were purchased from Yu Mu Group. The other chemicals and reagents were purchased from Aladdin (Shanghai, China).

Starter culture preparation

Lactobacillus plantarum SJ-4 (strain conservation number: CICC No. 11119 s) and Staphylococcus simulans QB7 (strain conservation number: CICC No. 11117 s) were used as starter cultures. In a previous study conducted by our group, L. plantarum SJ-4 was isolated from a Chinese traditional fermented meat (i.e. Guizhou Jinping sour meat), and a coagulase-negative Staphylococcus (CNS) strain, S. simulans QB7, was isolated from a Chinese dry-cured fermented sausage (i.e. Qianwufu). These two strains showed high proteolytic activity in degrading pork meat proteins. They were preserved by the China Industrial Microbial Strain Conservation Management Center. De Man, Rogosa and Sharpe (MRS) broth medium was used to activate SJ-4, and Mannitol Salt (MSA) broth medium was used to activate QB7. After four generations of activation, the strains were washed three times with 0.85 % (w/v) saline solution and resuspended until their concentration was adjusted to 10⁹ CFU/mL for the subsequent preparation of the fermented loin ham.

Process of fermented loin ham

The preparation of the fermented loin ham was performed as described by Chen et al. (2021) and Boumaiza, Najjari, Jaballah, Boudabous, and Ouzari (2021), with some modifications. The pork loin was cut equally into small square pieces of about 100 g (n = 24).
The formulation of the curing ingredients was based on the mass of raw meat (w/w): 3 % NaCl, 3 % food-grade sorbitol (no addition in the control group), 0.3 % five-spice powder, 0.3 % white pepper powder, 0.3 % pepper powder, 0.5 % white sugar and 0.5 % glucose. These were uniformly rubbed on the surface of the loin meat, which was left to cure at 4 °C for 24 h to allow the spice mixture to be homogeneously distributed into the meat. The amounts of added salt and sorbitol, as well as the curing time, were determined from the physicochemical properties of the loin ham during the curing period, such as pH, salt content, cooking loss, centrifugal loss, and aw. The activated SJ-4 and QB7 bacterial suspensions were inoculated into the loin at a ratio of 1:1 (10⁷ CFU/g) and, after vacuum tumbling for 30 min, the loin was stuffed into 45 mm diameter collagen casings and immediately suspended in a fermentation cabinet with constant temperature and humidity. The loin ham was fermented at 28 °C and 90 % relative humidity (RH) for the first 2 days, after which the fermentation cabinet was set to 15 °C and 80 % RH for the ripening of the loin ham, which lasted 20 days. The finished loin ham was obtained after 22 days of processing. Samples, each with three parallel groups, were taken on days 0, 2 (end of fermentation), 10 (mid-ripening), and 20 (end of ripening).

Determination of physicochemical properties

A minced meat sample (1 g) was weighed, diluted 10 times with ultrapure water and homogenized for 1 min at 2800 r/min using an XHF-D homogenizer (Ningbo Xinzhi Biotechnology Co., Zhejiang, China). The pH value was measured using a digital pH meter (PHS-3C, Shanghai Yueping Scientific Instruments Co., China), while the NaCl content was determined using a digital salinity meter (ES-421, ATAGO, Tokyo, Japan) and expressed as g/100 g meat. Regarding aw, 5 g of minced meat was weighed, spread evenly in a small petri dish and measured using an aw meter (Huake HD-4B, Wuxi, China). The aw instrument was calibrated using saturated NaCl and saturated magnesium chloride solutions. A loin ham sample (3-5 g) was cut into cubes (1 × 1 × 1 cm³) and weighed (M1), then put into a steaming bag and sealed. Afterwards, the sample was placed in an 80 °C water bath and heated for 20 min. The surface of the meat sample was then blotted dry with absorbent paper and the sample was weighed again (M2). The cooking loss was expressed as a percentage and determined using the equation below:

Cooking loss (%) = (M1 − M2)/M1 × 100

A portable computerized colorimeter (NH350 Agilent, Shenzhen Sanenshi Technology Co., China) was used for color measurement. Loin ham samples were cut into slices of uniform thickness and their lightness (L*), redness (a*), and yellowness (b*) were measured.

Bacterial counts by a culture-dependent method

After removing the loin ham from the casings in an ultra-clean bench, the spices on the surface were stripped off, and samples (10 g) were taken from the center of the loin ham and added to sterile homogenization bags. Then 90 mL of sterile saline solution (0.85 % (w/v) NaCl) was added, and the bags were sealed and homogenized by tapping (12.0/s, 5 min) (YM-08X, Shanghai Yuming Instruments Co., China). The homogenized liquid was diluted and spread on the culture media. Lactic acid bacteria (LAB) and total aerobic counts (TACs) were enumerated on MRS agar and Plate Count Agar (PCA), respectively, after 36 h of incubation at 37 °C, while Staphylococcus were counted on MSA plates after 48 h of incubation at 37 °C.
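For reference, a small Python sketch of the arithmetic behind such plate counts follows. All plate-level numbers in it are hypothetical; only the 10 g sample in 90 mL diluent (the 10⁻¹ first dilution) comes from the protocol above.

import math

colonies = 152             # hypothetical colony count on one countable MRS plate
plated_volume_ml = 0.1     # hypothetical volume spread per plate
total_dilution = 1e-4      # hypothetical overall dilution, incl. the 10^-1 step

# CFU per gram of ham: colonies divided by the amount of original sample
# actually plated (plated volume times total dilution).
cfu_per_g = colonies / (plated_volume_ml * total_dilution)
print(f"LAB: {cfu_per_g:.2e} CFU/g = {math.log10(cfu_per_g):.2f} log CFU/g")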
Bacterial diversity by a culture-independent method

The genomic DNA of the samples was extracted using a DNA extraction kit according to the manufacturer's instructions (D6356-F-96-SH), followed by quantification of the DNA using agarose gel electrophoresis and a NanoDrop 2000. Genomic DNA was used as the template, and PCR was performed using barcoded specific primers and Takara Tks Gflex DNA Polymerase, according to the selected sequencing regions, to ensure amplification efficiency and accuracy. Bacterial diversity was identified by analyzing the V3-V4 hypervariable regions of the 16S rRNA gene, which were amplified using the primers 343F and 798R (forward primer: 5′-TACGGRAGGCAGCAG-3′; reverse primer: 5′-AGGGTATCTAATCCT-3′) (Nossa, 2010). A two-step cycle PCR method was used for amplification. Finally, aliquots were mixed according to the PCR product concentrations, and the 16S rRNA amplicons in the purified mixed samples were analyzed by high-throughput sequencing on the Illumina NovaSeq 6000 (PE250) platform (Oebiotechnology Co. Ltd., Shanghai, China).

Bioinformatic analysis

The raw sequencing data were in FASTQ format. Paired-end reads were first preprocessed using cutadapt to detect and trim the adapters. After trimming, the paired-end reads were quality-filtered, denoised, merged, and checked for chimeras using DADA2 with the default parameters of QIIME2 (2020.11). Finally, the software output the representative sequences and the ASV abundance table. The alpha diversity of the samples was calculated using QIIME, including richness indices (observed ASVs, ACE and Chao1), diversity indices (Shannon and Simpson), and the coverage index. Stacked histograms, Circos plots, and clustered heatmaps of the dominant genera among samples were produced using R.

Statistical analysis

Statistical analysis was performed using one-way analysis of variance (ANOVA) followed by Duncan's multiple range test, with differences being considered significant for P-values below 0.05. All data were analyzed using SPSS 17 software (SPSS, Chicago, IL, United States) and GraphPad Prism 8 software (GraphPad Software Inc., California, USA), with results being expressed as mean ± standard error of the mean (S.E.M.). Pearson's correlation coefficient analysis between physicochemical properties and microorganisms was performed using Origin 2021 software (OriginLab Corporation, MA, USA).

pH and salt content

As expected, throughout the fermentation and ripening stages, the pH decreased from 5.28 and 5.26 to 4.95 and 4.31 in the control and sorbitol groups, respectively (Fig. 1A). The pH of the loin ham decreased rapidly during fermentation in both groups. During the ripening period, the pH slowly increased in the control group and slightly decreased in the sorbitol group until the end of ripening. The addition of Lactobacillus leads to the production of acid in meat products, resulting in a rapid pH decline. This fast acidification successfully inhibits the growth of microorganisms which cause food spoilage, and is vital for enhancing the quality and safety of fermented meat products. In this study, the differences in pH between the control and sorbitol groups were not significant during the fermentation period (P > 0.05).
However, at the end of the ripening period, the sorbitol group had a significantly lower pH than the control group (P < 0.05), which can be a result of the long-term oxidation of sorbitol in meat products, converting it into sorbic acid and thus resulting in a lower pH (Leitmannová, Malá, & Červený, 2009). Regarding the salt content, as seen in Fig. 1B, it rose steadily throughout the fermentation and ripening of the loin ham, reaching 6.34 g/100 g and 5.32 g/100 g in the control and sorbitol groups at the end of ripening, respectively. The increase in salt content may have been a result of salt infiltration and the consequent decrease in the moisture content of the samples. The salt content of the sorbitol group was significantly lower than that of the control group in both the fermentation and ripening stages (P < 0.05), implying that the presence of sorbitol affected the pathways of salt diffusion and water migration in the meat products. This could have happened because sorbitol, owing to its large molecular mass and high viscosity, remains on the cell surface and diffuses more slowly than salt. As a result, a higher extracellular osmotic pressure ensued, and a high-viscosity barrier of sorbitol formed on the product surface ("barrier effect"), creating a solute film that hindered the diffusion of sodium chloride and hence reduced the salt content (Sharma, Banipal, & Banipal, 2020; Gong et al., 2022). Another possible explanation is that the hydroxyl groups of sorbitol bond with the hydroxyl groups of water in the matrix via hydrogen bonds; this interaction accelerates the diffusion of sorbitol while slowing down the free diffusion of water. The number of water molecules interacting with NaCl is thereby reduced, and consequently the amount of Na⁺ entering the cells is also reduced, which leads to a decrease in the salt content of the whole matrix (Chen, Zhang, Hemar, Li, & Zhou, 2020).

aw and WHC

The aw showed a decreasing trend during the fermentation and ripening of the loin ham (Fig. 1C). At the end of ripening, aw had decreased from an initial 0.965 and 0.935 to 0.764 and 0.729 in the control and sorbitol groups, respectively (P < 0.05). The aw was significantly lower in the sorbitol group than in the control group during the fermentation and ripening stages (P < 0.05). This may be because sorbitol is a humectant with multiple hydroxyl groups in its molecular structure which, as previously mentioned, can bind to proteins in meat products, extend the polarity of certain groups in muscle proteins, and convert some of the free water in myogenic fibers into bound water, reducing the free water content (Meena & Kishore, 2021). This consequently leads to a change in the moisture distribution of the product and reduces aw. These results are in line with the findings of Liu et al. (2022), who discovered that sorbitol significantly reduced aw in minced pork tenderloin. The reduction of aw inhibits microbial activities and chemical reactions, reducing food spoilage and prolonging shelf life. As shown in Fig. 1D, the cooking loss of the sorbitol group was significantly lower than that of the control group on days 0 and 2 of fermentation and day 10 of ripening (P < 0.05), because sorbitol is a humectant and has the effect of increasing the WHC of meat products. For instance, Fahrizal (2018) found that the addition of sorbitol, sucrose, and sodium tripolyphosphate enhanced the WHC of freshwater fish surimi.
Color analysis

It has been reported that the color of meat products depends primarily on protein composition, namely myoglobin content, as well as on protein denaturation, moisture content, pH, and fat content (Mancini & Hunt, 2005). As shown in Table 1, L* gradually decreased, while a* and b* increased in all groups as fermentation and ripening progressed (P < 0.05), consistent with the findings of Chen et al. (2019b). The decrease in L* may be attributed to water loss in the meat, as brightness is sensitive to moisture content: the lower the moisture content, the lower the L* (Chai et al., 2022). L* was consistently higher in the sorbitol group than in the control group (P < 0.05) during the fermentation and ripening stages, because sorbitol contains six hydroxyl groups that hydrogen-bond with water in the meat product and enhance its WHC. The a* of all groups increased progressively during the ripening period, which may be related to the formation of nitrosomyoglobin via bacterial action: the NO3− present is reduced to NO2−, and the decomposition of NO2− to NO, which combines with myoglobin, yields nitrosomyoglobin and gives the meat a vibrant red color (Huang et al., 2020). In addition, a* was consistently higher in the control group than in the sorbitol group during the ripening period (P < 0.05), which may be related to the presence of LAB and the rise in pH. The number of LAB in the control group was significantly higher than in the sorbitol group (P < 0.05), and studies have shown that some LAB can promote the formation of zinc protoporphyrin IX (ZnPP), a pigment that enhances the a* of fermented meat products (Kauser-Ul-Alam, Hayakawa, Kumura, & Wakamatsu, 2021). Furthermore, ZnPP formation increases significantly under acidic conditions at pH > 4.75 (Wakamatsu, Kawazoe, Ohya, Hayakawa, & Kumura, 2020), so the lower pH of the sorbitol group may explain its lower a* relative to the control. The increase in b* may be due to the presence of a yellow pigment formed by the reaction of lipid oxidation products with amines in phospholipid head groups or in proteins (Liu, Wang, Zhang, Wang, & Kong, 2019). The b* of the control group was higher than that of the sorbitol group during the ripening period (P < 0.05), likely because sorbitol retards lipid oxidation and reduces lipid oxidation products.

Bacterial counts

The numbers of LAB, Staphylococcus, and TACs increased exponentially (P < 0.05) in the control and sorbitol groups during the fermentation period, reaching their highest levels at the end of this process. The numbers of LAB and Staphylococcus increased rapidly owing to their adaptability to the meat matrix (Fig. 1E and F), but the counts of LAB, Staphylococcus, and TACs decreased significantly in the ripening stage compared with the fermentation stage (P < 0.05).
Since the fermentation period provides the optimal temperature and humidity for microbial growth, the microorganisms rapidly consume the nitrogen and carbon sources in the loin ham and grow quickly. The temperature and humidity of the ripening stage, by contrast, are not ideal for microbial growth, and because much of the nitrogen, carbon, and water in the loin ham has already been consumed, the total number of bacteria was significantly lower than in the fermentation stage. This is in accordance with Chen et al. (2021b), who reported that sausages inoculated with Lactobacillus and Staphylococcus xylosus as starters showed exponential growth of Lactobacillus and Staphylococcus during fermentation, whereas the numbers of LAB and Staphylococcus decreased slightly during ripening. From the end of fermentation to the end of ripening, the number of LAB in the control group was significantly higher than in the sorbitol group (P < 0.05), while the number of Staphylococcus was considerably lower than in the sorbitol group (P < 0.05). This may be because sorbitol-mediated curing inhibits the growth of LAB and promotes the growth of Staphylococcus. Moreover, TACs remained high in the loin ham, indicating that other microorganisms also grew well in the meat products inoculated with starters. TACs were significantly higher in the control group than in the sorbitol group during the fermentation and ripening periods (P < 0.05) (Fig. 1G), indicating that sorbitol-mediated curing can inhibit the growth of TACs.

Alpha diversity of the bacterial community during the fermentation and ripening of loin ham

In total, 1,777,040 high-quality, valid sequences were obtained across all 24 samples of the fermented loin ham, of which 891,256 and 885,784 valid reads came from the control and sorbitol groups, respectively. The alpha diversity indices, ASV richness, and sample coverage are shown in Table 2. (Values in Table 2 are expressed as mean ± S.E.M.; different lowercase letters (a-d) within a column indicate significant differences, P < 0.05. C0 and C2: control group fermented for 0 and 2 days; C10 and C20: control group ripened for 10 and 20 days; S0, S2, S10, and S20: the corresponding sorbitol-group samples.) All samples had good coverage (>99.9 %), indicating that the sequencing depth captured essentially all species in the loin ham samples. The ASVs, ACE, and Chao1 indices are generally used to indicate the richness of bacterial communities, while the Shannon and Simpson indices reflect species diversity. These values showed a downward trend over the fermentation and ripening periods, indicating that the abundance and diversity of microorganisms gradually decreased as fermentation and ripening proceeded. Furthermore, the Simpson index was significantly higher in the sorbitol group than in the control group by the end of ripening (P < 0.05), implying higher microbial diversity in the sorbitol group and indicating that sorbitol-mediated curing can increase bacterial community diversity. There were no significant differences among the groups in the observed ASVs, ACE, or Chao1 indices at the end of ripening (P > 0.05). As the samples were fermented by inoculation with starters, it is plausible that a subset of bacteria became dominant in the loin ham samples (Zhang, Zhang, Zhou, Wang, & Li, 2021).
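The >99.9 % coverage reported above is commonly computed as Good's coverage, which estimates the fraction of reads that belong to ASVs already observed. A minimal sketch of the estimator, with made-up counts, follows.

```python
def goods_coverage(counts):
    """Good's coverage estimate: C = 1 - F1/N, where F1 is the number of
    singleton ASVs and N is the total read count of the sample."""
    n = sum(counts)
    f1 = sum(1 for c in counts if c == 1)
    return 1 - f1 / n

# Illustrative counts only; a value >= 0.999 would match the coverage in Table 2
print(f"{goods_coverage([5000, 3200, 900, 420, 77, 2, 1]):.4f}")
```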
Bacterial community composition during the fermentation and ripening of loin ham

The relative abundance and association heatmap of bacterial communities at the phylum level, along with their species phylogenetic trees and ASV abundance maps, are presented in Fig. 2. Regardless of the stage of the process, most of the bacterial communities in the loin ham belonged to three phyla: Firmicutes, Bacteroidota, and Proteobacteria (Fig. 2C). However, the phylum-level percentages, as well as the relative abundances of the bacterial communities in the loin ham, changed over the fermentation and ripening periods. Bacterial community diversity was most abundant in the samples at day 0 of fermentation. In the C0 group, Firmicutes, Bacteroidota, and Proteobacteria accounted for 39.39 %, 36.62 %, and 19.22 % of all sequences, respectively; in the S0 group, they accounted for 42.75 %, 16.20 %, and 21.64 %, respectively (Fig. 2A). With increasing fermentation and ripening times, the relative abundance of Firmicutes increased in all groups and was significantly higher than that of the other phyla (P < 0.05); in the control and sorbitol groups it rose from 39.39 % and 42.75 % at the start of fermentation to 98.94 % and 98.58 % at the end of ripening (Fig. 2A). Conversely, the relative abundances of Bacteroidota and Proteobacteria gradually decreased (Fig. 2B). This indicates that Firmicutes dominated the loin ham samples from fermentation to the end of ripening, consistent with the findings of Gan et al. (2021). One reason for the dominance of Firmicutes is that its members can produce endospores, which resist dehydration and extreme environments. Another is that Firmicutes contains both the Lactobacillus and Staphylococcus genera, which agrees with the genus-level relative abundance results.

The relative abundance, correlation heatmap, and Circos plot of bacterial communities at the genus level are shown in Fig. 3. Lactobacillus, Staphylococcus, Muribaculaceae, Ralstonia, Bacteroides, and Lachnospiraceae_NK4A136_group were all present at high relative abundance on day 0 of fermentation, with Lactobacillus being the most abundant in the C0 and S0 groups at 15.72 % and 28.58 %, respectively. By the end of the fermentation and ripening stages, the relative abundances of the genera Muribaculaceae, Ralstonia, Bacteroides, Lachnospiraceae_NK4A136_group, Sphingomonas, Prevotella, Alloprevotella, Alistipes, Faecalibacterium, Acinetobacter, and Escherichia-Shigella had decreased rapidly (Fig. 3A). Spoilage-associated genera, including Acinetobacter, Clostridia_UCG-014, and Escherichia-Shigella, were all inhibited in loin hams inoculated with starters. These bacteria are considered spoilage factors in meat, as they produce undesirable metabolites and off-flavor compounds (Zhu, Wang, Zhang, Li, Zhang, Ji, Zhao, Zhang, & Chen, 2022). Controlling spoilage flora is an effective way to improve the quality of meat products. In this study, the amount of Lactobacillus was higher than that of Staphylococcus, and the strong growth of Lactobacillus can inhibit the proliferation of other pathogenic microorganisms, benefiting the safety of low-salt ham. As fermentation and ripening proceeded, the numbers of other microorganisms decreased while Lactobacillus and Staphylococcus gradually increased and became the most dominant bacteria in all groups.
This could have been due to the inoculation of starters in all groups, which increased the antimicrobial metabolites produced by Lactobacillus and Staphylococcus and their competition with other bacteria for nutrients, enabling them to inhibit the growth of other bacteria. During the fermentation and ripening stages, Lactobacillus was markedly higher in the control group than in the sorbitol group (P < 0.05), while Staphylococcus was significantly higher in the sorbitol group than in the control group (P < 0.05). The heatmap likewise shows that the relative abundance of Lactobacillus exceeded that of Staphylococcus in the control group, whereas the reverse held in the sorbitol group (Fig. 3B). The Circos plot showed that sorbitol-mediated curing altered the genus-level relative abundances in the loin hams (Fig. 3C). In particular, at the end of ripening the percentage of Lactobacillus reached 95.14 % and 51.90 % in the control and sorbitol groups, respectively, while the percentage of Staphylococcus reached 3.57 % and 46.37 %, respectively. This indicates that sorbitol-mediated curing can inhibit Lactobacillus while potentially promoting the growth of Staphylococcus, possibly because sorbitol has an antibacterial effect that suppresses some microorganisms (Beyler Çigil, Şen, Birtane, & Kahraman, 2022), whereas Staphylococcus can resist the bacteriostatic effect of sorbitol (Kanjan & Sakpetch, 2020). Thus, Lactobacillus and Staphylococcus were evenly distributed in the sorbitol group. In dry-cured fermented meat products, the microflora contributes greatly to fermentation, especially throughout the ripening stage. In addition to Lactobacillus, which is the ideal fermentation microorganism, Staphylococcus is also a principal genus in meat fermentation and is necessary throughout the fermentation and ripening of meat products. Through their proteolytic enzymes and lipases, these bacteria can also broaden the flavor of the product and, owing to their antioxidant activity, prevent off-flavors and sourness (Tu, Wu, Lock, & Chen, 2010). It has been demonstrated that Staphylococcus is the first dominant genus in the fermentation of low-sodium ham and, owing to its sturdy resistance, gradually becomes the dominant genus during fermentation. This allows it to compete with spoilage and pathogenic microorganisms, inhibiting their growth and enhancing the safety of the ham for consumption. (Fig. 2 legend, partial: phylum-level heatmap (B), where red indicates higher and blue lower relative abundance; Top 50 species evolutionary tree and ASV abundance map (C). S0, S2: sorbitol group fermented for 0 and 2 days; S10, S20: sorbitol group ripened for 10 and 20 days; C0, C2: control group fermented for 0 and 2 days; C10, C20: control group ripened for 10 and 20 days.)
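Genus-level composition plots like Fig. 3A are straightforward to reproduce once ASVs are collapsed by genus. The pandas/matplotlib sketch below uses a toy two-sample count table, not the study's data, to illustrate converting counts to relative abundances and stacking them.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Toy genus-by-sample count table; real values would come from the ASV/taxonomy tables
counts = pd.DataFrame(
    {"C20": [9514, 357, 80, 49], "S20": [5190, 4637, 100, 73]},
    index=["Lactobacillus", "Staphylococcus", "Bacteroides", "Ralstonia"],
)

rel_abund = counts.div(counts.sum(axis=0), axis=1) * 100  # percent per sample

ax = rel_abund.T.plot(kind="bar", stacked=True, figsize=(5, 4))
ax.set_ylabel("Relative abundance (%)")
ax.set_xlabel("Sample")
plt.tight_layout()
plt.savefig("genus_composition.png")
```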
The loin ham produced by sorbitol-mediated curing was inoculated with L. plantarum SJ-4 and S. simulans QB7, which not only enhanced the competitive ability of the two dominant genera (Lactobacillus and Staphylococcus) but also multiplied the relative abundance of beneficial bacteria in the loin ham while inhibiting the growth of harmful (pathogenic and spoilage) bacteria, thereby helping to improve the quality of the loin ham.

Comparison of microbial communities between different groups

LEfSe analysis (LDA log score threshold ≥ 4) of the loin hams was performed at the beginning of fermentation and the end of ripening (Fig. 4A), clarifying the similarities and differences in community composition between groups at every taxonomic level. Bacteroidota was the dominant bacterial phylum in group C0, Firmicutes in group C20, and Proteobacteria in group S0 (Fig. 4B). Additionally, Lactobacillus was the key bacterial genus dominating the C20 group (Fig. 4B). LAB are a major flora in the ripening stage of dry-cured meat products. Traditionally, naturally fermented hams are susceptible to contamination by undesirable microorganisms, while inoculation with starters enhances the competitiveness of the predominant bacteria and inhibits the growth of unwanted bacteria. Furthermore, given the adaptation of LAB to the meat matrix, their numbers increase rapidly and they come to dominate the microflora (Xiao, Liu, Chen, Xie, & Li, 2020). Lactobacillus and Staphylococcus are regarded as the main bacteria involved in lipolysis and proteolysis in meat products, which contributes to flavor formation (Hu, Wang, Kong, Wang, & Chen, 2021). However, the overgrowth of Lactobacillus in the C20 group also inhibited Staphylococcus, a genus that contributes extensively to flavor. Sorbitol-mediated curing had a significant impact on the bacterial community of the loin ham, resulting in an even distribution of the dominant microorganisms (Lactobacillus and Staphylococcus). Therefore, sorbitol-mediated curing improved the quality of the loin ham. (Fig. 3 legend: genus-level taxonomic compositions (Top 15) during the manufacturing process of loin ham (A); genus-level heatmap (B), where red indicates higher and blue lower relative abundance; Circos plot of the loin ham (C); group codes as in Fig. 2.)

Correlation between microorganisms and physicochemical changes

It has been reported that microbial growth is highly correlated with physicochemical changes in fermented meats. In this study, we used Pearson's correlation analysis to evaluate the relationships among the microorganisms (top 5 relative abundances at the genus level), among the physicochemical properties, and between microorganisms and physicochemical properties in the loin ham (Fig. 4C). Lactobacillus can also exhibit lipase activity, releasing free fatty acids and thus acidifying the meat and inhibiting the growth of unwanted genera and spoilage bacteria (Gao, Jiang, Xu, & Xia, 2018). (Fig. 4 legend: LEfSe analysis of the key phyla and genera of the bacterial community of loin ham. The histogram shows LDA scores (A) for features with differential abundance between groups; higher scores and longer bars indicate greater influence. In the clade plot (B), yellow points mark taxa not important in any group, other colored points mark taxa important in the group of the same color, and shading covers the highest taxonomic units with significant differences in the group of highest abundance. The Pearson correlation matrix of physicochemical properties and the relative abundances of the major microorganisms (C) marks positive correlations in red and negative correlations in blue; circle size and color intensity are proportional to the correlation coefficient, with the scale bar at right.)
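A correlation matrix like the one in Fig. 4C can be assembled with a few lines of pandas. The sketch below uses fabricated measurements solely to show the computation of Pearson's r between each physicochemical property and each genus's relative abundance; it does not reproduce the study's results.

```python
import pandas as pd

# Fabricated per-sample measurements, for illustration only
df = pd.DataFrame({
    "pH":             [5.3, 5.0, 4.9, 4.3],
    "salt":           [2.1, 3.8, 4.9, 5.3],     # g/100 g
    "aw":             [0.94, 0.88, 0.80, 0.73],
    "Lactobacillus":  [28.6, 40.2, 48.8, 51.9], # relative abundance (%)
    "Staphylococcus": [5.0, 18.3, 33.0, 46.4],
})

corr = df.corr(method="pearson")  # full Pearson correlation matrix
print(corr.loc[["pH", "salt", "aw"], ["Lactobacillus", "Staphylococcus"]])
```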
The a* was positively correlated with Lactobacillus, indicating that Lactobacillus may promote the formation of zinc protoporphyrin IX (ZnPP), which can improve the a* of fermented meat products. Additionally, salt content showed a positive correlation with Lactobacillus and negative correlations with the other genera, while aw showed negative correlations with Lactobacillus and Staphylococcus and positive correlations with the other genera, indicating that the increase in salt content and the decrease in aw significantly inhibit the growth of undesirable microorganisms while favoring the dominant bacteria in loin ham.

Conclusion

This study revealed the effects of sorbitol-mediated curing on the physicochemical properties and bacterial community composition of loin ham. The physicochemical results demonstrated that sorbitol-mediated curing did not negatively influence the physicochemical properties of the loin ham. Sorbitol-mediated curing led to a significant decrease in salt content and aw (P < 0.05), which facilitated salt reduction in the loin ham and prolonged its shelf life. Moreover, Lactobacillus gradually came to dominate the control group, while Lactobacillus and Staphylococcus were evenly distributed in the sorbitol group throughout the fermentation and ripening stages. This indicates that sorbitol-mediated curing may promote the growth of Staphylococcus or restrain the overgrowth of Lactobacillus, yielding an even distribution of dominant microorganisms and thereby improving the quality of the loin ham. Our study provides a preliminary perspective, with promising results, on the development of a salt-reduced fermented meat product for the food industry.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Data availability

Data will be made available on request.
The Ratio of Circulating Regulatory T Cells (Tregs)/Th17 Cells Is Associated with Acute Allograft Rejection in Liver Transplantation

CD4+CD25+FoxP3+ regulatory T cells (Tregs) and Th17 cells are known to be involved in the alloreactive responses in organ transplantation, but little is known about the relationship between Tregs and Th17 cells in the context of the liver alloresponse. Here, we investigated whether the circulating Tregs/Th17 ratio is associated with acute allograft rejection in liver transplantation. In the present study, thirty-eight patients who received liver transplants were enrolled. The patients were divided into two groups: an acute allograft rejection group (Gr-AR) (n = 16) and a stable allograft liver function group (Gr-SF) (n = 22). The frequencies of circulating Tregs and circulating Th17 cells, as well as the Tregs/Th17 ratio, were determined using flow cytometry. The association between the Tregs/Th17 ratio and acute allograft rejection was then analyzed. Our results showed that the frequency of circulating Tregs was significantly decreased, whereas the frequency of circulating Th17 cells was significantly increased, in liver allograft recipients who developed acute rejection. The Tregs/Th17 ratio was negatively correlated with liver damage indices and with the score of the rejection activity index (RAI) after liver transplantation. In addition, the percentages of CTLA-4+, HLA-DR+, Ki67+, and IL-10+ Tregs were higher in the Gr-SF group than in the Gr-AR group. Our results suggested that the ratio of circulating Tregs/Th17 cells is associated with acute allograft rejection, and thus the ratio may serve as an alternative marker for the diagnosis of acute rejection.

Introduction

Despite the use of potent immunosuppressive agents, acute rejection (AR) remains a major cause of early allograft loss and an obstacle to long-term allograft survival. The hallmarks of acute rejection include infiltration of T lymphocytes, monocytes, and other inflammatory cells [1,2]. Laboratory and clinical investigations have indicated that CD4+CD25+FoxP3+ regulatory T cells (Tregs) are one of the major cell types responsible for the immune responses to alloantigens.
Tregs activation is involved in the prevention of rejection, the induction and maintenance of peripheral tolerance of the allograft [3], and the support of allograft survival [4-6]. Several other studies indicated that Tregs are an essential element of the immunoregulatory pathway that induces peripheral allograft tolerance [7,8], that the frequency of circulating Tregs is significantly decreased during acute rejection [9], and that the transfer of Tregs pre-stimulated in vitro can protect skin and cardiac allografts from acute and chronic rejection [10,11]. In clinical transplantation, T cells with the phenotypic characteristics of regulatory cells are detected both in the peripheral blood and within the graft itself [3,12,13]. In renal transplant recipients, grafts infiltrated with more Tregs display much longer survival [7,14]. Pediatric patients who acquired operational tolerance after liver transplantation showed increased levels of circulating Tregs compared with patients who received immunosuppression [12]. Allograft tolerance in liver transplant recipients may be partly attributable to a higher frequency of circulating Tregs [9]. Therefore, an increased level of circulating Tregs may be beneficial for allograft survival. Th17 cells are a subset of T helper cells characterized by the production of IL-17. Th17 cells have been suggested to play a role in allograft rejection in the context of organ transplantation [15-18]. One study reported that cardiac allografts infiltrated with Th17 cells underwent accelerated vascular rejection in a T-bet−/− mouse model [19]. IL-17, a potent proinflammatory cytokine, has been demonstrated to participate in allograft rejection [20-23]. It promotes cardiac allograft rejection by inducing the maturation, antigen presentation, and co-stimulatory capabilities of dendritic cells in mice [20]. In a corneal transplant model, IL-17-deficient mice experienced delayed graft rejection compared with wild-type mice [24]. Blocking IL-17 promoted the maturation of dendritic cells, inhibited the proliferation of alloreactive T cells in vitro, and prolonged the survival of vascularized cardiac allografts in vivo [19,20]. IL-17 neutralization inhibits acute, but not chronic, vascular rejection in mice [17,25]. Clinical evidence has shown that the level of IL-17 in the blood is positively correlated with acute allograft rejection in renal [21,26] and liver [22] transplant recipients. Graft infiltration by Th17 cells is associated with faster destruction of the allograft in renal transplant patients [27,28]. The aforementioned evidence suggests that Tregs have a protective effect against graft rejection, whereas Th17 cells play an essential role in promoting graft rejection. The differentiation pathways of Tregs and Th17 cells are known to be antagonistic [29,30], and Tregs can be converted into Th17 cells under inflammatory conditions [31]. However, the relationship between Tregs and Th17 cells is yet to be fully understood in the context of the transplant alloresponse. Further validation is necessary to determine whether the balance between circulating Tregs and Th17 cells may be used as a predictor of transplantation outcome. This study aimed to investigate the dynamics of the Tregs/Th17 ratio in liver transplant recipients with or without post-operative rejection, and to assess whether the Tregs/Th17 ratio may serve as an alternative marker for the diagnosis of acute rejection.
Materials and Methods

Patients

The study protocol was approved by the institutional review board of Beijing 302 Hospital. All participants provided written informed consent to participate in this study. Thirty-eight patients were enrolled at our hospital for this study. All participants received a first cadaveric liver transplantation with an identical or compatible blood-group graft. Based on clinical and biochemical indicators as well as pathologic diagnosis, the patients were divided into two groups: an acute allograft rejection group (Gr-AR, n = 16) and a stable allograft liver function group (Gr-SF, n = 22). The histopathologic diagnosis of acute allograft rejection was defined according to the Banff criteria [32]. Acute rejection and stable allograft liver function were defined as previously described [33]. All patients received conventional immunosuppressive agents after liver transplantation, such as tacrolimus, steroids (prednisolone), and mycophenolate mofetil (MMF). The dose of tacrolimus was adjusted when acute rejection was diagnosed. Patients with HBV infection received prophylactic therapy with hepatitis B immune globulin (HBIG) plus nucleos(t)ide analogues (NAs). Blood samples were obtained from all patients prior to transplant and at 1, 2, 3, 4, 8, and 12 weeks after transplantation. In addition, blood samples and allograft biopsy tissues were obtained when patients presented with worsening liver function test results and/or symptoms suggestive of acute rejection after liver transplantation. The clinical characteristics of the subjects are listed in Table 1.

Flow cytometric analysis

The phycoerythrin (PE)-conjugated anti-IL-17A and fluorescein isothiocyanate (FITC)-conjugated anti-FoxP3 antibodies were purchased from eBioscience (San Diego, CA), and all other antibodies used in flow cytometry were from BD Biosciences (San Jose, CA). For immunostaining of intracellular IL-17A, two samples of freshly heparinized peripheral blood (200 µL each) were incubated for 6 hours with phorbol-12-myristate-13-acetate (PMA, 300 ng/mL, Sigma-Aldrich, St. Louis, MO) and ionomycin (1 µL/mL, Sigma-Aldrich) in 800 µL of RPMI 1640 medium supplemented with 10% fetal calf serum. Monensin (0.4 µM, BD PharMingen) was added during the first hour of incubation. A Cytofix/Cytoperm kit (BD PharMingen) with anti-CD3, anti-CD8, anti-IL-17, and anti-IFN-γ monoclonal antibodies (mAbs) was used for one sample, whereas anti-CD4, anti-CD25, and anti-FoxP3 mAbs were used for the other, according to the manufacturers' protocols. For Tregs analysis, anti-CD4, anti-CD25, and anti-HLA-DR mAbs were added to a 200 µL freshly heparinized blood sample, which was then permeabilized and fixed using a Fix/Perm kit (eBioscience) according to the manufacturer's instructions. After permeabilization, cells were incubated with anti-FoxP3, anti-CTLA-4, and anti-Ki67 mAbs. The stained cells were acquired on a FACSCalibur (BD Biosciences) and analyzed using FlowJo software (Tree Star, USA).

Immunohistochemistry

Biopsy specimens from the 16 patients with acute rejection were collected and used for immunohistochemical staining with anti-FoxP3 (eBioscience) and anti-IL-17 (R&D Systems) antibodies. Formalin-fixed, paraffin-embedded liver tissues were cut into 5 µm sections and placed on polylysine-coated slides. Antigen retrieval was achieved by pressure cooking for 10 min in citrate buffer (pH 6.0). Endogenous peroxidase activity was blocked with 0.3% H2O2. The sections were then incubated with anti-FoxP3 or anti-IL-17 antibodies overnight at 4 °C, and 3-amino-9-ethylcarbazole (red color) was used as the substrate, with hematoxylin used for counterstaining.
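Once gated percentages are exported from FlowJo, deriving the Tregs and Th17 frequencies and their ratio is simple bookkeeping. The sketch below is a hypothetical illustration of that step; the column names and values are invented, not taken from the study.

```python
import pandas as pd

# Hypothetical per-patient gated percentages exported from FlowJo
flow = pd.DataFrame({
    "patient":   ["P01", "P02", "P03"],
    "tregs_pct": [4.8, 2.1, 5.2],   # CD4+CD25+FoxP3+ as % of CD4+ T cells
    "th17_pct":  [1.0, 2.6, 0.9],   # CD4+IL-17+ as % of CD4+ T cells
})

flow["tregs_th17_ratio"] = flow["tregs_pct"] / flow["th17_pct"]
print(flow)
```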
Statistical analysis

SPSS 16.0 software (SPSS, Chicago, IL, USA) was used for all statistical analyses. Data are presented as means ± SD. The Mann-Whitney nonparametric U-test was applied for comparisons between two groups. Spearman's rank test was used to analyze the associations between the severity of allograft tissue injury and the Tregs frequency, Th17 cell frequency, or Tregs/Th17 ratio. The chi-square test was used to assess differences among clinical data. A value of P < 0.05 was considered statistically significant.

Results

The patterns of Tregs and Th17 cell frequencies and the Tregs/Th17 ratio in transplant recipients with acute rejection

We investigated the Tregs and Th17 cell frequencies and the Tregs/Th17 ratio in all participants after liver transplantation. We collected the values of Tregs, Th17 cells, and the Tregs/Th17 ratio in Gr-SF and Gr-AR in the period preceding rejection and at the onset of acute rejection after liver transplantation, and compared the values between the two groups. Flow cytometry was used to analyze Tregs and Th17 frequencies in the peripheral blood of all patients after liver transplantation. The results showed that, during the period preceding rejection, the frequencies of Tregs and Th17 cells and the Tregs/Th17 ratio did not differ significantly between the two groups. At the onset of acute rejection, however, the frequency of Tregs was significantly higher in Gr-SF than in Gr-AR (P < 0.01), whereas the frequency of Th17 cells was significantly lower in Gr-SF than in Gr-AR (P < 0.01), yielding a significantly higher Tregs/Th17 ratio in Gr-SF than in Gr-AR (P < 0.01). In addition, the frequency of IL-17/IFN-γ double-producing CD4+ T cells (IL-17+IFN-γ+) was higher in Gr-AR than in Gr-SF (P < 0.05) (Fig. 1A, B). To investigate the distribution patterns of Tregs and Th17 cells in acutely rejecting allografts, we next examined the infiltration of Tregs and Th17 cells in biopsy samples obtained from the allografts of patients with acute rejection. Immunohistochemical staining was performed using anti-FoxP3 and anti-IL-17 antibodies on paraffin-embedded sections. Our results demonstrated extensive infiltration of Tregs and Th17 cells in the acutely rejecting allograft liver tissue (Fig. 1C). These findings, along with previously published data [34], suggested that Tregs may be involved in the regulation of the alloreactive response in liver allograft tissue, but might be deficient in some patients. One representative patient with acute allograft rejection was followed for 12 months after liver transplantation. The dynamics of the Tregs and Th17 cell frequencies during the follow-up period are depicted in Figure 1D. The Th17 cell frequency exhibited a trend opposite to that of the Tregs frequency and the Tregs/Th17 ratio. At the onset of acute rejection, the Tregs frequency and Tregs/Th17 ratio decreased sharply, whereas the Th17 cell frequency increased dramatically. Interestingly, as the rejection subsided, the frequencies of Tregs and Th17 cells were both restored to levels close to those before rejection.
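For orientation, the two tests named in the Statistical analysis section above are available in SciPy. The following sketch, run on invented example values rather than the patient data, shows how a Mann-Whitney U comparison between groups and a Spearman rank correlation would be computed.

```python
from scipy.stats import mannwhitneyu, spearmanr

# Invented example values, for illustration only
tregs_gr_sf = [5.1, 4.8, 5.6, 4.2, 5.0]   # Tregs frequency (%) in stable-function group
tregs_gr_ar = [2.3, 1.9, 2.8, 2.1, 2.6]   # Tregs frequency (%) in acute-rejection group

u_stat, p_value = mannwhitneyu(tregs_gr_sf, tregs_gr_ar, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, P = {p_value:.4f}")

ratio = [2.5, 1.8, 1.2, 0.9, 0.6]          # Tregs/Th17 ratio
alt = [45, 70, 110, 160, 240]              # ALT (U/L)
rho, p_corr = spearmanr(ratio, alt)
print(f"Spearman rho = {rho:.3f}, P = {p_corr:.4f}")
```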
The correlation between the Tregs/Th17 ratio and the biochemical indices of liver damage

Little is known about the association between the Tregs/Th17 balance and liver damage in liver transplant recipients. Therefore, we analyzed the correlation between the Tregs/Th17 ratio and the biochemical indices of liver damage, namely alanine aminotransferase (ALT), aspartate aminotransferase (AST), alkaline phosphatase (ALP), and gamma-glutamyl transpeptidase (GGT), in the 16 patients during the acute allograft rejection episode. Negative correlations were observed between the Tregs/Th17 ratio and the levels of ALT (r = −0.668, P = 0.005), AST (r = −0.541, P = 0.031), ALP (r = −0.518, P = 0.039), and GGT (r = −0.764, P = 0.001) (Fig. 2). These results indicated that the Tregs/Th17 ratio may be used as an alternative indicator for the diagnosis of liver damage in liver transplant recipients.

Tregs frequency, Th17 cell frequency, and Tregs/Th17 ratio are correlated with the rejection activity index (RAI)

To confirm whether Tregs and Th17 cells were associated with liver allograft rejection, we analyzed the correlation between the rejection activity index (RAI) and the frequencies of circulating Tregs and Th17 cells. We found that the Tregs/Th17 ratio (r = −0.859, P < 0.001) and the level of Tregs (r = −0.867, P < 0.001) were negatively correlated with the RAI, whereas the level of Th17 cells was positively correlated with the RAI (r = 0.890, P < 0.001) (Fig. 3). These results suggested that the Tregs/Th17 ratio may serve as a biomarker for the diagnosis of acute rejection.

The phenotypes of CTLA-4+, HLA-DR+, and Ki67+ Tregs in liver transplant patients

To better understand the mechanism by which Tregs function in liver transplant recipients, several important molecules that regulate Tregs were analyzed. CTLA-4 is expressed by human Tregs and is also upregulated in T cells upon activation. We characterized the patterns of CTLA-4 expression in Tregs in all patients, calculating the percentage of CTLA-4+ Tregs relative to total Tregs. The results showed that the frequency of CTLA-4+ Tregs was higher in the Gr-SF group (35.5 ± 18.9%) than in the Gr-AR group (23.7 ± 12.8%) (P < 0.05). We also evaluated activated (HLA-DR+) and proliferating (Ki67+) Tregs in the peripheral blood of all patients. We found that the percentages of HLA-DR+ Tregs and Ki67+ Tregs were higher in Gr-SF (26.8 ± 17.2% and 30.6 ± 15.8%, respectively) than in Gr-AR (17.2 ± 11.6% and 20.3 ± 10.9%, respectively) (P < 0.05) (Fig. 4). These data suggested that more Tregs were in an active and proliferating state in Gr-SF than in Gr-AR, which may facilitate the suppression of alloreactive responses in liver transplant recipients.

Discussion

Many studies have demonstrated that CD4+CD25+FoxP3+ Tregs and Th17 cells are involved in the tolerance or rejection responses in organ transplantation [17,34-37]. The current study was designed to investigate the relationship between Tregs and Th17 cells in the context of the alloresponse in liver transplant patients. The major finding of our study is that the Tregs/Th17 ratio is associated with the alloresponse after liver transplantation. Our data confirm that the frequency of circulating Tregs is significantly decreased, whereas the frequency of Th17 cells is significantly increased, in liver allograft recipients with acute rejection, and that the Tregs/Th17 ratio is negatively correlated with liver damage. To our knowledge, this is the first study to demonstrate an association between the Tregs/Th17 imbalance and allograft rejection. These findings suggest that the ratio of circulating Tregs/Th17 may serve as an alternative marker for the diagnosis of acute rejection and for the evaluation of immune status in liver transplant recipients.
Tregs are a unique subset of CD4+ T helper cells in that they control the responses of effector T cells to prevent autoimmune reactions. Several studies have shown that Tregs can prevent rejection and promote the long-term survival of skin grafts in a mouse model [6,38]. In the clinical setting, Tregs have been reported to be associated with allograft tolerance in liver transplant recipients [9,12]. The Th17 subset is involved in mediating autoimmune responses and regulating allograft rejection both in rat renal transplant models and in human renal transplantation [39,40]. In lung and heart transplantation, IL-17 has also been reported to be involved in acute allograft rejection [41,42]. A recent study reported that the levels of circulating CD4+IL-17+ T cells are substantially higher in the rejection group than in the non-rejection group in liver transplant recipients, and that the frequency of CD4+IL-17+ cells in peripheral blood is positively correlated with the rejection activity index [43]. Recent studies have reported that a new subpopulation of CD161+ Tregs is able to produce IL-17 and has both inflammatory and suppressive potential [44,45]. The functional and phenotypic characteristics of this subset in the alloreactive response are worth further study. The frequency of Tregs in Gr-AR was significantly lower, and, conversely, the frequency of Th17 cells in Gr-AR was significantly higher, than in Gr-SF. In addition, the frequency of circulating Tregs was negatively correlated with the RAI, whereas the frequency of circulating Th17 cells was positively correlated with the RAI. These data indicate that decreased levels of Tregs and increased levels of Th17 cells may be involved in acute rejection episodes in liver transplantation. Histopathological results demonstrated that allograft tissue undergoing acute rejection is extensively infiltrated with Tregs and Th17 cells. These findings are consistent with Stenard's study, which revealed increased intragraft Tregs during acute rejection [34]. Such data suggest that Tregs are mobilized to the site of immune activation and may participate in the regulation of alloreactive responses. However, the observation that acute allograft rejection can occur even in the presence of Tregs indicates that, at least under some circumstances, the mobilization of Tregs to the site is insufficient to effectively downmodulate the alloreactivity. We next considered the mechanism by which Tregs are involved in rejection episodes. CTLA-4 is an inhibitory receptor expressed by both activated T cells and Tregs, and may be crucial for their activity. HLA-DR is a marker of T cell activation, and Ki67 is a marker of T cell proliferation. In our results, the percentages of CTLA-4+ and HLA-DR+ Tregs were significantly higher in Gr-SF than in Gr-AR. In addition, the level of Ki67+ Tregs was significantly higher in Gr-SF than in Gr-AR. In general, the increased frequencies of CTLA-4+, HLA-DR+, and Ki67+ Tregs under alloreactive immunosuppression may facilitate the suppressive function of Tregs, and may reflect the restoration of their function, because these changes occurred in parallel with stable liver function.
However, we did not assess the suppressive function of Tregs in stable versus acutely rejecting subjects, so we cannot draw any conclusions about the functional relevance of these cells in preventing or ameliorating rejection or in influencing transplant outcomes. In conclusion, maintaining an appropriate balance between Tregs and Th17 cells is indispensable for the maintenance of stable liver function in transplant recipients. Tilting the Tregs-Th17 equilibrium toward Tregs dominance may promote transplant tolerance. However, ours was a small-scale observational study that is underpowered to draw firm conclusions about cause and effect between the Tregs/Th17 ratio and acute rejection. These findings should be pursued in a carefully conducted, larger, prospective study to determine whether the Tregs/Th17 ratio can be used as a diagnostic marker and whether it may serve as a potential therapeutic target for managing the acute rejection of liver allografts.

Figure 3. The frequency of Tregs, the frequency of Th17 cells, and the Tregs/Th17 ratio are correlated with the RAI. To confirm whether Tregs, Th17 cells, and the Tregs/Th17 ratio were associated with liver allograft rejection, we analyzed the correlations between the RAI and the frequency of circulating Tregs, the frequency of circulating Th17 cells, and the Tregs/Th17 ratio. The Tregs level and the Tregs/Th17 ratio were negatively correlated with the RAI, whereas the Th17 cell level was positively correlated with the RAI (P < 0.01). doi:10.1371/journal.pone.0112135.g003

Figure 4. The phenotypes of CTLA-4+, HLA-DR+, and Ki67+ Tregs in liver transplant patients. Activation and proliferation markers on Tregs were examined in liver transplant patients. The frequencies of CTLA-4+, HLA-DR+, and Ki67+ Tregs were higher in Gr-SF than in Gr-AR (all P < 0.05), suggesting that more Tregs were active and proliferating in Gr-SF than in Gr-AR in liver transplant recipients. doi:10.1371/journal.pone.0112135.g004
Three Antifungal Proteins From Penicillium expansum: Different Patterns of Production and Antifungal Activity

Antifungal proteins of fungal origin (AFPs) are small, secreted, cationic, and cysteine-rich proteins. Filamentous fungi encode a wide repertoire of AFPs belonging to different phylogenetic classes, which offer great potential for developing new antifungals for the control of pathogenic fungi. The fungus Penicillium expansum is one of the few reported to encode three AFPs, each belonging to a different phylogenetic class (A, B, and C). In this work, the production of the putative AFPs from P. expansum was evaluated, but only the representative of class A, PeAfpA, was identified in culture supernatants of the native fungus. The biotechnological production of PeAfpB and PeAfpC was achieved in Penicillium chrysogenum with the P. chrysogenum-based expression cassette, which had been proven to work efficiently for the production of other related AFPs in filamentous fungi. Western blot analyses confirmed that P. expansum only produces PeAfpA naturally, whereas PeAfpB and PeAfpC could not be detected. Of the three AFPs from P. expansum, PeAfpA showed the highest antifungal activity against all fungi tested, including plant and human pathogens. P. expansum was also sensitive to its own AFPs PeAfpA and PeAfpB. PeAfpB showed moderate antifungal activity against filamentous fungi, whereas no activity could be attributed to PeAfpC under the conditions tested. Importantly, none of the PeAFPs showed hemolytic activity. Finally, PeAfpA was demonstrated to efficiently protect against fungal infections caused by Botrytis cinerea in tomato leaves and Penicillium digitatum in oranges. The strong antifungal potency of PeAfpA, together with its lack of cytotoxicity and significant in vivo protection against phytopathogenic fungi that cause postharvest decay and plant diseases, makes PeAfpA a promising alternative compound for application in agriculture, but also in medicine and food preservation.

INTRODUCTION

Fungal infections are an emerging worldwide threat to animal, human, and wildlife health (Fisher et al., 2012; Meyer et al., 2016). In medicine and agriculture, the control of pathogenic fungi represents a serious challenge due to the increasing number of immunocompromised patients and the emergence of antifungal-resistant strains. Accordingly, new antifungal strategies are needed, and current interest is focused on novel antifungal agents with properties and mechanisms of action different from those of existing ones. Ideally, newly developed antimycotics should also combine major aspects such as sustainability, high efficacy, limited toxicity, and low costs of production (Marx et al., 2008; Meyer, 2008). Antifungal proteins (AFPs) secreted by filamentous fungi meet the desired characteristics to fight fungal contamination and infection. AFPs are small, cationic, cysteine-rich proteins that are highly stable to pH, high temperature, and proteolysis, and they exhibit broad antifungal spectra and different mechanisms of action against opportunistic human, animal, plant, and foodborne pathogenic filamentous fungi (Marx et al., 2008; Hegedüs and Marx, 2013; Delgado et al., 2016). AFPs are encoded with a signal peptide (SP) at the N-terminus that includes a pre-sequence involved in AFP secretion to the extracellular space, and a pro-sequence whose function is still controversial, although it is assumed to be involved in maintaining AFPs in an inactive form (Marx et al., 1995).
As shown by genome mining, fungi have a complex repertoire of AFP-like sequences, which are grouped into three major classes, A, B, and C. Notably, filamentous fungal genomes encode more than one AFP from different classes. The Penicillium chrysogenum genome harbors three genes that code for AFPs belonging to each of the three classes, while Penicillium digitatum has only one AFP gene in its genome (class B). The genome of Neosartorya fischeri encodes two AFPs (classes A and C), but recently a new AFP has been characterized that seems to be the first member of a fourth class (Tóth et al., 2016). As new AFPs are experimentally identified, differences in production, biological function, mode of action, and antifungal spectrum are being observed. Nowadays, the antifungal activity of at least one representative of each AFP class has been experimentally demonstrated, and considerable effort is being devoted to further examining these proteins. Class A includes the first AFPs described, such as PAF from P. chrysogenum (Marx et al., 1995) and AFP from Aspergillus giganteus (Nakaya et al., 1990; Wnendt et al., 1994; Campos-Olivas et al., 1995; Lacadena et al., 1995), which have been characterized in depth (Meyer, 2008; Hegedüs and Marx, 2013). The first reported class B AFP was Anafp from Aspergillus niger (Lee et al., 1999), and current class B representatives also include those from P. chrysogenum (Delgado et al., 2015; Huber et al., 2018), P. digitatum (Garrigues et al., 2017), and Monascus pilosus (Tu et al., 2016). The antifungal activity of only two class C representatives has been reported: the BP protein from Penicillium brevicompactum (Seibold et al., 2011) and Pc-Arctin from P. chrysogenum (Chen et al., 2013). Some AFP-like proteins remain uncharacterized, including those from the phytopathogenic fungus Penicillium expansum, whose genome contains three genes coding for three different AFP-like proteins, one of each class. Whether the distinct AFP-like proteins within a given fungus are differentially produced, perform different biological functions, or have different antifungal profiles and modes of action is still unknown, and P. expansum represents an opportunity to address these issues. In this study, the production of the putative AFPs from P. expansum was evaluated, and their antifungal activity was demonstrated and described. Only the representative of class A, PeAfpA, was identified in culture supernatants of the native fungus, whereas a heterologous expression system in P. chrysogenum allowed the production of PeAfpB and PeAfpC. Native and recombinant AFPs were successfully purified, and their characterization revealed distinctive antifungal profiles.

MATERIALS AND METHODS

For transformation, vectors were propagated in Escherichia coli JM109 grown in Luria Bertani (LB) medium supplemented with 100 µg/mL ampicillin or 75 µg/mL kanamycin. P. chrysogenum Δpaf was first grown on P. chrysogenum minimal medium (PcMM) agar (Sonderegger et al., 2016) supplemented with 200 µg/mL nourseothricin for 7 days at 25 °C. Conidia were subsequently harvested with a solution containing 0.9% NaCl and 0.01% Tween 80 and grown in Aspergillus complete medium (Sonderegger et al., 2016) for 36 h at 25 °C with shaking. Transformants were grown on PcMM plates supplemented with 1 µg/mL pyrithiamine hydrobromide (Sigma-Aldrich, St. Louis, MO, United States).
To analyze the growth of the P. chrysogenum transformant strains on solid media, 5 µL of conidial suspension (5 × 10^4 conidia/mL) were placed at the center of PDA and PcMM plates, and the colony diameter was monitored daily from day 3 to day 12. For protein production, 200 mL of Potato Dextrose Broth (PDB; Difco-BD Diagnostics) or PcMM were inoculated to a final concentration of 10^6 conidia/mL with either P. expansum CMP-1 or the P. chrysogenum transformant strains and incubated for 10 or 4 days, respectively.

Protein Sequences and Structure Prediction

The sequences of the three Peafp genes and the corresponding amino acid sequences were identified through BLAST searches conducted on the National Center for Biotechnology Information (NCBI) server (Ballester et al., 2015; Garrigues et al., 2016). Multiple sequence alignments were performed with the Clustal Omega algorithm, using the mature protein sequences without their SP. The I-TASSER software (Yang et al., 2014) was used to predict the three-dimensional (3D) structures of the P. expansum AfpA, AfpB, and AfpC, using the P. chrysogenum antifungal proteins PAF and PAFB and the P. brevicompactum bubble protein as templates, respectively (Protein Data Bank IDs 2MHV, 2NC2, and 1UOY). The models obtained were refined with the ModRefiner software tool (Xu and Zhang, 2011) and validated by RAMPAGE (Lovell et al., 2003) to ensure that all amino acids were located inside the favored and energetically allowed regions of the Ramachandran plot. The theoretical molecular weight (MW) and isoelectric point (pI) of the mature proteins were examined with the Compute pI/MW and ProtParam tools of the ExPASy Proteomics Server (a minimal sketch of this calculation is given after this methods block). All 3D models were visualized with the UCSF Chimera software (Pettersen et al., 2004).

Vector Constructions and Generation of P. chrysogenum Transformant Strains

The nucleotide sequences of the afpA, afpB, and afpC genes were PCR amplified from P. expansum CMP-1 genomic DNA, whereas the paf gene promoter, SP-pro, and terminator sequences were obtained from the vector pSK275paf (Sonderegger et al., 2016). All PCR procedures were performed using AccuPrime High-Fidelity polymerase (Invitrogen, Eugene, OR, United States), and the resulting DNA constructs were purified using the High Pure PCR Product Purification Kit (Roche, Mannheim, Germany) and verified by Sanger sequencing. The specific primers used for amplification and vector generation are listed in Supplementary Table S1. The three DNA constructions were generated by fusion PCR (Szewczyk et al., 2007) and cloned into the pGEM-T Easy vector system (Promega, Madison, WI, United States), from which they were excised using two internal restriction sites, BspOI and NotI, and subsequently inserted into the previously digested vector pSK275paf (yielding pSK275_PeafpA, pSK275_PeafpB, and pSK275_PeafpC), which contains the pyrithiamine hydrobromide resistance cassette as a positive selection marker. For the production of the P. expansum AfpA, AfpB, and AfpC proteins in P. chrysogenum, the deletion strain Δpaf was used as the recipient for the plasmids pSK275_PeafpA, pSK275_PeafpB, and pSK275_PeafpC. Protoplast transformation was performed as previously described (Cantoral et al., 1987; Kolar et al., 1988), using 15 µg of SmaI-linearized plasmid per transformation. Transformant strains were single-spored four times on PcMM agar plates supplemented with 1 µg/mL pyrithiamine hydrobromide (Sigma-Aldrich). Positive transformants were confirmed by PCR amplification of genomic DNA (Supplementary Table S1 and Supplementary Figure S1).
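As a rough stand-in for the ExPASy Compute pI/MW step referenced above, Biopython's ProtParam module exposes the same kind of calculation. The sketch below runs it on a placeholder peptide sequence, not the actual mature PeAfpA sequence, which is not reproduced here.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Placeholder sequence standing in for a mature AFP; substitute the real sequence
seq = "AKYTGKCTKSKNECKYKNDAGKDTFIKCPKFDNKKCTKDNNKCTVDTYNNAVDCD"

analysis = ProteinAnalysis(seq)
print(f"theoretical MW: {analysis.molecular_weight() / 1000:.2f} kDa")
print(f"theoretical pI: {analysis.isoelectric_point():.1f}")
```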
Protein Production and Purification

The P. digitatum AfpB was produced and purified as previously described (Hernanz-Koers et al., 2018). PeAfpA was purified from a 10-day PcMM supernatant of the P. expansum CMP-1 strain. PeAfpB and PeAfpC were purified from supernatants of the P. chrysogenum transformant strains grown in PcMM for 72-96 h. The cell-free supernatant containing PeAfpA was dialyzed (2 kDa MWCO, Sigma-Aldrich) against 20 mM phosphate buffer, pH 6.6, and supernatants containing PeAfpB and PeAfpC were dialyzed against 20 mM acetate buffer, pH 5.4. The dialyzed solutions were applied to an ÄKTA Purifier system equipped with a 6 mL RESOURCE S column (GE Healthcare Life Sciences, Little Chalfont, United Kingdom) equilibrated in the corresponding buffer. Proteins were eluted by applying a linear gradient from 0 to 1 M NaCl in the same buffer.

Matrix-Assisted Laser Desorption/Ionization-Time-of-Flight Mass Spectrometry (MALDI-TOF MS)

Analyses were performed at the proteomics facility of SCSIE, University of Valencia (Spain). The mass of the purified proteins was analyzed on a 5800 MALDI-TOF/TOF instrument (AB Sciex, Framingham, MA, United States) in positive linear mode (1500 shots per position) over a range of 2000-20,000 m/z. For protein identification by peptide mass fingerprinting (PMF), samples were subjected to trypsin digestion and the resulting mixtures analyzed on the 5800 MALDI-TOF/TOF in positive reflectron mode (3000 shots per position). The five most intense precursors (according to the threshold criteria: minimum signal-to-noise 10, minimum cluster area 500, maximum precursor gap 200 ppm, maximum fraction gap 4) were selected at every position for MS/MS analysis. MS/MS data were acquired using the default 1 kV MS/MS method. The MS and MS/MS information was sent to MASCOT via ProteinPilot (AB Sciex).

Antibody Generation and Western Blot

For PeAFP detection, rabbit polyclonal antibodies were generated as previously described (Mercader et al., 2017) with minor modifications. The procedures for animal immunization were approved by the Ethics Committee of the University of Valencia (Spain) for Animal Experimentation and Welfare (project 2016/VSC/PEA/00136). Animal manipulation was performed according to Spanish and European laws and guidelines concerning the protection of animals used for scientific purposes (RD 1201, Law 32/2007, and European Directive 2010/63/EU). Briefly, two white rabbits of around 2 kg were subcutaneously immunized with 300 µg of each PeAFP in a 1:1 emulsion of phosphate buffer solution and Freund's adjuvant (Sigma-Aldrich; complete for the first immunization and incomplete for further boosts). The immunogen was given at least four times at intervals of 21 ± 1 days. Blood was taken 10 days after the final injection and allowed to coagulate overnight at 4 °C. The antibody-containing sera were separated by centrifugation (270 × g, 15 min), and antibodies were precipitated twice with 1 volume of saturated ammonium sulfate solution. The precipitated antisera were stored at 4 °C until use. Total proteins from supernatants and purified AFPs were separated on SDS-16% polyacrylamide gels and transferred to Amersham Protran 0.20 µm NC nitrocellulose membranes (GE Healthcare Life Sciences). Protein detection was accomplished using anti-PeAfpA, anti-PeAfpB, and anti-PeAfpC antibodies diluted 1:2500 for PeAfpA and PeAfpC and 1:1500 for PeAfpB. As the secondary antibody, a 1:20,000 dilution of ECL NA934 horseradish peroxidase-conjugated donkey anti-rabbit antibody (GE Healthcare) was used, and chemiluminescent detection was performed with ECL Select Western blotting detection reagent (GE Healthcare Life Sciences) on a LAS-1000 instrument (Fujifilm, Tokyo, Japan). The experiments were repeated at least twice.

Antimicrobial Activity Assays

Growth inhibition assays were performed in 96-well, flat-bottom microtiter plates (Nunc, Roskilde, Denmark) as previously described (Garrigues et al., 2017) with minor modifications. Briefly, 50 µL of fungal conidia (5 × 10^4 conidia/mL) or yeast cells (2.5 × 10^5 cells/mL) in 10% PDB containing 0.02% (w/v) chloramphenicol to avoid bacterial contamination were mixed with 50 µL of twofold-concentrated protein from serial twofold dilutions (maximum final concentration 200 µg/mL). Plates were incubated statically for 48 h at 25 °C for yeasts (S. cerevisiae at 30 °C) and for 72 h at 25 °C for filamentous fungi (A. vanbreuseghemii at 28 °C), except dermatophytes, which were incubated for 120 h. Growth was determined every 2 and 24 h, respectively, by measuring the optical density (OD) at 600 nm with a FLUOstar Omega plate spectrophotometer (BMG Labtech, Ortenberg, Germany), and the mean OD600 and standard deviation (SD) of three replicates were calculated. Dose-response curves were generated from measurements after 48 h for yeasts and 72 h for filamentous fungi (120 h for dermatophytes). These experiments were repeated at least twice. The minimum inhibitory concentration (MIC) is defined as the protein concentration that completely inhibited growth in all the experiments performed.
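A hedged sketch of how a MIC could be read out from such a dose-response plate follows; the OD600 table is invented, and the blank correction and "complete inhibition" threshold are illustrative choices rather than the authors' exact criteria.

```python
# Illustrative MIC readout from endpoint OD600 values of a twofold dilution series
doses = [200, 100, 50, 25, 12.5, 6.25, 0]           # protein, ug/mL (0 = growth control)
od600 = [0.05, 0.05, 0.12, 0.31, 0.58, 0.72, 0.80]  # invented endpoint readings

blank = 0.05  # medium-only background, also invented

def mic(doses, od600, blank, threshold=0.02):
    """Lowest dose whose blank-corrected OD is at or below the threshold,
    i.e., complete growth inhibition."""
    inhibited = [d for d, od in zip(doses, od600) if d > 0 and od - blank <= threshold]
    return min(inhibited) if inhibited else None

print(f"MIC = {mic(doses, od600, blank)} ug/mL")  # -> 100 ug/mL in this toy example
```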
Antimicrobial Activity Assays

Growth inhibition assays were performed in 96-well, flat-bottom microtiter plates (Nunc, Roskilde, Denmark) as previously described (Garrigues et al., 2017) with minor modifications. Briefly, 50 µL of fungal conidia (5 × 10⁴ conidia/mL) or yeast cells (2.5 × 10⁵ cells/mL) in 10% PDB containing 0.02% (w/v) chloramphenicol to avoid bacterial contamination were mixed with 50 µL of twofold-concentrated protein from serial twofold dilutions (maximum final concentration 200 µg/mL). Plates were statically incubated for 48 h at 25 °C for yeasts (S. cerevisiae at 30 °C) and 72 h at 25 °C for filamentous fungi (A. vanbreuseghemii at 28 °C), except dermatophytes, which were incubated for 120 h. Growth was determined every 2 h for yeasts and every 24 h for filamentous fungi by measuring the optical density (OD) at 600 nm using a FLUOstar Omega plate spectrophotometer (BMG Labtech, Ortenberg, Germany), and the OD600 mean and standard deviation (SD) of three replicates were calculated. Dose-response curves were generated from measurements after 48 h for yeasts and 72 h for filamentous fungi (120 h for dermatophytes). These experiments were repeated at least twice. The minimum inhibitory concentration (MIC) is defined as the protein concentration that completely inhibited growth in all the experiments performed.

Hemolytic Activity Assays

The hemolytic activity of the three PeAFPs was determined in 96-well, round-bottom microtiter plates (Nunc) on 1:4 diluted rabbit red blood cells (RBCs) as described (Helmerhorst et al., 1999; Muñoz et al., 2006) with minor modifications. RBCs were harvested by slow centrifugation (100 × g, 15 min) and washed at least three times in 35 mM phosphate-buffered saline (PBS; 150 mM NaCl, pH 7) or phosphate buffer glucose (PBG; 250 mM glucose as osmoprotectant). One hundred microliters of twofold-concentrated protein were mixed with 100 µL of RBCs in triplicate. Plates were incubated for 1 h at 37 °C and subsequently centrifuged (300 × g, 5 min). Eighty microliters were transferred to a new microtiter plate, and the absorbance was measured at 415 nm (FLUOstar Omega, BMG Labtech). Absence of hemolysis and 100% hemolysis were determined in controls containing PBS or PBG, and 0.1% Triton X-100, respectively. The hemolytic activity was calculated as the percentage of total hemoglobin released compared with that released by incubation with 0.1% Triton X-100.

Protection Assays Against Fungal Infection Caused by P. digitatum in Citrus Fruits

For protection assays, three replicates of five untreated, freshly harvested orange fruits (Citrus sinensis L. Osbeck cv. Navelina) were inoculated at four wounds around the equator with 5 µL of a P. digitatum conidial suspension (10⁴ conidia/mL) that had been pre-incubated for 24 h with different concentrations of PeAfpA or P. digitatum AfpB (0.15, 1.5, and 15 µM). Orange fruits were stored at 20 °C and 90% relative humidity. The diameter of infection in each wound was measured daily on consecutive days post inoculation (dpi). Statistical analyses were performed using STATGRAPHICS Centurion 16.7.17. Fisher's least significant difference (LSD) procedure was used to discriminate between the mean percentages of infected wounds in each treatment and the untreated control at each dpi, with 95% confidence.
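As an illustration of the statistical comparison just described, an equivalent Fisher's LSD analysis can be run in R; the agricolae package and the data frame and column names below are assumptions for the sketch, not part of the original analysis, which used STATGRAPHICS.

```r
# A minimal sketch of a Fisher's LSD comparison of % infected wounds,
# run separately at each day post inoculation (dpi).
# 'wounds' is a hypothetical data frame with columns:
#   dpi, treatment (untreated control and protein doses), pct_infected
# install.packages("agricolae")
library(agricolae)

for (d in sort(unique(wounds$dpi))) {
  day_dat <- subset(wounds, dpi == d)
  fit <- aov(pct_infected ~ treatment, data = day_dat)
  lsd <- LSD.test(fit, "treatment", alpha = 0.05)  # 95% confidence
  cat("dpi:", d, "\n")
  print(lsd$groups)  # treatment means with letter groupings
}
```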
Protection Assays Against Fungal Infection Caused by B. cinerea in Tomato Leaves

Tomato leaves (Solanum lycopersicum cv. Marmande) from 21-day-old plants grown at 22 °C with a 16 h light/8 h dark photoperiod were locally inoculated with a conidial suspension of B. cinerea, either alone or in the presence of increasing amounts of AfpB from P. digitatum or PeAfpA from P. expansum. For this, two 20 µL drops of the conidial suspension (5 × 10⁵ conidia/mL), together with the appropriate concentration of each AFP (1, 5, and 10 µM), were applied onto leaf surfaces. Sterile water was used as the negative control. The plants were maintained under high humidity, and the progression of symptoms was measured daily. Leaf damage was quantified by image analysis using the Fiji ImageJ2 package (Schindelin et al., 2012). Statistical analyses were performed using Free Statistics Software, Office for Research Development and Education, version 1.2.1 (Wessa, 2018) to calculate the ANOVA and Tukey's HSD test.
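The ANOVA and Tukey's HSD test described above can also be reproduced in base R; the data frame and column names in the following sketch are hypothetical.

```r
# A minimal sketch of the leaf-damage analysis, assuming a hypothetical
# data frame 'leaves' with columns:
#   damage_pct (% leaf damage from image analysis)
#   treatment  (factor: water control, PeAfpA and AfpB at 1, 5, 10 uM)
leaves$treatment <- factor(leaves$treatment)

fit <- aov(damage_pct ~ treatment, data = leaves)
summary(fit)    # overall one-way ANOVA
TukeyHSD(fit)   # pairwise comparisons with family-wise error control
```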
RESULTS

P. expansum Encodes up to Three Distinct AFPs From Different Classes but Only Secretes the AFP From Class A

In order to detect and isolate any of the three putative P. expansum AFPs, named PeAfpA, PeAfpB, and PeAfpC, from culture supernatants, the fungus was grown in either PDB or PcMM growth media, and time-course supernatants were analyzed by SDS-PAGE (Figure 1A). In silico studies predicted molecular masses of 6.64, 6.57, and 8.12 kDa and pI values of 9.5, 7.6, and 7.7 for PeAfpA, PeAfpB, and PeAfpC, respectively. The largest amount of protein was detected in PcMM supernatants, in which a protein band with an apparent molecular mass of approximately 6 kDa was observed from day 5 until day 10 of growth. No band around 6 kDa was detected in PDB supernatants. To identify the putative PeAFPs produced in PcMM, PMF from an in-gel digestion of the 6 kDa band was performed. A Mascot database search resulted in a statistically significant hit for PeAfpA (score 125; E-value 5.8e−11) with a sequence coverage of 78% (Figure 1B). In line with its predicted chemical properties, PeAfpA was purified from a 10-day P. expansum PcMM supernatant by one-step cation-exchange chromatography, with yields of 125 mg/L. The protein eluted as a single broad chromatography peak at 0.1-0.5 M NaCl, and SDS-PAGE (Figure 1A) and MALDI-TOF MS analyses (Figure 1C) revealed a single protein with a molecular mass of 6619.81 Da, very similar to that obtained by our in silico calculations.

Recombinant Production of PeAFPs in P. chrysogenum

Since only PeAfpA was detected and isolated from the P. expansum culture supernatants, we used the P. chrysogenum-based expression cassette (Sonderegger et al., 2016; Garrigues et al., 2017) to produce the two undetected P. expansum AFPs, PeAfpB and PeAfpC, in P. chrysogenum under the regulation of the strong paf promoter and terminator sequences (Figure 2A). In addition, PeAfpA production in P. chrysogenum was addressed as an internal control. Several positive transformants were obtained and evaluated for production of PeAfpB and PeAfpC, and the clone with the highest recombinant protein production was selected from each for further characterization. The selected producer strains were PCSGB14 for PeAfpB and PCSGC33 for PeAfpC. In contrast, only one positive PeAfpA producer clone, PCSGA29, was obtained. The growth in solid medium of the selected transformants, the reference strain P. chrysogenum Q176, and the parental P. chrysogenum strain used for transformation (Δpaf) is shown in Figures 2B,C. The growth of the PeAfpB and PeAfpC transformants was indistinguishable from that of the control strains, independently of the medium used. In contrast, the PeAfpA transformant showed a significant reduction of colony diameter, more pronounced on PcMM plates, and a drastic defect in conidia production (data not shown). Moreover, this transformant produced only small amounts of PeAfpA, which hindered its use for purification of the recombinant protein. Selected clones for PeAfpB and PeAfpC production in P. chrysogenum were grown in PcMM and, after clearing the culture broth of insoluble matter, the proteins in the supernatants were purified by one-step cation-exchange chromatography. Optimal production was achieved after 72 h, with yields of 32 mg/L for PeAfpB and 62 mg/L for PeAfpC. PeAfpB eluted as a broad chromatography peak between 0.15 and 0.3 M NaCl, while PeAfpC eluted as a sharp single peak at 0.075 M NaCl. SDS-PAGE analysis revealed a protein band in both samples with apparent molecular masses higher than 6 kDa; PeAfpB migrated less than expected from its predicted molecular mass (6.57 kDa) and in comparison with purified PeAfpA (Figure 3A, top panel). The molecular masses of both recombinant proteins were determined by MALDI-TOF MS. Single peaks corresponding to average masses of 6576.07 and 6718.5 Da were detected for PeAfpB and PeAfpC, respectively (Supplementary Figure S2). The experimental mass of PeAfpB is consistent with the calculated theoretical mass of the oxidized protein predicted after cleavage from the PAF SP-pro sequence (6572.2 Da), indicating the presence of three intramolecular disulphide bonds and the absence of other post-translational modifications. By contrast, the average mass detected for PeAfpC was lower than the expected theoretical mass of 8123 Da, suggesting incorrect processing. To verify the identity of the recombinant PeAfpC produced in P. chrysogenum, PMF analysis of the purified protein was performed. A Mascot database search resulted in a statistically significant hit for DUF1962 (a protein with a domain of unknown function) from P. expansum (score 280; E-value 7.9e−21) with a sequence coverage of 53% (Supplementary Figure S3). This protein corresponded to PeAfpC, whose genomic annotation included an internal insertion of 11 extra amino acids that were theoretically present in the three different sequenced P. expansum strains but absent from class C proteins of other fungi (Ballester et al., 2015; Garrigues et al., 2016; Supplementary Figure S3A). Our data demonstrate that this insertion is absent from our purified PeAfpC (Supplementary Figure S3B). These results indicate that PeAfpC has a theoretical pI of 6.87 and a predicted molecular mass of 6.72 kDa, in accordance with that experimentally determined (6718.5 Da) and similar to values reported for other homologs belonging to the same class.
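The mass reasoning used above follows from each disulphide bond removing two hydrogen atoms from the reduced chain; a minimal worked example, with the reduced-chain mass entered as an assumed input:

```r
# A minimal sketch: expected average mass of a protein oxidized to form
# n intramolecular disulphide bonds (each S-S bond removes 2 H atoms).
h_avg         <- 1.00794   # average atomic mass of hydrogen (Da)
reduced_mass  <- 6578.2    # assumed reduced-chain average mass (Da)
n_disulphides <- 3

oxidized_mass <- reduced_mass - n_disulphides * 2 * h_avg
oxidized_mass  # ~6572.2 Da, the theoretical oxidized mass cited above
```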
Immunodetection Confirmed the Absence of PeAfpB and PeAfpC in P. expansum Supernatants

Purified PeAFPs were used to generate polyclonal antibodies. The polyclonal anti-PeAfpA, anti-PeAfpB, and anti-PeAfpC antibodies specifically recognized the corresponding purified protein, and no cross-reactivity among the three proteins was observed (Figure 3A, bottom panel). Purified PAF from P. chrysogenum and AfpB from P. digitatum were also included as representatives of class A and class B proteins, respectively. However, the polyclonal anti-PeAfpA did not recognize PAF, nor did anti-PeAfpB immunoreact with P. digitatum AfpB (Figure 3A, bottom). Specific signals were also detected in the supernatants of the selected PeAFP-producing P. chrysogenum transformant strains PCSGA29, PCSGB14, and PCSGC33 (Supplementary Figure S4). The polyclonal antibodies were then used to analyze the supernatants of P. expansum. In the P. expansum supernatants that were initially analyzed by Coomassie blue staining (Figure 3B, top panel), neither PeAfpB- nor PeAfpC-specific signals could be immunodetected in either PDB or PcMM culture supernatants. As expected, PcMM supernatants reacted only with the anti-PeAfpA antibody, and no immunoreaction was observed in the PDB culture supernatants (Figure 3B, bottom panel), confirming that P. expansum naturally produces only PeAfpA in PcMM under the conditions tested.

PeAfpA and PeAfpB show 53 and 77% amino acid identity with the P. chrysogenum PAF and PAFB, respectively. The predicted tertiary structures of PeAfpA and PeAfpB were very similar to those of their class A and class B homologs, with five antiparallel β-strands forming a compact β-barrel that would theoretically be stabilized by three disulphide bonds following the abcabc pattern, as described for PAF and PAFB (Váradi et al., 2013; Huber et al., 2018). PeAfpC shows 74% amino acid identity with the BP protein used as template. However, the predicted structure of PeAfpC differs significantly from that of BP. BP contains five antiparallel β-strands and four disulphide bonds connecting two compacted β-sheets that form a basic, accessible shallow funnel which may be relevant to protein function (Olsen et al., 2004). Furthermore, BP contains a small α-helix absent from the other classes of AFPs. By contrast, PeAfpC is predicted to have partially lost its tertiary structure compared with BP: it contains only three antiparallel β-strands forming one compacted β-sheet, whereas the second β-sheet and the α-helix present in BP are missing (Figure 4).

FIGURE 2 | Phenotypical characterization of the P. chrysogenum transformant strains producing recombinant PeAFPs. (A) Schematic representation of the expression systems used to produce proteins PeAfpA (blue), PeAfpB (red), and PeAfpC (green) in P. chrysogenum. In gray: paf promoter (Ppaf), paf signal peptide (SP), and paf terminator (Tpaf). (B) Colony morphology of the P. chrysogenum PeAfpA producer strain (PCSGA29), PeAfpB producer strain (PCSGB14), and PeAfpC producer strain (PCSGC33) compared with the wild type Q176 and the parental strain Δpaf after 5 days of growth on PDA and PcMM plates. (C) Growth on solid PDA and PcMM determined by the colony diameter from 3 to 11 days of growth at 25 °C. Plotted data are mean values ± SD of triplicate samples.

Antimicrobial Activity Assays

The three PeAFPs were tested for their antimicrobial activity toward a selection of filamentous fungi that included the P. expansum parental strain and several plant pathogens, such as the citrus fruit-specific P. digitatum and P. italicum, the polyphagous B. cinerea, the rice blast fungus M. oryzae, and the soilborne plant pathogen F. oxysporum.
Furthermore, the mycotoxin producers A. flavus and G. moniliformis, and clinically relevant pathogens such as the skin pathogens T. rubrum and A. vanbreuseghemii and the opportunistic human pathogens C. albicans, C. glabrata, and C. parapsilosis, were also examined. Finally, S. cerevisiae, the PAF producer strain P. chrysogenum, and a strain of A. niger that is particularly sensitive to AFPs were also evaluated. Differences in antimicrobial activity were observed among the three PeAFPs (Table 1 and Figure 5). PeAfpA showed high antifungal activity and inhibited the growth of all tested fungi; its minimum inhibitory concentration (MIC) values varied from 1 µg/mL against P. digitatum to 16 µg/mL against M. oryzae. The Penicillium species tested and A. niger were the most susceptible to PeAfpA, including the producer parental strain P. expansum. By contrast, PeAfpC was inactive against all the fungi at the highest concentrations tested (200 or 64 µg/mL), while PeAfpB showed moderate antifungal activity, with MIC values ranging from 12 µg/mL against the three phytopathogenic Penicillium species to 50 µg/mL against P. chrysogenum, B. cinerea, and A. niger. PeAfpB was not active against either M. oryzae or F. oxysporum at 200 µg/mL, or against G. moniliformis, A. flavus, or A. vanbreuseghemii at 64 µg/mL. PeAfpB was also inactive against yeast species.

FIGURE 3 | [...] Two micrograms of proteins PAF from P. chrysogenum and AfpB from P. digitatum were added as controls to test cross-reactivity among the PeAFP antibodies. Immunoblot analyses of these samples were performed using the specific anti-PeAfpA, anti-PeAfpB, and anti-PeAfpC antibodies generated in this work. (B) SDS-PAGE (top) and Western blot analyses (bottom) of P. expansum culture supernatants (10 µL of 10× supernatants loaded per lane) after 3, 5, 7, and 10 days of growth in PDB and MM. Immunoblot analyses of P. expansum supernatants were performed using the three specific PeAFP antibodies. All SDS-PAGE analyses were visualized by Coomassie blue staining. M: SeeBlue Pre-stained protein standard.

FIGURE 4 | [...] Proteins belonging to different phylogenetic classes are highlighted in different colors: class A proteins are represented in green, while classes B and C are shaded in orange and blue, respectively. Conserved intra-class motifs are shadowed following this color code, cysteine patterns are shadowed in red, and amino acids strongly conserved between classes are shadowed in black. (B) Comparison of the predicted tertiary structures of PeAfpA, PeAfpB, and PeAfpC from P. expansum with the three-dimensional structures of the proteins PAF and PAFB from P. chrysogenum and BP from P. brevicompactum, used as templates, respectively.

PeAFPs Showed No Hemolytic Activity

Hemolytic activity assays determine the cytotoxicity of specific proteins and peptides against eukaryotic cells through their ability to lyse RBCs. The hemolytic activity of the three PeAFPs, with the cytolytic peptide melittin as positive control, was determined using a high ionic strength phosphate-NaCl buffer (PBS) and also a low ionic strength isotonic glucose-phosphate buffer (PBG) (Helmerhorst et al., 1999). None of the PeAFPs showed hemolytic activity at any of the concentrations tested (1-100 µM), either in the presence of NaCl as in PBS (Figure 6A) or of glucose (Figure 6B), in contrast to the hemolysis caused by melittin at 25 µM.
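For reference, the hemolysis percentage defined in the Methods (A415 readings anchored by buffer-only and Triton X-100 controls) reduces to a one-line calculation; the values below are illustrative only.

```r
# A minimal sketch of the % hemolysis calculation:
#   0% anchor: PBS/PBG-only control; 100% anchor: 0.1% Triton X-100.
pct_hemolysis <- function(a415_sample, a415_buffer, a415_triton) {
  100 * (a415_sample - a415_buffer) / (a415_triton - a415_buffer)
}

pct_hemolysis(a415_sample = 0.08, a415_buffer = 0.06, a415_triton = 1.20)
```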
PeAfpA Confers Protection Against P. digitatum Infection in Orange Fruits

Based on the in vitro antimicrobial results, experiments were designed to evaluate the ability of PeAfpA to control the green mold disease caused by P. digitatum infection of citrus fruit. AfpB from P. digitatum, previously described as a highly active AFP in vitro (Garrigues et al., 2017), was also included as a potential candidate to control green mold. Figure 7 shows the effects of different concentrations of AfpB and PeAfpA. The latter controlled experimental P. digitatum infections when used at concentrations as low as 0.15 µM at late dpi. In contrast, AfpB showed no significant protection at any of the concentrations tested (p < 0.05).

PeAfpA Confers Protection Against B. cinerea Infection in Tomato Leaves

Experiments were designed to assess the effectiveness of PeAfpA against infection caused by the polyphagous fungus B. cinerea in vivo, in a detached-leaf assay. We recently showed the effectiveness of P. digitatum AfpB at a concentration of 10 µM in controlling B. cinerea on tomato leaves (Shi et al., unpublished), and thus AfpB was included here as a positive control and for comparison of antifungal efficacy with PeAfpA. The development of disease symptoms on the detached leaves was monitored visually. Four days after inoculation, lesions were observed on the fungus-infected leaves that had not been treated with proteins (control) (Figure 8A). However, lesions were not observed, or were significantly smaller, on leaves treated with PeAfpA (Figures 8A,B). This protective effect was dose dependent, remained effective even at concentrations as low as 1 µM, and was greater than that afforded by AfpB (Figures 8A,B). Interestingly, the protection afforded by both PeAfpA and AfpB was also effective on established infection foci when the proteins were applied 6 h after the conidia (Figure 8C). These data suggest that both proteins can be used to treat already infected plants. The protective effect of these AFPs was also observed in whole-plant assays in which two leaves per plant were inoculated (Figure 8D). The control plants showed complete necrosis of the inoculated leaves and mild systemic signs of decay, while the AFP-treated plants showed little or no infection symptoms.

DISCUSSION

In this study, we detail the differential patterns of production of the three AFPs from the phytopathogenic fungus P. expansum. PeAfpA, PeAfpB, and PeAfpC are new members of classes A, B, and C, respectively, and here we experimentally characterize their antifungal activity. Only PeAfpA was detected in culture supernatants of P. expansum grown in MM with sucrose as carbon source, while no protein was observed in the nutritionally rich medium PDB (potato infusion + glucose). The class A member PAF is also abundantly secreted by P. chrysogenum, but its production depends on the type of carbon source present in the growth medium (Marx et al., 1995). AFP, another representative of class A, was successfully isolated from the culture supernatants of A. giganteus grown in a rich medium based on corn starch and beef extract. Expression studies performed with afp and paf do not indicate a general pattern for the two genes, except that the maximum mRNA and protein yields are reached during the stationary growth phase, after 70-90 h of cultivation (Meyer and Stahl, 2002; Marx, 2004).
Our time-course experiments for protein production showed that PeAfpA was detected in MM P. expansum supernatants from day 5, and high yields of the protein (125 mg/L) were reached from 10-day-old supernatants. Thus, cultivation conditions seem to regulate PeAfpA production, since the protein was detected neither by Coomassie staining nor by anti-PeAfpA antibodies in PDB supernatants. PeAfpA production at such long incubation times in MM suggests that it might be linked to nutrient limitation, as described for PAF and AFP, and that glucose might suppress production (Marx et al., 1995). Remarkably, a given fungal strain might produce different AFPs depending on the culture broth, as described for N. fischeri NRRL 181: the class A NFAP was isolated when the fungus was grown in a complex medium with starch, beef extract, peptone, NaCl, and ethanol for 7 days (Kovács et al., 2011), while NFAP2, but not NFAP, was isolated from a 7-day-old MM supernatant with sucrose as carbon source (Tóth et al., 2016).

FIGURE 5 | In vitro inhibitory activity of the three PeAFPs against filamentous fungi and yeasts. Dose-response curves comparing the antifungal activity of PeAfpA (blue diamonds), PeAfpB (red squares), and PeAfpC (green triangles) against the filamentous fungi P. chrysogenum, F. oxysporum, P. expansum, and B. cinerea, and the pathogenic yeasts C. albicans and C. glabrata. Dose-response curves show mean ± SD OD600 of triplicate samples after 72 h at 25 °C for fungi and 48 h at 28 °C for yeasts.

PeAfpB and PeAfpC were not detected under any of the conditions tested. Instead, both proteins were produced using a P. chrysogenum-based expression system (Sonderegger et al., 2016) and purified from supernatants of recombinant P. chrysogenum strains. This expression system comprises the strong paf gene promoter, the paf pre-pro sequence for correct protein processing and secretion, and the paf gene terminator (Marx et al., 1995). The system has allowed the production of high amounts of several AFPs (Sonderegger et al., 2016, 2017), including P. digitatum AfpB and P. chrysogenum PAFB, which could not be isolated from the supernatants of the corresponding parental strains (Garrigues et al., 2017; Huber et al., 2018). Here, the heterologous production of PeAfpB and PeAfpC in P. chrysogenum resulted in yields of 32 and 62 mg/L, respectively, confirming the suitability of the system as a platform for the production of small cysteine-rich AFPs. Further studies focusing on gene expression patterns will reveal whether the absence of PeAfpB and PeAfpC from the P. expansum culture broth results from strict regulation during fungal growth, similar to the reports for AfpB and PAFB (Huber et al., 2018), or, in contrast, from non-functional or unexpressed genes. Peptide mass fingerprinting of the recombinant PeAfpC revealed that this protein lacks the 11-amino-acid insertion that was predicted by in silico annotation of three different sequenced strains of P. expansum (Ballester et al., 2015). Genes coding for AFPs from classes A and B have two introns, whereas genes coding for class C AFPs have only one. The predicted insertion within the PeAfpC amino acid sequence correlates with an incorrect annotation of the single intron present in the class C AFP-encoding gene of P. expansum. Thus, PeAfpC is similar to other characterized and putative class C AFPs regarding size and chemical properties. Two of the three PeAFPs are effective against filamentous fungi, while PeAfpC did not show any antimicrobial activity under the conditions tested.
One possible explanation for these different activity patterns lies in their distinct physico-chemical properties, especially the positive net charge at pH 7, which would correlate with their ability to bind fungal membranes. PeAfpA, which showed the highest antifungal activity, is a very cationic protein with a pI of 9.47. In contrast, PeAfpB (pI = 7.4) showed moderate antifungal activity against some of the fungi tested, but not against yeasts, and a lower antifungal activity than its class B homolog AfpB from P. digitatum (pI = 9.06). PeAfpC (pI = 6.87), in turn, was inactive against all fungi and yeasts tested in this work. The antifungal activity of only two other class C representatives has been reported: the BP protein from P. brevicompactum (Seibold et al., 2011) and Pc-Arctin from P. chrysogenum (Chen et al., 2013). The former showed antifungal activity against S. cerevisiae, and no other fungal species were evaluated (Seibold et al., 2011), while Pc-Arctin was effective against some plant-pathogenic fungi (Chen et al., 2013). The predicted 3D structure of PeAfpC differs significantly from the one experimentally determined for its class C homolog BP from P. brevicompactum. A loss of three-dimensional organization in the in silico predicted structure of PeAfpC might explain the loss of its antifungal activity. However, it has been reported that the structural features of AFPs are not exclusively responsible for their antifungal activities. This is the case for P. digitatum AfpB, for which we demonstrated that thermal denaturation did not affect antifungal activity (Garrigues et al., 2017), and for PAF from P. chrysogenum, for which the change of a single amino acid did not affect the 3D structure but resulted in a complete loss of antifungal efficacy (Sonderegger et al., 2017). Recently, the antiviral activity of some AFPs was documented for the first time (Huber et al., 2018), suggesting that the properties of AFPs go beyond their traditional antifungal activity. Further structural and functional characterization of PeAfpC is currently in progress.

FIGURE 6 | Hemolytic activity of the three AFPs from P. expansum. Analyses were conducted in PBS (150 mM NaCl) (A) and in PBG (250 mM glucose) (B). Proteins were used at the concentrations indicated (from 1 to 100 µM). For the PeAFPs, 1, 10, 25, and 100 µM correspond to 6.6, 66, 166, and 662 µg/mL for PeAfpA; 6.5, 65, 164, and 657 µg/mL for PeAfpB; and 6.7, 67, 168, and 678 µg/mL for PeAfpC, respectively. The cytolytic peptide melittin (25 µM) was included for comparison. The hemolytic activity is given as the mean ± SD of the percentage of mammalian red blood cell (RBC) hemolysis (three replicates), compared with the positive control in the presence of the detergent Triton X-100 (regarded as 100% hemolysis).

PeAfpA is the most potent AFP from P. expansum. It is highly effective against relevant phytopathogenic fungi that cause postharvest decay and plant diseases. Moreover, we have shown that PeAfpA exerted significant protection against P. digitatum in oranges and against B. cinerea in tomato leaves.

FIGURE 8 | [...] Concentrations of 1, 5, and 10 µM correspond to 6.6, 32, and 66 µg/mL, respectively. Ten micromolar (66 µg/mL) AFP was applied in panels (C,D). Pictures were taken at 4 days post inoculation. The graph in panel (B) is a box plot of the percentage of leaf damage quantified from at least six leaves per treatment from two independent assays. Asterisks denote statistically significant differences in comparison to control values (ANOVA and Tukey's HSD test; **p < 0.001; *p < 0.05).
The application of antimicrobial peptides and proteins in postharvest conservation and crop protection has been described (Coca et al., 2004; Marcos et al., 2008). To our knowledge, AFP from A. giganteus is the only AFP previously shown to protect plants from fungal infection, albeit at higher protein doses than those used in our assays. Similar to the in vivo experiments described here, rice plants were protected from Magnaporthe grisea infection by direct application of 10 µM AFP to rice leaves, either dropwise or by spray (Vila et al., 2001), and geranium plants were protected from B. cinerea (Moreno et al., 2003). A. giganteus AFP at a concentration of 100 µg/mL, pre-incubated with tomato seedlings, also prevented the infection of tomato roots by the plant-pathogenic fungus F. oxysporum f. sp. lycopersici (Theis et al., 2005). Moreover, AFP sprayed on wounded bananas artificially infected with Alternaria alternata was able to partly or totally inhibit the growth of the phytopathogen at concentrations in the range of 15-50 µg/mL (Barakat, 2014). For crop protection, strategies based on the heterologous expression of the A. giganteus afp gene conferred enhanced resistance to transgenic rice plants against the blast fungus M. oryzae (Coca et al., 2004), to transgenic wheat plants against the powdery mildew fungus Erysiphe graminis f. sp. tritici and the leaf rust fungus Puccinia recondita f. sp. tritici (Oldach et al., 2001), and to transgenic olive plants against the root-infecting fungal pathogen Rosellinia necatrix (Narváez et al., 2018). Our results show that PeAfpA is highly effective in controlling P. digitatum and B. cinerea infections in citrus and tomato at concentrations as low as 0.15-1 µM. Both fungi are of considerable economic importance. Severe fruit losses due to Penicillium decay have an important impact on agriculture, especially decay caused by P. digitatum, one of the main postharvest pathogens of citrus fruits. P. digitatum specifically infects citrus fruits through peel injuries produced in the field, the packing house, or during the fruit commercialization chain, causing the green mold disease (Palou, 2014). By contrast, the impact of B. cinerea in many areas is due to its broad host range, causing severe damage both pre- and postharvest (Dean et al., 2012). Despite the effectiveness of commercial chemical fungicides, concerns about environmental contamination, the emergence of resistant strains, and the human health risks associated with fungicide residues lead to the search for new control strategies. Thus, PeAfpA might represent a powerful alternative for the control of phytopathogenic fungi. Moreover, considering the broad in vitro antifungal activity of PeAfpA against phytopathogenic fungi and mycotoxin producers, it seems feasible that the protein may also be effective in other pathosystems not tested in this study. Our results also point to the heterologous expression of the P. expansum afpA gene in transgenic plants to confer disease resistance. We previously described a very promising efficacy of the synthetic hexapeptide PAF26 and derivatives in citrus fruit protection (López-García et al., 2003; Muñoz et al., 2007), although the high cost of synthetic peptide production and the failure to produce PAF26 through biotechnology (unpublished data) pose an obvious limit to postharvest applications.
By contrast, different expression systems, including the one used here, allow effective AFP production (Sonderegger et al., 2016; Garrigues et al., 2017; Patiño et al., 2018; Shi et al., unpublished), enabling the use of AFPs in crop and postharvest protection. P. digitatum AfpB, identified as an in vitro highly active AFP against its own producer fungus (MIC = 3.2 µg/mL) (Garrigues et al., 2017), showed no in vivo effect in oranges, although it was effective against B. cinerea in tomato leaves (MIC = 12.5 µg/mL). Until recently, it was assumed that AFPs were not active against their producer fungus. However, in addition to P. digitatum AfpB, PAFB (Huber et al., 2018) and now PeAfpA are effective toward P. chrysogenum and P. expansum, respectively. In vitro, AFP growth inhibition of the producer fungus is induced by adding the protein exogenously to the culture medium. Whether in vivo activity parallels that observed in in vitro tests deserves further study. Interestingly, PeAfpA is also highly active against human fungal pathogens, including dermatophytes (MIC 4 µg/mL), clinically important Candida species (MIC values 4-8 µg/mL), and mycotoxin-producing fungal strains (MIC 4 µg/mL), suggesting its potential application also in medicine and food preservation. The use of antimicrobial peptides for the prevention and treatment of fungal skin infections, like those caused by T. rubrum and A. vanbreuseghemii, has been proposed. AFPs such as PAF and PAFB from P. chrysogenum were active against T. rubrum with MIC values similar to those described here for PeAfpA (Huber et al., 2018). Nevertheless, further characterization of AFPs in in vivo models is mandatory to confirm the potential of AFPs as novel therapies to treat dermatological diseases. Originally, AFPs were described as highly effective against filamentous fungi but not active against yeasts or bacteria (Marx et al., 2008; Meyer, 2008). However, the anti-yeast activity of PAF was recently re-evaluated, and its effectiveness against S. cerevisiae and C. albicans was reported, as was that of PAFB (Huber et al., 2018). PAFB was the most active against both yeast species, with MIC values similar to those obtained here for PeAfpA. At present, NFAP2 is the most potent anti-yeast AFP described so far, with MIC values in the range of 0.2-1.5 µg/mL (Tóth et al., 2016). Remarkably, this protein, which seems to be the first member of a new, phylogenetically distinct fourth group of AFPs, was ineffective against filamentous fungal isolates, whereas the opposite antifungal profile was determined for the class A NFAP (Virágh et al., 2014). The toxicity of antimicrobials should also be considered for successful application. The toxicity of the PeAFPs was measured as their cytolytic activity against RBCs. The hemolytic activity of the three proteins was negligible under the conditions tested, even in assays conducted at low ionic strength isotonic conditions, which are considered more sensitive for detecting the hemolytic activity of cationic peptides (Helmerhorst et al., 1999). A lack of cytotoxicity was previously reported for PAF (Szappanos et al., 2005; Palicz et al., 2013) and A. giganteus AFP (Szappanos et al., 2006), and recently for P. digitatum AfpB (Garrigues et al., 2017) and P. chrysogenum PAFB (Huber et al., 2018), suggesting that AFPs can be regarded as safe.
CONCLUSION

To conclude, the high antifungal efficacy against human and plant pathogens and mycotoxin-producing fungi, together with the protection observed here upon application of PeAfpA for postharvest conservation of orange fruits and for plant protection on tomato leaves, suggests that PeAfpA is a promising candidate for crop and postharvest protection and for its application in medicine or food security.

AUTHOR CONTRIBUTIONS

MC, FM, JM, and PM conceived and designed the study. PM coordinated the study and prepared the first draft of the manuscript. SG and FM produced AFPs in P. chrysogenum. SG and PM produced AFP in P. expansum. SG and JM performed antimicrobial experiments and structural modeling. MG and SG performed Western blot analyses, hemolytic assays, and protection assays in citrus fruits. LC and MC carried out protection assays in tomato plants. All authors read, revised, and approved the final manuscript.

FUNDING

This work was funded by grants BIO2015-68790-C2-1-R (to JM and PM) and BIO2015-68790-C2-2-R (to MC) from the "Ministerio de Economía y Competitividad" (Spain) (MINECO/FEDER funds), grant PROMETEO/2018/066 (to JM and PM) from the "Generalitat Valenciana" (Spain), and the Austrian Science Fund grant P25894-B20 (to FM). SG was the recipient of a predoctoral scholarship (FPU13/04584) within the FPU program of the "Ministerio de Educación, Cultura y Deporte" (MECD, Spain). We acknowledge support of the publication fee by the CSIC Open Access Publication Support Initiative through its Unit of Information Resources for Research (URICI).
Elucidating the Visual Snow Spectrum: A Latent Class Analysis Study

Objective. People with visual snow syndrome (VSS) experience a range of perceptual phenomena in addition to visual snow (VS; flickering pinpricks of light throughout the visual field). We investigated the patterns of perceptual phenomena associated with VSS in a large sample of people without prior knowledge of VSS or its associated symptoms.

Methods and Measures. Two thousand participants completed a screening questionnaire assessing the frequency and severity of perceptual phenomena associated with VSS. We used latent class analysis (LCA), a clustering technique which identifies qualitatively different subgroups within a given population, to investigate whether the presence (or absence) of VS impacted class structure.

Results. Of 1,846 participants included for analysis, 41.92% experienced VS some of the time, including 4.49% who had VSS without prior knowledge. The mean number of perceptual phenomena experienced was 2.03. Optimal four-class LCA solutions did not substantially differ whether or not VS was included in the model; instead, classes differed in the frequency and total number of symptoms experienced.

Discussion. Our results suggest that the perceptual phenomena associated with VSS are likely to be common in the general population and do not necessarily indicate an underlying pathology. We also showed that visual snow itself does not explain the presence of other perceptual phenomena.

Introduction

Visual snow (VS) is a perceptual phenomenon characterized by persistent flickering noise in the visual field. It is often described as a sensation of pixelation, or as the "snow" of an out-of-tune analogue television, from which it takes its name. It is also the primary perceptual experience associated with visual snow syndrome (VSS). People with VSS experience visual snow constantly, along with at least two additional perceptual phenomena from a list including nyctalopia (poor night vision), photophobia (pain or discomfort in bright light), palinopsia (trailing after-images), and a range of enhanced entoptic phenomena [1, 2]. VSS can be either lifelong or acquired. Research suggests that as many as 36% of people with VSS have experienced it for as long as they can remember [3]. However, many people with lifelong VSS do not realize that their perceptual experience is unusual and thus do not pursue a diagnosis. Consequently, this group is often not accounted for in research findings.

Research has often described VSS as a spectrum-based disorder because it is associated with a range of perceptual experiences and levels of severity [4, 5]. People with confirmed (i.e., medically or self-diagnosed) VSS report a variety of impacts, from minor annoyance or fascination to major disruptions to quality of life [3, 6]. The number of additional symptoms experienced, and the frequency with which they are experienced, also varies substantially between patients [4, 7]. There is also growing anecdotal evidence from online support groups associated with VSS that some people self-describe as having VSS despite never or rarely experiencing visual snow. These people tend to experience a wide range of symptoms associated with VSS but do not meet the current diagnostic criteria, which require the continuous experience of visual snow [8].
Research to date has not investigated whether the visual snow spectrum might extend beyond the existing diagnostic criteria to include a range of normal perceptual experiences in the general population. Several recent studies have provided evidence that both visual snow and VSS are somewhat common in the general population: around 40% of people experience visual snow at least some of the time, while a 2020 prevalence estimate shows that 2.2% of people meet the diagnostic criteria for VSS without prior knowledge [6, 7, 9]. These studies suggest that both lifelong and undiagnosed VSS may be highly prevalent and that VSS may not always have noticeable impacts on the people who experience it. Given the large proportion of people who report experiencing visual snow some of the time, and the fact that most other VSS symptoms are experienced by most people from time to time, it seems feasible that an extended visual snow spectrum might exist. Identifying and understanding population-level experiences of perceptual phenomena associated with VSS may be instructive in understanding how a wide range of perceptual experiences, with a variety of known and unknown mechanisms, cooccur so consistently in people with VSS.

The Present Study

The present study investigated the extent to which perceptual experiences associated with VSS cooccur in the general population. We used latent class analysis (LCA) to investigate subgroups of perceptual experience, based on patterns of presence, frequency, and cooccurrence of the perceptual phenomena that make up the VSS diagnostic criteria. Given anecdotal evidence suggesting that many people experience a broad range of additional phenomena associated with VSS in the absence of visual snow, we also investigated whether the presence of visual snow was essential to model classification, as it is to the VSS diagnostic criteria.

Methods

This study was approved by The University of Melbourne Human Research Ethics Committee and was conducted in accordance with the Declaration of Helsinki. Participants provided informed consent via an online form prior to the commencement of the study.

3.1. Participants. We recruited a sample of 2,000 naïve participants via Amazon Mechanical Turk (MTurk). Participants were required to be 18 years of age or older and to be fluent in English. All participants were reimbursed US $1 for their participation. The study was advertised to 1,000 participants at a time across two dates in 2022 to ensure that it remained near the top of lists of available "tasks" on MTurk. Participants who engaged in the study when it was first advertised were prevented from participating again. The study was advertised under the title "Answer a survey about your vision (5-20 mins)." Aside from practical information relating to payment, inclusion criteria, and the time we expected the task to take, the following text was used to advertise the study:

You will be asked about some perceptual experiences you may or may not have had, and about some of your medical history, because conditions such as migraine can impact visual perception.

It is possible that recruiting participants via this method introduced a degree of selection bias, as participants with a particular interest in their visual experience (due to concern, fascination, or otherwise) may have been more likely to choose to participate. We did not attempt to verify that our participants were a representative sample of the general population, but rather chose to use the largest sample we reasonably could.
3.1.1. Response Screening. Participants were screened to assess for responses from bots, responses which did not meet inclusion criteria, and bad-faith responses. In addition to reCAPTCHA and a series of standard attention checks, all free-text answers were assessed to determine whether responses were reasonable. In total, 92.3% of participants (n = 1,846) were considered valid respondents and included for analysis.

3.2. Measures. Participants completed a screening questionnaire assessing the presence, frequency, and perceived impact of the perceptual phenomena currently included in the VSS diagnostic criteria. The questionnaire was adapted from the work of Kondziella et al. [9]. The only substantive change was that participants who stated they experienced a given perceptual phenomenon were asked to rate how often they experienced it and how often they felt their life was impacted by it (daily, weekly, monthly, several times a year, or yearly). All language describing perceptual experiences was repeated exactly from Kondziella et al.'s work.

This paper represents the primary analysis of these data. However, data collection for this study formed part of a broader project, and participants who met the criteria for VSS engaged with additional scale-based measures related to sensory sensitization, which are not reported here.

3.2.1. Diagnostic Categorization. Participants were categorized as experiencing visual snow, VSS, and migraine with or without aura in accordance with the categorization process described by Thompson et al. [7].

3.3. Statistical Analyses

3.3.1. Software. All analyses were conducted using R 4.0.3, and all graphics were generated using ggplot2 [10, 11]. Data preprocessing and scale scoring were conducted using the psych package [12]. Latent class analysis was conducted using poLCA, and graphs for the latent class analysis were generated using open-access code sourced from GitHub [13, 14]. Cohen's weighted kappa was calculated using vcd [15]. All other analyses, including demographic between-group comparisons, were conducted using base R and its associated stats package.

3.3.2. Latent Class Analysis. Latent class analysis is a technique which identifies qualitatively different subgroups within a population based on shared characteristics. Its core assumption is that membership of unobserved classes can explain patterns of scores across survey questions or scales [16]. We sought to identify classes of participants based on experiences of perceptual phenomena commonly described in the VSS literature. The phenomena included for analysis were visual snow, photophobia, nyctalopia, self-light of the eyes, blue-field entoptic phenomenon, halos, palinopsia, and excessive floaters.

We began by estimating a one-class model and added classes up to eight. Starting values were determined randomly. To ensure model stability, each model was estimated 100 times and the solution with the best log-likelihood was used. All models had suitable entropy (>0.8) [16]. For each model, we examined fit based on (a) the Bayesian information criterion (BIC), (b) the sample-size-adjusted BIC (SABIC), (c) the Akaike information criterion (AIC), and (d) G², the likelihood ratio statistic. In all cases, lower values indicate better model fit. Because there was a discrepancy between fit statistics, the final model selection was made by balancing good fit with the model that most logically explained the data [16].
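A sketch of this model-fitting loop is given below. The data frame 'symptoms' and its column names are hypothetical stand-ins for the frequency-coded items, and the SABIC and relative entropy are computed by hand, since poLCA does not return them directly.

```r
# A minimal sketch, assuming 'symptoms' is a data frame of items coded
# as positive integers (1 = absent/never, upward), one column per item.
library(poLCA)

f <- cbind(visual_snow, photophobia, nyctalopia, self_light,
           blue_field, halos, palinopsia, floaters) ~ 1

fit_stats <- lapply(1:8, function(k) {
  set.seed(k)  # reproducible random starting values
  res <- poLCA(f, data = symptoms, nclass = k, nrep = 100,
               maxiter = 5000, verbose = FALSE)

  # Sample-size-adjusted BIC (not reported by poLCA):
  sabic <- -2 * res$llik + res$npar * log((res$N + 2) / 24)

  # Relative entropy (1 = perfect classification); undefined for k = 1.
  p <- res$posterior
  entropy <- if (k == 1) NA else
    1 - sum(-p * log(p + 1e-12)) / (res$N * log(k))

  data.frame(classes = k, AIC = res$aic, BIC = res$bic,
             SABIC = sabic, Gsq = res$Gsq, entropy = entropy)
})
do.call(rbind, fit_stats)
```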
Data Sharing. The data that support the findings of this study are available from the corresponding author upon reasonable request.

Results

Of the 1,846 participants included for analysis, 774 (41.92%) experienced visual snow at least some of the time, including 83 (4.49%) who met the International Classification of Headache Disorders (ICHD) criteria for visual snow syndrome (without prior knowledge). Hallucinogen persisting perceptual disorder (HPPD) could not be ruled out in 15 cases of VSS, where drug use immediately preceded the onset of perceptual experiences. These participants are not included in the VSS count.

Participants' mean age was 38.34 years, with a range of 18 to 80 years. 1,084 participants identified as male, 746 identified as female, 7 identified as nonbinary, and 9 preferred not to say. The mean number of perceptual phenomena experienced (including visual snow) was 2.03 (SD = 1.79). A summary of sample characteristics is presented in Table 1, and Figure 1 presents the frequency distribution of the number of perceptual phenomena experienced in our sample.

4.1. Between-Group Comparisons. We investigated whether there were differences between participants with visual snow, VSS, cases of VSS where HPPD could not be ruled out, and those without visual snow. Table 2 presents data pertaining to perceptual phenomena and comorbid conditions per group.

First, we investigated whether there were differences in the number of perceptual phenomena experienced between participants in each category. A Kruskal-Wallis H test showed significant differences, χ²(3) = 537.67, p < 0.001, η² = 0.29. Post hoc, Holm-corrected pairwise Wilcoxon tests revealed no significant difference between participants where HPPD could not be ruled out and those with VSS (p = 0.55). However, all other combinations of variables showed significant differences at the p < 0.001 level.

Next, we used a series of chi-squared tests of independence to investigate whether there were associations between group membership and comorbid conditions (Table 2).

Note (Table 2): The participants with visual snow here also include the participants with VSS and the participants with possible HPPD. Participants with possible HPPD, i.e., those who met the criteria for VSS and had a history of drug use immediately prior to the onset of their perceptual phenomena, are not included in the VSS category.
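The between-group tests reported above are all available in base R; a minimal sketch, with hypothetical variable names, follows.

```r
# A minimal sketch of the between-group comparisons, assuming 'dat' has
# one row per participant with:
#   group      (factor: no VS, VS only, VSS, possible HPPD)
#   n_symptoms (number of perceptual phenomena experienced)
#   migraine   (logical or factor: migraine status)

kruskal.test(n_symptoms ~ group, data = dat)

# Post hoc, Holm-corrected pairwise Wilcoxon tests:
pairwise.wilcox.test(dat$n_symptoms, dat$group, p.adjust.method = "holm")

# Chi-squared test of independence, e.g., group vs. migraine status:
chisq.test(table(dat$group, dat$migraine))
```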
4.2. Latent Class Analysis. Our first latent class model included data on the presence and frequency of all perceptual phenomena associated with VSS. Based on the various fit statistics presented in Table 3, we selected a four-class model as the most parsimonious fit for the data. Inconsistencies in fit statistics are not unusual in LCA, and Weller et al. suggest that in this scenario the SABIC should be relied upon in decision-making [16].

The four classes in our model include (1) a high-intensity class, comprising people who experience many perceptual phenomena associated with VSS, with a high degree of frequency; (2) and (3) two medium-intensity classes, comprising people who experience some perceptual phenomena associated with VSS, with less frequency in class 3 than in class 2; and (4) a low-intensity class, comprising people who rarely experience phenomena associated with VSS.

The composition of each of our four classes is illustrated in Figure 2. Here, each box represents a different class. Along the x-axis are perceptual phenomena. The proportion of each bar that is shaded in each color indicates the conditional probability that someone who experiences that perceptual phenomenon with that frequency will be included in that class. Importantly, a different scale was used to assess the frequency of visual snow, as this question was taken directly from the work of Kondziella et al. [9]. Initial estimates of class population shares and final participant class allocations were based on posterior probabilities. Participants in the high-intensity class tend to experience many perceptual phenomena daily. However, not all participants in this class experience visual snow, and many of those who do experience visual snow do not experience it continuously. The fact that half (n = 937, 50.76%) of participants fell in the low-intensity class, and that many of these participants experienced some VSS-related phenomena (albeit rarely), shows that the experience of perceptual phenomena associated with VSS is common in the general population.

We compared the model's classifications against the VSS diagnostic criteria. In total, 9 participants who met the diagnostic criteria for VSS were included in the high-intensity class, along with 104 participants who did not experience VS and 59 who experienced visual snow in the absence of the full syndrome. Meanwhile, 47 participants who met the VSS diagnostic criteria were in the first medium-intensity class.
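The class allocations and their comparison against the diagnostic criteria can be derived directly from the fitted model; a minimal sketch, assuming 'res' is the fitted poLCA object from the earlier sketch and 'diagnosis' a hypothetical per-participant factor:

```r
# Posterior (modal) class allocation; poLCA also returns this directly
# as res$predclass.
predclass <- apply(res$posterior, 1, which.max)

prop.table(table(predclass))                     # class population shares
table(class = predclass, diagnosis = diagnosis)  # vs. diagnostic status
```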
Discussion In this study, we used latent class analysis to investigate population distributions of perceptual phenomena associated with visual snow syndrome (VSS).We demonstrated that perceptual phenomena associated with VSS are likely to be common in the general population; approximately half of the participants experienced some perceptual phenomena associated with VSS, some of the time.Our results also indicate the presence of a visual snow spectrum, which includes perceptual experiences that extend beyond the existing diagnostic criteria.Using all perceptual phenomena associated with VSS as predictors, we identified a four-class LCA model as the most parsimonious fit for our data.The four classes were based on subgroups of participants whose perceptual experiences increased in number and frequency across classes, resembling a spectrum.We also showed that visual snow itself was not essential to model classification.When we removed visual snow as a predictor variable, a four-class model remained the most parsimonious fit for the data, and the four classes were similar in size and composition.Once participants were allocated to classes based on posterior probabilities, we used Cohen's weighted Kappa as a measure of agreement between the two models, with the result indicating substantial agreement.Few participants moved between the high-and lowintensity classes when visual snow was removed from the model.This suggests that visual snow itself may not be key to explaining patterns of perceptual experiences and may not be the defining feature of the spectrum we identified.However, it is important to note that visual snow was measured using a different scale than the other perceptual experiences we assessed; this may have artificially diminished its importance in the model. Is Visual Snow Normal? Our results show some similarities to existing work in the field and indicate that both visual snow and other perceptual phenomena associated with VSS are common perceptual experiences.For example, the percentage of our participants who experienced visual snow at least some of the time is remarkably similar to the findings of Costa et al., whose work shows that 44% of people experience visual snow at least 10% of the time [6].Meanwhile, 41.92% of our participants reported experiencing visual snow some of the time.These estimates also correspond with our previous work [7].Together, they provide further evidence that visual snow is a common perceptual experience for which most people do not require clinical attention. Our results also correspond with the only previous LCA conducted on data related to VSS.In a study of 1,060 participants with confirmed VSS, Puledda et al. 
demonstrated that additional symptoms of VSS do not present in specific combinations, but that floaters, palinopsia, and photophobia are "almost invariably present."[4].This matches well with our own findings.Our model did not identify classes based on cooccurrence of specific symptoms, but rather based on the total number and frequency of symptoms experienced.Floaters and photophobia were also the most common perceptual phenomena in our sample.However, while Puledda et al.'s latent class models provided support for the current VSS diagnostic criteria [7], our results indicate that the range of perceptual experiences associated with VSS goes beyond the existing diagnostic criteria.Almost half of our participants fell in our first three classes, suggesting that it is common to experience perceptual phenomena with some degree of regularity.While some people find these phenomena impactful in their day-today lives, the presence of the phenomena themselves may not necessarily indicate an underlying clinical pathology. Like other work showing visual snow spectrum phenomena are common in the general population, this study was conducted online via population screening.To date, work of this kind has not been validated against traditional diagnostic techniques, and it is possible that these studies have captured less impactful perceptual experiences than those identified by clinical diagnosis.However, there is as yet no specific test for VSS and clinical diagnosis is by exclusion.It is also important to note that recruitment via online tools involves participants self-selecting into research based on brief descriptions of what is being studied; this may lead to self-selection bias, with participants who have unusual perceptual experiences being more likely to engage with studies like this one.However, as three separate studies (including this one) have found a visual snow prevalence of around 40% using various online screening techniques, we are confident that the perceptual experiences we identified are genuine.Whether they are the same as those identified in people with confirmed VSS, and whether our model findings will generalize to populations with confirmed VSS, remains to be seen.To this end, future research should address whether patterns of perceptual experience in confirmed VSS are defined by visual snow. Clinical Implications. We have shown that a populationlevel visual snow spectrum likely exists, and that visual snow spectrum perceptual experiences are common in the general 7 Behavioural Neurology population.Our results indicate that the number of perceptual phenomena experienced may move someone closer to meeting the diagnostic criteria for VSS but does not necessarily indicate the impact on day-to-day life, as our participants form a nonclinical sample.As there is currently no objective measure of VSS severity, clinicians and researchers must evaluate people's own descriptions of the impact of their perceptual experiences.Our work, along with that of Costa et al., shows that it is possible to experience numerous visual snow spectrum phenomena in the absence of either visual snow itself, and in the absence of negative impacts [6].As such, the question remains: what causes visual phenomena to be-or become-distressing? 
At present, the most promising solution to this problem lies in the affective response to perceptual experiences. Certain mental health conditions are commonly associated with VSS, and it has been argued that they may be inherent to the VSS phenotype [3]. For example, in participants with confirmed VSS, Solly et al. found clinical levels of depression and anxiety, the presence of depersonalization and derealization, and sleep problems and fatigue [3]. It is possible that this psychopathology is not inherent to VSS, but rather that it is what elevates perceptual phenomena to be impactful on daily life. Most importantly, whether these conditions are inherent to VSS or regularly comorbid with it, they all have existing, recommended treatment protocols, and treating comorbid mental health conditions has been effective in reducing symptom impact in related conditions such as tinnitus [17]. While psychological and behavioral interventions may not alter perceptual phenomena themselves, they may provide some relief for certain patients.

Recently, Wong et al. have published data showing that mindfulness-based cognitive therapy (MBCT) can be effective in relieving the symptoms of VSS [18]. The results of this study showed subjective symptom improvement after MBCT in a small sample of people with clinically diagnosed VSS and demonstrated that subjective improvements were associated with changes in fMRI results. The researchers found that, three months after the intervention, fMRI results showed alterations in the functional connectivity of the visual network, with changes noticeable in extra-striate regions of the occipital cortex, areas of the cerebellum related to visual processing and attention, and the posterior hub of the default mode network. This study both provides evidence that the subjective experience of VSS can be improved through psychological interventions and demonstrates that such interventions can have functional implications, potentially impacting both subjective and objective experience. Further work investigating the mechanisms which cause visual snow spectrum phenomena to become distressing will be important, as will work investigating additional psychological interventions for symptom relief in people with severe presentations.

A limitation of the present study is that we did not collect data pertaining to psychopathology in our sample. As such, we can only speculate about the role of comorbid mental health conditions in the perceived severity of VSS, based on existing literature which indicates a connection between mental health conditions and VSS. Future research should consider comparing psychopathology and mental health symptoms in samples of naïve participants who meet the VSS diagnostic criteria and in people with clinical diagnoses (or self-diagnoses).
Conclusion

In this study, we demonstrated that perceptual phenomena associated with VSS are common in the general population and that visual snow is not key to explaining the presence of these phenomena. We also identified a spectrum of perceptual experiences associated with VSS, with people who experience numerous perceptual phenomena often at one extreme, and people who rarely experience any perceptual phenomena at the other. Our results indicate that visual snow spectrum perceptual experiences are common and do not necessarily indicate underlying pathology. In the absence of objective measures of VSS severity, it seems that the same number and frequency of VSS symptoms can be distressing to some people, while others can ignore them entirely. Future research should investigate whether visual snow is key to explaining the perceptual experiences of people with confirmed VSS and should address whether psychological interventions are effective in relieving the distress caused by VSS.

Figure 1: Frequency chart showing the number of perceptual phenomena experienced by participants. Note: the dotted line indicates the mean number of perceptual phenomena experienced.

Figure 2: A four-class model including all perceptual experiences associated with VSS as predictors.

Figure 3: A four-class model excluding visual snow as a predictor.

Table 1: Summary of sample characteristics.

Table 2: Group means for perceptual phenomena and comorbid conditions. Note: unless otherwise specified, data are counts (percentages); the participants with visual snow here exclude the participants with VSS.

Table 3: Evaluating class solutions for a model including all predictors. Note: the lowest values for each fit statistic are in bold.

Table 4: Evaluating class solutions for a model excluding visual snow. Note: the lowest values for each fit statistic are in bold.

Table 5: Confusion matrix comparing participant posterior allocations between latent class models including and excluding visual snow.
Prognostic value of γ-glutamyltransferase-to-albumin ratio in patients with pancreatic ductal adenocarcinoma following radical surgery

Abstract

Pancreatic ductal adenocarcinoma (PDAC) is a devastating malignancy with poor prognosis. Many preoperative biomarkers can predict postoperative survival of PDAC patients. In this study, we created a novel ratio index based on the preoperative liver function test, the γ-glutamyltransferase-to-albumin ratio (GAR), and evaluated its prognostic value in predicting clinical outcomes of PDAC patients following radical surgery. We retrospectively enrolled 833 PDAC patients who had undergone radical surgery at our institution between January 2010 and January 2017. Patients were divided into two groups according to the cut-off value of GAR. Univariate and multivariate survival analyses between the groups were evaluated. TNM stage, GAR, preoperative serum carbohydrate antigen 19-9 (CA19-9), and tumor differentiation were combined to generate a more accurate prognostic model. The optimal cut-off value of GAR was 0.65. Significant correlations were found between GAR and tumor location, tumor size, vascular invasion, obstructive jaundice, biliary drainage, and parameters of the liver function test. Univariate and multivariate analyses showed that a high level of GAR independently predicted poorer postoperative overall survival (OS, P < 0.001) and recurrence-free survival (RFS, P < 0.001). Subgroup analysis demonstrated that GAR was predictive of survival in patients without biliary obstruction or severely impaired liver function. In addition, integration of GAR, preoperative serum CA19-9, and tumor differentiation into the TNM staging system could better stratify the prognosis for PDAC patients compared with TNM stage alone. Our study demonstrates that preoperative GAR is an independent prognostic factor for prediction of surgical outcomes in PDAC patients. Combination of TNM stage, GAR, preoperative serum CA19-9, and tumor differentiation can enhance the prognostic accuracy.

Keywords: γ-glutamyltransferase-to-albumin ratio, overall survival, pancreatic ductal adenocarcinoma, prognosis, recurrence-free survival

INTRODUCTION

Pancreatic cancer is one of the most lethal malignancies worldwide, with a 5-year survival rate of 8% with all stages combined. 1 In 2018, there will be approximately 55,440 new cases of pancreatic cancer and 44,330 pancreatic cancer-related deaths in the United States, and pancreatic cancer is estimated to rank fourth among all causes of cancer death. 1 In China, the incidence rate for pancreatic cancer has been increasing sharply in the past decade, and it is now the ninth leading cause of cancer-related mortality. 2 Radical resection is the only option for curative treatment. However, even for patients who underwent curative surgery, the 5-year survival rate is only around 25%. 3 The lack of biomarkers for detecting early-stage pancreatic cancer and the high incidence of local recurrence and distant metastasis are two main reasons for the poor outcome of this disease. Currently, the prediction of survival and tumor recurrence for resectable pancreatic cancer patients mainly relies on histopathological features of the tumor specimen, such as tumor size, lymph node metastasis, tumor differentiation, and resection margin. [4][5][6] However, these predictors are only available postoperatively, are costly and time consuming to obtain, and cannot support survival prediction before surgery. Besides, patients with the same TNM stage often exhibit different clinical outcomes, which causes confusion among clinicians when planning further treatment strategies.
Serum carbohydrate antigen 19-9 (CA19-9) is a well-established predictive tumor biomarker in pancreatic cancer. An elevated level of preoperative serum CA19-9 is associated with poor prognosis. 7 However, around 5% to 14% of the population has a CA19-9 non-secretory phenotype, which limits the clinical use of CA19-9 alone in certain groups of patients. 8 The liver function test is a basic routine examination before surgery. Some of its components, including alanine aminotransferase (ALT), albumin (ALB), and alkaline phosphatase (ALP), have been shown to have prognostic value for postoperative pancreatic cancer patients. 9,10 Therefore, to provide better prognostic indicators in patients with resectable pancreatic cancer, it is of interest to dig further into the parameters of the liver function test and identify potential preoperative biomarkers that can predict postoperative survival.

γ-glutamyltransferase (GGT) is an important enzyme conventionally assessed in the liver function test. It is widely distributed on the luminal surface of most secretory epithelial cells, especially hepatocytes and cholangiocytes. 11 GGT plays a key role in the metabolism of glutathione (GSH), the major intracorporal antioxidant, and maintains its adequate level, thereby protecting cells from oxidative stress produced under physiological and pathological conditions. 11 Elevated GGT is commonly seen in hepatic and biliary diseases. 12,13 It is also implicated in cardiovascular disease, type 2 diabetes mellitus, and hypertension. 14 A high level of GGT is an early marker of oxidative stress and a predictor of increased cancer risk. 15 More importantly, increasing evidence suggests that a high level of serum GGT is associated with poor prognosis in different types of cancers, such as pancreatic cancer, cervical cancer, renal cell carcinoma, and prostate cancer. [16][17][18][19]

ALB is synthesized in the polysomes of hepatocytes and reflects liver reserve ability. It is crucial in multiple physiological processes, including maintenance of the colloid osmotic pressure, drug delivery, scavenging of oxygen free radicals, and participation in intracellular signaling pathways. 20 Hypoalbuminemia usually occurs when liver function is impaired. 21 It also has diagnostic and prognostic value in various types of cancers, including hepatocellular carcinoma, pancreatic cancer, and breast cancer. [22][23][24] Specifically, some ALB-based ratio indices have been identified as independent prognostic factors for pancreatic cancer patients, including the C-reactive protein/albumin (CRP/ALB) ratio and the platelet-to-albumin ratio (PAR). 25,26

Therefore, we combined the above two parameters and created a novel serological marker, the γ-glutamyltransferase-to-albumin ratio (GAR), based on the preoperative liver function test. It is easily accessible and time saving, and it can be obtained from all resectable patients before surgery. The purpose of this study is to explore the predictive value of GAR for postoperative survival in patients with resectable pancreatic ductal adenocarcinoma (PDAC) and to further assess whether combining GAR with other prognostic factors can improve prognostic accuracy.

Patient selection and data collection

A total of 833 eligible patients who underwent radical operations for PDAC from January 2010 to January 2017 at the Department of Pancreatic Surgery, Fudan University Shanghai Cancer Center were collected.
The inclusion and exclusion criteria were as follows: (a) pathologically proven PDAC; (b) no preoperative antitumor treatment; (c) no history of other malignant tumors; (d) complete clinicopathologic and follow-up data after operation; (e) negative resection margin demonstrated by pathological examination; (f) no evidence of distant metastasis at the time of surgery; (g) no perioperative death caused by severe surgical complications. The following clinicopathologic variables were collected in this study: gender, age, tumor location, tumor size, lymph node metastasis, TNM stage, tumor differentiation, vascular invasion, obstructive jaundice, biliary drainage, and laboratory tests including blood routine, CA19-9, ALP, ALT, aspartate aminotransferase (AST), GGT, ALB, and glucose. Blood samples for laboratory tests were collected and analyzed within 7 days before operation. The clinical staging was determined by the TNM staging system of the American Joint Commission on Cancer (AJCC) 8th edition via clinical evaluation and postoperative pathological examination. GAR was calculated as the serum GGT level divided by the serum ALB level. This study was approved by the Human Research Ethics Committee of Fudan University Shanghai Cancer Center and was in accordance with the tenets of the World Medical Association Declaration of Helsinki. Informed consent was obtained from all patients according to the committee's regulations.

Follow-up

All patients were regularly followed up after surgery. Physical and laboratory examinations were carried out for each patient every 3 months. Enhanced abdominal computed tomography (CT) scans or magnetic resonance imaging (MRI) were routinely performed every 6 months. If local recurrence or distant metastasis was suspected, imaging examinations including CT, MRI, bone scans, and positron emission tomography-computed tomography (PET-CT) were selectively conducted immediately. Overall survival (OS) was defined as the interval between the date of surgery and death or the last follow-up visit. Recurrence-free survival (RFS) was defined as the interval between the date of surgery and tumor recurrence or the last follow-up visit. The last follow-up time was October 2017.

Statistical analysis

All statistical analyses were performed using SPSS 21.0 (Chicago, IL, USA). The optimal cut-off value for GAR was determined by receiver operating characteristic (ROC) curve analysis. The correlations between GAR and clinicopathologic variables were analyzed by Pearson's chi-squared test, Fisher's exact test, or the Mann-Whitney U test, as appropriate. The Cox proportional hazards regression model was used for univariate and multivariate analyses. Survival curves were plotted according to the Kaplan-Meier method, and differences between subgroups were compared using the log-rank test. The concordance index (C-index) and Akaike information criterion (AIC) were calculated by Stata/SE 11.0 (Texas, USA). P values <0.05 (two-sided) were considered statistically significant.
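For readers who want to replicate the cut-off selection, the sketch below shows one common way to derive an optimal threshold from a ROC curve (maximizing Youden's index). This is a hedged illustration of the general procedure, not the authors' SPSS workflow, and the data are invented.

```python
# Illustrative sketch: choose a GAR cut-off by maximizing Youden's index
# (sensitivity + specificity - 1). roc_curve is the real sklearn API;
# the values below are hypothetical.
import numpy as np
from sklearn.metrics import roc_curve

ggt = np.array([25.0, 80.0, 150.0, 40.0, 300.0, 60.0])  # U/L, hypothetical
alb = np.array([42.0, 38.0, 35.0, 45.0, 30.0, 40.0])    # g/L, hypothetical
died = np.array([0, 1, 1, 0, 1, 0])                     # outcome indicator

gar = ggt / alb                                         # GAR = GGT / ALB
fpr, tpr, thresholds = roc_curve(died, gar)
best = np.argmax(tpr - fpr)                             # Youden's J statistic
print(f"Optimal GAR cut-off: {thresholds[best]:.2f}")
```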
Clinicopathologic characteristics

Detailed clinicopathologic characteristics of all enrolled patients are summarized in Table 1. Of the entire study population, 465 were males and 368 were females. The median age was 61 years (range 33-84 years). In total, 466 patients had tumors located at the pancreatic head, whereas the remaining patients had tumors located at the body or tail of the pancreas. The tumor was no larger than 4 cm in 605 patients, and lymph node metastasis was present in 408 patients. According to the TNM staging system of the AJCC 8th edition, the numbers of patients classified into stages I, II, and III were 322, 407, and 104, respectively. A normal level of preoperative serum CA19-9 was observed in 196 patients. A total of 226 patients had preoperative obstructive jaundice, and 148 of them received biliary drainage before surgery.

All patients were followed up until October 2017. At the time of last follow-up, 505 patients were confirmed dead. The median OS time was 20.8 months, and the OS rates at 1, 2, and 3 years were 80.7%, 42.5%, and 26.4%, respectively. The median RFS time was 10.7 months, and the RFS rates at 1, 2, and 3 years were 46.3%, 24.9%, and 20.5%, respectively.

Prognostic value of GAR in different subgroups

According to whether patients had preoperative obstructive jaundice and abnormalities of GGT or ALB, we further investigated the predictive effect of GAR in each of the different subgroups. The results showed that a high level of GAR was a significant prognostic indicator of poorer OS (24.9 months vs 17.3 months, P < 0.001, Figure 2A) and RFS (14.2 months vs 8.7 months, P < 0.001, Figure 2B) in patients without preoperative obstructive jaundice. Furthermore, in patients with a normal level of GGT, GAR >0.65 had notable prognostic value in predicting poorer OS (24.6 months vs 17.5 months, P < 0.001, Figure 2C) and RFS (14.1 months vs 8.2 months, P < 0.001, Figure 2D), and this prognostic value for OS (25.4 months vs 17.6 months, P < 0.001, Figure 2E) and RFS (14.7 months vs 9.1 months, P < 0.001, Figure 2F) also existed in patients without ALB abnormality. However, we could not confirm an association between GAR and prognosis in patients with any abnormality of preoperative jaundice, GGT, or ALB. When stratified by preoperative biliary drainage, we found that among patients who did not have biliary drainage, those with a low level of GAR had significantly longer OS (24.6 months vs 16.8 months, P < 0.001, Figure 3A) and RFS (14.1 months vs 8.6 months, P < 0.001, Figure 3B) than those with a high level of GAR. However, we failed to confirm the prognostic value of GAR in patients who received preoperative biliary drainage. The predictive effect of GAR was therefore limited in this group of patients.

Combination of TNM stage, GAR, preoperative serum CA19-9, and tumor differentiation enhances prognostic accuracy for PDAC patients

Multivariate analysis revealed that TNM stage, GAR, preoperative serum CA19-9, and tumor differentiation were four independent prognostic factors for both OS and RFS in PDAC patients. We therefore combined these four parameters to generate a more accurate prognostic model. The C-indices and AIC values of all parameters and their combinations are shown in Table 4. The C-indices of TNM stage combined with GAR for OS and RFS prediction were 0.6727 and 0.6348, respectively; the corresponding AIC values were 5956 and 7591. When combining all four parameters, the C-indices for OS and RFS prediction were 0.6923 and 0.6559, respectively; the corresponding AIC values were 5932 and 7564. Thus, the combination of TNM stage, GAR, preoperative serum CA19-9, and tumor differentiation can enhance the prognostic accuracy for OS and RFS in patients with PDAC.
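The model comparison just reported can be prototyped in Python as follows. This is a hedged sketch: lifelines is a real survival-analysis package and the attributes shown exist in recent versions, but the toy data, column names, and model specification are our assumptions (the authors used Stata).

```python
# Sketch: fit Cox models with and without GAR and compare C-index and AIC.
# Higher C-index and lower AIC indicate the better prognostic model.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({                      # toy-sized data, illustration only
    "os_months": [20.8, 14.2, 35.0, 8.7, 24.6, 17.3],
    "death":     [1, 1, 0, 1, 0, 1],
    "tnm_stage": [1, 2, 1, 3, 2, 2],
    "gar_high":  [0, 1, 0, 1, 0, 1],     # 1 if GAR > 0.65
})

base = CoxPHFitter().fit(df[["os_months", "death", "tnm_stage"]],
                         duration_col="os_months", event_col="death")
full = CoxPHFitter().fit(df, duration_col="os_months", event_col="death")

print(base.concordance_index_, base.AIC_partial_)
print(full.concordance_index_, full.AIC_partial_)
```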
DISCUSSION

It is now becoming clear that inflammation is a critical component in tumor initiation and progression. 27 Inflammation activates a series of transcription factors, which lead to the expression of pro-inflammatory molecules, therefore promoting the transformation of normal cells to tumor cells as well as tumor cell survival, proliferation, and invasion. 28 As an essential part of the cellular defense system, GGT plays a pivotal role in maintaining a sufficient level of GSH, the latter of which protects cells from oxidative damage. GGT has been demonstrated to be elevated under pathological states of oxidative stress, and it is now regarded as a robust indicator of oxidative stress. 11 Diergaarde et al 29 demonstrated that a common variation in the GGT1 gene was involved in pancreatic carcinogenesis and might affect the risk of pancreatic cancer. Compared with normal pancreas and stellate cells, pancreatic tumor cells and tumor-associated stellate cells express higher levels of GGT. 30 In addition, Engelken et al 16 demonstrated that elevated serum GGT was indicative of shorter survival in advanced PDAC patients.

Conversely, systemic inflammation suppresses the synthesis of ALB. 31 On one hand, pro-inflammatory cytokines such as interleukin-6 (IL-6) can negatively regulate the production of ALB by hepatocytes and contribute to its decreased serum concentration, independent of patients' nutrition status. On the other hand, cytokines like tumor necrosis factor (TNF) can increase the permeability of the blood vessel walls, thus promoting the loss of ALB from the circulation. 31 The resulting hypoalbuminemia has been demonstrated to correlate with reduced survival of patients in different types of cancer. 32,33 With respect to pancreatic cancer, Siddiqui et al 34 demonstrated that low serum ALB could independently predict poor survival of <6 months in pancreatic cancer patients. Another study also confirmed that among stage IV PDAC patients treated with bevacizumab, those with a normal range of ALB had significantly better survival than those with hypoalbuminemia. 23 For these reasons, GAR is not merely a combination of liver function test parameters, as initially regarded; it acts more as a reflection of internal inflammation status and appears useful for estimating survival in patients with PDAC.

In this study, we first analyzed the correlations between GAR and clinicopathologic characteristics, and we found that GAR was closely correlated with tumor location, tumor size, vascular invasion, obstructive jaundice, biliary drainage, ALP, ALT, AST, GGT, and ALB. These data indicated that GAR could represent the status of liver function and reflect tumor burden to some extent. In accordance with our hypothesis, univariate analysis revealed that a high level of GAR was significantly predictive of poor prognosis for PDAC patients, demonstrated by a 7.4-month decrease in OS and a 4.8-month decrease in RFS compared with patients who had a low level of GAR. The 1-, 2-, and 3-year OS and RFS rates were also markedly lower in patients with a high level of GAR compared with those in the low-level group. After multivariate analysis, the prognostic value of GAR still remained. Subgroup analysis demonstrated that GAR was a significant prognostic factor in patients without abnormalities of obstructive jaundice, GGT, or ALB, and in patients who did not have preoperative biliary drainage. This result indicated that the predictive efficacy of GAR was likely to be limited when patients had preoperative jaundice and impaired liver function.
This is a common phenomenon among the currently existing prognostic biomarkers for pancreatic cancer. For example, as the most well-established predictive biomarker, CA19-9 levels are often elevated in the presence of obstructive jaundice and some benign conditions, which limits its use in clinical practice. 35 Similarly, one study showed that another inflammation-based indicator, the systemic immune inflammation index (SIII), could independently predict survival and recurrence in pancreatic cancer patients with normal bilirubin levels, whereas no association between SIII and survival was found in patients with high bilirubin levels. 36 Taken together, our results demonstrate that a high level of GAR is an independent predictor of poor OS and RFS in PDAC patients, but only in the setting of no preoperative obstructive jaundice. From this perspective, GAR should be used with caution in patients with high preoperative bilirubin levels or severely impaired liver function, until more prospective studies support or reject this hypothesis.

Currently, the most reliable prognostic biomarkers, such as the TNM staging system, mainly focus on the tumor tissue itself. However, it is widely recognized that not only the intrinsic properties of the tumor, but also host-related factors, are closely associated with the prognosis of patients after surgery. The integrated index GAR comprehensively reflects the host's inflammation status, which may provide more prognostic information from the host perspective. Thus, it is interesting that the pretherapeutically available host-related indicator GAR can synergize with preexisting biomarkers, and our results indeed showed that combining GAR and other significant prognostic factors could enhance the prognostic accuracy. In addition, GAR is easily calculated from preoperative parameters of the liver function test for each patient, which saves both money and time.

FIGURE 2: Kaplan-Meier survival curves for overall survival (OS) and recurrence-free survival (RFS) in patients with pancreatic ductal adenocarcinoma (PDAC) according to preoperative abnormalities of obstructive jaundice, γ-glutamyltransferase, and albumin. A low level of γ-glutamyltransferase-to-albumin ratio (GAR) was associated with significantly better OS and RFS in patients who had no abnormalities of preoperative obstructive jaundice (A and B), γ-glutamyltransferase (C and D), or albumin (E and F).

FIGURE 3: Kaplan-Meier survival curves for overall survival (OS) and recurrence-free survival (RFS) in patients with pancreatic ductal adenocarcinoma (PDAC) according to preoperative biliary drainage. A low level of γ-glutamyltransferase-to-albumin ratio (GAR) was associated with significantly better OS and RFS in patients who did not have preoperative biliary drainage (A and B).

Apart from its prognostic value, GAR may also serve several important functions in personalized therapy. First, neoadjuvant therapy is increasingly being employed for borderline resectable pancreatic cancer. Some inflammation-based biomarkers have been demonstrated to correlate with patient response to neoadjuvant treatment. For example, Hasegawa et al 37 reported that the neutrophil/lymphocyte ratio (NLR) was significantly higher in pancreatic cancer patients who responded poorly to preoperative chemoradiotherapy compared with those who had a favorable response. This indicates that GAR may well be a candidate biomarker for evaluating the response to neoadjuvant chemotherapy in PDAC patients.
Second, it has been shown in a mouse model of pancreatic cancer that systemic inflammation can diminish the effect of gemcitabine and may thus affect patient survival by altering the response to chemotherapy. 38 In this regard, GAR can help clinicians identify those who are likely to benefit the most from postoperative chemotherapy. Third, immunotherapy represents a new therapeutic modality that complements conventional chemotherapies without increasing toxicity. As an indicator of systemic inflammation status, GAR may be useful in selecting appropriate patients for immunotherapy. Many studies have demonstrated the value of inflammatory biomarkers in predicting patient response to immunotherapy in different types of tumors. 39 It is therefore of interest to investigate whether GAR can become a potential predictive biomarker for pancreatic cancer immunotherapy clinical trials in the future.

However, three limitations need to be taken into account in this study. First, this is a retrospective analysis, and all the clinical data were collected from a single institution in China. Whether the cut-off value of GAR proposed by our study is suitable for other institutions and patient populations remains to be validated. A larger-scale, multicenter prospective study is needed to further verify our findings. Second, our study only includes patients who underwent radical surgery, without considering those with unresectable PDAC or the impact of different postoperative treatments. Third, GAR has the potential to become a prognostic indicator, but only in the setting of patients without preoperative biliary obstruction. GAR may lose its predictive value in patients who have obstructive jaundice and severely impaired liver function.

In conclusion, our study demonstrates that, as a novel and easily accessible ratio index, preoperative GAR can be used as a prognostic factor for predicting the prognosis of patients with PDAC after radical resection. The combination of TNM stage, GAR, preoperative serum CA19-9, and tumor differentiation can enhance the prognostic accuracy for survival prediction. Further independent prospective clinical trials should be conducted to confirm these results.
Going Beyond the Cumulant Approximation: Power Series Correction to the Single Particle Green's Function in a Holstein System

Abstract

In the context of a single electron two orbital Holstein system coupled to dispersionless bosons, we develop a general method to correct the single particle Green's function using a power series correction (PSC) scheme. We then outline the derivations of various flavors of the cumulant approximation through the PSC scheme and explain the assumptions and approximations behind them. Finally, we compute and compare the PSC spectral function with cumulant and exactly diagonalized spectral functions and elucidate three regimes of this problem: two that the cumulant explains and one where the cumulant fails. We find that the exact and the PSC spectral functions match within spectral broadening across all three regimes.

I. OVERVIEW

Electrons and holes in materials undergo numerous complex interactions among themselves, with external fields, as well as with the constituent atomic lattice. The strength of such many body interactions depends on various factors such as the electronic configuration of the host material, the presence of doping and defects, lattice parameters, etc. Such factors manifest as bosonic collective excitations that renormalize the particle states (electrons/holes) into quasiparticle states with different energy and lifetime, and even mix quasiparticle states depending on the interaction strength. Alongside the quasiparticle features in photo-emission spectra, these collective excitations show up as "shake-off" features that can be loosely separated into sharp satellites emerging from bosonic collective modes (such as plasmons and optical phonons) and continua arising from non-zero-momentum particle-hole excitations (including excitons) [1][2][3]. In calculations, the interaction strengths between collective excitations and particles are modeled as tunable electron-boson coupling parameters. In experiments, this coupling tunability is achieved by introducing doping and defects [4,5].

Although at very weak coupling the quasiparticle renormalization due to the collective modes is negligible, stronger coupling produces a proportional renormalization of the quasiparticle. As an example, in the photo-emission spectra of strontium titanate this coupling manifests as a significant shift in quasiparticle energy, a significant decrease in the lifetime and intensity of quasiparticle features, strong shake-off features, as well as a strong mass enhancement of the carrier [6][7][8][9][10]. Strong electron-phonon coupling is also visible in the electronic spectra of metallic cuprates [11,12], in the metal-insulator transition in undoped cuprates [13], and in other correlated metals, for example FeSe/SrTiO$_3$ epitaxial layers [14].
At extreme values of the coupling constant, strong electron-boson coupling can completely self-trap and localize electrons, creating polaronic states. This severely alters carrier mobility in the material, which is of particular interest in material design for photovoltaics and electronics [15][16][17]. Finally, in the presence of multiple boson species, there can be competition between their effects on the carrier, which creates novel phase crossovers in materials [18]. Therefore, a proper understanding and quantification of the effects of collective modes on charge carriers is vital for understanding and designing novel materials with interesting engineering applications.

In this work, we build on, and generalize, existing non-perturbative methods, including the "GW" approximation [19] and the cumulant expansion [20], to describe the single particle dynamics of a system with multiple electronic levels interacting through common boson baths. The paper is organized as follows. In Section II, we introduce the model problem and the concepts of the electron Green's function, the electron self energy, and cumulant corrections. In Section III, we briefly introduce the existing methods and their major drawbacks. In Section IV, we develop our correction scheme and physically motivate the assumptions used to simplify the equations. In Section V, we outline the derivation of various flavors of cumulants through our method and elucidate the implicitly made but vaguely understood assumptions behind these approximations. Finally, in Section VI, we identify three important regimes of the problem by comparing the performance of the cumulant method and the power series method with results from exact diagonalization of this problem in a finite boson basis.

II. INTRODUCTION TO THE PROBLEM

We consider a model Hamiltonian for a single electron, two orbital Holstein system with bonding/anti-bonding energies $\varepsilon_+/\varepsilon_-$ such that their difference is $\Delta$. This system is kept in baths of two dispersionless boson species $(\pm)$. The bosons are quantized packets of energy $\omega_o$ with which the electron can interact. Interaction of the electron with $(-)$ bosons causes an inter-orbital transition of the electron; the $(+)$ boson does not cause any electronic transition upon interaction. The electron-boson interaction strength is controlled by a coupling constant $g$. The fermionic ladder operators are $c_+/c_+^\dagger$ and $c_-/c_-^\dagger$ for the bonding and anti-bonding orbitals, respectively; the bosonic ladder operators for the $(\pm)$ bosons are $b_\pm/b_\pm^\dagger$. The Hamiltonian is separable into three distinct pieces: $H_o$, the non-interacting part; $H_+$, which explicitly contains $(+)$ bosons and does not cause inter-orbital transitions; and $H_-$, which explicitly contains $(-)$ bosons and governs inter-orbital transitions,

$$H = H_o + H_+ + H_-, \qquad H_o = \varepsilon_+\, c_+^\dagger c_+ + \varepsilon_-\, c_-^\dagger c_- + \omega_o\left(b_+^\dagger b_+ + b_-^\dagger b_-\right),$$
$$H_+ = g\left(c_+^\dagger c_+ + c_-^\dagger c_-\right)\left(b_+ + b_+^\dagger\right), \qquad H_- = g\left(c_+^\dagger c_- + c_-^\dagger c_+\right)\left(b_- + b_-^\dagger\right). \tag{1}$$

Here $g$ is the same in both $H_\pm$ due to the original problem's symmetries. But even if the couplings differ, i.e., $g_\pm$ in $H_\pm$, we can find corrections in powers of a dummy variable $g$ that multiplies both $g_\pm$ and set it to 1 in the end.

This Hamiltonian describes the physics of a model of the dihydrogen cation (H$_2^+$): two hydrogen nuclei and a single electron. Historically, this problem was approached with the clamped-nuclei approximation. This crude approach completely neglects the vibronic coupling between the electron and the vibrational modes of the nuclei (optical phonons in crystalline structures; see supplement), which becomes crucial when $\Delta \approx \omega_o$. Vibronic couplings in this regime can cause inter-band transitions and severely renormalize the energy levels in the molecule [21][22][23].
Hence, this is a good model on which to construct the approximation scheme, due to its simplicity and similarities to real multi-level systems. Furthermore, no exact analytical solution exists, and the approximate methods either give incorrect boson satellites (GW) or are ad hoc, unsystematic, and incorrect at strong coupling (cumulant) [24].

The Green's function: The retarded-time (RT) formalism is better suited to handle electron-hole interactions because it treats both of them on an equal footing as particles [25]. For the Holstein problem (1), with the fock vacuum $|0\rangle$ as the ground state and $\{\,,\,\}/[\,,\,]$ as the anticommutator/commutator, the electron Green's function $G(n,t)$ for each orbital ($n = \pm$) and the boson Green's function $D(N,t)$ for each boson species ($N = \pm$) in the retarded formalism are given by

$$G(n,t) = -i\,\theta(t)\,\langle 0|\{c_n(t),\, c_n^\dagger(0)\}|0\rangle, \qquad D(N,t) = -i\,\theta(t)\,\langle 0|[\,b_N(t),\, b_N^\dagger(0)\,]|0\rangle. \tag{2}$$

For non-interacting ($g = 0$) electrons and dispersionless bosons with energy $\omega_o$, the bare electron Green's function $G_o$ and the bare boson Green's function $D_o$ are

$$G_o(n,t) = -i\,\theta(t)\,e^{-i\varepsilon_n t}, \qquad D_o(N,t) = -i\,\theta(t)\,e^{-i\omega_o t}. \tag{3}$$

The quasiparticle energies, lifetimes, and boson satellites show up as complex poles of $G(n,\omega)$, where $\omega$ is the frequency. The frequency-axis spectral function $A(m,n;\omega)$ (see supplement) is defined as

$$A(m,n;\omega) = \frac{1}{\pi}\,\left|\mathrm{Im}\,G(m,n;\omega)\right|. \tag{4}$$

Electron self energy and Dyson's equation: At zero coupling ($g = 0$), the energy eigenvalues $\varepsilon_\pm$ of (1) are real, and the states have infinite lifetime owing to the lack of interaction between the orbitals. However, upon switching on the boson-mediated interaction ($g \neq 0$) between orbitals, the exchange of energy and momenta between states through boson exchange causes clumping of electrons and holes to form quasiparticles. Because of time-translational invariance, we can package this interaction information together and call it the self energy. Each orbital's self energy $\Sigma(n,t)$ is complex valued, unlike the bare energy. This gives rise to spectral peak broadening, indicative of a finite quasiparticle lifetime. A properly constructed self energy also incorporates boson-mediated inter-orbital transitions, produces satellite peaks at the correct boson frequencies, and redistributes spectral weight from the quasiparticle to the satellites. The Dyson equation governs the evolution of the electron Green's function by repeated application of this self energy.
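As a concrete illustration of how Eq. (4) is evaluated in practice, the sketch below builds a broadened bare Green's function on a finite time grid and Fourier transforms it. This is a minimal numerical sketch, not the paper's code; the grid sizes and the values of $\varepsilon$ and $\eta$ are illustrative choices.

```python
# Minimal sketch: evaluate Eq. (4) for the bare Green's function
# G_o(t) = -i θ(t) exp(-iεt), broadened by η, on a finite time grid.
import numpy as np

eps, eta = -3.0, 0.1                         # bare energy and broadening
t = np.linspace(0.0, 200.0, 20001)           # retarded branch only (t >= 0)
dt = t[1] - t[0]
G_t = -1j * np.exp(-1j * eps * t) * np.exp(-eta * t)

w = np.linspace(-10.0, 10.0, 2001)
# G(ω) = ∫ dt e^{iωt} G(t); a plain Riemann sum suffices for this sketch
G_w = np.array([np.sum(G_t * np.exp(1j * wi * t)) * dt for wi in w])
A = np.abs(G_w.imag) / np.pi                 # Lorentzian of width ~η at ω = ε
```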
III. GW, CUMULANT EXPANSION AND THEIR DRAWBACKS

The GW approximations used to compute quasiparticle properties are not self-consistent and abruptly truncate the Dyson equation for computational efficiency, unlike the fully self-consistent original GWΓ formalism [19]. Although GW-based methods give a reasonably good description of quasiparticle properties at weak coupling, the plasmon satellites are averaged and misplaced at some incorrect average energy [20]. At strong coupling, due to the lack of self-consistency, even the quasiparticle properties can be incorrect. For a single (or isolated) band of electrons in a dispersionless plasmon bath [20], an exact solution of the following form exists:

$$G(k,t) = G_o(k,t)\,e^{C(k,t)}. \tag{7}$$

The cumulant $C(k,t)$ is calculated by comparing the Taylor expansion of equation (7) with the temporal Dyson equation, with $G$ and $\Sigma$ obtained from GW [26]. The satellites manifest as a Poisson series of peaks a plasma frequency apart in the spectral function, due to the exponential form of the cumulant ansatz (7). In real systems, although not all assumptions of the above model hold true, an approximate cumulant correction can be found using the same recipe as above on a GW self energy.

Recently, interest in the cumulant approximation has resurged [25,[27][28][29], enabled by increases in the computational ability to perform GW and inspired by experiments (e.g., [30]) on complex systems. The cumulant has the considerable merit of giving near-exact spectra for weak electron-boson coupling, $\Delta \ll \omega_o$ and/or $g \ll 1$. However, at strong coupling and in the presence of multiple electronic levels, the bosons significantly affect the quasiparticle properties in ways not reflected in the cumulant approximation. The cumulant is also not systematically improvable by design and lacks a proper accounting of inter-band scattering, owing to the absence of self-consistency.

IV. THEORETICAL FRAMEWORK

The Power Series Ansatz: Rather than assuming an exponential correction, we assume a power series correction $P(n,t)$ in powers of $g^2$ to the $n$th orbital's bare electron Green's function $G_o(n,t)$ due to interaction with bosons for a time duration $t$:

$$G(n,t) = G_o(n,t)\,P(n,t) = G_o(n,t)\sum_{k=0}^{\infty} g^{2k}\, C_k(n,t). \tag{8}$$

By construction, the interacting system smoothly maps to the non-interacting system as $g^2$ goes to zero. Here $C_0 = 1$, and all other $C_k$ are distinct correction functions of different orders that are 0 when $t < 0$. This makes physical sense because, in the retarded-time framework, the particle doesn't exist for $t < 0$. This, just like the cumulant, is still a diagonal approximation to the Green's function matrix because, by construction, only those corrections in which a particle eventually returns to its initial state $n$ are accounted for.

Temporal contraction relation: For a given orbital $n$ and times $t_i < t_o < t_f$, both $G$ and $G_o$, and hence by inheritance $P$, have the following temporal contraction property due to the boundary-value dependence on time:

$$G_o(n, t_f - t_i) = i\,G_o(n, t_f - t_o)\,G_o(n, t_o - t_i) \;\;\Rightarrow\;\; P(n, t_f - t_i) = P(n, t_f - t_o)\,P(n, t_o - t_i). \tag{9}$$

This property does not apply between these functions for different orbitals. In calculations, this seemingly trivial property of $P(n,t)$ is absolutely essential for accounting for bosonic crossing diagrams.

Assumption on the electron self energy: To properly construct the electron self energy, rather than replacing $G$ by $G_o$ inside the self energy as in GW or cumulant expansions, we replace it by the power series ansatz in order to re-introduce self-consistency:

$$\Sigma(n,t) = \Sigma_o(n,t)\,P(n,t), \tag{10}$$

where $\Sigma_o(n,t)$ is the $n$th orbital's self energy computed using the bare Green's function $G_o$. The introduction of the power series into $\Sigma$ through $G$ now produces corrections due to the particle's eventual return to the initial state after scattering through other possible states. Including these cyclic scattering contributions in the diagonal of the Green's function matrix makes the diagonal exact.

Correction scheme: We take the temporal Dyson equation for the $m$th band and replace $G$ and $\Sigma$ by their power series corrected versions from (8) and (10). We then use the temporal limits enforced by the RT bare Green's function (3) and simplify the equation using the temporal contraction property from equation (9). Setting $t_0 = 0$ and $t_2 - t_1 = \tau$ and simplifying, we obtain the correction equation (11). There are two distinct terms in this equation. The self-correction ($P_{SC}$) term occurs when the interaction is within the same orbital ($n = m$) on the right side of the equation; here, the contraction property (9) must be used between the power series pieces on the right. The inter-band scattering term ($P_{IC}$) occurs when different orbitals interact ($n \neq m$), and here the contraction property is no longer valid. For a numerical solution, we start with an initial guess of $P = 1$ on the right and self-consistently compute better values for $P$ on the left until it converges.
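The self-consistency loop just described has a simple fixed-point structure, sketched below. This is a schematic, not the paper's implementation: the `update` callable stands in for one discretized sweep of the correction equation (11), whose $P_{SC}$/$P_{IC}$ kernels are not reproduced here.

```python
# Schematic fixed-point loop for the PSC scheme (assumed structure only).
import numpy as np

def psc_iterate(update, n_times, tol=1e-8, max_iter=200):
    """Start from P = 1 on the time grid and apply the Dyson-derived
    update until successive iterates agree to within `tol`."""
    P = np.ones(n_times, dtype=complex)   # initial guess: P = 1
    for _ in range(max_iter):
        P_new = update(P)                 # one self-consistent sweep
        if np.max(np.abs(P_new - P)) < tol:
            return P_new                  # converged power series
        P = P_new
    raise RuntimeError("PSC iteration did not converge")
```

A real implementation would assemble `update` from the discretized self-correction and inter-band integrals; the loop itself is independent of those details.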
V. DERIVATION OF VARIOUS CUMULANT SCHEMES

We validate our method by deriving the exact cumulant result for the core-hole problem with a single orbital of bare energy $\varepsilon_o$ in a bath of dispersionless plasmons of frequency $\omega_o$ [20]. The Hamiltonian in this case is

$$H = \varepsilon_o\, c^\dagger c + \omega_o\, b^\dagger b + g\, c^\dagger c\left(b + b^\dagger\right). \tag{14}$$

This is an idealization of an isolated electron energy level $\varepsilon_o$ deep under the Fermi level being probed using x-ray photo-emission [31]. The energetic electron exiting the system leaves behind a hole, and the electron cloud responds to this imbalance of Coulomb forces by undergoing quantized long-range oscillations (plasmons) at multiples of $\omega_o$. The corrected self energy for this case is again of the form $\Sigma(t) = \Sigma_o(t)\,P(t)$. For a single energy level, there is no inter-band scattering correction in equation (11). Expanding the power series on both sides and comparing terms of the same order in $g^2$ across the equality generates the higher order corrections, and summing all of these corrections gives the exact result for the core-hole problem.

The time-ordered cumulant expression in [26][27][28][29] was derived assuming that the $n$th orbital's cumulant $C(n,t)$ depends only on the $n$th orbital's self energy $\Sigma(n,t)$, thereby neglecting boson-mediated inter-band scattering effects. In power series language, this translates to neglecting the effect of $H_-$ by setting $P_{IC}$ to 0. In the Holstein model, this means that the band gap $\Delta \gg \omega_o$ and each orbital essentially is an independent core-hole problem with corrections governed by $P_{SC}$ alone. In the other limit, $\Delta \ll \omega_o$, the satellites are so far away that they don't modify the quasiparticle appreciably. Hence, both $P_{SC}$ and $P_{IC}$ are small and scale roughly equally, so they can be approximated as being independent of the orbital index in (11). This orbital independence lets us use the temporal contraction (9) for $P_{IC}$ regardless of orbital identity, thereby giving the RT cumulant correction [25,30]. The details of both derivations can be found in the supplement to this paper.

VI. RESULTS

We now numerically compute and compare the spectral functions obtained from the power series, exact diagonalization ($N \geq 40$ boson basis), and the RT cumulant for problem (1) with $\varepsilon_\mp = \pm 3$, $\omega_o$ from 10 to 0.1, a spectral broadening of 0.1, and a strong coupling parameter of $g = 1$ in figure 2. Depending on the magnitude of $\omega_o$ with respect to $\Delta$, figure 2 separates into three distinct regions, roughly demarcated by the dashed blue lines.

The first region is the weak coupling regime of $\omega_o \gg \Delta$; here $\omega_o > 8$. Both $(\pm)$ plasmon satellites are far away from the quasiparticle, and therefore their effect on the quasiparticle energy and weight is negligible. This is most prominently seen from the negligible change of the quasiparticle energy from the non-interacting energies $\varepsilon_\pm$. Here, the retarded cumulant adequately captures all the exact spectral features correctly.

The second region has $\omega_o \approx \Delta$; here $8 > \omega_o > 1.5$. A huge shift of spectral weight occurs from the bonding to the anti-bonding orbital, effectively splitting the anti-bonding orbital into two (between $\omega_o$ of 4 and 7). The shake-off replicas of this split level also come in pairs, as seen in the exact spectra in figure 3. These are captured exactly by the power series but not by the cumulant, because it lacks a proper accounting of inter-band interaction.

The third region is $\omega_o \ll \Delta$; here $\omega_o < 1.5$. Here the bosonic events are extremely localized around the non-interacting energy, and the $(+)$ bosons dominate the process.
Therefore, inter-band interaction is vanishingly small, and the solution is dominated by self-correction, i.e., the core-hole-like cumulant. We observe this in all three spectral functions, although both the exact and the power series solutions become computationally expensive: the former due to the large boson number necessary and the latter due to the small time-step and large convergence order required.

VII. CONCLUSION

In this work, we derived a general power series based method which mitigates all the problems of cumulant-based methods, is practical to implement, and reproduces the exact result in a finite basis for this problem within the spectral broadening used. We also identified three important regimes of this problem and elucidated where the cumulant works, why it works, and when it fails. We hope to extend this work to real multi-electron systems with strong plasmon resonances.

Supplementary Material

Electron and Boson Green's Function

In our single electron two site Holstein problem, we look at the electron addition spectra. For this problem, the ground state is the fock vacuum $|0\rangle$, which doesn't have any fermion or boson in it. We will now define the electron and boson Green's functions for this problem with $|0\rangle$ as the ground state.

The electron Green's function: In retarded-time formalism [Kas et al., 2014] for a two orbital ($n = \pm$) system described by (13), with $\{\,,\,\}$ as the anti-commutator, $c_n^\dagger/c_n$ as the electron creation/annihilation operators, and $|0\rangle$ as the fock vacuum, the RT one-particle electron addition Green's function $G(n;t)$ is the probability amplitude for a particle injected into orbital $n$ to be in $n$ after time $t$ [Goodvin et al., 2006]:

$$G(n;t) = -i\,\theta(t)\,\langle 0|\{c_n(t),\, c_n^\dagger(0)\}|0\rangle.$$

The non-interacting ($g = 0$) or bare electron Green's function $G_o(\pm,t)$, given the bare energy eigenvalues $\varepsilon_\pm$ of $H_o$ and evolution time $t$, is

$$G_o(\pm,t) = -i\,\theta(t)\,e^{-i\varepsilon_\pm t}.$$

The boson Green's function: In RT formalism, with $[\,,\,]$ as the commutator, $b_N^\dagger/b_N$ as the boson creation/annihilation operators, and $|0\rangle$ as the fock vacuum, the RT one-particle boson addition Green's function $D(N = \pm;t)$ is the probability amplitude for an $N$-type boson to remain $N$-type after time $t$:

$$D(N;t) = -i\,\theta(t)\,\langle 0|[\,b_N(t),\, b_N^\dagger(0)\,]|0\rangle.$$

The non-interacting ($g = 0$) or bare boson Green's function for dispersionless bosons of frequency $\omega_o$ is

$$D_o(N,t) = -i\,\theta(t)\,e^{-i\omega_o t}.$$

Spectral Function and Improper Convergence of the Delta Function

In our work, the photo-emission spectral function $A(m,n;\omega)$ evaluated on the frequency axis is defined as

$$A(m,n;\omega) = \frac{1}{\pi}\,\left|\mathrm{Im}\,G(m,n;\omega)\right|.$$

This absolute-valued definition of the spectral function differs from the traditional definition and is necessary in numerical applications because of the finiteness of the time axis; we explain this further in this section. The retarded-time bare electron Green's function in frequency space is defined as

$$G_o(k,\omega) = \lim_{\eta\to 0^+}\frac{1}{\omega - \varepsilon_k + i\eta} = P\,\frac{1}{\omega - \varepsilon_k} - i\pi\,\delta(\omega - \varepsilon_k).$$

Here, $P$ represents the principal value of the function it acts on. We see that the imaginary part of this $G_o(k,\omega)$ has poles at the energy eigenvalues $\varepsilon_k$ of the non-interacting part of the Hamiltonian. From this, the traditional definition of the spectral function emerges:

$$A(k,\omega) = -\frac{1}{\pi}\,\mathrm{Im}\,G_o(k,\omega) = \delta(\omega - \varepsilon_k).$$

Assuming a smooth transition from the non-interacting to the interacting system, we can extend this expression's validity to define the interacting system's spectral function. In the context of the Dirac delta function, we often use the following relationship:

$$\lim_{\eta\to 0^+}\frac{1}{x + i\eta} = P\,\frac{1}{x} - i\pi\,\delta(x).$$

The delta function in the imaginary part originates from the limit-definition (Sokhotski-Plemelj theorem or Kramers-Kronig relations) of the function in the line right above it and hence is an idealization when it comes to numerical implementation.
This is because, in numerical implementation, explicitly demanding that $\eta$ go to zero only from the positive side of the number line (since we demand $\eta \to 0^+$) for a continuous function (the bare electron Green's function) is notoriously difficult. On top of this, the negative side then requires a sign flip in the definition of the delta function. We then no longer have a unified definition of the delta function but rather a piece-wise definition. This is still manageable when we have a single delta function, i.e., the bare electron Green's function, in any one half of the real line. But when we use the bare electron Green's function to compute the actual Green's function in a symmetric time domain and convert back to the frequency domain, we notice that we need to enforce this piece-wise definition of the Green's function at every given frequency point. Furthermore, since there is a cutoff ($t_{max}$) in time, this manifests as oscillations in frequency space of order $t_{max}^{-1}$. We are now at an impasse: we need a large $t_{max}$ (ideally $t_{max} \to \infty$) to properly capture the Green's function decay, but $t_{max}$ must be some large finite value for numerical implementation, which manifests as violent small-energy oscillations. In order to bypass this and reproduce the correct answer for non-interacting as well as interacting fermionic systems, we can redefine the limit-definition of the function with an absolute value as follows:

$$\delta(x) = \lim_{\eta\to 0}\frac{1}{\pi}\left|\mathrm{Im}\,\frac{1}{x + i\eta}\right| = \lim_{\eta\to 0}\frac{1}{\pi}\,\frac{\eta}{x^2 + \eta^2}.$$

Doing so, we now get a consistent single definition of the delta function on both sides of the number line. This manifests in our definition of the spectral function.

A Curious Case of the Holstein Hamiltonian

The two site single electron Holstein Hamiltonian presented in the paper originates from a two site hopping model with site hopping completely determined by the hopping terms and not the bosons. Given the fermionic and bosonic creation/annihilation operators $c_i/c_i^\dagger$ and $b_i/b_i^\dagger$ for sites $i = 1, 2$, the Hamiltonian for such a system is [Gunnarsson et al., 1994]

$$H = \varepsilon_o \sum_{i=1,2} c_i^\dagger c_i \;-\; t\left(c_1^\dagger c_2 + c_2^\dagger c_1\right) \;+\; \omega_o \sum_{i=1,2} b_i^\dagger b_i \;+\; g \sum_{i=1,2} c_i^\dagger c_i \left(b_i + b_i^\dagger\right).$$

A closer inspection of this Hamiltonian (last term) leads to the conclusion that boson emission or absorption does not cause any site hopping. Furthermore, the two sites are equivalent in energy, i.e., both have an energy $\varepsilon_o$, and there is no preference in hopping from one site to another because the hopping amplitude $t$ is the same along both directions. We can now go to the bonding and anti-bonding orbital basis with a change of variables for both fermions and bosons,

$$c_\pm = \frac{c_1 \pm c_2}{\sqrt{2}}, \qquad b_\pm = \frac{b_1 \pm b_2}{\sqrt{2}}.$$

With this change of variables, the Hamiltonian transforms into the one in the paper. Once this stage is reached, we can separate $H$ into a piece without any bosons, $H_o$; a piece with only $(+)$ bosons, $H_+$; and a piece with only $(-)$ bosons, $H_-$, as shown in the paper and sketched in the block below. We have now transformed our system into a two orbital system with a gap of $\Delta = 2t$, where the hopping is entirely controlled by the $(-)$ bosons.

In the case of the dihydrogen cation (H$_2^+$), there literally are two sites and a single electron. In this context, we can talk about the bonding and anti-bonding orbitals originating from the original hydrogen molecule. In this idealized molecular system, the bosons are the vibrational modes of the nuclei, which may or may not cause inter-orbital transitions. In this case, we have only one such vibrational mode because of the diatomic structure, namely, nuclear motion along the line joining the two nuclei, which stretches and compresses the bond length. We can then partition this bosonic space into the bosons that do in fact cause such transitions ($(-)$ bosons) and the ones that do not ($(+)$ bosons). In crystalline systems, the story becomes more general: we can have optical phonons which cause transitions and phonons which do not. In this case, we can incorporate both behaviors with proper couplings and bosonic frequencies by defining different phonon frequencies $\omega_\pm$ and coupling constants $g_\pm$ for the different phonon species.
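The rotated form is not written out explicitly in the extracted text, so the following block is our reconstruction from the description above; in particular, whether the $1/\sqrt{2}$ from the mode rotation is kept explicit or absorbed into $g$ is an assumption on our part.

```latex
% Reconstruction (not verbatim from the paper): the site-basis Holstein
% Hamiltonian after the rotation c_pm = (c_1 +- c_2)/sqrt(2),
% b_pm = (b_1 +- b_2)/sqrt(2). The 1/sqrt(2) prefactor may be absorbed
% into the coupling g in the main text.
\begin{align}
  H_o &= \varepsilon_+ c_+^\dagger c_+ + \varepsilon_- c_-^\dagger c_-
         + \omega_o \left( b_+^\dagger b_+ + b_-^\dagger b_- \right), \\
  H_+ &= \frac{g}{\sqrt{2}} \left( c_+^\dagger c_+ + c_-^\dagger c_- \right)
         \left( b_+ + b_+^\dagger \right), \\
  H_- &= \frac{g}{\sqrt{2}} \left( c_+^\dagger c_- + c_-^\dagger c_+ \right)
         \left( b_- + b_-^\dagger \right),
\end{align}
% with \varepsilon_\pm = \varepsilon_o \mp t, so the orbital gap is \Delta = 2t.
```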
Recursive Relation for Corrections

In the paper, we saw how we can self-consistently update the power series $P$ to find better and better approximations to itself. The full Dyson series, along with the perturbative nature of $P$, also gives rise to recursive relations between the correction functions $C_k$. By expanding $P$ on both sides and comparing terms of the same order in $g^2$ for the $m$th orbital, we get a recursion whose first term is the self correction and whose second term is the inter-band correction due to the effect of a different orbital $n'$. By construction, we start with $C_0 = 1$ for all bands. This scheme is useful for analytical proofs but cumbersome for numerical implementation.

Derivation of the Time-ordered Cumulant from the Power Series

The time-ordered cumulant is named for its use of the time-ordered Green's function formalism. In this formalism, the electron lives on the $t > 0$ branch of the Green's function while the hole lives on the $t < 0$ branch. Therefore, there is no interaction between electrons and holes, i.e., both electrons and holes only talk among their own species. Furthermore, the derivation was done with the assumption that, in the Dyson equation, any $n$th orbital's electron Green's function $G(n,t)$ depends only on the $n$th orbital's self energy $\Sigma(n,t)$ and not the total self energy $\Sigma(t)$ when it evolves in time. This would be true if we knew the actual, approximation-free self energy for the $n$th orbital. But in every practical case, what we have is some truncated self energy that neglects the boson-mediated inter-band scattering effect. Hence, using some approximate $\Sigma(n,t)$ instead of the power series corrected $\Sigma(t)$ in the Green's function evolution isolates each orbital as a core-hole problem. In a multi-band system, this is an even more stringent condition, because each orbital only scatters to itself regardless of whether it is a hole state or an electron state or whether there are other electron or hole states around. In the context of our single electron two orbital problem, this means that the hole and the electron states are treated independently of each other, and hence $H_-$ is neglected from the total Hamiltonian. Therefore, there is no inter-band correction term ($P_{IC} = 0$), and all the dynamics is governed by the self-correction term. The corrected self energy for the electron/hole (e/h) in this case is of the form $\Sigma_{e/h}(t) = \Sigma_{o,e/h}(t)\,P_{e/h}(t)$. Since there is no inter-band scattering correction in the power series correction equation for either electrons or holes, the sets of equations decouple, and the electron correction equation contains only the self-correction term. Expanding the power series $P_e$ on both sides and comparing terms of the same order in $g^2$ across the equality generates the higher order corrections, and summing all of these corrections gives the exact result for the core-hole problem (a compact closed form is quoted after this section). An equivalent derivation for the hole cumulant can be performed by following the steps outlined above.
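The resummed core-hole result referred to above can be stated compactly. The closed form below is the standard independent-boson-model cumulant, quoted as a known reference result rather than taken from the paper's own equations; sign conventions may differ from the text's.

```latex
% Standard independent-boson (core-hole) resummation: the summed power
% series corrections exponentiate, G(t) = G_o(t) e^{C(t)}, with
\begin{equation}
  C(t) \;=\; \frac{g^2}{\omega_o^2}\left(e^{-i\omega_o t} + i\,\omega_o t - 1\right),
\end{equation}
% so expanding e^{C(t)} yields a Poisson series of satellites spaced
% \omega_o apart from the shifted quasiparticle energy
% \varepsilon_o - g^2/\omega_o.
```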
Derivation of the Retarded-time Cumulant from the Power Series

In the cited papers, the authors derive cumulant results for H₀ + H₋ rather than for H, because the effect of H₊ is like that of the core-hole problem in that it causes no inter-band transitions. Here we use the same model for a proper comparison with the literature [Zhou et al., 2018]. For bosons of frequency ω₀ and two bands with bare energies ε₊ and ε₋, if the above assumptions about the explicit band independence of the corrections hold true, we can compute the correction series exactly, starting from the bare-band retarded self-energies of [Zhou et al., 2018]. In the literature [Zhou et al., 2018], the authors choose to write the total self-energy without the power-series correction. We choose to correct the total self-energy with the power-series correction inside, as shown in our paper. The total self-energy Σ(t) for such a system is then built from each level's self-energy Σ(m, t). Both terms, originating from the two boson species, look identical because of the symmetry of the problem (i.e., ω₀ and g being the same for both species). We could easily change ω₀ to ω_± between the two boson types and repeat this analysis. If there are two coupling constants g_± for the two boson species (±), then the self-energy can be written in terms of a third dummy coupling constant g; we then include g_m in the bare self-energy Σ₀(m, t) and expand the power series in terms of g² instead of g_m². In the end, we set this g to 1. As mentioned in our work, for the retarded cumulant derivation we assume that, since the orbital energy gap Δ is much smaller than the boson frequency ω₀, the power-series corrections are explicitly orbital-independent. This greatly simplifies our power-series equation because we can use the temporal contraction relation between the power-series pieces without caring about the orbital index. Coming back to the problem at hand with the same ω₀ and g, we can then write down the recursion relation for the correction power series of the n-th band using the contraction relation. If we solve this equation for the correction of the first orbital ε₊ with these band self-energies, we obtain the retarded cumulant expressions. Here, we see two distinct terms in the cumulant correction C₊. The first term generates satellites at intervals of ω₀ from the ε₊ orbital; this satellite is generated by the electron interacting with a (−) boson and jumping back down to the ε₊ orbital from the ε₋ orbital. The second term generates satellites at intervals of ω₀ + Δ due to the electron interacting with a (−) boson and jumping up from ε₊ to ε₋. So these satellites appear from the final orbital rather than the initial orbital. In reality, however, the satellites due to the (−) plasmon from one orbital should emerge at intervals of ω₀, and not ω₀ + Δ, from the final orbital; the retarded cumulant therefore gets only the very first ω₀ satellite correct. Fortunately, in the limit Δ << ω₀, these ω₀ satellites are so far from the quasiparticle that they do not modify the quasiparticle spectrum appreciably, and hence the explicit orbital-independence assumption becomes valid. As a sanity check, we can compute the power series numerically. We see that the 20th-order power series converges to the spectral function given by the retarded cumulant expression in the literature, here referred to as "Cumulant corrected". Any further attempt to update the power series simply returns the same function, which means that we have converged to the exact solution.
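The sanity check above can be reproduced with a small time-to-frequency pipeline. The sketch below is ours, not the paper's code: it assumes a single-mode exponential cumulant purely for illustration, and it shows where the finite t_max and the |η| damping discussed earlier enter:

```python
import numpy as np

# Build G(t) = -i exp(-i eps t) exp(C(t)) on t in [0, t_max], damp with
# exp(-|eta| t), and Fourier transform to A(omega) = -Im G(omega) / pi.
# The single-mode cumulant C(t) = a (exp(-i w0 t) + i w0 t - 1) with
# a = (g/w0)^2 is an assumed illustrative form, not the paper's result.
eps, w0, g = 0.0, 1.0, 0.5
a = (g / w0) ** 2
eta, t_max, nt = 0.05, 400.0, 2**16

t = np.linspace(0.0, t_max, nt)
C = a * (np.exp(-1j * w0 * t) + 1j * w0 * t - 1.0)
G_t = -1j * np.exp(-1j * eps * t) * np.exp(C) * np.exp(-abs(eta) * t)

dt = t[1] - t[0]
G_w = dt * np.fft.fft(G_t)                      # approximates int dt e^{-i w t} G(t)
omega = -2 * np.pi * np.fft.fftfreq(nt, d=dt)   # flip sign to match e^{+i w t}
A = -G_w.imag / np.pi

order = np.argsort(omega)
print("weight:", np.trapz(A[order], omega[order]))  # should be close to 1
```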
Details of Exact Diagonalization

In this section, we briefly outline the construction of the two-orbital Holstein Hamiltonian and the process of exact diagonalization. For a system with a single electron, up to N (+) bosons and up to N (−) bosons, there are three different components in the wave function: one for the electron and one for each of the two boson species. For a single electron, the electron wave function has three distinct entries, each of which can be either 0 or 1 because of the Pauli exclusion principle:

|ψ_e⟩ = |n_v, n₊, n₋⟩,

where n_v is the vacuum designator, n₊ the (+)-orbital designator, and n₋ the (−)-orbital designator. Here, if there is no electron in the system, n_v = 1, denoting the electron vacuum; the presence of an electron in the system implies n_v = 0. If the electron is in the + orbital, n₊ = 1 and otherwise n₊ = 0. Similarly, if the electron is in the − orbital, n₋ = 1 and otherwise n₋ = 0. For bosons, there is no restriction on the number of bosons that can coexist at a time. But for the sake of exact diagonalization, we need to enforce a cutoff such that the maximum possible boson number is N, in order to truncate the Hamiltonian; the idea is that as N → ∞, the eigenvalues of this finite Hamiltonian approach the exact eigenvalues. The m-th wave function, denoting that there are m bosons of a given (±) species in the system, is

|Φ_±⟩ = |n₀, n₁, n₂, n₃, ..., n_m, ..., n_{N−1}, n_N⟩,

where n₀ = 1 indicates the boson vacuum. Since our boson wave function is based on the boson number rather than on states, only one of the n_i can be non-zero at any given time. For instance, if there are two (+) bosons, then n₂ = 1 and all other n_{i≠2} = 0. For the entire single-electron, two-plasmon-bath system, a total wave function is then the tensor product

|Ψ⟩ = |ψ_e⟩ ⊗ |Φ₊(b)⟩ ⊗ |Φ₋(c)⟩,

where b and c count the (+) and (−) bosons and 0 ≤ b, c ≤ N by construction. In this system, there are 3(N + 1)² basis vectors. Because the Hamiltonian matrix dimension scales as (N + 1)², the computation becomes exceedingly expensive with increasing boson number. Once we construct this Hamiltonian, we can find its eigenvalues and eigenvectors. The eigenvalue–eigenvector pairs are represented as {ε_i, |i⟩}, and there are 3(N + 1)² of them. The choice of boson number depends on the energy scale we are looking at. With increasing N, we gain the ability to resolve events closer in energy at the expense of computation time. At large plasma frequency, events happen far apart from each other, and hence we only need a few bosons to resolve the system properly. At small plasma frequency, however, since the plasmonic shake-offs are very close to each other, we need a large number of bosons to resolve such events properly. The spectral representation built from these eigenpairs, G(m, ω) = Σ_i |⟨i|c_m†|vac⟩|² / (ω − ε_i + i|η|), is the Green's function from exact diagonalization that we plot in our work.
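A compact sketch of this construction is given below. The Hamiltonian assembled here is our reading of the transformed two-orbital model (the couplings, signs, and basis ordering are assumptions, not a verbatim copy of the paper's equations), with both boson spaces truncated at N and the spectral function built from the eigenpairs as above:

```python
import numpy as np

# Exact diagonalization sketch for the assumed transformed Hamiltonian
#   H = eps_+ n_+ + eps_- n_- + w0 (N_+ + N_-)
#       + g (n_+ + n_-)(b_+ + b_+^dag) + g (c_+^dag c_- + h.c.)(b_- + b_-^dag).
# Basis: (orbital) x (n_+ bosons) x (n_- bosons), single-electron sector.
def spectral_ed(eps_plus, eps_minus, w0, g, N, eta=0.05,
                omega=np.linspace(-4.0, 8.0, 3000)):
    nb = N + 1
    ib = np.eye(nb)
    b = np.diag(np.sqrt(np.arange(1, nb)), k=1)   # truncated annihilation op
    x = b + b.T                                   # b + b^dagger
    num_b = b.T @ b                               # boson number operator

    # electron operators in the 2-dim single-electron orbital space (+, -)
    h_el = np.diag([eps_plus, eps_minus])
    n_tot = np.eye(2)                             # n_+ + n_- = 1 in this sector
    hop = np.array([[0.0, 1.0], [1.0, 0.0]])      # c_+^dag c_- + c_-^dag c_+

    kron3 = lambda a, b_, c: np.kron(np.kron(a, b_), c)
    H = (kron3(h_el, ib, ib)
         + w0 * kron3(np.eye(2), num_b, ib)
         + w0 * kron3(np.eye(2), ib, num_b)
         + g * kron3(n_tot, x, ib)                # (+) bosons: no orbital flip
         + g * kron3(hop, ib, x))                 # (-) bosons: orbital flip

    evals, evecs = np.linalg.eigh(H)
    # c_+^dag |vac> = |+, 0, 0 bosons>, which is basis index 0 in this ordering
    weights = np.abs(evecs[0, :]) ** 2
    A = sum(w * (abs(eta) / np.pi) / ((omega - e) ** 2 + eta**2)
            for w, e in zip(weights, evals))
    return omega, A

omega, A = spectral_ed(eps_plus=0.0, eps_minus=2.0, w0=1.0, g=0.4, N=8)
print("weight:", np.trapz(A, omega))  # total spectral weight ~1
```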
Synthesis and postsynthetic anion exchange of CsPbX3 (X = Cl, Br, I) quantum dots

CsPbX3 (X = Cl, Br, I) quantum dots were synthesized by the hot-injection method at various temperatures (50–200°C), resulting in changes in quantum dot size and a corresponding shift of the photoluminescence. During anion exchange, the continuous formation of intermediate solid solutions was observed, allowing fine adjustment of the photoluminescence peak position. The dynamics of the photoluminescence spectra under continuous laser irradiation during anion exchange were studied. For anion exchange to occur, the halide must be in ionic, not molecular, form.

Introduction

Nanomaterials never cease to amaze. Even when we think we have seen all their terrific properties and thought about all possible applications, something spectacular appears. Quantum yields of up to 90%, the possibility of tuning the photoluminescence over the entire visible spectrum, the absence of the blinking effect, and narrow emission linewidths: all of these interesting properties are present in a novel class of optoelectronic materials, CsPbX3 (X = Cl, Br, I) quantum dots (QDs) [1]. Fully inorganic perovskites attracted attention after organic/inorganic hybrid perovskites, such as CH3NH3PbI3, showed staggering power conversion efficiencies above 20%; synthesizing all-inorganic perovskite QDs was the logical next step because of the higher stability they show compared to hybrid ones [2].

Synthesis of CsPbX3 QDs

All CsPbX3 QDs crystallize in the cubic phase of the perovskite lattice (Figure 1), which is the high-temperature state for the bulk compounds. QDs were synthesized via the hot-injection method using Cs-oleate and PbX2 as precursors and dodecane as the reaction medium. Since the growth of the QDs happens exceedingly fast (within 1–3 s), the average QD size is controlled mainly by the reaction temperature. Photoluminescence spectra of CsPbBr3 and CsPbI3 QDs synthesized at various temperatures (50–200°C) are shown in Figure 2.

Postsynthetic anion exchange

A particularly appealing feature of CsPbX3 (X = Cl, Br, I) QDs is that their photoluminescence spectra can be tuned significantly by postsynthetic anion exchange [3]. The photoluminescence peak position depends on which halogen (Cl, Br or I) is in the QD structure, and the ability of the halogen atoms to readily substitute for one another can be used to obtain intermediate energy gap values. Due to the significant difference between the Cl− and I− ionic radii, it is not possible to obtain CsPb(Cl/I)3, yet the CsPb(Cl/Br)3 and CsPb(Br/I)3 systems are feasible. The best option in this case is to synthesize CsPbBr3 QDs and subsequently add lead chloride or lead iodide solutions to carry out the anion exchange. The corresponding photoluminescence spectra are shown in Figure 3. As can be seen, fast anion exchange enables fine tuning of the photoluminescence over the entire visible spectrum.
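For readers who want to relate the reported peak positions to energy gap values, a small helper is sketched below. The conversion E = hc/λ ≈ 1239.84/λ[nm] is standard, while the linear (Vegard-like) mixing rule and the end-member gap values are only rough illustrative assumptions, not measurements from this work:

```python
# Convert a PL peak position (nm) to photon energy (eV): E = hc / lambda.
def peak_nm_to_ev(wavelength_nm: float) -> float:
    return 1239.84 / wavelength_nm

# Rough linear interpolation of the gap of CsPbBr(3x)I(3-3x); the
# end-member energies here are literature-scale placeholders.
def mixed_halide_gap(x_br: float, e_br3: float = 2.36, e_i3: float = 1.78) -> float:
    return x_br * e_br3 + (1.0 - x_br) * e_i3

print(peak_nm_to_ev(520))     # green CsPbBr3-like emission, ~2.38 eV
print(mixed_halide_gap(0.5))  # rough mid-composition gap estimate
```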
Figure 4 shows the dynamics of the photoluminescence spectra under continuous laser irradiation during the anion exchange from CsPbBr3 to CsPbI3. The gradual shift of the photoluminescence spectra and the broad peaks between the initial and final states are connected with the continuous formation of CsPbBrxI3−x solid solutions. Note that the photoluminescence intensity of CsPbI3 stays comparable to that of CsPbBr3. However, the change in the composition of the solid solution shifts the absorption spectra, which changes the absorption coefficient at the excitation wavelength (405 nm). This is seen especially clearly in the case of CsPb(Cl/Br)3 QDs: increasing the chlorine concentration in these solutions decreases the absorption coefficient at the excitation wavelength and correspondingly reduces the luminescence intensity (Figure 5). For CsPb(Br/I)3 QDs, the same effect instead increases the absorption coefficient, so the excess energy of the charge carriers is converted into thermal energy, accelerating the approach to diffusion equilibrium. As a result, the exchange process for CsPb(Cl/Br)3 QDs is about three times slower. Note that for anion exchange to occur, the halide must be in ionic, not molecular, form. When molecular iodine is added to a colloidal solution of CsPbBr3 QDs, etching of the surface ligands occurs instead of anion exchange, leading to a decrease in luminescence intensity (Figure 6). Surface etching is also confirmed by the fact that larger CsPbBr3 QDs precipitate as a result of the interaction with molecular iodine. Thus, CsPbBr3 QDs can be used as sensors in iodine electrolytes: a shift of the peak position indicates the presence of dissociated iodine, while a change in intensity reflects the concentration of molecular iodine.

Figure 6. Dependence of the photoluminescence intensity maximum on iodine concentration C (the inset shows the evolution of the photoluminescence spectra with increasing iodine concentration).

Conclusion

As shown in this work, postsynthetic anion exchange allows a single initial halide source solution to be used to obtain different solid-solution systems, which means the desired luminescence wavelength can be achieved. The possibility of anion exchange using molecular iodine was also studied; anion exchange happens only if the halogen is in ionic form, and adding molecular iodine instead decreases the luminescence intensity. Taking into account all the captivating properties of these materials, it is easy to see possible applications: for LEDs and lasers, the tunable photoluminescence spectra, high quantum yield, and narrow emission linewidths can be exploited; for solar cells, the significant light absorption by these structures.
Downregulation of MAL2 inhibits breast cancer progression through regulating the β-catenin/c-Myc axis

Purpose Myelin and lymphocyte protein 2 (MAL2) is mainly involved in endocytosis under physiological conditions and mediates the transport of materials across the membranes of cells and organelles. MAL2 has been reported to be significantly upregulated in diverse cancers. This study aimed to investigate the role of MAL2 in breast cancer (BC). Methods Bioinformatics analysis and immunohistochemical assays were applied to examine the correlation between MAL2 expression in breast cancer tissues and the prognosis of breast cancer patients. Functional experiments were carried out to investigate the role of MAL2 in vitro and in vivo. The molecular mechanisms involved in MAL2-induced β-catenin and c-Myc expression and in β-catenin/c-Myc-mediated enhancement of BC progression were confirmed by western blot, β-catenin inhibitor and agonist treatments, Co-IP and immunofluorescence colocalization assays. Results Results from The Cancer Genome Atlas (TCGA) and clinical samples confirmed a significant upregulation of MAL2 in BC tissues compared with adjacent non-tumor tissues. High expression of MAL2 was associated with worse prognosis. Functional experiments demonstrated that MAL2 knockdown reduced migration and invasion in association with EMT, increased the apoptosis of BC cells in vitro, and reduced metastatic capacity in vivo. Mechanistically, MAL2 interacts with β-catenin in BC cells. MAL2 silencing reduced the expression of β-catenin and c-Myc, while the β-catenin agonist SKL2001 partially rescued the downregulation of c-Myc and the inhibition of migration and invasion caused by MAL2 knockdown in BC cells. Conclusion These observations provide evidence that MAL2 acts as a potential tumor promoter by regulating EMT and the β-catenin/c-Myc axis, suggesting potential implications for anti-metastatic therapy in BC. Supplementary Information The online version contains supplementary material available at 10.1186/s12935-023-02993-9.

Introduction

Breast cancer (BC) is the most common malignancy in women and accounts for 31% of female cancers [1]. Although molecular targeted therapy and immunotherapy have brought revolutionary changes to BC treatment [2], overall progress is slow and the clinical translation of this knowledge faces large challenges [3]. In addition, more than 20% of BC patients still develop metastatic disease with a poor prognosis [4]. Therefore, the identification of novel molecular regulators and mechanisms of BC progression will help to find novel targets for BC treatment. T-cell differentiation protein 2 (also known as myelin and lymphocyte protein 2, MAL2) is a four-pass transmembrane protein composed of 176 amino acid residues. Recent studies have reported that MAL2 acts as an influential regulator in cancers [5], mainly participating in endocytosis under physiological conditions and mediating the transport of intercellular substances [6]. Previous studies have demonstrated increased expression of MAL2 in ovarian cancer [5], prostate adenocarcinoma [7], papillary thyroid cancer [8] and pancreatic cancer [9]. MAL2 has also been shown to be associated with the prognosis of patients with pancreatic cancer and colorectal cancer, affecting overall survival [10].
Interestingly, a significant negative association was found between MAL2 expression and the infiltration levels of eosinophils and plasmacytoid dendritic cells [11], and depletion of MAL2 in breast tumor cells significantly enhanced tumor-infiltrating CD8+ T cell cytotoxicity and suppressed breast tumor growth, suggesting that MAL2 is a potential immunotherapy target for the treatment of BC [12]. However, the role of MAL2 in BC progression and metastasis remains poorly understood. Epithelial–mesenchymal transition (EMT) is a cell trans-differentiation process in which epithelial cells acquire mesenchymal characteristics [13]. By activating this EMT program, cancer cells can invade adjacent tissues and migrate to distant organs. EMT progression is regulated by specific components of the major EMT regulators, such as E-cadherin, N-cadherin, vimentin, Snail, SOX2 and OCT4 [14,15]. Many studies have found that EMT is associated with tumorigenesis, invasion, metastasis and resistance to treatment, especially in BC [16,17]. The Wnt/β-catenin signaling pathway is involved in many cellular activities, regulating apoptosis, differentiation, senescence, invasion, migration, and EMT [18,19]. Upon stimulation by extracellular Wnt at the membrane, the APC/CK1/GSK-3β/Axin/β-catenin degradation complex is inactivated and the phosphorylation of β-catenin by GSK-3β is inhibited. β-catenin then translocates into the nucleus, where it interacts with TCF/LEF and activates downstream genes [20]. c-Myc is a recognized target gene of the β-catenin/TCF transcription factor complex and is a main oncogenic driver of tumor growth and metastasis [20]. Moreover, c-Myc contributes to angiogenesis, invasion, and migration [19]. In this study, we report that MAL2 is significantly upregulated in BC tissues compared with paired noncancerous tissues and that MAL2 overexpression predicts poor prognosis in BC patients. In addition, we observed that knockdown of MAL2 decreased migration and invasion ability and increased apoptosis in BC cells. Furthermore, MAL2 downregulation reversed EMT, reduced downstream β-catenin and c-Myc expression in vitro, and inhibited tumor metastatic capacity in vivo. Taken together, our study reveals that MAL2 functions as a novel regulator of BC progression.

Bioinformatic analysis

The transcriptional levels of MAL2 in breast invasive carcinoma (BRCA) and normal breast tissue were obtained from the TCGA pan-cancer view using the GEPIA database (http://gepia.cancer-pku.cn) and the UALCAN database (http://ualcan.path.uab.edu/). Kaplan–Meier survival analysis was used to evaluate the prognostic value of MAL2. According to the median MAL2 expression, patients were divided into high-expression and low-expression groups.

Clinical samples and immunohistochemistry staining

Tissue samples comprising 20 BC tissues, 15 fibroadenoma tissues and 13 paracancerous tissues were collected from The First Hospital of Guizhou Medical University with informed consent. None of the recruited BC patients had received chemotherapy, radiotherapy or biological therapy. All of the patients and their families signed informed consent forms. The whole process followed the rules of the Ethics Committee of Guizhou Medical University. Tissue sections were baked at 60°C for 1 h, dewaxed in xylene, rehydrated through a graded series, and the endogenous peroxidase activity was blocked with 3% hydrogen peroxide.
After antigen retrieval in citrate buffer using a microwave oven, the sections were incubated with the primary MAL2 antibody (purchased from BIOSS, BS-7175r, 1:200 dilution) at 4°C overnight. Then, the tissue sections were incubated with the matching secondary antibody (purchased from Proteintech, SA00001-2, 1:2000). Finally, the sections were visualized after staining with DAB and counterstained with haematoxylin. The IHC staining score was assessed by pathologists who were blinded to the patients' clinicopathological information. The scoring criteria according to the intensity of staining are as follows: negative (unstained), 0 points; weakly positive (yellow), 1 point; moderately positive (brown), 2 points; and strongly positive (dark brown), 3 points. The percentage of positively stained tumor cells was scored as follows: 1 (<10%), 2 (10–50%), 3 (50–75%), and 4 (>75%). The IHC staining index for each section was the product of the staining intensity score and the positive cell proportion score.

Wound healing assay

After incubation for 24 h, the cells were grown to 90–100% confluence before scratching. The cell monolayers were scratched with a 200 µL pipette tip. After washing with PBS, the cells were cultured in serum-free medium and allowed to migrate for 24 h. Images were acquired at 0, 12 and 24 h and then analyzed with ImageJ software.

Transwell migration and invasion assay

24-well transwell chambers with 6.5-µm pore size polycarbonate membranes (Corning) were used to test cell invasive and migratory ability. Briefly, 7.5 × 10⁴ infected cells in serum-free DMEM/L15 medium were transferred into the upper chamber of an insert with or without Matrigel coating, and DMEM/L15 medium supplemented with 10% FBS was added to the lower chamber. After incubation for 24 h, the cells remaining on the upper membrane were removed with cotton wool, and the cells that had migrated or invaded to the other side of the membrane were fixed with methanol and stained with 0.1% crystal violet (Solarbio). Three to five random fields were imaged and counted under an inverted microscope.

Flow cytometry

Cell culture medium and cells were collected and washed with pre-cooled PBS, and the supernatant was discarded. The cells were suspended in 500 µL of 1× binding buffer. 10 µL of 7-AAD and 5 µL of Annexin V-APC were added to each cell sample. After mixing and staining for 10 min at room temperature, the cells were analyzed with a FACScan flow cytometer (BD Biosciences) equipped with CellQuest software (BD Biosciences). Cells were classified into viable, dead, early apoptotic and late apoptotic cells, and the relative proportions of early and late apoptotic cells were compared with the control in each experiment.

Acridine orange/ethidium bromide staining

The cells were resuspended in PBS after collection, and the AO/EB solution was prepared (reagent A : reagent B : reagent C = 1:1:8). We added 1 µL of AO/EB working solution per 15 µL of cell suspension, mixed, incubated for 15 min at room temperature, and viewed the cells under a fluorescence microscope. Dead cells fluoresce orange, while living cells fluoresce green.

Western blot

Cells were harvested and lysed with RIPA buffer, and the protein concentration was determined by BCA protein assay (Solarbio, PC0020). Twenty micrograms of total protein were separated by 10% SDS-PAGE and transferred to a PVDF membrane, which was then incubated overnight with the primary antibodies, including anti-E-cadherin (BA0475).
The membranes were subsequently incubated with the appropriate HRP-conjugated secondary antibodies (purchased from Proteintech, SA00001-2, 1:10000) for 1.5 h, and signals were visualized using an ECL detection system.

Co-immunoprecipitation

According to the manufacturer's instructions for the DIA IP/CoIP Kit (KM0134), cells were collected and lysed for 20 min. Cell lysates were centrifuged at 12,000 rpm for 10 min at 4°C. The beads were washed 3 times with PBS buffer; the MAL2 antibody (BS-7175r, 1:50), β-catenin antibody (ZEN, 1:50) or IgG was rotated with the beads for 20 min at room temperature, after which the bead–antibody mixture was washed with PBST five times. The supernatant containing the proteins was resuspended with the bead–antibody mixture and incubated with rotation at 4°C for more than 8 h. Then, the bead–antibody–antigen mixture was washed with PBST five times, resuspended in 1× loading buffer, and heated at 100°C for 5 min. The supernatant was used for western blotting.

Immunofluorescence analysis

Cells were fixed with 4% paraformaldehyde for 20 min. The fixed cells were permeabilized for 10 min with 0.2% Tween-20, washed with PBS and blocked with PBS containing 5% BSA for 30 min. Immunostaining was done by incubating the samples successively with antibodies specifically recognizing β-catenin and MAL2 and then with fluorescein-conjugated secondary antibodies. The nuclei were counterstained with DAPI. The fluorescence was examined under a fluorescence microscope (Nikon ci-e-ds-r11).

Nude mouse lung metastasis assay

Twelve nude mice (BALB/c, 4-week-old, female) were purchased from Beijing Huafukang Biosciences (Experimental Animal Production License No: SCXK (Beijing) 2019-0008). The animal experiments met the requirements of the Animal Care and Use Committee of China Medical University. The mice were divided into two groups of 6 mice each, and 8 × 10⁵ MDA-MB-231 cells with stable knockdown of MAL2, or control cells, were injected via the tail vein. After six weeks, the mice were sacrificed by cervical dislocation under anesthesia, the lung tissue was harvested, the number of nodules was counted, and the lesions were examined by HE staining. The animal experiment was approved by the Animal Experiment Ethics Committee of Guizhou Medical University.

Statistical analysis

Data are presented as the mean ± standard deviation (SD). SPSS 22.0 software was used to perform the statistical analysis. Student's t-test was used to compare values between two groups, and analysis of variance (ANOVA) was used for comparisons among multiple groups. The survival of BC patients was analyzed using the Kaplan–Meier method and compared with the log-rank test. P < 0.05 was considered statistically significant (*P < 0.05, **P < 0.01, and ***P < 0.001).

MAL2 is highly expressed in BC tissues and cells

To assess the role of MAL2 in BC, we first analyzed MAL2 expression levels in 1085 BC tissues and 291 normal tissues using the GEPIA database and in 1097 BC tissues and 114 normal tissues using the UALCAN database. We found that MAL2 was significantly upregulated in BC tissues compared with non-tumor tissues (Fig. 1). We then examined MAL2 expression in clinical tissues and found that MAL2 expression in BC tissues was significantly increased in comparison with that in noncancerous tissues (Fig. 1E; P < 0.01). These results demonstrate that MAL2 is significantly upregulated in BC tissues, suggesting that MAL2 may function as a tumor-promoting factor in human BC.
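For clarity, the staining-index arithmetic used in these IHC comparisons (described in the Methods) can be written out directly. The sketch below is illustrative only; the example scores and the median-based grouping are hypothetical:

```python
# IHC staining index = intensity score (0-3) x positive-cell proportion score (1-4),
# following the scoring scheme described in the Methods.
def ihc_staining_index(intensity: int, percent_positive: float) -> int:
    assert intensity in (0, 1, 2, 3)
    if percent_positive < 10:
        proportion = 1
    elif percent_positive <= 50:
        proportion = 2
    elif percent_positive <= 75:
        proportion = 3
    else:
        proportion = 4
    return intensity * proportion

# Hypothetical example: divide samples by the median index, as the paper
# divides patients by median MAL2 expression.
scores = [ihc_staining_index(2, 60), ihc_staining_index(1, 30), ihc_staining_index(3, 80)]
median = sorted(scores)[len(scores) // 2]
groups = ["high" if s >= median else "low" for s in scores]
print(scores, groups)
```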
Western blot showed that the expression of MAL2 was significantly higher in a panel of BC cell lines than in the noncancerous breast cell line MCF-10A (Fig. 2A). Based on their high MAL2 expression, the human MDA-MB-231 and MCF-7 cell lines were selected for further MAL2 studies in BC. To explore the possible role of MAL2 in BC in vitro, MAL2 was knocked down in MDA-MB-231 and MCF-7 cells by transfection with specific MAL2 siRNAs, and western blot was performed to detect MAL2 expression 48 h after transfection. Greater knockdown efficiency was observed with MAL2-siRNA 3# than with MAL2-siRNA 1# or MAL2-siRNA 2# (Fig. 2B); MAL2-siRNA 3# was therefore selected to construct the shRNA lentiviral interference vector and stable low-expression cell lines. Western blot results showed that MAL2 knockdown by shRNA significantly reduced MAL2 expression compared with the control (Fig. 2C).

MAL2 downregulation inhibits migration, invasion and EMT

To unravel the biological function of MAL2 in BC cells, cell migration and invasion were examined using wound healing and transwell assays after knockdown of MAL2. The wound healing and transwell migration assays revealed that knockdown of MAL2 markedly reduced migration ability compared with the control group (Fig. 3A and B). A transwell invasion assay was used to assess the invasion abilities of MDA-MB-231 and MCF-7 cells; silencing MAL2 reduced the number of BC cells that invaded through the membrane (Fig. 3C). In addition, the effects of MAL2 knockdown on EMT were examined by detecting the expression of the EMT markers E-cadherin, N-cadherin and Vimentin. Western blot results showed that the expression of N-cadherin and Vimentin decreased, while the expression of E-cadherin increased, in MDA-MB-231 and MCF-7 cells following MAL2 knockdown (Fig. 3D). Taken together, these results suggest that knockdown of MAL2 inhibits the migration, invasion and EMT of BC cells.

MAL2 downregulation induces apoptosis of BC cells

To further investigate the effect of MAL2 knockdown on the apoptosis of MDA-MB-231 and MCF-7 cells, flow cytometry analysis and AO/EB double staining were performed. Flow cytometry demonstrated that the total apoptotic rate of MDA-MB-231 and MCF-7 cells with sh-MAL2 was higher than that of cells in the sh-NC group (Fig. 4A). AO/EB double staining also showed that the number of apoptotic cells in the MAL2 silencing group was higher than in the sh-NC group (Fig. 4B). At the molecular level, we detected apoptosis-related proteins; after MAL2 knockdown, the protein expression of Bax, cleaved caspase-3, and cleaved caspase-8 was significantly increased, while the expression of Bcl-2 decreased (Fig. 4C, D and E). These results indicate that MAL2 knockdown can induce apoptosis of BC cells.

MAL2 downregulation inhibits BC lung metastasis

Since MAL2 downregulation was found to inhibit the migration and invasion of BC cells in vitro, we next explored the possible role of MAL2 in the lung metastasis of BC cells. BALB/c nude mice were intravenously (i.v.) injected with sh-NC- or sh-MAL2-transfected MDA-MB-231 cells. The two groups of mice were humanely euthanized 6 weeks after inoculation, and their lungs were surgically excised and examined for metastatic lung lesions (Fig. 5A).
Our results showed that the lungs of mice injected with sh-MAL2-transfected MDA-MB-231 cells developed fewer nodules than those of the sh-NC group (Fig. 5B). H&E staining of lung tissue sections showed that in the sh-NC group much of the normal alveolar tissue had disappeared and been replaced by lesions, which accounted for a larger share of the tissue, whereas in the sh-MAL2 group most of the alveolar tissue remained normal and the diseased areas were smaller (Fig. 5C). These findings demonstrate that MAL2 downregulation inhibits BC lung metastasis in vivo.

MAL2 regulates β-catenin/c-Myc

Bioinformatics analysis in a previous study suggested that high expression of MAL2 in BC is associated with enrichment of the MYC targets V1 gene set. Aberrant expression of c-Myc is generally considered to be closely related to cell migration and invasion, and β-catenin is an important activator of c-Myc expression in cancer cells. We therefore investigated the relationship among MAL2, c-Myc and β-catenin in BC cells. The results showed that c-Myc and β-catenin expression was downregulated when MAL2 was silenced in MDA-MB-231 and MCF-7 cells (Fig. 6A). Consistent with the intracellular localization of MAL2, β-catenin is also expressed predominantly at the cell membrane and in the cytoplasm and is involved in cell–cell adhesion. Thus, we first examined the interaction between MAL2 and β-catenin in BC cells. Immunoprecipitation of whole-cell lysates from MDA-MB-231 cells showed that MAL2 interacted with β-catenin in vitro (Fig. 6B). We also performed double immunofluorescence staining of MAL2 and β-catenin in MDA-MB-231 cells; the fluorescence signals of MAL2 and β-catenin colocalized mainly in the cytoplasm and at the membrane (Fig. 6C), further supporting an interaction between MAL2 and β-catenin. To explore whether β-catenin and c-Myc are correlated, we treated MAL2-knockdown cells with the pharmacological β-catenin inhibitor XAV-939 and subsequently measured β-catenin and c-Myc expression. The expression of c-Myc was downregulated along with the downregulation of β-catenin (Fig. 6D). To explore whether MAL2 regulates the expression of c-Myc through β-catenin, MAL2-knockdown cells were treated with the β-catenin agonist SKL2001 (SKL), and β-catenin and c-Myc expression were subsequently measured. As shown in Fig. 6E, the β-catenin agonist rescued the downregulation of β-catenin and c-Myc induced by MAL2 knockdown. These results suggest that the regulation of c-Myc expression by MAL2 is dependent on β-catenin.

Stabilizing β-catenin rescues the inhibitory effect of MAL2 downregulation on migration, invasion and EMT

SKL2001 protects β-catenin from proteasomal degradation by inhibiting its phosphorylation at residues Ser33/37/Thr41 and Ser45. We therefore performed rescue experiments using the β-catenin agonist SKL2001 to investigate whether MAL2 regulates the invasion and metastasis of breast cancer cells via the β-catenin/c-Myc axis. The wound healing assay showed that SKL2001 treatment partially rescued the diminished wound-healing ability caused by MAL2 silencing (Fig. 7A). In addition, BC cells treated with SKL2001 for 24 h were harvested for transwell migration and invasion assays.
As shown in Fig. 7B, BC cells exhibited increased migration and invasion ability in the MAL2 silencing + SKL2001 group compared with the MAL2 silencing group alone (P < 0.05). Furthermore, the EMT-associated proteins were detected in MDA-MB-231 and MCF-7 cells after SKL2001 treatment for 24 h. The results showed that the downregulation of N-cadherin and Vimentin induced by MAL2 silencing was partially reversed (Fig. 7C), indicating that stabilizing β-catenin can partially rescue the inhibitory effect of MAL2 downregulation on the migration, invasion and EMT progression of BC cells.

Discussion

MAL2 has been identified as a mediator of various pathological conditions, including cancers. Studies have shown that high expression of MAL2 facilitates the proliferation of lung cancer cells in vitro and in vivo [21]. Furthermore, knockout of MAL2 inhibits the proliferation, invasion and migration and promotes the apoptosis of ovarian cancer (OC) cells in vivo and in vitro [5]. Recent studies also suggest that MAL2 may be a promising target in cancers such as colorectal cancer [22] and hepatocellular carcinoma [23]. The functions of MAL2 in different cancers are therefore inconsistent, indicating that the role of MAL2 may be organ-dependent. Although Bhandari et al. reported that MAL2 was able to promote BC proliferation, migration and invasion [24], the detailed biological functions of MAL2 in BC progression and the underlying mechanisms remain poorly understood.

Tumor metastasis usually involves EMT [25]. Activation of this EMT program confers on cancer cells the potential to suppress epithelial genes that promote cell adhesion (adherens junctions, tight junctions, and desmosomes) and to invade adjacent tissues [26]. The classical epithelial marker E-cadherin (CDH1) is a key component of adhesion and is the most significant inhibitory target in the process of EMT [27]. Cells undergoing EMT must activate mesenchymal genes, including N-cadherin and vimentin, to promote the morphological and behavioral changes required for migration [17]. In our study, upregulation of E-cadherin and downregulation of N-cadherin and Vimentin were found after MAL2 knockdown, suggesting that MAL2 may be involved in the EMT process. In addition, in a lung metastasis model of BC, we found that MAL2 knockdown reduced the number of metastatic lung nodules, suggesting that MAL2 may play an important role in BC metastasis. GSEA enrichment plots for BC in a previous study showed that the MYC targets V1 gene set was markedly enriched with high MAL2 expression [11]. c-Myc is a gene highly correlated with cancer that is involved in tumor initiation and progression. Moreover, c-Myc contributes to angiogenesis, invasion, and migration [19,28]. Recently, many studies have shown that the transactivation of c-Myc is regulated by upstream cytokine signaling, transcription factors and related binding proteins, among which β-catenin is an important factor in c-Myc activation [29]. Beta-catenin is a dual-function protein that both mediates cell-to-cell adhesion at adherens junctions and regulates the transcription of target genes. Beta-catenin forms a complex with E-cadherin, which in turn can function as an anchoring junction and act to stabilize cell adhesion [30,31]. GO enrichment analysis revealed that MAL2 is mainly involved in cadherin binding, which participates in cell–cell adhesion and epidermis development [5].
In addition, β-catenin is expressed mainly at the cell membrane and in the cytoplasm, where MAL2 is also located. Thus, we hypothesized that MAL2 might interact with β-catenin. The results of the Co-IP and double immunofluorescence assays confirmed this speculation. Since the correlation between β-catenin and c-Myc has been widely reported, we used a β-catenin inhibitor and agonist to verify the association between β-catenin and c-Myc. Western blot results showed that a more pronounced downregulation of c-Myc was observed after combining MAL2 silencing with the β-catenin inhibitor XAV-939. In contrast, the β-catenin agonist SKL2001 rescued the downregulation of β-catenin and c-Myc caused by MAL2 knockdown. SKL2001 can upregulate β-catenin-regulated transcription by disrupting the β-catenin–Axin interaction, thereby preventing β-catenin phosphorylation (Ser33/Ser37/Thr41/Ser45) and proteasomal degradation [32]. These results suggest that MAL2 induces c-Myc expression in a β-catenin-dependent manner. In addition, the wound healing, transwell migration and invasion, and western blot results showed that SKL2001 treatment after MAL2 knockdown could partially reverse the effects of MAL2 silencing on migration, invasion and EMT progression, suggesting that β-catenin is involved in the regulation of BC progression by MAL2. In summary, our results show that MAL2 regulates apoptosis, invasion and metastasis in BC via the β-catenin/c-Myc axis. However, the study of the interaction between MAL2 and β-catenin is still preliminary, and the underlying mechanism needs to be explored further. In conclusion, MAL2 plays a potential role in BC metastasis and serves as a tumor promoter in BC cells. MAL2 knockdown inhibited cancer cell migration and invasion and promoted apoptosis in vitro, and inhibited tumor metastasis in vivo, possibly by regulating EMT and the β-catenin/c-Myc pathway. Further progress in understanding the mechanism of MAL2 action in BC is needed. Taken together, our study indicates that MAL2 could be a unique future therapeutic target for controlling the progression and metastasis of BC.
Diversity and distribution of myxomycetes in coastal and mountain forests of Lubang Island, Occidental Mindoro, Philippines

A study of the distribution and ecology of myxomycetes (plasmodial slime molds or myxogastrids) was carried out in the coastal and mountain forests of the geographically isolated island of Lubang in Occidental Mindoro, Philippines. A total of 44 species were identified from moist chamber cultures. Arcyria cinerea, Diachea leucopodia, Diderma effusum, Lamproderma scintillans, and Physarum cinereum were the most abundant species recorded. Most species were commonly associated with only one of the substrates examined in the two forest types. The highest level of productivity (myxomycetes recorded as either plasmodia or fruiting bodies) and the highest value for taxonomic diversity were observed for samples of ground litter collected from the mountain forests. However, the highest yield of fruiting bodies was noted for samples of the same substrate collected from the coastal forests. Assemblages of myxomycetes on Lubang Island were found to be similar within a particular area or forest type. This study is the first to compare the diversity and distribution of myxomycetes between two island forest types in the Philippines.

Introduction

Although a major portion of the biodiversity of the planet is concentrated in tropical regions, the relative abundance and taxonomic richness of myxomycetes there have been only roughly estimated. Little is known about the status of myxomycete biodiversity, although these organisms might be expected to flourish in tropical countries because of their seemingly ideal location and the resultant climatic conditions relating to temperature and moisture, which are thought to be the major factors determining the distribution of myxomycetes in nature. In Southeast Asia, most of the published literature consists of species lists, and until recently collecting was limited to Myanmar and Thailand (Reynolds & Alexopoulos 1971), with only a few records from other countries in the region. Recently, Tran et al. (2006) surveyed the area around Chiang Mai in Northern Thailand and collected 62 species of myxomycetes representing 18 genera. From three lowland tropical forests in Vietnam, Tran et al. (2014) also recorded 43 species of myxomycetes representing 19 genera. In the Philippines, an annotated checklist of species known from the country was published by Reynolds (1981) more than 30 years ago. In this checklist, the records of myxomycetes reported were based on collections made by Reynolds and on earlier collections in Davao, Cotabato and Zamboanga by E. B. Copeland; in Benguet by A. D. E. Elmer; and in Bataan, Manila, Cavite, and Laguna by E. D. Merrill. These collections are currently housed in the British Museum in London. Uyenco (1973) collected 314 specimens from Quezon City, Laguna, Basilan, and Zamboanga during the period of 1961 to 1973 and identified 18 species belonging to 10 genera. Later, Dogma (1975) listed 46 species of myxomycetes from 20 genera and noted that Martin and Alexopoulos had already credited the Philippines with 22 species in their major monograph (The Myxomycetes) in 1969. More recent studies of myxomycetes in the Philippines include those of Macabago et al. (2010), Dagamac et al. (2012, 2014, 2015a, 2015b), Kuhn et al. (2013a, 2013b), Cheng et al. (2013), dela Cruz et al. (2014), and Alfaro et al. (2015).
National parks and protected areas serve as some of the few remaining habitats catering to the propagation of endangered species of plants and animals. However, only 7.8% of the total land area of the Philippines is listed in the protected area categories (Ong et al. 2002). The Philippine government has therefore established conservation programs for the few remaining endangered and rare species left in its rapidly disappearing rainforests. This consciousness has spread among people who realize the need to conserve natural resources, as in the case of the Verde Passage in the province of Batangas. The Verde Passage Marine Biodiversity Conservation Corridor (MBCC) is considered to have one of the very highest levels of marine biodiversity of any tropical water territory on earth. However, there are considerable dangers to this area due to habitat damage and inadequately planned coastal resource development. The Lubang group of islands, bounded on the west by the vast South China Sea and on the south by the Calavite Passage, is part of the Verde Passage and is strongly protected for its coastal and mountain ecosystems. The Department of Environment and Natural Resources (DENR) of the Philippines identified the island as one of the priority terrestrial areas for biodiversity conservation, with an extremely high critical priority level (Ong et al. 2002). Although isolated islands such as Lubang Island potentially represent living laboratories for studies of biogeography, relatively little is known about their biota: the flora and fauna, including the microbial flora. Thus, in this protected area, many new species continue to be discovered. Because of the unique habitats present on Lubang Island, the study reported herein investigated the distribution and ecology of myxomycetes in the coastal and mountain forests of Lubang Island, Occidental Mindoro. Macabago et al. (2012) reported a species list of myxomycetes from Lubang Island; the present paper is an ecological analysis of the data represented by the collections reported in that earlier paper.

Materials & Methods

The present study was carried out in two different types of forests. The first type is the coastal forest, with this term referring to the fact that the collecting areas are located near the periphery of the island but are not mangrove forests. The second type is the mountain forest, in which the collecting sites were situated on Mt. Gonting, located at the center of the island (Fig. 1). The methods used in collecting specimens of myxomycetes and in the analysis of the data follow those described in the literature (e.g., Stephenson & Stempen 1994). The Lubang group of islands, located at 13°47′ N, 120°12′ E and with a total land area of 6,918.78 ha, is composed of four islands near, but isolated from, the island of Mindoro (Fig. 1). Their topography is generally characterized by rugged terrain with narrow strips of coastal lowlands, a series of mountain ranges, valleys, and elongated plateaus, with rolling lands along the coastal region, as described by Macabago et al. (2012). As noted above, collecting sites in two distinct types of forest ecosystems (coastal forests and mountain forests) provided the substrates used to prepare moist chamber cultures.
Ten (10) samples each of dried, dead twigs (TW) and ground (mostly leaf) litter (GL) were collected from ten (10) sites along the periphery of Lubang Island (coastal forests) and ten (10) sites on two slopes of Mt. Gonting (mountain forests) during May 2009. The samples of ground litter and twigs were air-dried and then used to prepare moist chamber cultures following the protocol described by Stephenson & Stempen (1994). Moist chamber cultures (MC; in triplicate, i.e., three cultures per sample) were checked regularly for the presence of myxomycete plasmodia and/or fruiting bodies. Only cultures with fruiting bodies and/or plasmodia present were recorded as positive. Following incubation, individually collected fruiting bodies of myxomycetes with their respective substrates were placed in small pasteboard boxes, and these were labeled. Productivity of the moist chamber cultures was then assessed as described in Dagamac et al. (2012). Productivity was further assessed in relation to the "replication data" for the moist chambers. Fruiting bodies were identified using gross morphological characters (e.g., the appearance of the capillitium, calyculus, and stalk and the presence or absence of lime) under a stereomicroscope (QZG series, USA & Olympus CX21). Spore morphology was determined following the protocol of Keller & Braun (1999). Morphometric data were compared with identification keys used in conjunction with the published literature (e.g., Martin & Alexopoulos 1969, Stephenson & Stempen 1994) and web-based electronic databases (e.g., the Eumycetozoan Project [http://slimemold.uark.edu]). Nomenclature follows that available on the website http://nomen.eumycetozoa.com. Ecological statistical analysis was used to determine and assess the diversity and distribution of the myxomycete assemblages on Lubang Island. The data used to calculate species diversity and other ecological values were based primarily on specimens obtained from moist chamber cultures. As used herein, occurrence refers to the overall frequency of each species of myxomycete based on its occurrence in moist chamber cultures. A moist chamber culture positive for the fruiting bodies of a particular species was considered one positive collection, and each collection was considered an individual unit. The data on the occurrence of each species at each collecting site and on each substrate type were then compiled and expressed as relative abundance (RA). The relative abundance value assigned to each species was (1) "abundant" (A) if the relative abundance of the species in question was ≥ 3% of the total of all collections, (2) "common" (C) if the RA was ≥ 1.5% but < 3% of the total collections, (3) "occasional" (O) if the RA was ≥ 0.5% but < 1.5% of the total of all collections, and (4) "rare" (R) if the RA was < 0.5% of the total for all collections. The series of specimens generated in the present study served as baseline data for the assessment of myxomycete ecology on Lubang Island. It was therefore necessary to estimate whether the sampling was exhaustive. A species accumulation curve (SAC) was constructed using the program EstimateS version 8.2 (Colwell 2009). The Chao2 incidence-based estimator of species richness was used to construct the SAC, wherein a hyperbolic regression according to the Michaelis–Menten formula y = ax/(b + x) was applied to the data, with the parameter a providing an estimate of the maximum number of species to be expected (Novozhilov et al. 2013).
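The relative-abundance categories and the Michaelis–Menten regression described above translate directly into code. The sketch below is illustrative; the parameter values in the example calls are placeholders rather than data from this study:

```python
# Relative-abundance categories as defined above (thresholds in percent).
def abundance_class(records_for_species: int, total_records: int) -> str:
    ra = 100.0 * records_for_species / total_records
    if ra >= 3.0:
        return "abundant (A)"
    if ra >= 1.5:
        return "common (C)"
    if ra >= 0.5:
        return "occasional (O)"
    return "rare (R)"

# Michaelis-Menten hyperbola used for the species accumulation curve;
# as x grows, y approaches a, the expected maximum species number.
def michaelis_menten(x: float, a: float, b: float) -> float:
    return a * x / (b + x)

print(abundance_class(40, 1000))              # 4.0% -> abundant
print(abundance_class(3, 1000))               # 0.3% -> rare
print(michaelis_menten(200, a=47.0, b=15.0))  # placeholder fit parameters
```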
Initially, the number of species and genera for each forest type and substrate type was determined. A value for taxonomic diversity was then derived by calculating the ratio of the number of species to the number of genera (the S/G ratio). The S/G ratio is inversely related to taxonomic diversity: the lower the S/G ratio, the more diverse a particular biota is considered. This is based on the premise that a biota in which the species are divided among many genera is "intuitively" more diverse in a taxonomic sense than one in which most species belong to only a few genera (Stephenson et al. 1993). To provide another measure of myxomycete diversity for each collecting site and for each type of substrate examined, species diversity was calculated as described in Dagamac et al. (2012). In contrast to taxonomic diversity, species diversity indices focus on species richness and evenness. The number of individuals is represented herein by the value for relative abundance. Pairwise comparisons of the myxomycete assemblages of the two forest types (coastal and mountain) and the two types of substrates (ground litter and twigs) were carried out using Sørensen's Coefficient of Community (CC) and the Percentage Similarity (PS) indices, as described by Stephenson (1989). The Coefficient of Community (CC) index is based only on the presence or absence of species in the two communities being compared. In contrast, the Percentage Similarity (PS) index considers not only the presence or absence of a species but also its relative abundance. The CC and PS values range from 0 to 1; the higher the value, the more similar the communities are in terms of their species composition and abundance.
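The indices described above can be sketched as follows, using common textbook definitions of the Shannon index, Pielou's evenness, Sørensen's CC = 2c/(S1 + S2), and PS as the summed minima of shared relative abundances. The exact variants used in this paper (e.g., the Gleason richness index) may differ in detail, and the example counts below are invented:

```python
import math
from collections import Counter

def shannon(counts):
    """Shannon diversity H = -sum(p_i ln p_i) over nonzero abundances."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def pielou_evenness(counts):
    """Evenness E = H / ln(S), with S the number of species present."""
    s = sum(1 for c in counts if c > 0)
    return shannon(counts) / math.log(s) if s > 1 else 0.0

def sorensen_cc(species1, species2):
    """CC = 2c / (S1 + S2), based only on presence/absence."""
    shared = len(set(species1) & set(species2))
    return 2.0 * shared / (len(set(species1)) + len(set(species2)))

def percentage_similarity(counts1: Counter, counts2: Counter):
    """PS = sum over species of min(p1_i, p2_i) of relative abundances."""
    n1, n2 = sum(counts1.values()), sum(counts2.values())
    return sum(min(counts1[sp] / n1, counts2[sp] / n2)
               for sp in set(counts1) | set(counts2))

coastal = Counter({"A. cinerea": 30, "D. effusum": 20, "P. cinereum": 5})
mountain = Counter({"A. cinerea": 25, "P. cinereum": 15, "S. fusca": 10})
print(sorensen_cc(coastal, mountain), percentage_similarity(coastal, mountain))
```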
Results

Moist chamber productivity: A total of 1,188 moist chamber cultures were prepared from samples of ground litter and twigs randomly collected from 20 sites in the coastal and mountain forests of Lubang Island. Of the cultures prepared, 829 (70%) were positive for myxomycetes (i.e., plasmodia, sclerotia and/or fruiting bodies were observed to be present). Fruiting bodies were observed more often than plasmodia. The yield of myxomycetes was also compared between the two forest types and the two types of substrates. Substrates collected in the mountain forests yielded slightly more positive cultures (71%) than substrates collected from the coastal forests (69%). Between the two substrates, ground litter had a higher yield (76%) than twigs. Higher yields of fruiting bodies (56%) and plasmodia (46%) also were observed for ground litter. There were several interesting differences in the myxomycetes appearing on the two types of substrates collected from the two distinct forest types. In general, a higher myxomycete yield (71–80%) was observed for ground litter regardless of the forest type. More plasmodia (38–54%) also were noted for ground litter, again regardless of forest type. However, more fruiting bodies (59–61%) were noted in samples of ground litter and twigs collected from the coastal forest. Although a high yield of plasmodia was observed for ground litter collected in the mountain forest, many of these never developed into fruiting bodies. As a result, a higher number of fruiting bodies was noted for ground litter collected in the coastal forests. All samples of ground litter or twigs were represented by three moist chamber cultures, which acted as replicates. To further assess the possible effect of replication, we also evaluated moist chamber productivity in relation to the species present and the number of triplicate cultures. The results showed that only 150 (38%) of the triplicate sets, or 450 individual moist chamber cultures, displayed similar results (i.e., either two to three cultures with the same species of myxomycetes or all three cultures producing no evidence of myxomycetes). More of the cultures, 246 (62%) of the triplicate sets or 738 individual cultures, showed dissimilar results. This means that two to three cultures of the triplicates had different species of myxomycetes present, or that only one plate among the triplicates yielded a particular myxomycete. Comparing the two types of substrates, the same trend was observed: 77 (39%) triplicate sets, or 231 individual cultures, with ground litter and 73 (37%) triplicate sets, or 219 individual cultures, with twig substrates yielded a particular species of myxomycetes.

Species occurrence, accumulation and relative abundance: A total of 44 species representing 13 genera were identified from ground litter and twigs collected from the two forest types (Table 1). Of these, 38 were recorded in the coastal forests, whereas 35 were recorded from the mountain forests. The species collected belong to four (4) taxonomic orders: the Liceales (two species), the Physarales (19 species), the Stemonitales (14 species) and the Trichiales (nine species). To estimate whether or not the number of samples collected was sufficient to reflect the diversity of myxomycetes in the study area, a species accumulation curve for the study sites was generated (Fig. 2). Values from this analysis indicate that the survey was 95.7% complete (i.e., the Chao2 mean was 47, compared to 44 actual morphospecies identified). Individual results for the two forest types produced a similar outcome, with the sampling effort for the mountain forests calculated as 97.9% complete, which was higher than the corresponding value for the coastal forests (83.9%). As such, it can be assumed that the sampling effort was sufficient to recover the more common species of myxomycetes associated with the forest types studied on Lubang Island. Species abundance also was calculated, based primarily on the relative abundance (RA) value for each species of myxomycete. Arcyria cinerea was the most abundant of all the species collected. Diderma effusum was the second most abundant species, followed by Physarum cinereum, Stemonitis fusca, Lamproderma scintillans, and Diachea leucopodia. Fifteen species represented less than 0.5% of the total number of collections and thus were considered rare. Slight differences in species occurrence and abundance were noted between the two forest types, with more rare species recorded in the coastal forests (Table 1). Thirty-eight species of myxomycetes were recorded from collecting sites in the coastal forest. Of these, only two species (Arcyria cinerea and Diderma effusum) were recorded as abundant; twenty-one species were rare. At the collecting sites in the mountain forest, A. cinerea, D. effusum, and
P. cinereum (considered abundant) were the most commonly collected of the 35 species. Thirteen species were recorded either as rare or as occasional. There were more noticeable differences in species occurrence and abundance when the two types of substrates (ground litter and twigs) were compared (Table 1). Twenty-one species were common to both substrate types. Thirty-four species were recorded from ground litter, whereas four fewer species were recorded for twigs. Arcyria cinerea was the most abundant species on both ground litter and twigs, with D. leucopodia, D. effusum and P. cinereum also abundant on ground litter and S. fusca also abundant on twigs. Fourteen other species were classified as rare on ground litter. Of the 30 species collected on twigs, 14 species were likewise classified as rare.

Species and taxonomic diversity: In the present study, 38 species representing 13 genera were recorded in the coastal forests, whereas only 35 species from the same number of genera were recorded for the mountain forests. Although there were more species in the coastal forests, these belonged to the same number of genera as in the mountain forests. As a result, taxonomic diversity was lower in the coastal forests, as shown by their higher S/G ratio (2.92). When the taxonomic diversity of the two types of substrates was compared, ground litter had a higher number of species (34) belonging to a higher number of genera (13), resulting in a lower S/G ratio (2.62) and thus a higher taxonomic diversity than twigs. Apart from taxonomic diversity, species diversity also was assessed based on the richness and evenness of the myxomycete species in the two forest types and on the two substrates (Table 2). Myxomycetes in the mountain forests were found to be more evenly distributed (E = 0.52) than in the coastal forests. However, myxomycete species richness in the coastal forests was higher (Hg = 6.16) than in the mountain forests. The overall diversity, as reflected by Shannon's diversity index (Hs), showed a higher value in the mountain forests (1.27). Between the two substrates, only minimal differences in species diversity were observed in this study. Higher evenness and richness, and thus higher species diversity, were noted for ground litter than for twigs (Table 2).

Table 1. Abundance indices of myxomycetes on Lubang Island in relation to substrate type and the two forest types (coastal and mountain). Note: GL = ground litter and TW = twigs.

Distribution patterns and community analysis: When the myxomycete assemblages of the two forest types were compared, a CC value of 0.76 and a PS value of 0.31 were derived. The former value indicates that more than 75% of the total species identified in both forest types were the same. Although there were 29 species collected in both forest types, the species shared in common mostly displayed a rare abundance, and only A. cinerea, D. effusum and
Table 1. Abundance indices of myxomycetes on Lubang Island in relation to substrate type and the two forest types (coastal and mountain). Note: GL = ground litter and TW = twigs.

Distribution patterns and community analysis: When the myxomycete assemblages of the two forest types were compared, a CC value of 0.76 and a PS value of 0.31 were derived. The former value indicates that more than 75% of the total species identified in both forest types were the same. Although there were 29 species collected in both forest types, the species shared in common mostly displayed a rare abundance, and only A. cinerea, D. effusum and P. cinereum were recorded as abundant or occasional in either or both the coastal and mountain forests. This implies that the species were not present in equal abundance in both forest types. Some species that were restricted to a specific forest type also were recorded: six species were found exclusively in the mountain forest, whereas ten species were recorded only in the coastal forest. More than half of the species occurring on ground litter also occurred on twigs. As in the previous instance, the species found on both ground litter and twigs were not present in equal abundance. When abundance was incorporated in the analysis of the species similarities between the two substrates, a lower PS value (0.31) was observed. The highest CC value was obtained when the myxomycete assemblage on ground litter collected from the mountain forests was compared with the corresponding assemblage on ground litter collected from the coastal forests (Table 3). However, when relative abundance values were incorporated in the analysis, the highest similarity value was obtained between myxomycetes on ground litter from the coastal forests and on twigs from the coastal forests (PS = 0.17).

Discussion

In the study reported herein, the moist chamber culture technique was found to be exceedingly useful in assessing the diversity of myxomycetes in a particular area. This was not surprising, since moist chamber cultures have provided a useful and often very productive method of supplementing field collections in a number of other studies (Novozhilov et al. 2000, Stephenson et al. 2000, Kylin et al. 2013, Lado et al. 2013, Wrigley de Basanta et al. 2013). Seventy percent of the moist chambers yielded positive results for myxomycetes. More fruiting bodies were observed in this study than in the study of Dagamac et al. (2012) in Mt. Arayat National Park in Pampanga. Moist chamber cultures simulate the environmental conditions (e.g., high humidity) that are necessary for the growth and development of myxomycetes (Keller et al. 2008), and this provides for a better assessment of myxomycete diversity. As already noted, a moist chamber culture positive for myxomycetes (either as plasmodia and/or fruiting bodies) was considered herein as a single collection, and these data were used to assess species abundance. In the present study, three moist chamber cultures were prepared from each sample collected.

Table 2. Species diversity (Hs), species richness (Hg) and evenness (E) values for myxomycetes in relation to substrates and forest types on Lubang Island. Diversity indices were calculated using the Shannon index (Hs), Gleason index (Hg) and Pielou's index of evenness (E). Note: GL = ground litter and TW = twigs.

Analysis of the data showed that only 38% of the triplicate cultures either produced the same species of myxomycete in two to three cultures or yielded no myxomycetes in all three cultures. More cultures (62% of the triplicates) produced different species of myxomycetes, or a particular myxomycete was present in only a single culture among the triplicates. The same pattern was observed when the two types of substrates were compared. This analysis was carried out to show that each of the moist chamber triplicates should be treated as a separate sample, so that the productivity of the moist chamber cultures and/or the species of myxomycetes found in the samples were not underestimated, and so that most, if not all, of the species present in a given sample were likely to have been recorded.
In an earlier study, dela Cruz et al. (2014) also noted a higher number of collections and recorded species when all of the triplicate cultures were considered, as opposed to recording just those appearing in a single culture.

The productivity of moist chamber cultures also was compared between the two forest types and the two types of substrates. Substrates collected in the mountain forests (71%) were more productive for myxomycetes than substrates collected in the coastal forests (69%). This suggests that general environmental conditions in the mountain forests are more favorable for the growth and development of myxomycetes than is the case in the coastal forests. Between the two substrates, ground litter had the higher yield of myxomycetes. In contrast, although moist chamber cultures prepared with ground litter showed more evidence of myxomycetes in a study in Costa Rica (92% of the moist chambers produced plasmodia and/or fruiting bodies), 34% of all specimens collected appeared on twig and bark substrates. This is interesting, since only 20% of all of the substrates collected were twigs and bark, which indicates that the latter were more productive than ground litter (Rojas & Stephenson 2008). Furthermore, it was reported that twigs were the substrate characterized by the highest mean number of fruiting bodies per moist chamber culture (Rojas & Stephenson 2008). As such, the availability of particular microhabitats can influence myxomycete distribution to a considerable extent (Stephenson 1989). Such a microhabitat has been described as a microecosystem, defined as a small specialized habitat within a larger habitat (Schnittler & Stephenson 2002). In the present study, the term microhabitat refers to the substrate types with which myxomycetes are associated. Ground litter, contrary to the results reported in some studies, appears to offer a more favorable microhabitat than twigs. This can be attributed to the presence of more potential food resources (e.g., bacteria and other microorganisms) on the decaying leaves that make up much of ground litter; the trophic stages of myxomycetes feed upon these bacteria and other microbes.

It was considered worthwhile to assess the diversity and distribution of myxomycetes on Lubang Island, since this is the first ecological study of myxomycetes in what has been designated as a biodiversity conservation priority area. Moist chamber cultures were used as the primary method of myxomycete diversity assessment, but specimens collected during field sampling were also recorded. Moist chamber cultures have been found to be exceedingly useful in assessing myxomycete diversity in a number of other studies (e.g., Härkönen 1981, Lado et al. 2003, Wrigley de Basanta et al. 2008, Kilgore et al. 2009). In the present study, A. cinerea was the most abundant myxomycete, appearing on both substrates and in both forest types. This observation is not unexpected, since A. cinerea is known to have a cosmopolitan distribution and has been reported in virtually all surveys carried out for myxomycetes in both temperate and tropical regions of the world (Stephenson et al. 2004, Tran et al. 2006, Wrigley de Basanta et al. 2013). Diderma effusum also was abundant on ground litter, as were Diachea leucopodia and Physarum cinereum. The other species were recorded as rare on ground litter. A similar result was obtained in a study of myxomycetes associated with different substrates in the state of Arkansas in the United States. Moreover, the largest number of species was recorded on ground litter in a study carried out by Eliasson et al. (1988).
No previous studies have attempted to compare the distribution and diversity of myxomycetes in different forest types and on different substrates in the Philippines. Dagamac et al. (2012) documented the composition and abundance of myxomycetes in relation to collection time, substrates and sites. This represented the most comprehensive study carried out thus far, but it was limited to one forest type (a lowland mountain forest). Consequently, the present study was the first to compare the species diversity and composition of myxomycetes in relation to forest types and substrates on a geographically isolated island. When the substrates in relation to the forest types were compared, ground litter from the mountain forests was characterized by the highest S/G ratio, followed by twigs from the mountain forests. The lowest taxonomic diversity was calculated for twigs from the coastal forests. This result is supported by Pielou (1975), who indicated that diversity should be considered higher for a community in which species are distributed among several genera, as contrasted with one in which most of the species belong to the same genus. Apart from taxonomic diversity, species diversity also was assessed based on the richness and evenness of the myxomycete species in the two forest types and on the two substrates (Table 2). Myxomycetes in the mountain forests were found to be more even than in the coastal forests. However, the assemblage of species of myxomycetes in the coastal forests was more species-rich than the assemblage present in the mountain forests. Overall diversity, as reflected by Shannon's diversity index (Hs), showed a higher value for the mountain forests, but between the two substrates collected, minimal differences in species diversity were observed.
To evaluate the similarities and differences in the myxomycete communities present on Lubang Island, community analysis was carried out by calculating Sorensen's Coefficient of Community (CC) and Percentage Similarity (PS) values. When the myxomycete assemblages of the two forest types were compared, more than 75% of the total species identified in both forest types were the same. For the myxomycete communities on the two substrates, more than half of the species occurring on ground litter also occurred on twigs. Interestingly, the highest CC value was for the comparison of the myxomycete assemblage on ground litter collected from the mountain forests with that on ground litter collected from the coastal forests (Table 3). This implies that there were species common to the same substrate regardless of forest type. However, when relative abundance values were incorporated in the analysis, the highest similarity value was obtained between myxomycetes on ground litter from the coastal forests and twigs from the coastal forests (Table 3). In this case, similarities in the environmental conditions in this forest type presumably accounted for the pattern. In contrast, the lowest similarity value was observed between myxomycetes on ground litter and twigs from the mountain forests, and a lower PS value also was noted when these two communities were compared (Table 3). This implies that in the mountain forests, a different assemblage of myxomycetes occupies a particular substrate. It also implies that although the species composition may be similar for the two communities, these species may not be present in equal abundance. It can be further inferred that myxomycetes in the mountain forests occur more randomly than those in the coastal forests. As a general observation, then, the results of this study suggest that, more than being forest type specific, the myxomycetes recorded on Lubang Island are substrate type specific, as shown by the higher number of uncommon species on ground litter and twigs as well as the lower number of uncommon species when the two types of forests were compared. In summary, the project described herein evaluated the distribution and diversity of the assemblages of myxomycetes associated with ground litter and twigs collected in coastal forests and in mountain forests on Mt. Gonting on Lubang Island. Specimens obtained from moist chamber cultures represented 44 species of myxomycetes belonging to 13 genera. Arcyria cinerea, D. leucopodia, D. effusum, L. scintillans, and P. cinereum were abundant, but the majority of the species recorded were rare. The highest values for taxonomic and species diversity were recorded for ground litter from the mountain forests. Myxomycete assemblages were more similar on ground litter and twigs from the mountain forests but were present in unequal abundances. PS values for ground litter and twigs from the coastal forests revealed a greater similarity. As such, it would appear that areas in close proximity support similar assemblages of species of myxomycetes. However, substrate dependency seems to be a major factor affecting these assemblages on Lubang Island.

Fig. 1 - Collection sites on Lubang Island, Occidental Mindoro. Ten sites (indicated by black dots) were located in coastal areas and ten sites (indicated by black triangles) on Mt. Gonting (Macabago et al. 2012).
Fig. 2 - Species accumulation curve of myxomycetes on Lubang Island. The sample-species curve shows the accumulation of new species in relation to the number of samples collected. Note: Chao2 (mean) estimator = red/upper curve, rarefaction curve = blue/lower curve.

Table 3. Sorensen's Coefficient of Community (lower left) and Percentage Similarity (upper right) values for the myxomycete communities collected from the two substrates in the two forest types on Lubang Island. Note: GL = ground litter and TW = twigs.
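The two similarity measures tabulated above have simple standard definitions, sketched below. The presence/absence form of CC is standard; for PS, the proportional-similarity form based on relative abundances is an assumption, since the exact formulation is not given in this section, and the example assemblages are invented.

```python
# Hedged sketch of Sorensen's Coefficient of Community (CC) and
# Percentage Similarity (PS). CC uses presence/absence only; PS compares
# relative abundances. The example assemblages are invented.

def sorensen_cc(a, b):
    """CC = 2c / (s_a + s_b): c = shared species, s_a/s_b = richness."""
    shared = len(set(a) & set(b))
    return 2 * shared / (len(set(a)) + len(set(b)))

def percentage_similarity(a, b):
    """PS = sum over species of min(p_i, q_i), with p and q the relative
    abundances in the two communities (proportional-similarity form)."""
    pa = {sp: n / sum(a.values()) for sp, n in a.items()}
    pb = {sp: n / sum(b.values()) for sp, n in b.items()}
    return sum(min(pa.get(sp, 0.0), pb.get(sp, 0.0)) for sp in set(pa) | set(pb))

# Example: collection counts per species in two communities.
coastal_gl = {"A. cinerea": 20, "D. effusum": 10, "P. cinereum": 5}
mountain_gl = {"A. cinerea": 8, "D. effusum": 2, "S. fusca": 6}
print(f"CC = {sorensen_cc(coastal_gl, mountain_gl):.2f}")
print(f"PS = {percentage_similarity(coastal_gl, mountain_gl):.2f}")
```

This pairing explains the pattern discussed above: two communities can share most of their species (high CC) yet hold them in very different proportions (low PS).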
The factors influencing the growth of African migrant enterprises in the Mandeni local municipality in KwaZulu-Natal, South Africa

Both developed and developing nations are seeing a growth in migrant enterprises, and the factors that contribute to the growth of migrant enterprises vary from one nation to another. This research study aimed to explore and gain an in-depth understanding of the factors influencing the growth of African migrant informal enterprises. The study utilised a qualitative approach with an exploratory research design. The participants were sampled using purposive sampling, and semi-structured interviews were used to collect data from research participants who were African migrant informal enterprise owners. Thematic analysis was employed as the tool for data analysis. A major finding of this study is that many African migrant informal enterprises are linked to social networks, which, together with the different entrepreneurial strategies employed by African migrants, have contributed to the growth of African migrant-owned informal enterprises. Furthermore, despite the growing attention on African migrant enterprises in the academic literature, the majority of studies have focused on big cities like Johannesburg, Cape Town and Durban, and there is limited research focusing on smaller cities. This study fills that gap by investigating the factors influencing the growth of African migrant informal enterprises in the Mandeni Local Municipality.

Introduction

International migration has a significant history in South Africa that dates back to the pre-colonial era. The diversity of people that make up South Africa's "rainbow nation" is a result of foreign migration. The population of South Africa was estimated to be 60.6 million by the end of June 2022 (Stats SA, 2022). The number of people migrating to South Africa, particularly those originating from the African continent, has increased since the early 1990s, and more so after the first democratic elections in 1994. The migrants primarily come from South Africa's traditional labour supply areas, which include members of the Southern African Development Community (SADC), e.g., Mozambique, Zimbabwe, Lesotho and Malawi. However, migrants have also come from other African countries, such as Nigeria, the Democratic Republic of the Congo and Kenya. As a result, more than 75% of foreign-born migrants living in South Africa came from the African continent (Statistics South Africa [Stats SA], 2013). Political unrest, economic instability and even environmental degradation in the African region have contributed to increased numbers of displaced persons, which has led to a significant rise in the number of both documented and undocumented migrants in South Africa (Stats SA, 2013). The majority of migrants from African countries previously came looking for job opportunities but, because South Africa currently experiences a high rate of unemployment, a shift has been seen toward the majority of migrants opening their own businesses, mostly in the informal sector (Crush et al., 2015; Ncwadi, 2010; Peberdy, 2016; Moyo, 2017; Dithebe & Makhuba, 2018). One motivation for African migrants starting businesses was their ambition to be their own bosses. In England, according to Whitehead et al. (2013), migrant enterprises have expanded partly because African migrants want to promote their independence and the ability to be their own boss, but the most important reason is that they had trouble obtaining suitable jobs.
Similarly, Fourie (2016) discovered that African migrant entrepreneurship growth in Scotland is driven by business potential, financial gain and the desire to be their own boss, as well as by migrants' failure to find good jobs and discrimination in the job market. Furthermore, Salaff (2011) noted that migrant enterprise growth in Germany is driven by the desire to make money; it is a response to the German labour market and the goal of achieving a higher income. Lastly, Sevarajah et al. (2013) discovered that the business background and traditional values of migrants in Australia contributed to the growth of their enterprises.

In South Africa, too, numerous factors inspire African migrants to start enterprises (Kelley et al., 2012). Most migrants do not start businesses of their own free will; rather, they do so in response to a lack of other alternatives, which makes starting a business appear to be the only possible option (Barrett & Mosca, 2013). The push factors responsible for migrants starting their own businesses include a lack of employment, a lack of upward mobility, loss of jobs, or low wages, essentially "forcing" people to launch micro business operations. Pull factors, by contrast, are driven by opportunity rather than necessity; examples include the desire to avoid working under superiors, the goal of maximising income, and the desire to apply one's expertise and experience (Benzing et al., 2009).

According to Peberdy (2019), the political economy of South Africa is quite constrained, which makes it challenging for African migrants to engage in the formal sector. This is due to the likelihood that African migrants' human capital will be devalued as a result of their move, preventing them from finding employment. Moyo (2017a) conducted a study on the exclusion of African migrants that contributes to their entering the informal sector. In his study, he noted that African migrants fail to join the formal sector because their qualifications are devalued and sometimes not recognised. The testimonies of African migrants regarding the devaluing of qualifications are instructive. For example, Kasango from the DRC stated that he holds a degree in psychology obtained in the DRC but has failed to find a suitable job in South Africa with his qualification. As a result, he opted to sell clothes, cell phone accessories and hair products on the corner of Eloff and Jeppe streets in Johannesburg. Similar testimonies were shared by migrants from Malawi, Tanzania and Zimbabwe, among others (Moyo et al., 2016; Moyo, 2017).

Furthermore, a study conducted by Phayane (2014) in an area called Britain in the Western Cape pointed out that the regulations for registering a business in South Africa are quite complex; the study discovered that 61% of African migrant-owned firms were not registered with the municipality. This was due to the strict requirements for business registration and, occasionally, the lack of documentation required for registration among African migrant-owned businesses. The process of registering a business in South Africa takes around 38 days (Chikamhi, 2011). The most important finding was that a sizable percentage of African migrants were unaware of the registration process and were unmotivated to launch formal businesses due to the stringent municipal regulatory framework. However, there were a few migrants who were aware of the regulatory system but were unable to use it.
Lastly, according to Tengeh (2013), capital can be a major constraint on starting formal businesses because of the inability of African migrants to obtain loans, which leads to African migrants failing to register their businesses. Fatoki and Garwe (2010) noted that just 2% of African migrant business owners in South Africa can receive bank loans, due to their poor credit histories and lack of collateral security. Furthermore, migrants from African countries who lack the required paperwork and collateral security find it difficult to get financing (Tengeh, 2013). Similar findings were made by Khosa and Kalitanyi (2014), who discovered that the Department of Trade and Industry agencies do not offer financial assistance to enterprises held by African migrants.

Research and methodology

Denzin and Lincoln (2018) view the focus of research methodology as the procedures involved in conducting research as well as the various instruments and techniques that should be used. This study used a qualitative research approach because it aimed to gain a comprehensive grasp of the problem. Furthermore, the researcher selected the exploratory method to gain new insights, discover new ideas and increase knowledge of the factors that contribute to the growth of African migrant informal enterprises. A research design, according to Creswell (2014), is the strategy for conducting research that covers choices ranging from general hypotheses to specific techniques for gathering and analysing data. It offers a pattern for gathering, assessing, and analysing data. In this study, an exploratory research design was adopted (as informed by the research paradigm) to develop the constructs of the study.

Population and sampling

A population refers to all items in any field of inquiry (Kothari, 2015). People, events, or records that have the necessary data to help answer research questions are referred to as the target population (Schindler, 2019). Sources with expertise and experience were considered likely to have the answers to the study questions and were incorporated into the total population (Van Rijnsoever, 2017). However, there is a lack of reliable data from the Mandeni Local Municipality detailing the number of African migrant informal enterprises. The only documented record in the context of this study is the number of African migrants in the iLembe District Municipality as of 2016, which shows a total of 6,941 African migrants (Stats SA, 2017). It is therefore difficult to accurately estimate the number of African migrant informal enterprises in Mandeni. However, this does not affect the study's findings, because there are no fixed guidelines for selecting the sample size in a qualitative study (Creswell & Plano, 2018).

As a result, the researcher visited and counted about 25 African migrant enterprises in Mandeni between January and August 2019. This number of African migrant enterprises therefore provided the population from which a sample was determined. In this study, purposive sampling was utilised. A sample can be hand-selected for the study using purposive sampling, but the selection process must demonstrate or reflect the quality of expertise and knowledge relevant to the research subject (Fisher & Fethney, 2016).
Research instrument

In the present study, semi-structured interviews were used as the main method of data collection. Using predetermined questions to serve as a framework for the interview, the interviewer has control over how the discussion develops. Thus, a semi-structured interview is used to elicit in-depth information about the participant's opinions, perceptions, or experiences of a specific issue (Grbich, 2015). The interviews took place from the 21st of October to the 21st of December 2020, and each interview took approximately 45 minutes. A voice recorder enabled the researcher to dispense with note-taking and thus to concentrate on what was being said and be an active part of the interview process. The interviews were conducted primarily in English.

Data analysis

Data analysis is defined as the process of arranging and examining data so that researchers can spot trends, themes, and connections, as well as come up with explanations, interpretations, and theories (Creswell & Creswell, 2018). In this study, thematic analysis was used. Thematic analysis is a logical, repeatable method for condensing communication into fewer topics to determine the meaning of that communication (Erisen, 2015). It made it possible for the researcher to sort thoroughly through a large amount of data with reasonable ease (Erisen, 2015). Thematic analysis was performed using Creswell's (2009) framework for analysing qualitative data, which states that the analysis must include the following steps: gathering and organising the data for analysis, reading through all of the data, coding the data, developing descriptions and themes, interpreting the meaning of the findings, and validating the findings. Relevant words, phrases, statements, or observations were extracted from each participant's transcript to identify the factors that influence the growth of African migrant informal enterprises in the Mandeni local municipality. Codes were identified from portions of the data, which involved breaking down the data from the memos written by the researcher and the recordings made during and after the interviews. By comparing and analysing the inter-relationships between the initial codes, the codes were reassembled into more abstract categories. All the initial codes identified in the data fitted into these categories, and this coding process became the basis of concept development.
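As an illustration of the code-to-category step just described, the sketch below groups coded interview excerpts into themes and counts how many participants mention each. The codebook, theme names and excerpts are invented for illustration and are not the study's actual coding scheme.

```python
# Illustrative sketch of the code-to-theme aggregation step in thematic
# analysis. The codebook and excerpts below are invented examples, not
# the study's actual data.
from collections import defaultdict

# Hypothetical mapping from initial codes to more abstract categories.
codebook = {
    "shared_transport": "Social networking",
    "informal_loans": "Social networking",
    "early_opening": "Long operating hours",
    "low_markup": "Small profit and quick returns",
}

# (participant_id, code) pairs produced while reading the transcripts.
coded_excerpts = [
    ("P01", "shared_transport"), ("P01", "low_markup"),
    ("P02", "informal_loans"), ("P03", "early_opening"),
    ("P03", "shared_transport"),
]

participants_per_theme = defaultdict(set)
for pid, code in coded_excerpts:
    participants_per_theme[codebook[code]].add(pid)

for theme, pids in sorted(participants_per_theme.items()):
    print(f"{theme}: mentioned by {len(pids)} participant(s)")
```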
Findings and discussion

This section presents and evaluates the findings of the study in light of the opinions of the 25 individuals who were interviewed. The participants were asked questions about the factors that encouraged the growth of their informal enterprises. Quotations are reproduced verbatim. The study's participants all concurred that they had employed various business strategies that had aided the expansion of their informal businesses, and different participants described using various strategies during the data collection process.

Social networking

Social networks are at the core of many businesses, as some are established through the interconnectedness between network members. These African migrants' social networks operate on different levels, and participants described several forms of social network during the interviews. The Somalian spaza shop owner indicated:

"I and my Somali brothers use a common transport to Stanger where most the wholesalers are to stock goods for our informal enterprises and we usually choose a certain day in a week and go together with one van to order goods for our shops. This is very good because it helps us to reduce transport and we can save money because Stanger is quite far from Mandeni and also reduce the prices of our goods and makes our goods to be lower than the local competitors. This form of a network makes us attract more customers" (Somalian, Interview, April 2021).

Some of the participants stated that, for them, social networks take the form of loans they have received from other African migrant relatives and friends. In another interview, the Zimbabwean participant mentioned that:

"I am so grateful for the loans I have received in the time of hardship from my relatives and friends. This, for me, is very helpful as I can supplement my savings with the loans. I source loans from my relatives and friends instead of formal banking institutions because in most cases lenders do not charge interest on these loans and formal banks require lot of paperwork from us as African migrants we end up not qualifying for loans. Also included is the fact that there is a strong sense of solidarity between me and my fellow relatives and friends and collective self-reliance between me and my fellow African migrants in Mandeni municipality. The other thing that makes me prefers these loans rather than going to the formal bank is the fact that they tend to come with favourable terms of repayment. For example, it is often the case that a debtor is expected to make repayments in irregular instalments that depend on his performance in my informal enterprise and most importantly this indicate the trust we have for one another, therefore I can service my debts without undermining the profitability of my business" (Zimbabwean, Interview, April 2021).
Furthermore, another form of social network mentioned by African migrant informal enterprise owners is the rotating savings credit association they have established. The Somalian participant stated that:

"I and my brothers from Somalia have established the rotating savings credit association and I have heard from my South African friends calling it a 'stokvel' which helps to raise funds to help us to grow and sustain our businesses. These funds can be used as start-up capital and also to help our business to grow by buying more goods. Each month the proceeds are given to one of the contributors for their personal use and this allows us to be sustainable, by allowing all members to have financial resources. These schemes make it possible for members to access relatively large sums of money, which they can invest in profitable informal enterprises. For example, when my spaza shop was robbed, the money from the association helped me to re-stock the shop and revive my business successfully. But also, my fellow brothers use this money from the rotating savings credit association to increase our stock during periods of booming business, so this association is important in mobilising funds for members to grow their businesses and to sustain them in times of financial need" (Somalian, Interview, March 2021).

Lastly, another form of social network mentioned by African migrants was the dissemination of information amongst other African migrants:

"In South Africa, hairdressing is a growing business. A key resource that you can never take away from Ghanaians is the knowledge and expertise that we get from our fellow Ghanaians who are in Mandeni municipality and others back home. There are certain ideas and knowledge systems which circulate amongst Ghanaian hairdressers that you will not find in any other hairdressers in Mandeni municipality. We always help each other to improve the quality of our hairdressing. This is why Ghanaians are always very productive and not seen as lazy people like South African hairdressers. Our kind of productivity requires patience, and commitment. In this salon, we have qualified stylists who always provide training to other Ghanaian hairdressers free of charge. We continually update our skills and knowledge with the help of these qualified stylists. This has made us at the leading edge of hairdressing practice. In so doing we can positively respond to the demands of our clients" (Ghanaian, Interview, March 2021).

Indeed, this confirms a study by Bashir (2016) in Mayfair in Johannesburg, Bellville in Cape Town, and Korsten in Port Elizabeth, which found that African migrants practise social networking by providing each other with credit, goods and loans and by sharing information about potential business opportunities, such as identifying a new location to open a business. Landau (2014) highlighted that, in the absence of proper legal documents, African migrants are often unable to access financial capital from banks, and they often overcome this challenge by depending on their credit access networks. Also, small enterprise owners borrow goods and products from the big wholesalers owned by other African migrants or that have links with members of the African community.
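As a rough illustration of how the rotating savings scheme described above pools capital, the sketch below simulates one full rotation. The member count, monthly contribution and payout order are invented assumptions, not figures reported by the participants.

```python
# Minimal sketch of a rotating savings credit association ("stokvel"):
# each month every member contributes a fixed amount, and one member
# takes the whole pot. Figures are invented for illustration.

def rosca_schedule(members, contribution):
    pot = len(members) * contribution
    # One full cycle: each member receives the pot exactly once.
    return [(month + 1, member, pot) for month, member in enumerate(members)]

members = ["Owner A", "Owner B", "Owner C", "Owner D"]
for month, recipient, pot in rosca_schedule(members, contribution=500):
    print(f"Month {month}: {recipient} receives R{pot}")
# Each owner pays R500/month but periodically receives R2000 at once,
# a lump sum large enough to restock a shop or absorb a shock.
```

The design point is the lump sum: steady small contributions are converted into occasional large payouts, which is exactly the re-stocking and shock-absorbing role the participant describes.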
Long operating hours

Every participant in the study mentioned long operating hours as a strategy contributing to success. It is common practice for these African migrant informal enterprises to open very early in the morning and close late in the evening. The owners stated that by opening for an extended period they can serve almost every customer at any time, including when their competitors are closed. This has become a norm associated with African migrant informal enterprises, and customers know that they will be served whenever they want something. In this regard, one participant added that:

"I open my shop very early in the morning so that my customers can buy whatever they want before they go to work or school. I also close very late so that I can cater to those who knock off late from work because majority work at iSithebe industrial area so they leave home early and come back late, therefore my customers get serviced as soon as they wake up until their time of going to bed as I am open 6 AM until 8 PM" (Zimbabwean, Interview, April 2021).

This finding corroborates the report of the Human Sciences Research Council (HSRC) (2014), which reveals that African migrant informal enterprises usually open their shops as early as 6 AM and close at approximately 9 PM. Their long operating hours enable them to achieve high volumes of sales, thereby garnering high profits. Extended business hours often result in increased revenue and improved services for customers.

Stocking a variety of items and pricing

Another reason why the businesses owned by African migrants performed better than those owned by South Africans in Mandeni is that they stock a variety of items. Some of the participants suggested that the growth of their enterprises was due to maintaining high levels of stock, as they sell a variety of items at prices generally preferred by their consumers. Given the significant number of African migrant informal enterprises in the Mandeni municipality, there is a need for an informal enterprise to stock items that are relevant to the needs of the community it serves. This was a very important consideration for the study participants, as they wanted to meet the demands of the community so that more customers would buy from their enterprises. The study participants stated that they studied the community and its needs. Furthermore, they were aware of the general financial status of the community. In other words, they understood that there were high rates of unemployment in the community, and they had to cater for everyone regardless of employment status. The type of items stocked and the frequency of stocking were discussed during the interviews.

The aim of the study participants was not to stock items that would never be bought by their customers but to prioritise stocking items that were regularly needed. Their stocking lists varied, and their informal enterprises were always fully stocked. The participants mentioned that they do not have a set frequency for stocking goods, as they stock to meet the demand of their customers. Some products are stocked almost daily and others on a weekly or monthly basis, depending on how fast they sell. Products such as bread in informal enterprises like spaza shops are stocked daily.
To cater for all the needs of the communities and provide good service to their customers, the study participants pointed out several techniques they use to stock their enterprises so as to attract customers. One of these is prioritising the sale of cheaper brands, which can be afforded by the majority of people. In this regard, one participant added that:

"In my shops, I ensure that I sell cheaper brands in things like bread or even cool drink as they tend to be required by customers very often. For example, a Albany loaf of bread sells for R18,50 each but the sunshine bread only cost R14 each and also a 2-litre bottle of Coca-Cola costs R25 but a 2-litre of Coo-ee is sold for R20" (Somalian, Interview, April 2021).

Again, to cater for all members of the community, some of the study participants sell hampers, which are combinations of different items sold at discounted prices. These hampers are usually made up of bigger products, such as 10 kg bags of maize meal, rice, flour and sugar. For informal enterprises selling fruits and vegetables, hampers include 10 kg of potatoes, onions, butternut and carrots. These hampers are preferred by the communities for two main reasons. Firstly, customers enjoy the discounted prices of buying these items instead of buying from a bigger supermarket. Secondly, because there are very few shopping centres in the Mandeni municipality, most residents incur transportation costs if they have to shop at a supermarket like Boxer, Spar or Shoprite. A study by the African Research Bulletin (2013) in Cape Town showed that African migrant enterprises' 'hamper' offers and collections of bulk products sold at discounted prices have made these enterprises popular amongst township customers.

Furthermore, the participants pointed out that some food items are sold in smaller quantities, allowing customers to afford them regardless of their financial situation. Vegetables such as onions, tomatoes and potatoes, as well as eggs and tea bags, were sold loose and at a lower price, which keeps them in high demand. The Mozambican participant stated that:

"I sell small items (onions, potatoes and tomatoes for R10.00) to cater for those customers who stay alone or do not have refrigerators in their households. These items are also affordable to students who are renting near my shop and they are studying at Umfolozi FET College and my regular customers when they do not have enough money to buy large quantities" (Mozambican, Interview, March 2021).

African migrant informal enterprises are largely known for bulk stocking, and this strategy has been used by many to save costs, as they can get discounts on bulk purchases. Most of the respondents in this study pointed out that bulk stocking was applied to bigger items. This finding concurs with that of Liedeman et al. (2013), who explain that African migrant informal enterprises make use of distribution networks to buy a variety of items cheaply in bulk, which gives them a competitive advantage over their South African counterparts. If stock is bought in bulk and the discounts are passed on to the customers, there is usually great potential for increased volumes of sales. High stock levels ensure customer satisfaction, promote confidence and protect owners against the possibility of shortages owing to delayed deliveries.
The participants believe that they save money with this kind of stocking, as they split transport costs amongst themselves. These findings were supported by Gastrow and Amit (2015), who highlighted that sharing transport costs is an essential factor in why African migrant businesses in townships thrive compared to shops run by locals. Shops in the same location jointly order the goods and products they need, and one delivery van then delivers the order to their shops, with the cost of transportation shared. This business approach allows them to minimise the expenses that would be incurred if each shop ordered its supplies separately and ran its own delivery system.

Small profit and quick returns

African migrants mentioned that another strategy for their informal enterprises was selling goods at relatively low mark-ups, for both cultural and commercial reasons. Commercially, lower prices draw more customers who could buy more goods, particularly in the Mandeni municipality, where there is a significant number of poor people; according to Stats SA (2016), the Mandeni municipality has an unemployment rate of 28.6%. Participants further indicated that their low mark-ups and high turnover are vital for the survival of their informal enterprises. The result is that the African migrant informal enterprises claimed a significant portion of the market in Mandeni, as more people patronised the businesses owned by African migrants. This contributed to the relative success and indeed the growth of these businesses, particularly compared to those owned by South African citizens.

The stocking of products in high demand and the frequent stocking do not necessarily mean that the African migrant informal enterprises make a big profit. These enterprises aim to provide goods and services to the Mandeni community while making a living for their owners. The study respondents held the strong view that their aim is not to get rich fast, if at all. If that were their sole purpose for running an informal enterprise, their prices would be unaffordable in pursuit of profit, pushing customers away. What is important for them is to retain their customers, even if they make only a small profit. Their strategy is to keep prices low, make a small profit, and have quick returns. In an interview, an Ethiopian participant mentioned that:

"I always have R5, R10, R12 airtime and single eggs at all times in my shops. I sell more of these products than any other items daily even though I do not make a lot of profit but I sell them because my regular customers need not look elsewhere for their common product" (Ethiopian, Interview, March 2021).
In essence, the idea of "small profit, quick returns" means that the African migrant enterprise owners keep their prices low and make little profit per item, but rely on quick returns: the more customers who require their services or purchase their goods, the more often the owner makes a small profit. As customers buy, the enterprise owner re-stocks, and the profit accumulates quickly no matter how small each margin is. An example given by one participant was that of airtime:

"While other informal enterprises add an R1 to the original price of the airtime, I sell airtime at its original price. But what I can say is that selling airtime gives me very little profit which is why other informal enterprises add an R1; this is for them to make a profit. The profit I make is less than R1 for airtime especially the cheaper vouchers of R5, R10 and R12, but these vouchers sell very quickly compared to R30 or R60 vouchers. My strategy with selling airtime is that people would buy more from me than they would from my competitors because they did not want to pay extra. For me, this means that my airtime is sold quickly and I stocked it frequently and so a small profit for me is better than a bigger profit with items that sell slowly, I also believe that I make more profit than the informal enterprise that adds extra R1 when it comes airtime sales" (Somalian, Interview, April 2021).

This finding concurs with that of Charman et al. (2016), who explain that African migrant informal enterprises make use of a pricing strategy known as the Small Profit Quick Return strategy to improve their potential gross profits, as lower prices promote increased turnover. Given the intense competition to attract customers, the African migrant informal enterprises also devised a crucial strategy of customer retention. This means that they do not make a profit on certain items or services but continue to offer them as a simple strategy to attract customers. In this regard, one participant added that:

"I sell single biscuit in my shop but it doesn't give me a profit. What makes me keep selling them is that these biscuits keep attracting customers to come to my shop especially young costumers such as the school kids. Because for me to make a profit I would need to increase the price but instead I am choosing to use this as a strategy to attract customers because customers come to the shop to buy other products and use their change to buy these biscuits, so my focus is not on the biscuits but on the money they spend on other products that gives me a profit" (Somalian, Interview, March 2021).
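The airtime example above reduces to simple margin-times-turnover arithmetic, sketched below. The per-voucher margins and daily sales volumes are invented assumptions used only to show why a smaller margin can out-earn a larger one when turnover differs enough.

```python
# Hedged arithmetic behind "small profit, quick returns": total profit is
# margin per sale times number of sales. The margins and volumes below
# are invented assumptions, not figures reported by the participants.

def daily_profit(margin_rand, sales_per_day):
    return margin_rand * sales_per_day

# Seller A: face-value airtime, tiny margin, high turnover (assumed).
# Seller B: adds R1 to the price, bigger margin, fewer sales (assumed).
a = daily_profit(margin_rand=0.5, sales_per_day=80)   # R40/day
b = daily_profit(margin_rand=1.5, sales_per_day=20)   # R30/day
print(f"Low-markup seller: R{a:.0f}/day, high-markup seller: R{b:.0f}/day")
```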
Location of the African migrant informal enterprises

Another reason why the businesses owned by African migrants performed better than those owned by South Africans in Mandeni is that they located their businesses on street corners and in high pedestrian traffic areas. This ensured that their enterprises attracted many customers who came to purchase different goods or services. While the African migrant informal enterprises are generally small and do not opt for big marketing techniques like holding competitions or running specials, they have their own ways of ensuring that their enterprises are visible enough to attract customers. Several strategies were observed during the interviews. Firstly, as has been mentioned, African migrant informal enterprises are mostly located on street corners and never in the middle of the road, and their shopfronts are painted in bright colours visible from a distance, which catches the eye of customers and even people passing by. Many participants mentioned having their informal enterprises on a street corner, highly visible to customers as well as potential customers. For example, one research participant stated that:

"I operate from the corner close to the local sports ground which is also close to a secondary schools. My aim is not only to attract people in the neighbourhood but also teachers and learners but most importantly what I love about this place is that during weekends or even school holidays I also get customers who come to the sports ground to participate and others who come to watch different sporting activities" (Somalian, Interview, April 2021).

Secondly, it was also noticed during the research that the migrants locate their enterprises in high pedestrian traffic areas. This ensured that their enterprises attracted many customers. Their customers come from low-income households in townships and cannot financially sustain daily travel to shopping malls to buy their household needs; the arrangement is a mutual benefit for both the African migrant enterprise owners and the local customers. A participant had this to say:

"High pedestrian traffic makes townships a thriving market for my informal enterprises because of the generally lower mark-ups on goods, my enterprise relies on quick turnover from many customers. Also, many township customers do not own cars and cannot easily reach the big supermarkets like Shoprite or Spar so these residents walk past my enterprise every day" (Zimbabwean, Interview, April 2021).

Indeed, this confirms a study by Bennett (2017) in Cape Town, who also found that African migrant entrepreneurs trade specifically in busy pedestrian traffic areas to provide goods and services needed by the community daily. These findings can also be compared with the study by Smith (2015), who ascertained that the operational strategies used by migrant-owned informal enterprises are now a particularly significant element in the growth of their enterprises and in the landscape of the Gugulethu and Nyanga townships in Cape Town.
Interest-free credit

Most African migrant informal enterprises give credit, with no interest, to their regular customers. All those granting credit acknowledge that this has a positive effect on their enterprises, as it builds trust in and loyalty towards their businesses. Participants believe that selling on credit attracts customers and is also a measure of customer retention. The views of the participants are indicated in the following excerpts:

"I give my regular customers goods on credit especially the pensioners that reside close to my shop and this because majority of the household around here relies on pensioners or social grants because they are not working and also I understand that people sometimes go through difficult times. I have not had problems with my customers. They pay me when they have money. I know my regular customers and I want them to be happy and have food at their table at all times" (Somalian, Interview, April 2021).

"The level of understanding that I have with my customers is amazing. I don't even think about what if they don't pay when I give them goods on credit. It's something I have been doing since I opened this shop. All my regular customers know that I always help whenever they are encountering difficulties" (Zimbabwean, Interview, April 2021).
Conclusions

This study showed that African migrant informal enterprise owners concentrate on the development of their businesses and strive to achieve growth, succeeding at varying levels. Many of those interviewed attributed their good performance, and hence the growth of their informal enterprises, both to the social networks they have formed and to the business strategies they employ. A major finding of this study is that many of the African migrant informal enterprises are linked to social networks. These networks include sharing transport costs when stocking their informal enterprises, disseminating knowledge and expertise on how to improve service delivery to clients, circulating ideas and knowledge amongst other African migrants, and extending loans to start businesses. Social networks (such as sharing transport costs) are helpful because they allow African migrants to sell their goods at lower prices than their South African competitors, which attracts more customers. The sharing of financial resources and knowledge amongst African migrants showed the relevance of social capital theory. In the case of African migrant informal enterprise owners in the Mandeni municipality, social capital included trust, solidarity and ethnic ties. Such resources, although not monetary, still play a very important role in the establishment of African migrant businesses in host countries, and a significant number of the study participants benefited in one way or another from these social resources, which have led to the growth of their informal enterprises.

In addition, the different entrepreneurial strategies employed by African migrants, ranging from selling at lower prices and the strategic location of informal enterprises to long operating hours, stocking a variety of items and interest-free credit, corroborate existing research (see e.g., Charman et al., 2012; Gastrow & Amit, 2013; Liedeman, 2013; Basardien et al., 2014) and contributed to the growth of African migrant-owned informal enterprises. While African migrant entrepreneurs arguably employ superior business strategies compared to their local counterparts who also own informal enterprises, they also show skill in exploiting the market and understanding their customers' needs. As pointed out in the study, some cheaper products sold in African migrant informal enterprises, including cold drinks, cigarettes and bread, are largely preferred and more affordable, especially for unemployed local community members. The main challenges faced by African migrants in the Mandeni municipality are competition and criminal acts, which harmed their enterprises' growth. However, experiencing such challenges has not stopped the study participants from successfully running their informal enterprises, showing that the African migrants are resilient, having adopted strategies to overcome these challenges.
Lastly, this study contributes to the discipline of Geography by illuminating the factors that contribute to the development of African migrant informal enterprises in a small town like the Mandeni municipality. The novelty, therefore, is that this study helps to show that migration does not always flow to big urban cities; smaller towns like Mandeni are also destinations of choice. While this does not disprove that cities like Cape Town, Durban and Johannesburg are the major attractions, it shows that smaller towns also play a role and thus deserve appropriate attention in migration studies in the context of Human Geography.

Recommendation

The study discovered that African migrants outperform their local competitors. A starting point would be to establish a relationship between African migrants and local informal entrepreneurs so that they can share knowledge on how to sustain their businesses. African migrant informal entrepreneurs and local informal entrepreneurs could collaborate through the African migrants mentoring the local informal entrepreneurs, which would help to strengthen the bond between the two groups. The Mandeni municipality could organise business seminars and invite both African migrant informal entrepreneurs and local informal entrepreneurs, creating positive interaction between them. In the long run, this would contribute to the reduction of xenophobia. It would also promote tolerance toward African migrants while assisting in attempts to make African migrants a part of the municipality rather than being regarded as a threat. Anti-xenophobia campaigns should be launched by community organisations. The media should also assist in highlighting and educating the local community about the worthwhile contributions of African migrant informal enterprises. The Mandeni municipality should incorporate policies that stipulate how it expects the media to assist in educating the public.
Shrinkage Reduction in Nanopore-Rich Cement Paste Based on Facile Organic Modification of Montmorillonite

The organic modification of montmorillonite was successfully achieved using cetyltrimethyl ammonium bromide under facile conditions. The modified montmorillonite was subsequently used for the fabrication of montmorillonite-induced nanopore-rich cement paste (MNCP), and the shrinkage behavior and fundamental performance of MNCP were also investigated. The results indicate that the alkali cations on the montmorillonite layer surfaces were exchanged by using CTAB at 80 °C, successfully achieving the organic modification of montmorillonite. As a pore-forming agent, the modified montmorillonite caused a reduction in shrinkage: the 28-day autogenous shrinkage at design densities of 400 kg/m³ and 800 kg/m³ was reduced to 2.05 mm/m and 0.24 mm/m, and the highest reduction percentages during the 28-day drying shrinkage were 68.1% and 62.2%, respectively. The enlarged interlamellar pores and hydrophobic effects caused by the organic modification of montmorillonite aided this process. Organic-modified montmorillonite had a minor influence on dry density and thermal conductivity and could contribute to an enhancement of strength in MNCP.

Introduction

Cement-based porous materials are popular in building insulation due to their thermal insulation and energy-saving properties, light weight, simple processing, low cost, fire safety, and carbon sequestration [1–3]. However, compared to organic thermal insulation materials, their performance is significantly lacking [4–6]. To solve this problem, the thermal insulation performance of cement-based porous materials must be improved from the perspective of pore structure optimization [7]. However, the thermal conductivity of porous cement-based materials remains high due to the large thermal conductivity of the cement-based matrix and the fact that the percentage decrease in thermal conductivity achievable through pore structure optimization is highly limited [8,9]. Moreover, high porosity or low density can also be used to reduce thermal conductivity and improve thermal insulation ability [10,11], since the thermal conductivity of the increased phase (air) is extremely low. Usually, air voids, treated as macroscopic harmful pores, lead to a significant weakening of mechanical properties; therefore, achieving a balance of good thermal insulation performance and mechanical properties is difficult [12,13], greatly limiting the widespread application of cement-based porous materials.

To resolve this critical problem, the replacement of macroscopic pores (air voids) with microscopic pores was proposed to mitigate the damage that pores do to mechanical properties while improving thermal insulation performance [14]. This is because the decrease in pore size can cause a significant reduction in the thermal conductivity of the gas phase in pores and largely extend the heat transfer path of the solid phase [14,15]. Therefore, this approach is regarded as a promising method for balancing insulation performance and other properties, such as mechanical strength. Usually, macroscopic pores (air voids) can be replaced by microscopic pores using pore-forming media, such as aerogels [16,17], but the high cost of aerogels hinders their widespread application. Jiang et al.
[14] developed a low-cost, montmorillonite-based pore-forming medium that can also be used to construct rich microscopic pores in a cement matrix, and this was successfully utilized to prepare nanopore-rich cement pastes. However, due to the construction of rich nanopores, significant shrinkage of these pastes often occurs, leading to a tremendous risk of cracking, which seriously affects their durability and applications. Significant shrinkage is currently the greatest challenge in this field of research.

Researchers have adopted many approaches to reducing the shrinkage of porous materials, such as adding various types of fibers, shrinkage-reducing agents, and expansion agents [18–21]. However, due to the light weight of porous materials, the contact surface between fibers and the cement matrix is minimal, and the shrinkage-limiting ability of fibers is weakened. A shrinkage reducer is often used to reduce shrinkage in concrete; for porous materials with a high number of microscopic pores, however, the critical dosage of the admixture is relatively high, since the shrinkage reducer should be present in all microscopic pores and maintain a certain concentration. Expansion agents can also reduce shrinkage to a certain extent due to their expansion capacity, but a reduction in shrinkage may be difficult to guarantee during curing and service, because water loss may occur at any time. Shrinkage cannot be effectively controlled using these external interventions [22,23]. Thus, shrinkage reduction should be carried out based on the source of shrinkage generation. Shrinkage stems from water loss in small pores, and the Young-Laplace equation indicates that small pores (≤10 nm) in cement-based materials generate great shrinkage stress when water migrates from these pores [24]. In particular, smaller pores cause greater shrinkage stress than larger pores [25]. For nanopore-rich cement-based materials prepared using a montmorillonite-based pore-forming agent, these extremely small nanopores (interlaminar pores) generate high shrinkage stress and are strongly related to the structure of montmorillonite. A montmorillonite unit contains two tetrahedral silica sheets and an octahedral alumina sheet centrally located and sandwiched between the two tetrahedral sheets. Usually, a montmorillonite layer has a negative charge due to the isomorphic substitution of Al³⁺ by Mg²⁺ in the octahedral sheet and of Si⁴⁺ by Al³⁺ in the tetrahedral sheet [26]. This charge is often balanced by alkali or alkaline earth cations attracted to the mineral layer surface. To maintain this charge balance, the layers are arranged into a multilayer structure: the cations sit in the interlamination, stabilize the position of the layers, and form rich and extremely small pores. However, the cations are easily exchanged for organic cationic surfactants, ultimately achieving the intercalation of the surfactants; this is often called the organic modification of montmorillonite [27,28]. After intercalation, the chains of the surfactants must extend into the interlamination due to space limitations, which increases the amount of occupied space, finally resulting in an enlargement of the interlayer pores [29,30]. Based on the shrinkage mechanism and the Young-Laplace equation [24], these enlarged pores tremendously reduce shrinkage stress, contributing to shrinkage reduction. Moreover, the carbon chain of a surfactant often generates a hydrophobic effect. After organic modification, the carbon chains of the surfactant can endow the interlamination with hydrophobic ability. Water cannot easily penetrate these interlayer spaces, helping to control shrinkage [31,32], since water loss is the precondition of shrinkage. Therefore, intercalation via organic modification may be effective in reducing shrinkage and preparing low-shrinkage nanopore-rich cement-based materials, demonstrating great potential in addressing the challenge of shrinkage control, which has seldom been studied.
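For readers who want the quantitative link between pore radius and shrinkage stress invoked above, a minimal statement of the Young-Laplace relation can be written as follows; the surface tension γ and contact angle θ values in the comment are illustrative assumptions, not values reported in this study.

```latex
% Capillary (shrinkage) stress from the Young-Laplace equation:
% \sigma_c rises sharply as the pore radius r shrinks.
\sigma_c = \frac{2\gamma\cos\theta}{r}
% Illustrative comparison (assumed \gamma = 0.072\,\mathrm{N/m}, \theta = 0):
% r = 5\,\mathrm{nm}  \Rightarrow \sigma_c \approx 28.8\,\mathrm{MPa}
% r = 50\,\mathrm{nm} \Rightarrow \sigma_c \approx 2.9\,\mathrm{MPa}
```

This order-of-magnitude contrast is why enlarging the interlaminar pores, even by a fraction of a nanometer, can meaningfully relax the shrinkage stress.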
In this study, the organic modification of a montmorillonite-based pore-forming agent was conducted first, and then nanopore-rich cement pastes were prepared using the modified pore-forming agents. Their drying shrinkage and autogenous shrinkage were subsequently measured to assess the effectiveness of the modification in reducing shrinkage, and their hardened performance was investigated. The results of this study could inspire innovative approaches to effectively reducing shrinkage based on its source, contributing to lowering the risk of cracking, thus guaranteeing the durability of nanopore-rich cement-based materials and promoting their applications in building insulation. The successful application of these high-performance, nanopore-rich, cement-based materials will narrow the gap between organic insulation materials and fire-safe, cement-based materials, greatly improving energy efficiency and ultimately saving energy in buildings.

Raw Materials

Portland cement that met Chinese standard GB 175 [33] (P·O 42.5 R; initial setting time, 3.8 h; final setting time, 4.8 h; and 28-day compressive strength, 51.0 MPa; purchased from Lafarge Cement Plant in Jiangyou City, China) was used as a binder to prepare nanopore-rich cement pastes, and its particle size distribution is presented in Figure 1. Montmorillonite is a commercial product (bentonite) that was purchased from a Chinese company (Weifang Shengshi Co., Ltd., Weifang, China); its particle size distribution is also shown in Figure 1. The montmorillonite contains quartz, feldspar, and illite. The modifier, cetyltrimethyl ammonium bromide (CTAB), is of analytical grade and was provided by Fucheng Chemical Co., Ltd. (Tianjin, China).
Mix Design and Preparation

According to reference [34], a slurry of 18% montmorillonite can be used as the nanopore-forming agent to fabricate nanopore-rich cement pastes. Before preparation, montmorillonite was mixed with water at 7000 r/min for 1 h; the concentration of montmorillonite was 18% by weight, and the slurry temperature was maintained at (20 ± 2) °C during the mixing process. Subsequently, the montmorillonite slurry was placed in a room-temperature environment (20 ± 2 °C) for 24 h to produce the original nanopore-forming agent. For the facile organic modification of the nanopore-forming agent, various dosages of CTAB were added to the original montmorillonite slurry (concentration of 18%), and the slurry was then kept at 80 °C for 2 h at a mixing speed of 60 r/min to achieve organic modification. Following these procedures, the modified nanopore-forming agent was washed twice using deionized water to remove excess CTAB, and organic montmorillonite (O-MMT) was obtained. When organic montmorillonite (O-MMT) was used as the nanopore-forming agent, the concentration of the O-MMT slurry was adjusted back to 18%.
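As a small illustration of the batching arithmetic implied above, the sketch below computes the montmorillonite and water masses for a given batch of 18 wt% slurry and the corresponding CTAB mass; the batch size and the interpretation of CTAB dosage as a percentage of the dry MMT mass are assumptions made for illustration, not details stated in the paper.

```python
def slurry_batch(total_mass_kg, mmt_wt_frac=0.18, ctab_dosage=0.25):
    """Masses for an MMT slurry batch.

    Assumptions (not stated explicitly above):
    - "18% by weight" means dry MMT mass / total slurry mass;
    - the CTAB dosage is expressed relative to the dry MMT mass.
    """
    mmt = total_mass_kg * mmt_wt_frac   # dry montmorillonite
    water = total_mass_kg - mmt         # mixing water
    ctab = mmt * ctab_dosage            # modifier for organic modification
    return mmt, water, ctab

mmt, water, ctab = slurry_batch(10.0)   # a hypothetical 10 kg batch
print(f"MMT: {mmt:.2f} kg, water: {water:.2f} kg, CTAB: {ctab:.2f} kg")
# -> MMT: 1.80 kg, water: 8.20 kg, CTAB: 0.45 kg
```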
The O-MMT slurry and the MMT slurry were used as nanopore-forming agents for the fabrication of MNCP. The content of the nanopore-forming agent was calculated based on the design density, and the density of MNCP was strongly associated with the solid phases in the system, such as cement and MMT or O-MMT. Because the nanopore-forming agent has a lower density than cement paste, replacing part of the cement paste with the agent in multiple trial experiments could be used to obtain a suitable design density. Following this design philosophy and reference [34], two batches of MNCPs were designed, and the mix proportions are shown in Table 1. Water was first added to a mixer, into which cement was poured and mixed (90 s) at 400 r/min to obtain cement paste. Subsequently, the nanopore-forming agent was introduced into the cement paste and mixed at the same speed until a homogeneous MNCP slurry was obtained. Finally, this slurry was placed into molds, covered with plastic films, and cured at (20 ± 2) °C in a chamber at 90% relative humidity (RH) to achieve hardening. After hardening, the samples were unmolded; some were covered with tinfoil for the autogenous shrinkage test, while others were cured in the same environment for the hardened performance tests, and 28-day-cured samples were used for the drying shrinkage test.

Test Methods

The hardened performance of MNCPs fabricated using the unmodified or modified pore-forming agent was evaluated using a strength test after 7-day, 28-day, and 56-day curing, based on ISO 679 [35]. Three samples of each mixture were used to obtain the strength at a specific age, and the loading rate for all samples was 2.4 kN/s. The equipment was a microcontrolled electronic universal testing machine (SANS, CMT5105, Shenzhen, China). Three 28-day-cured samples of each composition were used to obtain the thermal conductivity using the hot-disk method (DRE-2C, Xiangyi Instrument Co., Ltd., Xiangtan, China) in accordance with ISO 22007 [36]. Before the test, all samples were dried at 80 °C and polished.

For the drying shrinkage test, three 28-day-cured samples (40 mm × 40 mm × 160 mm) of each mixture were completely immersed in water at (20 ± 2) °C for 3 days. These samples were then taken out, free water at the surface was wiped off, and the initial length was determined using dial gauges (BC 300, Tianjing Jianyi Instrument Co., Ltd., Cangzhou, China). Subsequently, the samples were placed in an environment of (50 ± 5)% humidity and (20 ± 2) °C temperature, and the mass and length changes were recorded according to Chinese standard JC/T 603 [37]. For autogenous shrinkage, the experiment was conducted based on ASTM C1698, and the volume change in the sample between the final setting and the unmolded state was recorded using a YC-BWS apparatus (Beijingyichuangshidai Technology Co., Ltd., Beijing, China). After the MNCPs (40 mm × 40 mm × 160 mm) were unmolded, they were covered with aluminum tinfoil and sealed using paraffin to prevent water loss, and the length was then continuously measured using the same dial gauges.
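For clarity, the shrinkage values reported below in mm/m follow the usual length-change definition for dial-gauge measurements; this is the standard relation rather than a formula quoted from the paper.

```latex
% Shrinkage (in mm/m) from dial-gauge readings:
% L_0 = initial gauge length, L_t = length after t days of exposure.
\varepsilon_t = \frac{L_0 - L_t}{L_0} \times 1000 \quad [\mathrm{mm/m}]
```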
Based on the shrinkage-generation mechanism of cement-based materials, small pores exert a great influence on shrinkage, especially small nanopores, which dominate the shrinkage of cement-based materials [24]. Nitrogen adsorption/desorption isotherms can reveal the characteristics of these pores. More importantly, the isotherms are obtained from the adsorption and desorption of nitrogen molecules, which is nondestructive. The isotherms were measured using an Autosorb-IQ (Quantachrome, Boynton Beach, FL, USA) at liquid-nitrogen temperature over a relative pressure range of 0.05 to 0.99. The measurements were conducted on samples in three replicates. After the isotherms were obtained, the pore size distributions were determined from the level of nitrogen adsorption using the BJH method. Before this test, the samples were dried at 60 °C in a vacuum-drying oven until their mass remained unchanged.

Montmorillonite was characterized using an X-ray diffractometer (XRD, Cu target, D8 ADVANCE diffractometer, Bruker, Germany, 10°/min). Prior to this measurement, the MMT or O-MMT slurry was dried in the same environment as mentioned above, and the dried materials were then ground until all particles were smaller than 80 µm. Subsequently, these powders were loaded onto a plate sample holder via side loading to reduce preferred orientation effects and placed in the diffractometer, and the powder diffraction curves were recorded for further analysis. Moreover, these particles were used to obtain Fourier-transform infrared spectroscopy curves using a SPECTRUM ONE AUTOIMA (PerkinElmer, Waltham, MA, USA) in the range of 4000–400 cm⁻¹ at a resolution of 4 cm⁻¹ to evaluate the modification results. The background material was FTIR-grade ground KBr.
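As background on the BJH analysis mentioned above, BJH builds on the Kelvin equation, which relates the relative pressure at which capillary condensation occurs to a pore (meniscus) radius; the relation below is the standard textbook form, not a formula quoted from this paper.

```latex
% Kelvin equation underlying the BJH pore-size analysis:
% r_K = meniscus radius, \gamma = surface tension of liquid N_2,
% V_m = molar volume of liquid N_2, p/p_0 = relative pressure.
\ln\!\left(\frac{p}{p_0}\right) = -\,\frac{2\gamma V_m}{r_K R T}
```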
Montmorillonite Modification

Single montmorillonite (MMT) layers or tactoids can be used to separate and refine the capillary space of the cement matrix to generate nanopore spaces, and these nanolayers or tactoids can be fixed in the inner pores due to the interaction between the MMT layers and hydration products; therefore, these nanopore spaces cannot be destroyed by conventional drying and form rich nanopores. However, it is difficult to make all of the layers react with hydration products and to generate enough hydration products at the MMT layers or tactoids. Therefore, some layers may be transformed into multilayers, forming interlaminar pores, which are extremely small and have a major effect on increasing shrinkage [24]. The Young-Laplace equation indicates that the enlargement of these extremely small pores is effective in reducing shrinkage stress, and thus increasing the size of the original interlaminar pores of MMT might be a potentially effective approach to reducing shrinkage. The intercalation of montmorillonite is one of the most common ways of enlarging the interlayer pore size [38]. As shown in Figure 2, the main mineral phase is montmorillonite. The change in the 001 peak corresponding to the layer structure indicates that adding the modifier to the original nanopore-forming agent (MMT slurry) significantly increases the interlayer distance, because the associated (001) peak of the MMT interlaminar pore moved toward smaller 2θ angles. Specifically, when the content of the modifier increases from 0 to 37.5%, the position of the 001 peak of MMT shifts from 6.1° to 4.7°, while the interlaminar distance of the MMT increases from 1.44 nm to 1.93 nm (an increase of 34.0%). When the content of the modifier is 25%, the interlaminar pore size already increases to 1.82 nm, only 0.11 nm less than at 37.5%. An excess of modifier is therefore not necessary, and the optimized dosage of the modifier is 25%.
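The interlayer spacings quoted above follow from Bragg's law applied to the (001) reflection; the quick check below, assuming the Cu Kα wavelength of 0.154 nm (consistent with the "Cu target" noted in the methods), reproduces the reported values to within a few percent.

```python
import math

WAVELENGTH_NM = 0.154  # Cu K-alpha, assumed from the Cu target noted above

def d_spacing(two_theta_deg, wavelength_nm=WAVELENGTH_NM):
    """Bragg's law with n = 1: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2)
    return wavelength_nm / (2 * math.sin(theta))

for two_theta in (6.1, 4.7):
    print(f"2theta = {two_theta} deg -> d(001) = {d_spacing(two_theta):.2f} nm")
# 2theta = 6.1 deg -> d(001) = 1.45 nm  (reported: 1.44 nm)
# 2theta = 4.7 deg -> d(001) = 1.88 nm  (reported: 1.93 nm)
```

The small mismatch at 4.7° suggests the reported 1.93 nm was computed from a slightly different peak position or wavelength, but the agreement is close enough to confirm the stated trend.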
The modification result is also reflected in the Fourier-transform infrared spectroscopy curves. As shown in Figure 3, the MMT has two distinct absorption bands in the high-frequency region. The bands at 3430 cm⁻¹ and 1620 cm⁻¹ correspond to the vibration of H-OH [39]. The band at 3628 cm⁻¹ is associated with the Al-OH functional group [40]. The peak of Si-O-Si was observed at 790 cm⁻¹ [40]. The weak absorption band at 912 cm⁻¹ is related to the Al-OH vibration [41]. The bands at 445 cm⁻¹ and 526 cm⁻¹ might be coupling vibrations of OH and Si-OH [38,42]. In the O-MMT, the characteristic bands of the modifier appear. Because of the multiple washing steps, the unreacted modifier had been removed; the characteristic bands, such as those of C-H at 2860 cm⁻¹ and 2929 cm⁻¹, therefore arise only from O-MMT [43]. This is attributed to ion exchange between the modifier and the MMT layer. A montmorillonite unit contains two tetrahedral silica sheets and an octahedral alumina sheet centrally located and sandwiched between the two tetrahedral sheets. Usually, a montmorillonite layer has a negative charge due to the isomorphic substitution of Al³⁺ by Mg²⁺ in the octahedral sheet and of Si⁴⁺ by Al³⁺ in the tetrahedral sheet. This charge is often balanced by alkali or alkaline earth cations attracted to the mineral surface, and these cations are easily exchanged by organic cationic surfactants (such as CTAB); therefore, the chain of CTAB can be adsorbed on the surface of the layer, as presented in Figure 4. The XRD results demonstrate the enlarged interlaminar pore size (Figure 2). Therefore, it is not difficult to infer that the adsorption site is on the layer surface and not at the end of the layer (Figure 4), because the interlaminar space is formed by two opposing layers and the enlarged interlaminar pore size can only occur if CTAB chains exist in the interlaminar space. This also reveals that the carbon chains radiate from the layer surface into the interlaminar space (Figure 4), since this distribution of the molecular chains can increase the layer spacing [43]. With these facts in mind, the peaks of C-H at 2860 cm⁻¹ and 2929 cm⁻¹ indicate that the modifier successfully entered the interlaminar space; dried O-MMT has a larger interlaminar space than that of MMT [30].
The interlaminar structure was characterized by nitrogen adsorption/desorption, as shown in Figure 5, which indicates a larger hysteresis loop area and higher adsorption and desorption values for O-MMT. This demonstrates that the pores formed by layer stacking were enlarged. With these facts in mind, the MMT was successfully modified by using CTAB under facile conditions (as described in Section 2.2; the related mechanism is illustrated in Figure 4). When the modifier was introduced into the MMT slurry under the facile modification conditions, cation exchange in the MMT occurred: the alkali cations attracted to the mineral surface were exchanged by the organic cationic surfactant, ultimately achieving its intercalation. Moreover, due to the intercalation, the carbon chains of the modifier radiate from the layer surfaces into the interlaminar space, increasing the interlayer distance and thus enlarging the pore size [30]. More importantly, these chains near the layer surface improve the hydrophobicity of the MMT layer, so that water cannot easily penetrate the layer space; this hydrophobic effect of the interlayer spaces helps to prevent shrinkage due to water loss [31,32].
Drying and Autogenous Shrinkage

As mentioned above, 25% of the modifier was suitable for fabricating O-MMT. After the modification of MMT, the modified MMT (O-MMT) and original MMT slurries were used as pore-forming agents to fabricate nanopore-rich cement pastes. The drying shrinkage of the montmorillonite-induced nanopore-rich cement paste (MNCP) is presented in Figure 6. The shrinkage of the low-density MNCP (design density 400 kg/m³) rapidly increased within 14 days and changed only slightly after 14 days of exposure to a dry environment. The water loss in the low-density samples showed a similar trend: the main water loss occurred within 14 days, and exposure for extended periods of time reduced the water loss rate. For the low-density samples, the shrinkage at the same exposure age was effectively limited by the complete replacement with the O-MMT nanopore-forming agent. For example, at 7 days of exposure, the shrinkage values of the low-density MMT samples were 92.50 mm/m (w/c = 0.3) and 70.33 mm/m (w/c = 0.5). Further extending the exposure age resulted in the fragmentation of the unmodified (MMT) samples; therefore, the shrinkage values of these samples could not be measured continuously, as shown in Figure 6. However, when the modified nanopore-forming agent (O-MMT) was used to prepare MNCPs, severe shrinkage and fragmentation did not occur, and the 28-day shrinkage values of the low-density samples were 29.52 mm/m and 27.13 mm/m, respectively. For w/c values of 0.3 and 0.5, the reduction rates were high: 68.1% and 61.5%, respectively.
For the high-density samples (design density 800 kg/m³), the main water loss was concentrated in the first 7 days. However, when the samples were continuously exposed to a dry environment, the water loss and shrinkage remained considerable, because the more complex and smaller pores in the high-density samples made water migration more difficult, and thus significant water loss took longer. Shrinkage is caused by water loss; therefore, the shrinkage of the high-density MNCP rapidly increased during the first 7 days of exposure, as shown in Figure 6, and extending the exposure time resulted in relatively smaller increases in the shrinkage values. For the high-density, unmodified samples (MMT in Figure 6), severe shrinkage still occurred, but fragmentation did not, due to the lower content of pores and higher strength. Similar to the trend in the low-density samples, the shrinkage values of the modified samples fabricated using the O-MMT slurry were lower than those of the unmodified (MMT) samples. For instance, at 28 days of exposure, the shrinkage values of the high-density MMT samples were 14.50 mm/m and 14.81 mm/m, larger than those of the O-MMT samples (5.60 mm/m and 5.60 mm/m). The reduction rates of shrinkage were 61.4% and 62.2%, respectively, following the complete replacement with the O-MMT slurry, demonstrating a significant drying shrinkage reduction.
Self-desiccation occurs during the cement hydration process, causing a decrease in the internal relative humidity of the cementitious system and generating shrinkage; this phenomenon is defined as autogenous shrinkage [44]. According to ASTM C1698 [45], the autogenous shrinkage value was recorded after the final setting of the cement slurry. Figure 7 shows the change in the autogenous shrinkage of the MNCP at different curing ages. The autogenous shrinkage of the samples rapidly increased within 14 days when the unmodified nanopore-forming agent (MMT) was used. However, when the O-MMT slurry completely replaced the MMT slurry in the low-density samples, the shrinkage value increased rapidly only during the first 3 days and changed slightly thereafter. For the high-density samples, the autogenous shrinkage of the modified (O-MMT) samples continuously increased as the curing time went from 0 to 28 days.

Autogenous shrinkage stems from water consumption (caused by cement hydration). When O-MMT is used for low-density MNCP, the dosage of O-MMT is large, playing a vital role in the change in shrinkage. Because of the hydrophobic effect of the carbon chains of the modifier (in O-MMT), water cannot be present in the interlamellar pore spaces [46]. However, the water in MMT is abundant and sufficient time is required for its consumption, meaning that this process takes longer than in the modified (O-MMT) samples, ultimately causing greater shrinkage. This shrinkage developed over 14 days for the MMT-fabricated samples but over just 3 days for the O-MMT-fabricated samples.
For the high-density O-MMT sample, autogenous shrinkage was mainly controlled by cement hydration due to the high content of cement and the low dosage of the nanopore-forming agent (Table 1), and the effect of the O-MMT itself (with little water in the interlamination because of the hydrophobicity of the modifier) was minor. Generally, cement hydration cannot be completed quickly, and thus the autogenous shrinkage value of the O-MMT sample increases over longer periods of time. For the high-density MMT sample, the autogenous shrinkage caused by water consumption in the interlamellar pores was large; the consumption of this water generated the main shrinkage stress, and thus the autogenous shrinkage from cement hydration could be ignored. Therefore, similar to the autogenous shrinkage of the low-density MMT sample, the continuous increase after 14 days was not significant: autogenous shrinkage changed rapidly before 14 days and increased only slightly afterwards. This was mainly attributable to shrinkage caused by water consumption; the water in the MMT was consumed continuously over short time periods by the early hydration of cement, so little water remained in the MMT at later ages.

As shown in Figure 7, when the O-MMT slurry was used for the fabrication of high-density MNCP, the autogenous shrinkage values at 28 days were 0.39 mm/m (w/c = 0.3, 800 kg/m³) and 0.24 mm/m (w/c = 0.5, 800 kg/m³), and the reduction rates compared with those of the unmodified MNCP were 42.6% (w/c = 0.3) and 56.4% (w/c = 0.5), respectively; the shrinkage reduction effect was therefore excellent. At 7 days, the shrinkage values of the unmodified samples were 0.42 mm/m (w/c = 0.3) and 0.30 mm/m (w/c = 0.5). When the O-MMT slurry was used, the shrinkage values of the modified samples changed to 0.10 mm/m and 0.03 mm/m, reductions of 76.2% and 90.0%, respectively. At 3 days, the autogenous shrinkage of the modified samples was close to 0, while the values of the unmodified samples were 0.22 mm/m (w/c = 0.3) and 0.20 mm/m (w/c = 0.5).

Similar to the high-density samples, when the MMT slurry was used for the low-density MNCP preparation, the autogenous shrinkage values were high: 7.76 mm/m (w/c = 0.3, 400 kg/m³) and 5.35 mm/m (w/c = 0.5, 400 kg/m³). However, these values decreased with the use of the O-MMT slurry to 2.05 mm/m (w/c = 0.3) and 2.08 mm/m (w/c = 0.5), reductions of 73.6% and 61.1%, respectively. At 7 days of curing, the autogenous shrinkage values of the unmodified samples were 4.11 mm/m and 2.79 mm/m at w/c values of 0.3 and 0.5, respectively. These values were reduced to 1.31 mm/m and 1.41 mm/m by the replacement with the O-MMT slurry, reductions of 68.1% and 49.5%, respectively. At 3 days, the values for the unmodified samples were 1.46 mm/m (w/c = 0.3) and 1.20 mm/m (w/c = 0.5); replacing the MMT slurry with the O-MMT slurry reduced them to 1.05 mm/m and 1.14 mm/m, respectively.
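The reduction percentages quoted above can be verified directly from the reported shrinkage values; the short check below recomputes a few of them (values taken from the text, with rounding differences of a few tenths of a percent to be expected).

```python
def reduction_pct(unmodified, modified):
    """Percentage reduction of a shrinkage value after O-MMT replacement."""
    return (unmodified - modified) / unmodified * 100

# (unmodified, modified, reported %) triples taken from the text above
cases = [
    (14.50, 5.60, 61.4),  # 28-day drying shrinkage, 800 kg/m3, w/c = 0.3
    (14.81, 5.60, 62.2),  # 28-day drying shrinkage, 800 kg/m3, w/c = 0.5
    (0.42, 0.10, 76.2),   # 7-day autogenous shrinkage, 800 kg/m3, w/c = 0.3
    (7.76, 2.05, 73.6),   # 28-day autogenous shrinkage, 400 kg/m3, w/c = 0.3
    (4.11, 1.31, 68.1),   # 7-day autogenous shrinkage, 400 kg/m3, w/c = 0.3
]
for unmod, mod, reported in cases:
    print(f"computed {reduction_pct(unmod, mod):5.1f}%  vs reported {reported}%")
```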
As mentioned in reference [24], the reduction in shrinkage is also related to the extremely small pores (pore size ≤ 10 nm), because small nanopores cause extremely large shrinkage stress while large pores cause low shrinkage stress [24]. Thus, nitrogen adsorption/desorption was used to quantitatively characterize the structure of these small pores, as shown in Figure 8. When O-MMT was used to completely replace MMT, the cumulative pore volume of the MNCP at the same w/c and density grade was remarkably reduced. For the low-density samples, when the water-to-cement ratios were 0.3 and 0.5, the cumulative pore volume (≤10 nm) was reduced from 0.21 cc/g to 0.06 cc/g and from 0.08 cc/g to 0.04 cc/g, respectively; the reduction percentages were high: 71.4% and 50.0%. For the high-density samples, the pore volume (≤10 nm) was reduced from 0.06 cc/g to 0.01 cc/g and from 0.09 cc/g to 0.02 cc/g at w/c values of 0.3 and 0.5, with reduction rates of 83.3% and 77.8%. Moreover, the pore volume at the same pore size generally decreased, as shown in Figure 8. The significant decrease in small pores contributed to the reduction in shrinkage stress and had a major effect on the shrinkage reduction of the MNCP. Moreover, water loss is the precondition of shrinkage. As mentioned in Section 3.1, the carbon chains radiate from the MMT layer surface outwards, causing a hydrophobic effect and preventing water from approaching the MMT layer. Although MMT multilayers may form in the MNCP system, water cannot enter the interlamellar spaces because the hydrophobic carbon chains exist in the interlamellar matrix. With no water present in the interlayer space, the shrinkage stress from water loss can be significantly reduced, which contributed to reducing the shrinkage of the MNCP. When O-MMT was used to replace MMT, it enabled the fabrication of low-shrinkage MNCP as a kind of nanoporous material with a low shrinkage value [47–49]. Due to the reduced drying and autogenous shrinkage, the cracking risk of the MNCP is greatly controlled, making it possible to apply cast-in-place or prefabricated MNCP in the external walls or roofs of buildings, which could further improve energy efficiency.

Fundamental Performance

MMT modification was effective in controlling shrinkage, and the effect of this approach on the density, compressive strength, and thermal conductivity of the samples was also investigated. As shown in Figure 9, the dry density of the MNCP changed only slightly. For the low-density grade, the dry density of the MNCP changed from 485 kg/m³ to 473 kg/m³ and from 490 kg/m³ to 482 kg/m³ at w/c values of 0.3 and 0.5 when the O-MMT slurry was used to replace the MMT slurry. For the high-density grade, the dry densities varied from 965 kg/m³ to 975 kg/m³ and from 900 kg/m³ to 903 kg/m³ at w/c values of 0.3 and 0.5.
Due to the reduction in shrinkage stress, the generation of cracks was severely limited, thus reducing the primary cracks, which can improve the compressive strength of MNCP. As shown in Figure 10, the use of O-MMT as an MMT replacement did not generate significant adverse effects on the strength of the samples at any age or density grade. For example, when the dry density was low, the compressive strength at 7 days changed from 0.75 MPa to 0.72 MPa and from 0.66 MPa to 0.95 MPa at w/c values of 0.3 and 0.5; the related 28-day strength varied from 1.04 MPa to 0.93 MPa and from 0.85 MPa to 1.12 MPa, and the associated 56-day strength changed from 1.21 MPa to 1.15 MPa and from 0.94 MPa to 1.3 MPa, respectively. Similarly, for the high-density samples, the 7-day, 28-day, and 56-day compressive strength values changed from 5.61 MPa to 6.38 MPa, from 6.43 MPa to 8.88 MPa, and from 6.73 MPa to 9.09 MPa at a w/c of 0.3; the percentage increases were 13.7%, 38.1%, and 35.1%, respectively. When the w/c was 0.5, the 7-day, 28-day, and 56-day compressive strength values increased from 4.86 MPa to 5.78 MPa, from 5.90 MPa to 7.04 MPa, and from 6.40 MPa to 7.91 MPa; the percentage increases were 18.9%, 19.3%, and 23.6%, respectively.
Shrinkage reduction contributed to controlling cracks. When the density of the samples was low, cracks frequently occurred due to shrinkage. These cracks were often connected, which caused high heat convection and increased the thermal conductivity of the samples [50,51]. However, when the modified nanopore-forming agent (O-MMT) was used, the shrinkage of the low-density samples significantly decreased, contributing to a reduction in cracks. A modified sample with a low density therefore has a lower thermal conductivity than an unmodified sample, as shown in Figure 11. When MMT was completely replaced by O-MMT, the thermal conductivity was reduced from 0.110 W/(m·K) to 0.080 W/(m·K) and from 0.100 W/(m·K) to 0.090 W/(m·K) at w/c values of 0.3 and 0.5; the reduction rates were 27.3% and 10.0%, respectively. For the high-density samples, the dosage of the nanopore-forming agent was low, as shown in Table 1; the shrinkage was relatively low and the pore skeleton was strong, giving a strong ability to resist cracking. Therefore, connected cracks in these samples were scarce, ultimately causing only a slight change in the thermal conductivity, as shown in Figure 11 [13]. When the water-to-cement ratios were 0.3 and 0.5 for the high-density samples (design density: 800 kg/m³), the thermal conductivity values of the unmodified samples were 0.165 W/(m·K) and 0.175 W/(m·K), and the values of the modified samples were 0.170 W/(m·K) and 0.180 W/(m·K), respectively, showing only a minor change.

Combined with the positive effect of the modification on strength and its minor effect on thermal conductivity, MNCP can maintain good mechanical and thermal insulation performance. This means that it can not only be used as an insulation layer in sandwich structures, but can also be directly applied in the external building envelope as a self-insulating wall or roof material, improving the energy efficiency of buildings.
Conclusions

The organic modification of montmorillonite was successfully achieved by using cetyltrimethyl ammonium bromide under facile conditions, and montmorillonite-induced nanopore-rich cement paste (MNCP) was prepared to determine the effect of the organic modification of montmorillonite on shrinkage behavior and fundamental performance. The main conclusions can be summarized as follows:

(1) Montmorillonite can be modified by using cetyltrimethyl ammonium bromide at 80 °C for 2 h, successfully achieving organic modification, which enlarges the interlayer pores and brings the hydrophobic chains into the interlamination, hindering the penetration of water molecules.

(2) Autogenous and drying shrinkage were significantly reduced when organic-modified montmorillonite was used to replace the original montmorillonite. The 28-day autogenous shrinkage at design densities of 400 kg/m³ and 800 kg/m³ was reduced to 2.05 mm/m and 0.24 mm/m, respectively, and the highest reduction percentages for the 28-day drying shrinkage reached 68.1% and 62.2%, respectively.

(3) Organic-modified montmorillonite has a minor influence on the dry density and thermal conductivity of MNCP, but it contributed to enhancing the strength of MNCP.

Figure 1. Particle size distribution of cement and montmorillonite.
The Influence of Progressive Muscle Relaxation Techniques on the Depression Level of Chronic Kidney Disease Patients Undergoing Hemodialysis Therapy

Hemodialysis is a renal replacement therapy for patients with chronic renal disease whose renal function is declining. The complex therapy and the physical condition of chronic kidney disease and hemodialysis patients constitute a severe stressor that can lead to depression. The progressive muscle relaxation technique is one of the non-pharmacological therapies used to treat depression. This research aimed to demonstrate the influence of the progressive muscle relaxation technique on changes in depression level in chronic kidney disease patients undergoing hemodialysis at Dr. Wahidin Sudiro Husodo Mojokerto Hospital. The research design used was a quasi-experiment with a pre-test post-test control group design. A sample of 30 people was taken by simple random sampling: 15 people in the experimental group were given the progressive muscle relaxation technique routinely (twice a week), and 15 people in the control group were given the technique non-routinely (once every two days in a week). The research instrument was the Beck Depression Inventory. The Wilcoxon Signed Rank Test shows a p-value (0.001) < α (0.05), so it is accepted that there is an effect of progressive muscle relaxation on the depression level of chronic kidney disease patients undergoing hemodialysis. The U-Mann Whitney test shows a p-value (0.005) < α (0.05), so H0 is rejected, meaning there is a difference in the change in depression level between the experimental group and the control group. This therapy can increase the production of melatonin and serotonin and reduce the stress hormone cortisol. PMR also lowers muscle tension and encourages positive thinking, which in turn contributes to a decrease in depression level. Routine muscle relaxation distracts from everyday stressors during training.

PRELIMINARY

Chronic renal failure results from a decline in kidney function that is chronic and irreversible. The decreased kidney function causes fluid and electrolyte imbalance and metabolic disorders in the body (Suhartono, 2009). Thus, renal replacement therapy is needed to deal with the progressive decline in renal function; hemodialysis therapy is necessary for patients with chronic renal failure in the long term or permanently (Suhartono, 2009). Hemodialysis creates a variety of psychological problems in patients with chronic kidney disease (CKD). Depression is a common psychosocial problem in patients undergoing hemodialysis (Amalia, 2015). Depression is often characterized by melancholy, sadness, lethargy, loss of passion, lack of spirit, feelings of helplessness, guilt, uselessness, and despair (Joseph, 2011). Factors that affect depression include loss of an object/person, genetic factors, a cognitive tendency toward pessimism, lack of positive reinforcement, hormonal factors, and personality (Stuart, Gail, 2016). The number of patients with chronic renal failure also continues to increase in Indonesia. In 2013, 15,128 new patients began hemodialysis and 9,396 patients were actively undergoing hemodialysis, whereas in 2014 the numbers rose to 17,193 new patients and 11,689 active patients. East Java province is the second largest contributor of hemodialysis patients in Indonesia after West Java (7th Report of the Indonesian Renal Registry, 2014). Symptoms of depression in hemodialysis clients may worsen over time (Bossola et al., 2012; Asti, 2014).
A study from the Faculty of Medicine of the University of Indonesia found that the prevalence of depression in patients with renal failure undergoing hemodialysis reached 31.1% (Wijaya, 2005; Eka Nurul, 2014). Dr. Andri, Sp.KJ, from the Psychosomatic Clinic of Omni Hospital, Tangerang (Kompasiana, 2012), states that the prevalence of depression in hemodialysis patients today is about 20–30% and can even reach 47% (Azahra, 2013). The incidence of depression is common among all inpatient clients with physical illness; the highest intensity and frequency occur in clients with severe pain, and end-stage renal disease is often associated with this depressed condition (Stuart, Gail, 2016). The onset of psychological symptoms, especially the depression experienced by many patients with CKD, originates from the physical stress they experience, which in turn produces psychological stress. Problems of depression also arise due to the role disturbances experienced by patients with CKD. These can involve concern about the relationship with a partner, lifestyle changes due to dietary restrictions and complex therapy, and feelings of isolation (Armiyati, 2014). The onset of depression is also a response to future uncertainty and fear of death (Sadock, 2010). The symptoms of depression seen in patients with chronic renal failure undergoing hemodialysis are associated with increased mortality, due to increased disease complications and the side effects of dialysis machines, and with a deterioration in the quality of life of patients undergoing hemodialysis (Amalia, 2015). Pharmacological therapy for depression is rarely given, because chronic kidney disease affects both the pharmacokinetic and pharmacodynamic effects of drug therapy, so other non-pharmacological therapies are better suited to treating depression (Le Mone, 2015). Psychological conditions, especially depression, can be alleviated with non-pharmacological therapy, one option being progressive muscle relaxation techniques (Sholihah, 2015). The progressive muscle relaxation technique is a therapy that focuses on muscle activity by identifying tense muscles and then releasing the tension slowly to obtain a feeling of relaxation (Herod, 2010; Setyoadi, 2011). In implementing progressive muscle relaxation techniques, the muscles are first stretched, then the tension is stopped and the loss of muscle tension is felt in a relaxed manner. The benefits derived from PMR, in the form of relaxation and positive mind reinforcement, make the PMR technique one of the effective non-pharmacological therapies for reducing depression.

RESEARCH METHOD

The research design used was a quasi-experiment with a pre-test post-test control group design. The population taken according to the researchers' criteria amounted to 55 people. Sampling used a probability sampling technique, namely simple random sampling. The minimum sample size for experimental research of this type is 15 subjects per group (Kasjono H, 2009): 15 subjects for the experimental group and 15 subjects for the control group. The study was conducted from March 11 to April 18, 2017. The instrument in this research was the BDI (Beck Depression Inventory) questionnaire, which consists of 21 questions. The experimental group was given progressive muscle relaxation routinely (twice per week), while the control group was given the progressive muscle relaxation technique non-routinely (once per two days in a week).
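As an illustration of how BDI totals map to the depression levels reported below, a minimal scoring sketch is given here; the 21 items are each scored 0–3, and the cutoff bands used are commonly cited ones for the original BDI, which may differ from the exact bands used in this study.

```python
def bdi_total(item_scores):
    """Sum of 21 BDI items, each scored 0-3."""
    assert len(item_scores) == 21 and all(0 <= s <= 3 for s in item_scores)
    return sum(item_scores)

def bdi_level(total):
    """Commonly cited bands for the original BDI (assumed, not from the paper)."""
    if total <= 9:
        return "normal / minimal"
    if total <= 15:
        return "mild"
    if total <= 19:
        return "borderline"  # the 'depression limit' category in the text
    if total <= 29:
        return "moderate"
    return "severe"

# A hypothetical patient's 21 item scores, invented for illustration.
scores = [1, 0, 2, 1, 0, 1, 1, 0, 0, 1, 2, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1]
total = bdi_total(scores)
print(total, bdi_level(total))  # -> 14 mild
```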
The statistical test used was the Wilcoxon signed-rank test, to determine the change in depression level before and after treatment in the experimental and control groups; H0 is rejected if the p-value < α (0.05). To determine the difference in the change in depression level between the experimental group and the control group of patients with chronic renal failure undergoing hemodialysis, the Mann-Whitney U statistical test was used; H0 is rejected if the p-value < α (0.05). Data were analyzed using the SPSS 20.0 software program.

Based on Table 4.6, in the experimental group most respondents had mild depression, namely six respondents (40.0%), while in the control group mild depression and borderline depression were most common, with five respondents each (33.3%). Based on Table 4.10, in the experimental group most respondents did not experience depression (normal), namely six respondents (40.0%), while in the control group most experienced mild depression, namely seven respondents (46.7%).

DISCUSSION
1. Levels of depression before treatment in the experimental group and in the control group of CKD patients undergoing HD. Based on Table 4.6, in the experimental group most respondents had mild depression, namely six respondents (40.0%), while in the control group mild depression and borderline depression were most common, with five respondents each (33.3%). The depressive symptoms experienced by many CKD patients originate from the physical stress they undergo, which ultimately produces psychiatric and psychological stress. Depressive conditions are influenced by predisposing and precipitating factors that lead a person to appraise a stressor negatively. The predisposing factors of depression consist of genetic factors, anger turned toward the self (aggression), loss, personality, cognitive factors, learning models, behavioral models, and biochemical factors (Sadock, 2010). The precipitating factors of depression include loss of affection, life events, role tension, stressor appraisal, and physiological changes (Stuart G, 2016). Physiological factors are the main driver of depression in CKD patients undergoing hemodialysis. Physiological factors at the onset of depression include shortness of breath, fatigue, edema, cramps, hyperthermia, pain, anemia and pruritus, among others; these physical conditions strongly affect daily activities such as work, sleep, and social life. The role-loss factor cannot be ignored either, since loss is a predisposing factor for depression (Sadock, 2010). The theory of loss is associated with developmental factors (e.g., loss of objects/people) and with the individual's powerlessness to overcome loss (Puwaningsih, 2010). In this context, a man may feel the loss of his role as breadwinner because of the illness he suffers; losing the breadwinner role hampers the household economy, which is a stressor in itself, since the majority of patients are still of productive age. A declining physical condition and the inability to perform daily activities cause these losses of jobs and roles, so it is quite possible that this loss of role becomes a predisposing factor (cause) of affective/mood disorder depression. Kidney failure is a terminal illness; thus, a family support system is needed for positive reinforcement.
The ultimate goal is for patients to think positively more often, so that their psychological condition does not worsen their physical condition. Physical condition, loss of role and inadequate support are the dominant factors affecting depression in patients with chronic renal failure undergoing hemodialysis, and several other factors also contribute to the depressive condition. Differences in personality, experience, defense systems and support also affect each patient's level of depression, so the depressive response to illness differs from patient to patient.

2. Analysis of changes in depression levels before and after treatment in the experimental group, and before and after treatment in the control group. The Wilcoxon signed-rank test using SPSS version 20.0 (Table 4.11) gave a p-value (0.001) < α (0.05), meaning H0 is rejected: there is an influence of the routine progressive muscle relaxation technique on CKD patients undergoing HD at Dr. Wahidin Sudiro Husodo Hospital, Mojokerto. Progressive muscle relaxation significantly changed the depression level of patients with chronic renal failure undergoing hemodialysis at that hospital. This can happen because the appraisal of a stressor produces muscle tension, which sends stimuli to the brain and creates a feedback loop. PMR relaxation blocks this pathway by activating the parasympathetic workings of the nervous system and by influencing the hypothalamus through concentration of the mind to reinforce positive attitudes, so that the stress stimulus reaching the hypothalamus is reduced and depression can decline (Praise A, 2014). The results of this study support previous research by Sholihah showing that the progressive muscle relaxation technique is effective in reducing depression levels among the elderly in Turigede village. PMR therapy can increase the production of melatonin and serotonin and lower the stress hormone cortisol. Serotonin is related to mood, sexual desire, sleep, memory, temperature regulation and social behavior. Breathing deeply and slowly and tensing some muscles for a few minutes each day can decrease cortisol production by up to 50%. Cortisol is a stress hormone that, when present in excessive amounts, interferes with the functioning of almost every cell in the body. Relaxing and performing PMR can help the body cope with stress and restore the capability of the immune system (Alam & Hadibroto, 2007; N.E. Alfiyanti, 2014). The PMR delivered to the experimental group included all the components of the existing program, performed frequently and routinely. Respondents who performed the progressive muscle relaxation technique became aware of the tension in their body muscles and achieved total muscle relaxation, which in turn affected the depression levels of CKD patients undergoing hemodialysis. Based on Table 4.12, the Wilcoxon signed-rank test using SPSS version 20.0 gave a p-value (0.025) < α (0.05), meaning H0 is rejected: there is also an influence of the non-routine progressive muscle relaxation technique (once every two days) in the control group of CKD patients, even though the control group received the same movements. However, the frequency was not the same, namely twice a day (morning and evening) once every two days.
In this study, the non-routine treatment also changed depression levels, although not as much as in the experimental group.

3. Analysis of the difference in the change in depression level before and after treatment in the experimental group versus before and after treatment in the control group, among CKD patients undergoing HD at Dr. Wahidin Sudiro Husodo Hospital, Mojokerto. Based on Table 4.10, the Mann-Whitney test using SPSS version 20.0 gave a p-value (0.005) < α (0.05), meaning H0 is rejected and H1 is accepted. Thus, there was a difference between the change in depression level with routine progressive muscle relaxation in the experimental group and the change with non-routine progressive muscle relaxation (once every two days) in the control group. In progressive muscle relaxation the muscles are first stretched, the tension is then released, and the loss of muscle tension is felt in a relaxed manner. For maximum results, it is recommended to perform progressive muscle relaxation twice a day for a week, for 20-30 minutes each time (Davis, 2005; Nasution, 2016). PMR can be performed by the patient in a sitting or a supine position (Kozier, Erb, Berman & Snyder, 2011; N.E. Alfiyanti, 2014). The treatment in this study was delivered identically by the researchers: 15 movements, performed regularly and with the same procedure. Both groups performed the PMR technique twice a day, morning and evening, but at different frequencies; the researchers only reduced the frequency of therapy in the control group. The experimental group practiced routinely every day for one week, while the control group practiced non-routinely, once every two days. Consequently, a respondent in the experimental group performed 14 sessions in one full week, while a respondent in the control group performed only six sessions in one week. This difference in therapy frequency can explain the difference in depression levels between the control and experimental groups. This is seen in the difference between the p-values of the two groups: the p-value in the experimental group was (0.001) < α (0.05), whereas in the control group it was (0.025) < α (0.05). Thus, progressive muscle relaxation performed regularly and routinely has a more effective influence on decreasing respondents' depression level. The difference in the change in depression level occurred because the experimental treatment was carried out according to the existing program; with the same frequency as the original reference, it gave a better effect than in the control group. In the control group, some of the components needed for maximum results were not met; if components are lacking or unmet during the implementation of progressive muscle relaxation techniques, the results achieved from the technique will not be maximal.
The researchers identified several reasons why routine relaxation more effectively changes depression levels. Progressive muscle relaxation performed regularly, frequently and routinely provides relaxed and positive conditions, so that the respondent's stressors can be distracted every day. PMR gives maximum effect when done in a conducive (quiet, uncrowded) atmosphere; in this study, PMR was done in each respondent's home, which made it easier to control noise and to focus the therapy for maximum effect. From the researchers' observations, delivering the progressive muscle relaxation technique frequently and routinely was more likely to change depression in patients: patients became more relaxed each day, strengthened positive thoughts every day, and became more practiced at feeling muscle relaxation than the control group. In the control group, the non-routine implementation made respondents more likely to ignore the sense of relaxation to be obtained; in fact, the control group was more likely not to do the therapy at all, precisely because it was not routine. In conclusion, although both regimens can lower the level of depression, there is a difference in depression levels between the experimental and control groups, and progressive muscle relaxation is more effective when done according to the program at a regular frequency.

CONCLUSIONS AND SUGGESTIONS
Conclusions
1. The routine progressive muscle relaxation technique had a significant influence in the experimental group on the change in depression level of CKD patients undergoing HD at Dr. Wahidin Sudiro Husodo Hospital, Mojokerto. PMR therapy can increase the production of melatonin and serotonin and decrease the stress hormone cortisol, which affects a person's mood. PMR is also able to create a relaxed state, decrease muscle tension, and build a positive mind, thereby contributing to a decrease in depression level.
2. There was a significant difference in the change in depression level between the routine PMR technique in the experimental group and the non-routine PMR technique in the control group. In the experimental group, regular, frequent and routine progressive muscle relaxation provided more relaxed and positive conditions, so that the respondents' stressors could be distracted daily, compared with non-routine practice. Progressive muscle relaxation is more effective when done according to the procedure at a regular frequency.
Suggestions
1. Daily PMR implementation would be more effective if the hemodialysis unit provided a separate relaxation room, or psychological counseling services in a dedicated room, to help improve patients' psychological well-being.
2. Because monitoring of the progressive muscle relaxation technique in this research was done only at pre- and post-test, using an observation sheet and by phone, future studies should find alternative ways to supervise respondents every day in carrying out progressive muscle relaxation, so that respondents perform the technique in accordance with the existing procedures.
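As an aside for readers who want to reproduce the hypothesis tests reported above, the following minimal sketch (Python with SciPy) mirrors the Wilcoxon signed-rank and Mann-Whitney U analyses used in this study. The BDI scores in it are invented placeholders for illustration only; the study's raw data are not available here.

# Sketch of the statistical comparisons used in the study:
# Wilcoxon signed-rank (within-group pre/post change) and
# Mann-Whitney U (between-group difference in change scores).
# The BDI scores below are hypothetical placeholders, not study data.
from scipy.stats import wilcoxon, mannwhitneyu

# Hypothetical Beck Depression Inventory scores for 15 subjects per group
bdi_exp_pre  = [18, 22, 15, 25, 19, 21, 17, 24, 20, 16, 23, 18, 26, 14, 20]
bdi_exp_post = [10, 14,  9, 17, 12, 13, 11, 15, 12,  9, 16, 10, 18,  8, 13]
bdi_ctl_pre  = [19, 21, 16, 24, 18, 22, 17, 23, 20, 15, 22, 19, 25, 16, 21]
bdi_ctl_post = [16, 19, 14, 22, 15, 20, 15, 21, 18, 13, 20, 17, 23, 14, 19]

alpha = 0.05

# Within-group change (reject H0 if p < alpha)
for name, pre, post in [("experimental", bdi_exp_pre, bdi_exp_post),
                        ("control", bdi_ctl_pre, bdi_ctl_post)]:
    stat, p = wilcoxon(pre, post)
    print(f"{name}: Wilcoxon p = {p:.4f}, "
          f"H0 {'rejected' if p < alpha else 'not rejected'}")

# Between-group comparison of the pre/post change scores
delta_exp = [a - b for a, b in zip(bdi_exp_pre, bdi_exp_post)]
delta_ctl = [a - b for a, b in zip(bdi_ctl_pre, bdi_ctl_post)]
stat, p = mannwhitneyu(delta_exp, delta_ctl, alternative="two-sided")
print(f"between groups: Mann-Whitney U p = {p:.4f}")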
Costing analysis of a digital first-line treatment platform for patients with knee and hip osteoarthritis in Sweden

Osteoarthritis (OA) constitutes a major and increasing burden on patients, health care systems and the broader society. It is estimated that around a quarter of the adult population is affected by OA in the knee and hip, and that the prevalence of OA will increase over the coming decades, largely due to aging and adverse life-style factors. Prevention and effective care are critical to manage the challenges posed by OA. Digital technologies offer opportunities to deliver cost-effective care for chronic diseases, including OA. We report the results of a costing analysis of a new digital platform for delivering first-line care, including disease information and physiotherapy, to patients with OA, and compare this with an existing face-to-face model of treatment. Both models are in accordance with National Treatment Guidelines in Sweden. The results show that, overall, the digital model costs around 25% of the existing face-to-face model of care. Based on existing evidence on the effects of these models, our findings also suggest that the digital platform offers a cost-effective alternative to the existing model of OA care. Depending on the extent to which the digital model substitutes for the existing model of care, significant resources can be saved.

Introduction
Osteoarthritis (OA), the most common joint disease, mainly affects the knees and hips. Estimates for Sweden suggest that every fourth person is afflicted by OA and that its prevalence will increase due to aging, obesity, and other contributing life-style factors [1]. In terms of the global burden of disease, OA ranks 11th among 291 diseases as a cause of disability [2], whereas in the U.S. it is the fifth leading cause of disability [3]. Estimates of the economic burden of musculoskeletal diseases suggest that the economic impact of OA may be as high as two percent of gross domestic product (GDP) in industrialized countries, of which the largest direct costs include those for medication and surgery. The indirect costs, i.e., lost income, reduced productivity and spending on home care, can reach as much as 4 600 USD per person annually [3]. A recent cost-of-illness study of OA estimated the yearly per-person cost of OA at around 10 000 EUR (10 800 USD) [4]. Understanding the relative cost and effectiveness of available treatments and preventive measures would therefore be of considerable policy relevance [5]. The development of digital technologies across the healthcare sector, by means of digital platforms, may provide the opportunity to deliver evidence-based first-line care, in accordance with global guidelines, to patients at lower costs compared with traditional models of care. Cost advantages are likely to exist for patients and health care providers, as well as for the broader society [6-8]. Other advantages of digital telehealth innovations are user flexibility, engaging asynchronous support from health professionals, and the ability to receive care at home, thereby avoiding travel. Treating chronically ill patients in particular may be the most cost-beneficial application for all stakeholders. Accordingly, some recently developed digital platforms address type II diabetes, hypertension and musculoskeletal disorders [9,10]. One of these platforms was developed to manage patients with OA of the knee or hip [11-13].
The aims of the present study were to assess the cost of providing digital care and best-practice face-to-face care, and to compare the two models to evaluate differences in resource use. Evaluating the resources required to deliver the alternative models of OA care would provide information for policy making. The study adopted a societal perspective by assessing all resources needed to deliver an episode of care to the patients; in particular, the analysis included costs on the health system side and on the patient side. The study also measured the costs associated with carbon dioxide (CO2) emissions from transportation undertaken by the patients. Based on the results of the costing analysis and on existing evidence on the effects of the two models of OA care, the incremental cost-effectiveness ratio (ICER) was also computed. We also performed an analysis of the expenditure implications of scaling up the most cost-effective model of care.

Methods
The costing analysis compares the resources needed to deliver care with a digital OA treatment program, the Joint Academy® platform (JA) (www.jointacademy.com) [13], with the best-practice face-to-face treatment, the 'Better Management of Patients with Osteoarthritis' program, or the BOA model [14]. Both models provide individually tailored first-line management programs, including disease information and exercises, for patients who have been diagnosed with knee or hip OA. Patients self-select to receive care in either the traditional model or the digital model of care. Data suggest that there are no significant differences in age or sex between the two models of care; mean age is around 65 years and around 75 percent are women in both models [13,14]. In presenting the results of the study, the CHEERS recommendations for reporting the results of an economic evaluation have been followed [15]. The costing analysis adopted the general approach of identification, quantification, and valuation of the cost items for both models [23]. The perspective of the analysis is that of society. Data were collected from the providers of the care and did not include identifiable patient-related data of any kind; hence, no ethical approval was required.

Better Management of Patients with Osteoarthritis (BOA)
The evidence-based face-to-face treatment model for a patient who has been diagnosed with OA of the knee or hip in Sweden, the BOA program, was initiated in 2008. According to the Swedish National Guidelines, a person diagnosed with knee or hip OA should receive a recommendation to enroll in one of the around 600 care units delivering the BOA program [16]. In practice, however, only around half of Swedish hip or knee OA patients receive such a recommendation [17,18]. The BOA model is consistent with international guidelines [19-21] and involves several standardized activities, including two to three one-hour, physiotherapist-led face-to-face lectures with information about the condition and available treatments. One additional session involves information given by a former patient (1 hour; 44% participate in such a session). The patient is then offered individually tailored one-hour group exercise sessions, twice weekly, led by a physiotherapist over a period of 6-12 weeks. Around 60 percent of patients who continue in the program for at least 12 weeks participate in such sessions [17]. In all, a typical episode of treatment in the BOA model involves 16 hours of provider (physiotherapist and co-patient) contact time.
The patient may also receive care from an occupational therapist if needed. Patients are followed up three and twelve months after completing an episode of care, in terms of mobility, pain, and health-related quality of life (HRQoL, using the EQ-5D-5L instrument). The BOA model also involves additional resources for the clinic, the patient, and others, including planning and preparation of sessions, transportation to and from the site, direct costs (user fees), and time off work for patients who are employed. In addition, physiotherapists who would like to qualify for the BOA program are required to take a one-day course led by a senior physiotherapist.

Digital model
The digital model was inspired by BOA and is an alternative mode of delivering care, likewise based on evidence and global guidelines. It consists of a patient interface that provides individually tailored information on OA, exercises for rehabilitation, and support for life-style changes, and a provider interface where a trained physiotherapist can follow the patient's progress and provide feedback and support throughout the treatment period. Exercises are distributed daily, with instructional videos combining graphical elements and text instructions to ensure proper execution. The model has shown significant effects on key indicators including mobility, pain, and physical function in recent studies [11-13,22]. The digital model is initiated by the patient entering key information about his or her condition into the platform. The information is reviewed by a physiotherapist, who then contacts the patient via the application to confirm the OA diagnosis. During that visit, the patient is able to ask for additional information about the treatment model or about particular concerns related to the condition. The following treatment contacts constitute the regular set of physiotherapy activities and interactions during a 12-week period (duration in minutes; means of interaction): start-up meeting (15; telephone); daily coordination and adjustment (varies as needed; digital platform); weekly follow-up (5-8; digital platform); 6-week follow-up meeting (15; telephone); monthly follow-up session (5-8; digital platform); 3-month follow-up (15; telephone); additional interactions (as needed). In all, a typical episode of care consists of at least 18 activities, taking around 143 minutes (2.38 hours) to perform over a 12-week period of care (based on the Terms of Reference for physiotherapists by Joint Academy®). In contrast to the BOA program, the digital model of care is open-ended and continues as long as the patient's condition improves, until the physiotherapist deems that behavior change has been achieved (the participant is exercising regularly and will continue to do so without support), or until other treatment is needed, such as surgery (total joint replacement). Similar to the BOA model of care, the digital program requires other resources, including preparation and follow-up on the part of the provider and the patient. In addition, in order to provide care through the digital platform, the physiotherapist is required to take a mandatory online training course in the use of the platform and a short course in online physiotherapy provision, and must pass an online certification exam. These training and exam events take a total of two hours.
Analysis
The analysis estimates the resources needed to deliver an episode of OA care (the unit cost) by either model in 2018, the most recent full year for which data were available. Since both models provide individually tailored regimens, there is significant variation across patients with respect to the scope and intensity of treatment episodes. To ensure a fair comparison between the models, an episode was defined as care over a 12-week period for both models. Furthermore, care was taken to avoid over- or underestimation of resource use by adopting a conservative approach to quantification and valuation in cases where use can only be estimated by inexact methods, such as transportation time and technical support costs. In the first step of the analysis, each cost item was listed across three main domains to reflect the societal perspective of the analysis: the health care system (i.e., clinic or provider), the patient, and other sectors of society. Identification was done by reviewing documents describing the two models and by consulting experienced users of the two models of care. Using the same sources of information, each cost item was quantified in terms of time or other resources needed to deliver the care. Finally, valuation was done by consulting relevant sources of information for each particular cost item, such as mean gross hourly wage rates (of physiotherapists and the general public). The table below lists the main cost items for each domain and describes how they have been quantified and valued (Table 1). Time is valued according to the human capital method, using average gross hourly wage rates for physiotherapists and the general population obtained from Statistics Sweden wage statistics [23]. Providers' time is valued including non-wage social fees, set at the legally mandatory minimum rate of 31.42 percent of the gross wage. Patients' time is valued at the reference value of leisure, 30 percent of the gross wage rate, net of any social fees. Analysis of when during the day patients are active in JA shows that they predominantly log in during the morning or in the evening, without any difference between the two groups in terms of age; no adjustment was therefore made for when patients in the JA model perform their training exercises as compared with the BOA model. To account for the cost of facility rent, a ten percent surcharge is added to the hourly value of staff time in the BOA model of care. In both models of care, the patient undergoes a set of instructional lessons and exercise sessions; as described above, these are face-to-face sessions in the BOA model of care and online-based in the digital model. An important difference between the models is that the sessions are group-based in the BOA model. This means that, to obtain the unit cost of care, these costs are divided by the average number of participants. From a payer perspective, however, the costs for a physiotherapist and office space remain the same regardless of the number of participants in a group session; consequently, these costs are reported separately in order to obtain a comprehensive cost profile of the models. In addition, when adjusting the time period for the BOA model, the introduction and information sessions are only counted once, as these are independent of the length of treatment. In addition to the time costs associated with the exercise sessions, resources are also needed for preparatory and follow-up activities.
In the BOA model these involve preparing the training facility, arranging equipment, and booking patients. In the JA program they mostly involve reading the patient's reported data and preparing responses to any particular question or issue that the patient may have raised in his or her reports. These resources are reported separately as administration costs, which were measured by consulting physiotherapists from both models of care, who provided estimates of the time required for these supporting activities. Physiotherapists in the JA program are required to undergo formal training and to pass a test in order to obtain the certificate needed to receive patients in this program. The training program involves three separate sessions: a 20-minute self-learning session on general OA care, a 40-minute self-learning session on technical and care-related aspects of providing OA care over a digital platform, and a final one-hour, JA-staff-supported test involving vignette-like situations of digital OA care. As these costs are one-off activities, they are reported separately in the results section. As also noted above, the BOA program requires participating physiotherapists to take a one-day training course; the costs of these training events are estimated and reported below. The digital model of care requires a certain amount of technical support, both to physiotherapists and to patients, provided as needed on a stand-by basis. To quantify the unit cost of this item, the total annual cost of support is divided by the total number of patients in 2018. While it is likely that providers in the BOA model of care also require a certain amount of technical and other support, no information or data on such support were obtained, and it is therefore assumed that the total cost of technical and other support in this model equals half of that in the digital model. Transportation costs for the BOA group of patients are estimated by multiplying driving time, based on the average distance to a health care clinic in Sweden, by the average number of appointments. This estimate is based on a recent analysis by the Swedish National Audit Office of the distance and travel time to a primary care clinic for the general population [24]. Vehicle transport is assumed to generate CO2 emissions [25]. While the mode of transportation varies, it can be assumed, given the debilitating nature of OA, that the majority of trips are made using a motor vehicle (car or bus). Finally, it is assumed that all patients reach the national user-fee ceiling of 1,100 SEK per year in direct financial costs. The analysis does not consider costs for research and development or any other investment costs, mainly because such costs are largely unknown for the BOA model of OA care, which has been in effect for more than a decade and was developed over a similarly long period of time. Finally, no costs for pharmaceuticals have been included, as medicines are not part of the standard physiotherapy treatment regimen in either program.

Incremental cost-effectiveness ratio
Based on the results of the costing analysis and on the results of one previous study [12] on the effects of the digital care model, the incremental cost-effectiveness ratio (ICER) was also calculated. For given cost and effect differences, the ICER shows the cost per effect unit of adopting the intervention compared with the existing treatment model [26].
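To make the valuation rules above concrete, the following minimal sketch (Python) applies them to the episode contact times reported earlier. The hourly wages and the group size are hypothetical placeholders; the 31.42% social-fee rate, the 30% leisure value of patient time, the 10% facility surcharge, and the 16 h and 2.38 h contact times come from the text.

# Sketch of the human-capital valuation described above. Wages and
# group size are hypothetical; rates and hours are from the text.
SOCIAL_FEES = 0.3142        # mandatory employer fees on the gross wage
LEISURE_SHARE = 0.30        # patient time valued at 30% of gross wage
FACILITY_SURCHARGE = 0.10   # rent surcharge on staff time (BOA only)

def provider_cost(hours, gross_wage, on_site=False):
    """Value of provider time incl. social fees (plus rent if on-site)."""
    cost = hours * gross_wage * (1 + SOCIAL_FEES)
    return cost * (1 + FACILITY_SURCHARGE) if on_site else cost

def patient_cost(hours, gross_wage):
    """Patient time at the reference value of leisure, net of social fees."""
    return hours * gross_wage * LEISURE_SHARE

physio_wage, population_wage = 200.0, 180.0   # SEK/h, hypothetical

# Episode contact times from the text: 16 h (BOA) and 2.38 h (digital)
boa_staff = provider_cost(16.0, physio_wage, on_site=True)
ja_staff = provider_cost(2.38, physio_wage)

# BOA sessions are group-based, so staff cost per patient is divided by
# the average number of participants (8 is a hypothetical group size).
print(f"BOA staff cost per patient:     {boa_staff / 8:7.0f} SEK")
print(f"Digital staff cost per patient: {ja_staff:7.0f} SEK")
print(f"Patient time, 16 h of sessions: {patient_cost(16.0, population_wage):7.0f} SEK")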
Results
Based on the estimates of the resource domains, the results of the analysis show that the most common resource is time used for various care activities, including training/rehabilitation sessions, preparations and follow-up, and transportation (Table 2). For a complete table of costs, see Supplementary S1 Table. From a societal perspective, delivering one episode of care to a patient digitally costs 2 776 SEK, compared with 10 610 SEK for a face-to-face patient, a difference of 7 835 SEK. In both models of OA care, the largest costs are borne by the patient, particularly so in the BOA model, where 87 percent of total societal costs fall on the patient, compared with two-thirds in the digital model. While the largest cost item for the patient in the digital model is direct financial cost in the form of user fees, such costs constitute the smallest cost item in the BOA model. Conversely, due to the on-site nature of care in the BOA model, the patients' largest costs include the time spent performing the sessions and on transportation to and from the clinic. From the patient perspective, a critical difference between the two models is the ability to avoid transportation costs in the digital model of care. Differences between the two models of care can also be viewed from the health care system perspective. The total unit cost of delivering an episode of care in the digital model is 766 SEK, compared with 1 299 SEK in the BOA model, a difference of 534 SEK. As can be seen from Table 2, these costs are mostly driven by the training sessions, which are more frequent in the digital model but also considerably shorter. The administrative costs (preparations and follow-up) are higher in the BOA model than in the digital model, even assuming that technical and other support costs are only half of those in the digital model of osteoarthritis care. Finally, the BOA program of care is estimated to lead to 0.014 tons of CO2 emissions per episode. The value of these is obtained using the current price of emission rights from the European CO2 emissions market, EU-ETS [27]. The total emissions amount to 133 tons, based on an estimate that around 9 500 patients participated in a full episode of care in 2018. At the reported price of around 220 USD per ton of CO2 emissions, this results in a total cost of around 555 747 SEK in CO2 emissions due to transportation to and from the clinic in the BOA model of care.

Cost-effectiveness analysis
In a recent study of the effect of the digital model of care, Nero and colleagues showed that patients with knee OA receiving care in the digital model report, on average, a reduction in experienced pain from 5.7 to 3.2 (a reduction of 2.5 points on a 0-10 scale, or 44 percent) after 12 weeks [13]. Patients with knee OA receiving care in the BOA model report a reduction from 5.2 to 4.1 (a reduction of 1.1 points on a 0-10 scale, or 21 percent) after the same amount of time [14]. Combining the results from the costing analysis with those from the effect analysis, an incremental cost-effectiveness ratio (ICER) can be computed, showing the cost per unit of effect improvement [26]. The following ICER is calculated:

ICER = (2 776 − 10 610) SEK / (2.5 − 1.1 points) ≈ −5 596 SEK per additional point of pain reduction.

While there are no set thresholds to decide whether an ICER of this magnitude can be considered cost-effective [26], a negative ratio of this kind, where the digital model is both less costly and more effective, together with the combined findings, suggests that the digital model of OA care is cost-effective compared with the standard model of care.
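The ICER arithmetic above can be verified directly from the figures reported in this section; a minimal sketch:

# Verify the incremental cost-effectiveness ratio from the reported figures.
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost divided by incremental effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

ratio = icer(cost_new=2776, cost_old=10610,    # SEK per episode of care
             effect_new=2.5, effect_old=1.1)   # pain reduction, 0-10 scale
print(f"ICER = {ratio:.0f} SEK per additional point of pain reduction")
# The negative ICER means the digital model is cheaper AND more effective,
# i.e., it dominates the comparator, so no willingness-to-pay threshold
# is needed to conclude cost-effectiveness.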
These estimates do not include any resources saved or value gained through the estimated outcome differences between the two models, nor other differences with respect to treatment complications, unnecessary diagnostics, surgeries and medications that may have occurred.

Discussion
We have here shown that first-line OA treatment delivered digitally may cost as little as one-quarter of traditional in-person care, with cost advantages on the health system side and on the patient side, as well as for the broader society. Most of the cost differences are found on the patient side, as the face-to-face model imposes significant costs on patients in terms of time and travel. Understanding cost differences between alternative models of care is important for effective policy making. More generally, however, managing a common disease with increasing prevalence and significant economic burden on society and healthcare is a large and complex task. First-line management globally recommended in clinical guidelines for knee or hip OA includes disease information and exercise treatment [19-21]. Observational studies have shown this management to improve patient pain and function [28]. Widely used structured programs, such as Joint Academy®, BOA, and GLA:D®, have confirmed these beneficial results with observational data in real-world settings. In addition to significantly improving patient pain and function and decreasing the use of medications and sick leave, Joint Academy® and BOA also decrease willingness to undergo surgery [13,14,29]. So far, no trials have been published comparing outcomes from face-to-face OA programs with digital equivalents. While care model preferences may often determine patient choice, other factors such as economics, flexibility, accessibility and scalability may be important as well, in particular to the health care provider. Aside from cost, as shown here, digital programs differ from in-person care in several aspects. One of them is scalability and the economies of scale associated with digital models of care. User flexibility, instant on-demand access, engaging asynchronous support from health care professionals, and the ability to receive care at home, thereby avoiding travel, are other relevant aspects [30]. Equality in access to care between regions with or without easy access to in-person care may support more widespread implementation of first-line care for OA, as well as decrease the need for transportation in connection with care episodes. A digital program can be translated with relative ease and implemented with similar quality in areas with different cultures and languages. The economic gains from increasing the use of digitally delivered first-line OA management are not limited to lower costs of the first-line treatment for patient and provider. Routine OA care includes costly interventions, some of them shown to be of high patient value, others of doubtful or low patient value [31]. Total joint replacement is of high value for those with severe knee or hip OA, but not for all [32-34]. In the US alone, half a million hip replacements and 1.1 million knee replacements are projected to be performed in 2020, at an estimated cost of between 30 and 48 billion USD [35]. First-line OA management has been shown to decrease patients' interest (willingness) in joint surgery [12,14,36].
If even a small proportion of those procedures were avoided or delayed through appropriate delivery of first-line OA management at the population level, cost savings would be considerable. Arthroscopic surgery of the knee or hip is one of the most common orthopedic procedures, but is of contested patient value for those with degenerative joint changes [37]. As with joint replacement surgery for OA, preceding a shared decision on arthroscopic surgery with structured first-line OA management may help avoid a considerable number of surgical procedures, with ensuing annual cost savings of billions of USD [38-41]. Our study has some limitations. The cost-effectiveness comparison of care models was based on retrospective data, while prospective cohort data or a randomized trial would be at lower risk of bias. Our study used outcomes and costs from Sweden, and generalizability to other countries and health care systems will need to be confirmed. However, the face-to-face program GLA:D® has shown similar outcomes in Denmark, Canada and Australia [42], suggesting that patient-relevant outcomes may be generalizable across countries. We obtained no data on the actual costs of any technical and other support for the BOA model of care; these were assumed to be half of those in the digital model. Removing these costs from the BOA model entirely does not change the overall results in any material way. One of the largest cost items to the patient in the BOA model is transportation to and from the clinic. These costs can only be estimated with some uncertainty; the current projection is most likely an underestimate of travel time, as the number of BOA clinics is less than half the number of primary care clinics in the country [43]. Removing them altogether would most likely result in an underestimate of patient costs, as the face-to-face nature of that model of care does require the patient to spend time and other resources getting to and from the clinic for the training and introductory sessions. The CO2 emissions are estimated with some uncertainty; however, removing them would not change the overall findings in any material way, given their relatively small impact on the total societal cost of osteoarthritis care in the BOA model. Finally, as noted above, patients self-select into either of the treatment models, which may introduce a risk of selection bias in the cost estimates. However, the two groups of patients are comparable in terms of sex and age, suggesting that this source of potential bias may be limited.

Conclusion
This cost comparison and cost-effectiveness analysis of digital and face-to-face modes of delivering structured first-line treatment for OA of the knee and hip suggests that digitally delivered care can substantially decrease the economic burden of OA for patients and health care providers. Digital OA care is a cost-effective alternative to existing on-site models of care. Substituting existing care with digital care may lead to considerable savings for both patients and health systems. However, actual savings will be influenced by differences in OA management, reimbursement mechanisms and health care system characteristics across countries.

Supporting information: S1 Table.
Biocompatibility and Bioimaging Potential of Fruit-Based Carbon Dots

Photoluminescent carbon dots (CD) have become promising nanomaterials, and their synthesis from natural products has attracted attention owing to the possibility of making the most of affordable, sustainable and readily available carbon sources. Here, we report the synthesis, characterization and bioimaging potential of CDs produced from widely cultivated fruits: kiwi, avocado and pear. The in vitro cytotoxicity and anticancer potential of these CDs were assessed by comparing human epithelial cells from normal adult kidney with colorectal adenocarcinoma cells. In vivo toxicity was evaluated using zebrafish embryos, given their peculiar embryogenesis, with transparent embryos developing ex utero, allowing real-time analysis. In vitro and in vivo experiments revealed that the synthesized CDs presented toxicity only at concentrations ≥1.5 mg mL−1. Kiwi CDs exhibited the highest toxicity to both cell lines and zebrafish embryos, presenting the lowest LD50 values. Interestingly, despite inducing lower cytotoxicity in normal cells than the other CDs, black pepper CDs resulted in higher toxicity in vivo. The biodistribution of the CDs in zebrafish embryos upon uptake was investigated using fluorescence microscopy. We observed a higher accumulation of CDs in the eye and yolk sac, with avocado CDs being the most retained, indicating their potential usefulness in bioimaging applications. This study shows the action of fruit-based CDs from kiwi, avocado and pear; however, the compounds present in these fruit-based CDs and their mechanism of action as bioimaging agents need to be further explored.

Introduction
Semiconductor quantum dots (q-dots) have attracted much attention for their various potential applications in optical bioimaging and biomedical devices, among others [1]. Because of their unique photoelectric properties, q-dots are generally considered an alternative to conventional organic dyes [2]. However, most traditional q-dots contain heavy-metal elements, which raises significant concerns about the impact of using these nanomaterials in biological systems due to their potential human and environmental toxicity [3]. Carbon dots (CD) are a novel class of nanomaterials that have lately received a high degree of attention and investigation, as they present the same major advantageous characteristics of semiconductor q-dots, such as high photostability and tunable emission [4], while avoiding the heavy-metal toxicity concerns raised by conventional q-dots.

Synthesis of CD
All CDs were synthesized by the hydrothermal method. Briefly, 20 mL of each fruit juice (kiwi, avocado, and pear) was sonicated for 15 min at 80 kHz and 25% ultrasonication power at 30 °C. Afterwards, the mixture was stirred for 5 min, followed by hydrothermal treatment at 200 °C for 12 h in Teflon-lined autoclave tubes. The resultant black carbonized solution was then cooled to room temperature and filtered through 0.22 µm mixed cellulose ester Whatman filter paper to remove large particles. The obtained brownish-yellow filtrate was dialyzed for 6 h against 1 L of Milli-Q water using a dialysis membrane with 3.5 kDa MWCO, replacing the water every 30 min. For citrate CD synthesis, 2.0 g of citric acid was dissolved in 20 mL of 1 mol L−1 phosphate buffer at pH 7.2, and twenty millilitres of this solution was used for the hydrothermal process and the subsequent purification steps described above.
Black pepper CDs were synthesized as follows: 2.0 g of black pepper powder was diluted in 10 mL of Milli-Q water and sonicated for 30 min at 80 kHz and 25% sonication power at 30 °C. Afterwards, the mixture was stirred for 15 min, followed by hydrothermal treatment at 200 °C for 12 h in Teflon-lined autoclave tubes. The resultant black carbonized solution was cooled to room temperature, and the same purification process was carried out as in Reference [36]. Finally, 1 mL of the purified CD was aliquoted and dried at 100 °C until a stable weight was obtained; the concentration of the CDs was then calculated by the weight-loss method.

Characterization of the CDs
Fluorescence spectra were measured using a Horiba Scientific Fluoromax-4 instrument (Horiba Scientific, Piscataway, NJ, USA), equipped with a xenon discharge lamp and a 1 cm quartz cell, at room temperature. For all fluorescence measurements, excitation and emission slit widths were kept at 5 nm. UV-visible measurements were performed on a Shimadzu UV-2550 UV-Vis spectrophotometer (Shimadzu Corporation, Tokyo, Japan). The concentration of all CDs was 2 mg mL−1 for the fluorescence and absorption measurements. Transmission electron microscopy (TEM) experiments were carried out with a JEOL-2100 transmission electron microscope (JEOL Ltd., Tokyo, Japan) operating at 200 keV. For TEM sample preparation, the CDs were placed onto formvar/carbon-coated copper TEM grids with 400 mesh (Agar Scientific, Essex, UK) and dried.

Cytotoxicity Tests
HEPES E3 buffer (i.e., 15 mmol/L HEPES with 5 mmol/L NaCl, 0.17 mmol/L KCl, 0.33 mmol/L CaCl2 and 0.33 mmol/L MgSO4, pH 7.2, prepared in ultrapure water) was used to prepare all CD stock solutions. The solutions were sterilized by filtration through a 0.22 µm pore-size filter and diluted in the respective cell medium to prepare the different test concentrations. Cells were seeded in 96-well plates at an initial density of 1 × 10^5 cells/mL and left overnight to adhere at 5% CO2 and 37 °C. Cellular density and viability were determined by counting the cells in a Neubauer chamber using Trypan Blue dead-cell exclusion. After 24 h, the medium was replaced and both cell lines were exposed to serially diluted concentrations of fruit-based CDs, in duplicate, for 48 h and 72 h. The following controls were considered: a negative (viability) control, i.e., cells incubated with cell culture media; a positive (death) control, i.e., cells incubated with 30% (v:v) dimethyl sulfoxide (DMSO); and a vehicle control, i.e., cells incubated with 45 µL of HEPES E3 buffer and 55 µL of cell culture media. The referred volume of HEPES E3 simulates the highest concentration of buffer used when preparing the test concentrations of the CDs. Cytotoxicity was evaluated using the PrestoBlue® (PB) cell viability assay. Briefly, after 48 h or 72 h of exposure to the different concentrations of the fruit-based CDs, PB was added to each well (at a 1:10 dilution) and the microplate was incubated for 1 h at 37 °C. Fluorescence, resulting from the reduction of the dye by cellular metabolism, was registered at 560 nm excitation and 590 nm emission wavelengths using a Synergy H1 microplate reader (BioTek®). Auto-fluorescence of the fruit-based CDs in the different cell culture media was analyzed to avoid misleading results (Table S1); no significant interference of the CDs with PB fluorescence was observed (data not shown).
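As an illustration of the viability calculation just described (PB fluorescence normalized to the vehicle control, which defines 100% viability), the following minimal sketch (Python) uses invented fluorescence readings; all numerical values are hypothetical placeholders.

# Sketch: percent viability from PrestoBlue fluorescence readings.
# Readings are hypothetical; the normalization follows the text:
# 100% viability is defined by the vehicle control.
import statistics

def viability_percent(sample_rfu, vehicle_rfu, blank_rfu=0.0):
    """Viability relative to the vehicle control, after blank subtraction."""
    return 100.0 * (sample_rfu - blank_rfu) / (statistics.mean(vehicle_rfu) - blank_rfu)

vehicle_control = [52_000, 50_500, 51_200]   # RFU: cells + medium + buffer
treated_wells = {0.5: 49_800, 1.0: 45_100,   # mg/mL CD -> RFU (hypothetical)
                 1.5: 30_400, 2.0: 18_700}

for conc, rfu in treated_wells.items():
    print(f"{conc} mg/mL: {viability_percent(rfu, vehicle_control):.1f}% viability")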
Parental Zebrafish Maintenance
Fish maintenance and egg production were carried out as previously described [23,37].

Zebrafish Embryo Toxicity (ZET) Assay
All experiments were executed in agreement with the European Council guidelines on the protection of experimental animals, following Directive 86/609/EEC, which allows zebrafish embryos to be used up to the moment of free living. Additionally, our study follows the principles of the Declaration of Helsinki. Accordingly, ZET tests were carried out up to a time post-fertilization (tpf) of 80 h (i.e., within the regulatory limit of exposure, established at 120 h); therefore, no license was required. After rinsing and selection of viable zebrafish zygotes, 2-hpf eggs were randomly dispensed into 24-well plates (10 embryos per well at the 16-cell stage, i.e., cleavage period; four replicates per concentration) containing 2 mL/well of incubation medium. The test solutions were renewed every day up to tpf = 80 h. Throughout the ZET experiment, the microplates were kept at 28 °C under a 14:10 h light:dark photoperiod cycle. Microplate wells were kept clean (dead embryos were removed) to avoid cross-contamination. ZET experiments were classified as valid when the mortality percentage in the control group (i.e., freshwater as incubation medium) was below 25%. The developmental age of the zebrafish embryos was expressed in hours post-fertilization (hpf), and staging followed Kimmel et al. [31]. Data collection involved microscopic observations and photographic recording at four time points corresponding to crucial developmental stages: 8 h, 32 h, 56 h and 80 h. To avoid bias, observations were carried out in random order across the replicates. All measurements were performed using UTHSCSA Image Tool v1.49. The parameters included in the analysis varied depending on the time post-fertilization. Abnormalities such as deformed body shape, yolk, eyes or heart, atypical cellular masses, atypical pigmentation, and hatching delays were also recorded. All tested concentrations were prepared by dilution in HEPES E3 buffer.

Sample Preparation
Zebrafish embryos and larvae at 4 h and 80 h, exposed to 1 mg mL−1 fruit-based CDs for 2 h, were anesthetized with 0.04% tricaine prior to preservation, following a sequential protocol of fixation (with PFA), permeabilization (with MeOH), rehydration (with Milli-Q water), re-fixation and glycerol impregnation, and were analyzed in 8-well glass-bottom µ-slides (Ibidi, Planegg, Germany).

Fluorescence Microscopy Imaging
The fluorescence microscopy analyses were performed using a wide-field upright fluorescence microscope (Nikon Eclipse Ni-E) equipped with a Lumencor Sola lamp and a Hamamatsu ORCA-R2 camera. Images were registered using a 2× objective, fluorescence filters of 387/11 nm excitation and 447/60 nm emission, and an exposure time of 500 ms.

Statistical Analysis
Statistics were performed using STATISTICA software (StatSoft v.8, Tulsa, OK, USA). Prior to the parametric tests, all data were evaluated for homogeneity of variances using Levene's test and for normal distribution using the Shapiro-Wilk test. In cases of non-homogeneity, data were transformed before the parametric analysis. One-way ANOVA was used to analyze the effects of the fruit-based CDs on zebrafish embryo epiboly (8 hpf), head-trunk index (32 hpf), spontaneous movements (32 hpf), hatching (56 hpf), yolk volume (56 hpf) and free swimming (80 hpf).
Nested ANOVA was applied to investigate differences in zebrafish embryonic heart rate. To avoid influences associated with covariates, an ANCOVA was performed to determine the impact of the nanomaterials on zebrafish embryo yolk volume at tpf = 8 h and 32 h (egg volume was used as a covariable) and on pupil size at 32 hpf (eye size was used as a covariable). At 56 hpf, zebrafish embryo yolk extension (embryo length as a covariable) was also analyzed using this statistical approach. A one-way ANOVA model was used to analyze the effect of the fruit-based CDs on both cell lines tested. Post-hoc comparisons were conducted using Student-Newman-Keuls (SNK) tests. The 0.05 level of probability was used as the criterion of significance. The graphical data from the in vitro tests were generated in GraphPad Prism 6.01. Please check the Supporting Information for more details.

UV-Vis Absorption and Emission Spectral Characterization of CD
We used kiwi, avocado and pear as carbon sources for the synthesis of fluorescent CDs by a facile and ecofriendly hydrothermal method. In order to compare the properties and toxicity of the fruit-based CDs with previously reported materials, we prepared citrate and black pepper CDs using a similar process. The obtained CDs were characterized by UV-vis and fluorescence spectral measurements. Kiwi CDs show an absorption peak at 284 nm, avocado CDs at 285 nm, pear CDs at 284 nm and citrate CDs at 286 nm (Figure 1). The observed absorption bands are attributed to the π-π* electron transition of C=C bonds (sp2 domains) [38-41]. The synthesis and characterization of black pepper CDs were reported in our previous publication [36]. The corresponding emission spectra of the CDs (Figure 1) were recorded with an excitation wavelength of 470 nm. All CDs showed a brownish-yellow color in daylight and green fluorescence under UV light (insets of Figure 1). All the obtained CDs are stable for more than 6 months in aqueous solution, without any loss of their physicochemical properties, when stored in the dark at 4 °C. Figure 2 shows the excitation-wavelength-dependent fluorescence emission spectra of avocado CDs. The emission spectra of avocado CDs showed a progressive red shift and a dramatic increase of emission intensity when excited with wavelengths from 200 to 470 nm (Figure 2a-c). Beyond 470 nm and up to 600 nm excitation wavelengths, a further red shift was obtained in the emission maximum, with a progressive decrease in emission intensity (Figure 2d). The strongest emission intensity was observed at 529 nm using an excitation wavelength of 470 nm; hence, we kept 470 nm as the excitation wavelength for further studies with avocado CDs. The emission profiles of the other synthesized CDs were also studied, and all exhibited a similar trend (see Figures S1-S3, Supporting Information). The luminescence of the CDs may be attributed to defect states (surface-defect emission) and intrinsic defects (zig-zag site emission) [42].

3.1.2. TEM, ζ-Potential, XRD and Raman Spectra of CD
Figure 3 shows representative TEM images of the synthesized CDs, demonstrating that they are monodisperse and uniform in size and shape. The average diameters of the CDs were estimated to be 4.12 ± 0.03, 4.42 ± 0.05, 4.35 ± 0.04, and 3.98 ± 0.07 nm for pear, avocado, kiwi, and citrate CDs, respectively.
The crystal lattices observed by HR-TEM were calculated to display a lattice distance of 0.32 nm (inset of Figure 3), which perfectly matches the previous reports and confirms that the obtained CDs are of crystalline graphitic nature [41,43,44]. The surface charge critically influences the interaction of a nanoparticle with its environment [45]. The synthesized CDs contain -COOH, -OH and epoxides in their structure. These functional groups generate an electrostatic repulsion among CDs [40]. This is the reason why our CDs are stable for several months without agglomeration. All fruit-based c-dots show negative surface charges, which confirms the existence of hydroxyl and carboxylate groups at their surface. Fruit-based CDs' zeta potential were (ζ mean± SD)/mV): −14.950 ± 2.871, −8.925 ± 2.167 and −10.100 ± 1.197 for kiwi, avocado and pear CDs, respectively. Figure S4 (Supporting Information) exhibits the XRD pattern of pear, avocado, kiwi, and citrate CD. The diffraction peaks are observed at 9.5°, 10.1°, 10.2°, and 9.9° respectively, corresponding to Figure 3 shows representative TEM images of the synthesized CDs, demonstrating that they are monodisperse and uniform in size and shape. The average diameters of the CDs were estimated to be 4.12 ± 0.03, 4.42 ± 0.05, 4.35 ± 0.04, and 3.98 ± 0.07 nm for pear, avocado, kiwi, and citrate CD, respectively. The crystal lattices observed by HR-TEM were calculated to display a lattice distance of 0.32 nm (inset of Figure 3), which perfectly matches the previous reports and confirms that the obtained CDs are of crystalline graphitic nature [41,43,44]. (Figure 4d). The obtained D band (sp3) corresponds to the A1g symmetry photons near the K-zone boundary, and the G band (sp2) corresponds to the E2g vibrational mode of sp2 carbon [35,[49][50][51]. The relative intensities of D and G bands (ID/IG) for pear, avocado, kiwi, and citrate CDs were 1.15, 1.09, 1.08, and 1.16, respectively, and reveal the existence of vacant lattice sites of sp3 carbon [35,49,50]. Quantum Yield Measurements The fluorescent quantum yield of each type of synthesized CD was calculated by using the The surface charge critically influences the interaction of a nanoparticle with its environment [45]. The synthesized CDs contain -COOH, -OH and epoxides in their structure. These functional groups generate an electrostatic repulsion among CDs [40]. This is the reason why our CDs are stable for several months without agglomeration. All fruit-based c-dots show negative surface charges, which confirms the existence of hydroxyl and carboxylate groups at their surface. Fruit-based CDs' zeta potential were (ζ mean ± SD)/mV): −14.950 ± 2.871, −8.925 ± 2.167 and −10.100 ± 1.197 for kiwi, avocado and pear CDs, respectively. Figure S4 (Supporting Information) exhibits the XRD pattern of pear, avocado, kiwi, and citrate CD. The diffraction peaks are observed at 9.5 • , 10.1 • , 10.2 • , and 9.9 • respectively, corresponding to the graphitic carbon (001) plane. A broad band was observed around 20 • , which corresponds to the graphitic carbon (002) plane [41]. These XRD peaks have a good matching with the characteristic peaks of graphene oxide [41,[46][47][48], and are in good agreement with the HR-TEM lattice distances (inset of Figure 3). Raman spectra of the synthesized CD are shown in (Figure 4d). The obtained D band (sp3) corresponds to the A1g symmetry photons near the K-zone boundary, and the G band (sp2) corresponds to the E2g vibrational mode of sp2 carbon [35,[49][50][51]. 
The relative intensities of the D and G bands (ID/IG) for pear, avocado, kiwi, and citrate CDs were 1.15, 1.09, 1.08, and 1.16, respectively, and reveal the existence of vacant lattice sites of sp3 carbon [35,49,50].

Quantum Yield Measurements

The fluorescence quantum yield of each type of synthesized CD was calculated using Williams' comparative method [51]. For this purpose, quinine sulfate was employed as a reference and the quantum yield was calculated according to Equation (1):

    QYs = QYr × (Fs/Fr) × (Ar/As),    (1)

where Fs is the integrated fluorescence emission of the sample, Fr is the integrated fluorescence emission of the reference, Ar is the absorbance at the excitation wavelength of the reference, As is the absorbance at the excitation wavelength of the sample, QYs is the quantum yield of the sample, and QYr is the quantum yield of the reference fluorophore (quinine sulfate, QY = 54%). The calculated fluorescence quantum yields of pear, avocado, kiwi, and citrate CD are 20, 35, 23, and 35%, respectively. The obtained high quantum yield values confirm that the synthesized CDs are highly fluorescent. Avocado and citrate CD showed the highest quantum yields among all the synthesized CDs. A summary of the characteristic parameters studied for each CD is collected in Table 1.
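To make the use of Equation (1) concrete, the short sketch below implements the comparative calculation in Python. The helper name and the instrument readings are ours and are invented placeholders for illustration only, not measured values from this study; since both solutions are assumed to be in the same solvent, the refractive-index correction of the full comparative treatment cancels and is omitted.

```python
# Relative quantum yield via the comparative (Williams) method, Equation (1):
#   QY_s = QY_r * (F_s / F_r) * (A_r / A_s)
# F: integrated fluorescence emission; A: absorbance at the excitation
# wavelength; subscripts s = sample, r = reference (quinine sulfate, QY = 0.54).

def quantum_yield(F_s: float, A_s: float, F_r: float, A_r: float,
                  QY_r: float = 0.54) -> float:
    """Quantum yield of a sample relative to a reference fluorophore.

    Assumes both solutions are measured in the same solvent (so the
    refractive-index ratio cancels) and that absorbances are kept low
    to avoid inner-filter effects.
    """
    return QY_r * (F_s / F_r) * (A_r / A_s)

# Illustrative (made-up) readings for a CD sample:
print(f"QY = {quantum_yield(F_s=8.2e5, A_s=0.05, F_r=1.3e6, A_r=0.05):.0%}")
```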
In Vitro Cytotoxicity of Fruit-Based Carbon Dots

One of the most important parameters for evaluating the applicability of the fruit-based CD is the level of toxicity induced by these nanomaterials when interacting with cells. Therefore, the cytotoxic effect of the fruit-based CD was evaluated. Also, the potential anticancer activity of the novel fruit-based carbon dots was investigated by studying their differential activity against normal epithelial cells (HK-2) and colorectal cancer (Caco-2) cells. The cytotoxicity of the nanomaterials was evaluated using PrestoBlue® cell viability reagent (PB) after exposure for 48 and 72 h to increasing concentrations of the CD. In all cases, the highest concentration of HEPES E3 buffer (medium control) was 45% and it did not produce any significant effect on cellular viability for either of the tested cell lines (data not shown). PB fluorescence was directly proportional to cell density; thus it was used to calculate the percentage of cell viability, assuming 100% as the viability obtained for the vehicle control. The results obtained were compared with CD synthesized from citric acid as a commercial source control and black pepper as a non-fruit control with previously reported potential anticancer activity and bioimaging applications.

Figure 5 shows the dose-dependent in vitro cytotoxicity of the CD for the tested cell lines at 48 and 72 h. It can be noted that the fruit-based CD induced obvious cytotoxicity in the Caco-2 cell line when their concentration was higher than 1.5 mg mL−1. These results are in agreement with a previous work using mango-based CD, where A-549 (human lung carcinoma) cells showed nearly 100% cell viability up to 2 mg mL−1 [52]. Also, HeLa (human cervix epithelial) cell viability remained unchanged when the concentration of glycerol-based CD increased from 0 to 1.14 mg mL−1 [53]. When compared to the citric acid-derived (citrate) CD used as a commercial reference (synthesized as described in previous work [54]), they did not induce any effect, as expected. Unlike the other CDs prepared in this study, citrate CDs were not prepared from a food matrix but rather from citric acid; thus, only the products of citric acid decomposition could be present in the CD. We consider these CDs to be representative controls because the source would not influence much of the biological activity, as already reported in our previous work, where no toxicity was observed in any of the cell lines tested in vitro [36]. However, pepper CD [24] (used as a non-fruit reference) were more toxic for cancer cells, where concentrations above 0.5 mg mL−1 led to more than 50% mortality, which is in agreement with previous studies using other spice-based CDs [7,35] and a recent study that reported this effect [24].

When testing the same concentrations of the fruit-based CD in HK-2 cells, in vitro cytotoxicity analyses demonstrated a stronger negative effect of the fruit-based CD on normal cell proliferation, although, in general, no more than 25% mortality was observed for concentrations up to 1 mg mL−1. Sun's group also demonstrated that bare CD were not toxic to normal cells up to a relatively high concentration of 0.4 mg mL−1 [9].
It also became evident that the effects of pepper CD in HK-2 cells were clearly less pronounced than those observed for Caco-2 cells. The fact that citrate CD did not induce any significant effect on cell viability, in either Caco-2 or HK-2 cells, together with the different toxic profile obtained with pepper CD versus fruit-based CD, suggests that the inhibitory effect on cellular growth can be attributed to the different sources employed for the CD synthesis. In fact, very recently, Pierrat et al. claimed that the toxicity of CD is mainly determined by the synthesis source [55,56]. The results in Figure 5 show that, globally, the toxicological profile obtained at 48 h of exposure was maintained after 72 h, suggesting that there was no progression of the cellular toxicity and that the toxic effect of the fruit-based CD was accomplished during the first 48 h. In general, pear CD proved to be the least toxic, while kiwi CD proved to be the most toxic. It is clear that, overall, the toxicity of the fruit-based CD in cancer cells was lower than in normal cells, as can be observed by comparing the lethal dose 50 (LD50) values obtained (Table 2). The unique properties exhibited by CD are known to result in a variety of interactions with cells, leading to potential necrosis, apoptosis, inflammation, oxidative stress, and other toxic responses [56-59]. However, in the present work, the mechanisms ruling the cytotoxic effect exerted by these fruit-based CDs were not studied and should be further explored.

In Vivo Toxicity of Fruit-Based Carbon Dots

Two-hour-old zebrafish eggs were exposed to different concentrations of the fruit-based CD over 3 days (Figure 6). The results obtained with fruit-based CD were also compared with two other CDs synthesized from non-fruit sources: citric acid, as a reference previously reported as innocuous, and black pepper. Black pepper CDs were recently studied in vitro and postulated as anticancer materials [36]. When analyzing the lethal effect of the fruit-based CD, a dose-dependent mortality was verified only from 1.5 mg mL−1 onwards (see Table S2 for statistical significance). In agreement with the data obtained in the in vitro experiments, kiwi CD showed higher embryotoxicity than the other fruit-based CDs, while citrate CD did not cause any apparent toxicity. For the pepper CD, clear embryonic lethality was obtained above 0.5 mg mL−1. Then, the sub-lethal toxicity of the CD was studied; concentrations that induced more than 25% mortality were excluded from the sub-lethal toxicity study [60].

Figure 6. The cumulative survival rate of zebrafish embryos at 80 h when exposed to different concentrations of kiwi CD. Results represent the CSR mean ± SD. Statistical significance represented for tpf = 80 h.
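The LD50 values reported in Table 2 can be extracted from cumulative survival data such as those in Figure 6 by fitting a dose-response curve. The sketch below shows one minimal way to do this in Python; the mortality numbers are invented for illustration, and the two-parameter Hill-type model is our assumption, not necessarily the model the study used.

```python
# Estimating LD50 by fitting a two-parameter logistic dose-response curve
# to embryo mortality data. Doses in mg/mL; mortality as a fraction.
# The data below are illustrative placeholders, not the study's measurements.
import numpy as np
from scipy.optimize import curve_fit

def logistic(dose, ld50, slope):
    """Fraction dead as a function of dose (Hill-type curve)."""
    return 1.0 / (1.0 + (ld50 / dose) ** slope)

dose = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 3.0])
mortality = np.array([0.02, 0.05, 0.10, 0.45, 0.80, 0.97])

(ld50, slope), _ = curve_fit(logistic, dose, mortality, p0=[1.5, 4.0])
print(f"LD50 ≈ {ld50:.2f} mg/mL (slope = {slope:.1f})")
```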
Table 3 summarizes the effects on the different parameters monitored during the zebrafish embryos' development (8 h to 80 h), giving an overall and important perspective on the fruit-based CDs' potential sub-lethal toxicity. At the initial stages of zebrafish embryogenesis, none of the fruit-based CDs seemed to cause any delay in development. Even so, further effects on spontaneous movements, free swimming and heart rate were identified, suggesting that these nanomaterials have the potential to disrupt features of the zebrafish's early-life neuro-motor coordination [61]. Moreover, the results obtained with pear and avocado CD point to an inhibition of the zebrafish embryos' nutrient absorption from their yolk sacs. Due to the nutrients present in their yolks, zebrafish embryos do not need food for up to 7 days. As a consequence of using this nutritional reserve, their yolk volume tends to decrease over embryonic development [29,62]. The yolks of zebrafish embryos exposed to pear and avocado CD at 1 mg mL−1 and higher were statistically different (i.e., larger) from the control group, implying that the embryos were not getting the required nutrients [24] (pear: one-way ANOVA, F(3,76) = 18.626, p < 0.05; avocado: one-way ANOVA, F(5,109) = 3.889, p < 0.05). Also, a delay in the hatching rate was observed, which may be a sign of disruption of chorionase activity [63]. In line with the in vitro results, zebrafish embryos exposed to citrate CD did not show morphological malformations, not even after 7 days at the highest concentration tested. Pepper CD were found to be non-toxic up to 0.5 mg mL−1, since no retardation or developmental defects were detected at this concentration, but highly toxic at higher concentrations.
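For readers wishing to reproduce analyses like the one-way ANOVA on yolk volume quoted above, or the ANCOVA with egg volume as covariate described in the statistical methods, the following Python sketch shows the general pattern using statsmodels. The data frame here is synthesized on the fly purely to make the example runnable; the column names, group labels, and values are all hypothetical.

```python
# One-way ANOVA and ANCOVA for a zebrafish yolk-volume endpoint, mirroring
# the analyses described in the statistical methods. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20  # embryos per group (hypothetical)
df = pd.DataFrame({
    "treatment": np.repeat(["control", "0.5", "1.0", "1.5"], n),  # mg/mL groups
    "egg_volume": rng.normal(0.28, 0.02, 4 * n),                  # covariate
})
df["yolk_volume"] = 0.5 * df["egg_volume"] + rng.normal(0.10, 0.01, 4 * n)

# One-way ANOVA: does the CD treatment group affect yolk volume?
anova = smf.ols("yolk_volume ~ C(treatment)", data=df).fit()
print(sm.stats.anova_lm(anova, typ=2))

# ANCOVA: same question, controlling for initial egg volume as covariate.
ancova = smf.ols("yolk_volume ~ C(treatment) + egg_volume", data=df).fit()
print(sm.stats.anova_lm(ancova, typ=2))
```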
Interestingly, in vitro we obtained results similar to those of Reference [36]: black pepper CDs exhibited higher toxicity against cancer cells than normal cells and were almost innocuous to normal cells for concentrations of up to 3 mg mL−1. This activity pointed to a potential anticancer activity of black pepper CDs that could be related to the presence of piperine (or its decomposition products), an alkaloid with antioxidant properties that was recently reported as a potential anticancer agent in in vitro studies. We demonstrated the presence of a trace amount of piperine in the black pepper CD solution in our previous work. Some studies showed that piperine induces higher toxicity in vivo than in vitro due to its inhibitory effect on cytochrome P450, which is responsible for the metabolism of many drugs [64]. Additionally, piperine was shown to disturb lipid metabolism, which is crucial in the zebrafish's (and all vertebrates') embryo development [65]. Taking into account only the in vitro results with cells, black pepper CDs could be considered for use up to 3 mg mL−1 for bioimaging or therapeutics. However, our in vivo results allowed us to re-evaluate the No-Observed-Adverse-Effect Level (NOAEL) for those CDs. The diverse fruit-based CD showed slightly different in vivo toxicity, with kiwi CD being the most toxic (LD50 = 1.44 mg mL−1) and pear CD the least toxic (LD50 = 2.22 mg mL−1) (Table 2). The fact that citrate CD did not induce any significant effect on zebrafish embryo development, and the different profiles obtained with pepper and fruit-based CD, reinforce the idea that, depending on the starting material employed for CD synthesis, different toxic responses are obtained.

Imaging of Zebrafish Embryos Incubated with Fruit-Based Carbon Dots

Zebrafish embryos with tpf equal to 4 and 80 h exposed to 1 mg mL−1 of fruit-based CD for 2 h were used as a model to validate the in vivo imaging application of the fruit-based CD. Considering the toxicity results obtained in vivo, 1 mg mL−1 was defined as a suitable concentration. No significant fluorescent signal was observed in the 4 h tpf zebrafish embryos, although a slight fluorescence was observed in the zebrafish embryos incubated with avocado CD, mainly around the chorion but inside it as well (data not shown). This may indicate that the fruit-based CDs were retained in this structure or only partially internalized into the embryos, at least in the first 2 h of exposure. These results could suggest low retention of the fruit-based CDs in the zebrafish embryos or provide an indication of aggregation of the fruit-based CD, which would make their internalization difficult [66]. In Figure 7, a more intense fluorescence was observed in the zebrafish embryos incubated with avocado and citrate CD. Huang et al. demonstrated similar fluorescent images, collected at a similar excitation wavelength (555 nm), using zebrafish larvae incubated with 1.14 mg mL−1 of glycerol-based CD in real time [40]. Our data further demonstrate that the yolk and eyes are indeed hot-spots for fruit-based CD bioaccumulation. It has already been shown that CD seem to enter the zebrafish larva's body through skin adsorption [52,66] and accumulate especially in the yolk sac, yolk extension and eye. This reveals their high affinity for lipids, which, together with their fluorescent properties, could be useful to elucidate different aspects of lipoprotein and nutritional biology in lipid transport and metabolism [67].
Overall, the different fluorescence intensities observed are in agreement with the quantum yield (i.e., fluorescence efficiency) values of the CD tested [40]. Avocado CD presented the highest quantum yield (35%), together with citrate CD (35%), while kiwi CD showed the lowest (23%), using quinine sulfate as the reference standard. Pear CD were not analyzed for their bioimaging properties because of their low quantum yield value compared with the others (20%). The fruit-based CD fluorescence intensity was clearly higher for zebrafish embryos at 80 h than for those at 4 h, which could indicate a more efficient uptake by the more developed embryos. On the other hand, the contribution of the chorion as a barrier to the entrance of the CD changes along development. Therefore, it would be interesting to further investigate the bioimaging of 4 h zebrafish embryos without the chorion (i.e., dechorionated) exposed to the fruit-based CD. Fluorescence microscopy represents a highly useful tool to investigate the chorion in its function as a potential barrier to the uptake of chemicals [68]. Despite the many existing studies in the literature reporting on CD synthesized from natural sources and used in bioimaging, very few of them have been demonstrated to be good candidates for in vivo imaging, and an even more limited number of studies have reported their in vivo toxicity (Table S2).

Conclusions

In the present study, three different CDs were synthesized from kiwi, pear and avocado fruits by a green one-pot hydrothermal method, obtaining materials with a relatively high fluorescence yield. Fruit-based CD from pear, avocado and kiwi showed slightly lower toxicity in human epithelial cancer cells than in normal cells. Opposite results were obtained with black pepper CD (another food-based CD), which showed potential as anticancer materials. In vitro data showed that only high doses of fruit-based CD, i.e., above 1.5 mg mL−1, induced noteworthy cell death, suggesting their biocompatibility at lower concentrations. Also, when monitoring the early life of zebrafish in vivo, no sub-lethal signs of toxicity were detected for concentrations up to 1.5 mg mL−1, demonstrating the low toxicity of fruit-based CD in comparison with the metal q-dots reported in the literature [13]. In this way, CD turn out to be an equivalent, low-cost and eco-friendly substitute for metal q-dots. Differential toxicity of CD from different food sources was demonstrated and should be further explored, as it could be dependent on the food matrix used for CD synthesis. Despite the fact that kiwi, avocado and pear CDs did not present any significant anticancer activity in this report, we observed significant differences in their toxicity. Further studies are envisioned in order to unravel the possible mechanisms of their observed biological activity. Furthermore, avocado CD demonstrated high potential for in vivo fluorescence bioimaging, as shown in zebrafish embryos with tpf = 80 h. Without any modification of their surface for tissue specificity, avocado CD are internalized and retained especially in the eyes and yolk sac, thus being potentially useful as a fluorescent contrast agent and/or a lipid metabolism fluorescent probe. Finally, very few studies in the literature have investigated the toxicity profile of newly synthesized CDs in vitro and in vivo before their application for bioimaging.
In our study, the importance of this step is stressed by the results obtained with the black pepper CDs, which, despite showing low toxicity at mg mL−1 concentrations in vitro, showed great toxicity in vivo in that range. Diverse studies report the cytotoxicity of CDs and move forward directly to in vivo imaging studies using mice, applying concentrations of CDs that might be unsafe for the animal.

Supplementary Materials: The following are available online at http://www.mdpi.com/2079-4991/9/2/199/s1. Figure S1: Emission spectra of pear CD under different excitation wavelengths; Figure S2: Emission spectra of kiwi CD under different excitation wavelengths; Figure S3: Emission spectra of citrate CD under different excitation wavelengths; Figure S4: XRD pattern of CDs; Table S1: Autofluorescence of CD in Caco-2 and HK-2 culture medium; Table S2: Properties of the natural-sourced CDs used in bioimaging and reported in the literature (all of them were tested for cell bioimaging); Table S3: Statistical analysis equations for the diverse sub-lethal toxicity parameters studied in zebrafish embryos.

Funding: This article is a result of the project Nanotechnology Based Functional Solutions (NORTE-01-0145-FEDER-000019), supported by Norte Portugal Regional Operational Programme (NORTE2020) under the PORTUGAL 2020 Partnership Agreement through the European Regional Development Fund (ERDF).
Identity of the holotype and type locality of Rhabdophis leonardi (Wall, 1923) (Colubridae: Natricinae), with notes on the morphology and natural history of the species in southwestern China

Abstract

The original description of Natrix leonardi (currently Rhabdophis leonardi) by Frank Wall in 1923, based on a specimen from the "Upper Burma Hills," lacked important morphological details that have complicated the assignment of recently collected material. Furthermore, although the holotype was never lost, its location has been misreported in one important taxonomic reference, leading to further confusion. We report the correct repository of the holotype (Natural History Museum, London), together with its current catalog number. We also describe key features of that specimen that were omitted from the original description, and provide new details on the morphology of the species, including sexual dichromatism unusual for the genus, based upon specimens from southern Sichuan, China. Rhabdophis leonardi is distinguished from its congeners by the following characters: 15 or 17 DSR at midbody and 6 supralabials; distinct annulus around the neck, broad and red in males, and narrow and orange with a black border in females; dorsal ground color light green or olive; some lateral and dorsal scales possessing black edges, the frequency of black edges gradually increasing from anterior to posterior, forming irregular and ill-defined transverse black bands; eye with prominent green iris; black ventral spots with a red edge, most numerous at midbody but extending halfway down the length of the tail. In southwestern China, this species is frequently found at 1730–2230 m elevation. It has been documented to prey upon anuran amphibians, including toads. A recently published phylogenetic analysis showed this species to be deeply nested within the genus Rhabdophis, as a member of the R. nuchalis Group. That analysis also revealed the existence of two closely related but geographically distinct subclades in the molecular analysis, one of which may represent an unnamed taxon.

| INTRODUCTION

The natricine genus Rhabdophis Fitzinger, 1843 is widely distributed across southern and eastern Asia, from northeastern India and Sri Lanka through China to Japan, and south to the islands of Malaysia and Indonesia. Note that the Natricinae, considered a subfamily here and by many authors (e.g., Zheng & Wiens, 2016), is regarded as a family by other recent authors (e.g., Burbrink et al., 2020; Zaher et al., 2019). At either rank, the content and relationships of the clade are equivalent. Malnate (1960) assigned 15 species to Rhabdophis when he partitioned it from the expansive nominal genus Natrix. He characterized Rhabdophis as having a terrestrial habitus, with enlarged posterior maxillary teeth, usually following a diastema. He noted that many of the species reportedly possessed nuchal or nucho-dorsal glands. Those integumentary glands had first been described in Rhabdophis tigrinus by Nakamura (1935) and soon thereafter were reported in other congeners by Smith (1938). Initially of unknown function, those glands were known only from Rhabdophis, some species of Macropisthodon, and the monotypic genus Balanophis, erected by Smith to accommodate Natrix ceylonicus (Smith, 1938). In the years since Malnate's resurrection of Rhabdophis, the genus has expanded to include 32 currently recognized species, several of them identified or included on the basis of molecular analyses.
Formerly widespread species, such as the nominal R. subminiatus (the type species), R. tigrinus, and R. nuchalis, have been found to harbor cryptic diversity at the specific level (David & Vogel, 2021; Takeuchi et al., 2012; Zhu et al., 2022). Furthermore, a comprehensive analysis of the Asian natricines that possess nuchal or nucho-dorsal glands, together with their nominal congeners, was conducted by Takeuchi et al. (2018), who determined that the genus Macropisthodon, as constituted at that time, was paraphyletic. Two of the species included in that study (M. flaviceps, the type species, and M. plumbicolor) were found to be nested among the species assigned to Rhabdophis, as was Balanophis ceylonensis, whereas M. rudis lay far outside the so-called "nuchal gland clade" of Natricinae. Meanwhile, Figueroa et al. (2016) had found that M. rhodomelas also lies within Rhabdophis. Therefore, Takeuchi et al. (2018) formally synonymized Balanophis and Macropisthodon with Rhabdophis, uniting all species with nuchal or nucho-dorsal glands within that single genus, together with a few populations that appear to have lost the integumentary glands secondarily. Rhabdophis spilogaster was recently sequenced, and it was moved to the genus Tropidonophis (Deepak et al., 2022), whereas the phylogenetic positions of Rhabdophis auriculatus, R. chrysargus, and R. conspicillatus are unclear at this time and remain to be resolved (Deepak et al., 2022). Of the 32 currently recognized species of Rhabdophis, 14 occur in China, including Taiwan: R. adleri Zhao, 1997; R. chiwen Chen, Ding, Chen and Piao, 2019; R. confusus David & Vogel, 2021; R. formosanus (Maki, 1931); R. guangdongensis Zhu et al., 2014; R. helleri (Schmidt, 1925); R. himalayanus (Günther, 1864); R. lateralis (Berthold, 1859); R. leonardi (Wall, 1923); R. nigrocinctus (Blyth, 1855); R. nuchalis (Boulenger, 1891); R. pentasupralabialis Jiang & Zhao, 1983; R. siamensis (Mell, 1931); and R. swinhonis (Günther, 1868). A recent study by Zhu et al. (2022) revealed additional cryptic diversity within the derived worm-eating clade of southwestern China, designated the R. nuchalis Group, to which R. leonardi belongs. Meanwhile, the morphology and chemistry of the nuchal and nucho-dorsal glands have been extensively studied and substantially clarified. The glands are now known to serve a defensive function by releasing noxious steroidal compounds known as bufadienolides that the snakes sequester from toxic prey (Hutchinson et al., 2007; Mori et al., 2012). Superimposing both the anatomical distribution of the integumentary glands (i.e., whether they occur along the entire length of the body, as nucho-dorsal glands, or are limited to the neck, as nuchal glands) and the dietary source of the toxins (whether from amphibians or insects) on the phylogeny, it is clear that Rhabdophis likely ancestrally possessed nucho-dorsal glands that contain bufadienolides derived from a diet of anuran amphibians (Takeuchi et al., 2018; Yoshida et al., 2020). Importantly, the phylogeny also reveals that some members of a deeply nested clade that occurs in southwestern China and adjacent regions underwent a shift in their primary diet from frogs to earthworms. Accompanying that change in diet was a shift in the source of the sequestered defensive toxins from toads (Bufonidae) to lampyrid firefly larvae, both of which contain bufadienolide steroids (Yoshida et al., 2020).
In the course of identifying specimens belonging to the worm-eating clade of Rhabdophis from Sichuan Province, China, we encountered inconsistencies in our comparison of the new specimens with the original description of R. leonardi (Wall, 1923). Furthermore, our attempts to compare our specimens to the holotype of that species were confounded by erroneous information on the repository of the holotype, a problem exacerbated at the time by travel restrictions and museum closings associated with the COVID-19 global pandemic. Wall's (1923) original description of the holotype was insufficient to distinguish the species from among the greater diversity of Rhabdophis recognized today, and some critical attributes of the type specimen had been omitted, notably its sex. Ironically, this species appears to have the most extreme sexual dichromatism of any member of the genus. Finally, after the holotype had been located and our specimens had been determined to conform morphologically with R. leonardi, our efforts were further complicated by the presence of two well-differentiated clades within that nominal species, as documented in the molecular assessment of the R. nuchalis Group by Zhu et al. (2022).

Here we correctly identify the repository of the holotype of Natrix leonardi Wall, 1923, describe the historical context behind the discovery of that specimen and the significance of the type locality, and provide important additional details on the morphology of the holotype, including its sex. We also address the phylogenetic relationships of the species based on the recent work of Zhu et al. (2022) and a new molecular phylogenetic analysis. We describe in detail the morphology of a male specimen (for comparison with the female holotype; for the determination of sex of the holotype, see below), briefly describe several additional female specimens based on recent material, and describe a hatchling from the Sichuan population, including the coloration of these specimens in life and in preservative. We report on two additional localities in neighboring Yunnan Province, based upon field observations and photographs, and provide information on the natural history of the species in China. Finally, we discuss the likely association of the name Rhabdophis leonardi with one of the two molecular clades that currently bear that name and suggest fruitful directions for future studies.

| Phylogenetic analysis

A phylogenetic tree was inferred from analyses of concatenated sequences of one mitochondrial (cyt b) and one nuclear (c-mos) gene. Sequences were aligned with MEGA 5.0 (Tamura et al., 2011). A total of 1599 base pairs for 50 samples were analyzed in this study, and Natriciteres olivacea (Peters, 1854) was selected as an outgroup. The sequences used for constructing our phylogenetic tree are listed in Table 1. The phylogenetic analysis employed maximum likelihood (ML) and Bayesian inference (BI) methods. The best-fitting model of sequence evolution for the BI analysis was determined using PartitionFinder 2.1.1 (Lanfear et al., 2017), separating all genes by codon position and identifying the best-fitting partition scheme, employing the Akaike information criterion (Akaike, 1974), and the phylogenetic analysis was performed using MrBayes 3.1.2 (Huelsenbeck & Ronquist, 2001).
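As an illustration of the concatenation step that precedes combined-gene ML and BI analyses of this kind, the sketch below joins per-sample cyt b and c-mos matrices on shared sample identifiers. The toy sequences and the helper function are ours, not the authors' actual pipeline; in practice the dictionaries would be loaded from aligned FASTA files (e.g., with Bio.SeqIO).

```python
# Concatenate per-sample cyt b and c-mos sequences into a single matrix for
# combined-gene tree inference. Toy sequences stand in for real alignments.

def concatenate(gene_a: dict, gene_b: dict) -> dict:
    """Join two aligned gene matrices on shared sample identifiers."""
    shared = sorted(gene_a.keys() & gene_b.keys())
    return {sid: gene_a[sid] + gene_b[sid] for sid in shared}

cytb = {"R_leonardi_1": "ATGACC", "R_leonardi_2": "ATGACT", "N_olivacea": "ATGGCC"}
cmos = {"R_leonardi_1": "GGTCAA", "R_leonardi_2": "GGTCAA", "N_olivacea": "GGACAA"}

for sample, seq in concatenate(cytb, cmos).items():
    print(f">{sample}\n{seq}")
```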
| Morphological data

The holotype of Natrix leonardi Wall, 1923, once located, was closely examined at its repository, and the data were compared to the original description to verify its identity. Photographs of the holotype were taken by Kevin Webb of the Photo Unit of the Natural History Museum, London.

| Rhabdophis leonardi

The topology of the maximum likelihood (ML) tree was consistent with that of the Bayesian inference (BI) tree (Figure 1). With respect to the position of Rhabdophis leonardi and its substructure, our phylogenetic results largely conform to those of Zhu et al. (2022), who focused on relationships among the R. nuchalis Group, to which R. leonardi belongs.

| Type locality of Natrix leonardi

The name of the type locality has several spellings and appears in two physical locations on older maps, whereas it does not appear at all in searches of recent online maps. In the 1920s P. M. R. Leonard, a member of the Burma Frontier Service, traveled repeatedly to the region near the Chinese border. By the time a survey of the aquatic mollusks had been conducted in the region, to assess the presence of Schistosoma, Leonard was cited in the acknowledgments as Assistant Superintendent of the Northern Shan States, serving at Kutkai (Rao, 1928). Today, Kutkai is approximately 240 km by road (via Highway 3) southeast of Bhamo, the closest major city to the type locality (Google Maps, 2022) and, even in colonial times, an important port along the Irrawaddy (or Ayeyarwady) River.

The type locality, Sinlum Kaba, lies in a triangular mountainous region above and to the east of Bhamo, in southeastern Kachin State. Recently recorded family history, published in a series of articles in the Kachinland News, sheds some light on the still confusing history of Sinlum Kaba (Lahpai, 2017, 2020; Pangmu, 2011), from which the following account is synthesized. From those published clan histories, it appears that Sinlum Kaba was established by the Gauri Lahpai tribal leader Zau Bawm in the early to mid-1800s (Lahpai, 2017). By the late 1800s Baptist missionaries William Henry Roberts and Ola Hansen had established a church and school in Bhamo, where they baptized Zau Tu, a member of the Gauri Lahpai chieftain clan, and Hka Jan (Lahpai, 2017; Pangmu, 2011). The British outpost grew in importance, with a fortified military station, a center for handling legal matters involving the Kachin people, and plantings of fruit trees imported from England (Lahpai, 2020). Meanwhile, a high school that had been established in Pangmu by Zau Tu and Hka Jan in 1903 was moved to Sinlum Kaba in 1928. Lahpai (2020) notes that Hka Jan was recognized for her contributions as an educator at a Durbar (a gathering of regional dignitaries) in Sinlum Kaba in 1923. Such an event may well have involved a visit by colonial authorities such as Leonard, affording him an opportunity to acquire the specimens reported that year by Wall.

A search for "Sinlum Kaba" or "Sinlumkaba" in current online maps (Google Maps or Google Earth) fails to return any records, although two clearings, located just over 2 km apart, can be discerned in the area. Both clearings appear to contain substantial settlements, including several large structures, especially at the southern site, which also appears to include a helipad. These two clearings presumably represent the two physical locations that appear on older maps.

| Location of the holotype
Recent information on the location and identity of the holotype of Natrix leonardi presented similar ambiguities. Both of Wall's reports on collections of snakes from Sinlum Kaba (Wall, 1921, 1923), each describing one new species of snake as a patronym for P. M. R. Leonard, imply that the specimens were donated to the Bombay Natural History Society (BNHS), whose museum collection was earlier recognized by the acronym BNHM (Leviton et al., 1985). Since 2020 the standard acronym of the BNHS museum collection is BNHS, and the acronym BNHM is now obsolete (Sabaj, 2020). However, only one of the holotypes of those two species has been specifically described in the literature as being deposited at the BNHS (Das & Chaturvedi, 1998), and even that designation may be in error. Importantly, the Reptile Database cites the holotype of Natrix leonardi as "BNHS = BNHM 466" (Uetz et al., 2022; accessed 25 March 2023). One of us (RK) examined and photographed that specimen and found it to be a colubrine snake, not a natricine. Indeed, that specimen is listed in the BNHM (now BNHS) type catalog (Das & Chaturvedi, 1998) as the holotype of Coluber leonardi (Wall, 1921; Figure 3), described in the Journal of the Bombay Natural History Society, and now considered a synonym of Archelaphe bella (Schulz et al., 2011). We note, however, that BNHM 466 appears in the photograph to be considerably larger than the holotype of Coluber leonardi (279 mm SVL; Wall, 1921). We suggest that BNHM 466 may, in fact, be the third and much larger specimen of Coluber leonardi (685 mm SVL), which was reported by Wall (1923) in the same paper in which he described Natrix leonardi. Thus, the identity and repository of the holotype of Coluber leonardi remain to be clarified, but resolution of that question is beyond the scope of this report.

Because the holotype of Natrix leonardi was not found in the BNHM and was not listed in the type catalog for that collection, we next examined the holdings of the Natural History Museum, London (formerly British Museum (Natural History)), whose catalog number prefix for historical material is BMNH, and identified a specimen of Rhabdophis leonardi, BMNH 1946.1.12.86 (formerly BMNH 1923.10.13.39), from the Upper Burma Hills (Figure 4). Upon examination by two of us (DG and VD) and comparison with Wall's (1923) description, that specimen was determined to be the holotype of N. leonardi and, indeed, it was listed as a "type" in the BMNH catalog.

[Figure caption, map] Green squares represent the locality of referred material, including two localities described in Wall (1921, 1923): the type localities of Coluber leonardi (Wall, 1921) (now Archelaphe bella) and of Natrix leonardi (Wall, 1923) (now Rhabdophis leonardi).

[Figure 3 caption, BNHM 466] The specimen is definitively not the holotype of Natrix leonardi and may not be the holotype of Coluber leonardi. See text for a full discussion of this specimen.

As with other type specimens in the London collection, that specimen was re-cataloged and assigned a new number after the end of World War II, when specimens were returned from safekeeping. The specimen is designated as a type and bears more accurate coordinates for the type locality than those published by Wall (1921, 1923). Adding to the confusion over the identity and repositories of the two type specimens is the fact that the recommended acronym for the herpetological collection of the Bombay Natural History Society, BNHM, is an anagram of that for the Natural History Museum, London, BMNH (Leviton et al., 1985). Furthermore, the catalog number of the presumed holotype of Coluber leonardi in Mumbai is BNHM 466 (Das & Chaturvedi, 1998), which coincidentally is the same number as the page on which the description of Natrix leonardi appeared 2 years later, in the same journal (Wall, 1923).
| Partial redescription of the holotype of Natrix leonardi

The original description of Natrix leonardi (Wall, 1923), based on a single specimen, provides sufficient detail to confirm with confidence the identity of the holotype.

We note that Parker (1925), in a study of dorsal scale row reduction in specimens he identified as Natrix nuchalis, placed the reduction from 17 to 15 dorsal rows at approximately 45% of "Body Length" in the "Type" of Natrix leonardi (presumably then cataloged as BMNH 1923.10.13.39) and at over 50% of "Body Length" in a second specimen recognized, doubtfully, as the same species. Importantly, Parker did not define "body length," but he distinguished the body from a region he identified as the "neck" (which he also did not define). Parker considered that all the species he examined were conspecific with what is now Rhabdophis nuchalis, but that conclusion has not been supported by recent analyses of that complex species group.

| Diagnosis

Rhabdophis leonardi is characterized by the following combination of characters (Figure 7): 15 or 17 DSR at midbody and 6 supralabials; a distinct annulus around the neck, broad and red in males, and narrow and orange with a black border in females; dorsal ground color light green or olive; some lateral and dorsal scales possessing black edges, the frequency of black edges gradually increasing from anterior to posterior, forming irregular and ill-defined transverse black bands; eye with a prominent green iris; black ventral spots with a red edge, most numerous at midbody but extending halfway down the length of the tail.

| Description of females

Morphological data for the other female specimens are listed in Table 2 and Figures 5 and 6. The female specimens differ most importantly from the male in the size and color of the nuchal collar. In adult females the collar is narrower (extending for about three to four scales) and is orange with narrow, irregular anterior and posterior black borders (Figure 6). Although our sample is small, we believe this represents sexual dimorphism in the color of the nuchal collar, a feature that should be confirmed by future studies. The female hatchling generally resembles the adult females in nuchal coloration, although the collar is yellow rather than orange, contrasting more strongly with its prominent black borders (Figure 5c).

| Comparisons

The number of dorsal scale rows in R. leonardi (17-17/15-15) distinguishes it from all congeners except R. auriculatus (Table 3), in which the scale rows reportedly also are reduced from 17 to 15 at about the midbody (Leviton, 1970). However, R. auriculatus, which is endemic to the Philippines, has white lines on the sides of the body and behind the eyes, as well as white spots on the dorsum. Within China, males of Rhabdophis leonardi most closely resemble R. helleri and R. siamensis in coloration. R. helleri was recently elevated from a subspecies of R. subminiatus to a full species by David and Vogel (2021), an action supported by molecular evidence in Liu et al. (2021), and the species has a wide range across southern China. R. siamensis, also resurrected from the synonymy of R. subminiatus by David and Vogel (2021), occurs in adjacent regions (David & Vogel, 2021; Zhao, 2006). Further comparisons are summarized in Table 3.

| Habitat, behavior, and distribution

Residents of Xiaoshanbao Village (Figure 8) report that they occasionally encounter this species. Therefore, we suggest that this species occurs in southern Sichuan and adjacent Yunnan at approximately 1730-2230 m elevation, although it occurs at higher elevations at more northern localities. A blurry photograph of a toad being consumed by this species (Figure 5d) was also obtained from a resident of Huangcao Village. R. leonardi, therefore, is known to feed on earthworms and slugs (Yoshida et al., 2020; Zhao, 2006; Zhao et al., 1998), as well as anuran amphibians. We also note a catalog entry at the California Academy of Sciences indicating that CAS 215027, identified in the catalog as R. leonardi, contained snails in its stomach. Like other Rhabdophis, R. leonardi is oviparous.
We also note here the presence of eight specimens at the California Academy of Sciences.

| DISCUSSION

Species of the genus Rhabdophis are widely distributed across southern and eastern Asia, with an elevational range from sea level to more than 3000 m (Zhao, 2006). The existence of cryptic species of Rhabdophis has been suspected (Liu et al., 2021; Takeuchi et al., 2018), and several of those have recently been described (David & Vogel, 2021; Zhu et al., 2022). R. leonardi, as currently recognized, has a wide distribution in southwestern China and adjacent Myanmar. However, our data and those of Zhu et al. (2022) suggest that the two well-defined molecular lineages of nominal R. leonardi, referred to here as Clades B and C of Zhu et al. (2022), may represent distinct taxa, one of which may be unnamed.

Rhabdophis leonardi appears to prey in part upon anuran amphibians, including bufonids, in contrast to at least some of the other, and generally smaller, members of the earthworm-eating R. nuchalis Group to which R. leonardi belongs (Piao et al., 2020; Yoshida et al., 2020). For example, R. pentasupralabialis has a total length of approximately 483 mm and R. chiwen has a total length of about 536 mm (Piao et al., 2020), versus about 600-700 mm in R. leonardi. In all these aspects of diet and size, R. leonardi appears intermediate between other members of the R. nuchalis Group and the more generalized, frog-eating members of the genus.

Finally, our molecular results confirm previous findings that earlier concepts of Rhabdophis were not monophyletic.

ACKNOWLEDGMENTS

This study was supported by a grant from the National Natural Science Foundation of China. We are grateful to Yangyang Liu for assistance with specimen collection and examination. We also thank Gernot Vogel for assistance with revisions and for professional advice. We are grateful to Kevin Webb for the photographs of the holotype, and we thank Zhijie Jian for help with some additional photos. VD's contribution was supported in part by a Humboldt fellowship hosted by Uwe Fritz at the Senckenberg Dresden.

CONFLICT OF INTEREST STATEMENT

The authors declare no conflicts of interest.

DATA AVAILABILITY STATEMENT

Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
Information Processing Equalities and the Information–Risk Bridge

We introduce two new classes of measures of information for statistical experiments which generalise and subsume ϕ-divergences, integral probability metrics, N-distances (MMD), and (f, Γ)-divergences between two or more distributions. This enables us to derive a simple geometrical relationship between measures of information and the Bayes risk of a statistical decision problem, thus extending the variational ϕ-divergence representation to multiple distributions in an entirely symmetric manner. The new families of divergence are closed under the action of Markov operators, which yields an information processing equality that is a refinement and generalisation of the classical data processing inequality. This equality gives insight into the significance of the choice of the hypothesis class in classical risk minimization.

Introduction

A key word in statistics is information. . . But what is information? No other concept in statistics is more elusive in its meaning and less amenable to a generally agreed definition. -Debabrata Basu (1975, p. 1).

Machine learning is information processing. But what "information" is meant? Choosing exactly how to measure information has become topical of late in machine learning, with methods such as GANs predicated on the notion of being unable to compute a likelihood function, but being able to measure an information distance between a target and a synthesised distribution (Bińkowski et al., 2018). Commonly used measures include the Shannon information/entropy of a single distribution and the Kullback-Leibler divergence or variational divergence between two different distributions. Csiszár's ϕ-entropies and ϕ-divergences (Csiszár, 1963, 1967) subsume these and many other divergences, and satisfy the famous information processing inequality (Ziv and Zakai, 1973), which states that the amount of information can only decrease (or stay constant) as a result of "information processing."

The present paper presents a new and general definition of information that subsumes many in the literature. The key novelty of the paper is the redefinition of classical measures of information as expected values of the support function of particular convex sets. The advantage of this redefinition is that it provides a surprising insight into the classical information processing inequality, which can consequently be seen to be an equality, albeit one with different measures of information on either side of the equality. The reformulation also enables an elegant proof of the 1:1 relationship between information and (Bayes) risk, showing in an unambiguous way that there cannot be a sensible definition of information that does not take account of the use to which the information will be put.

The rest of the paper is organised as follows. In the remainder of the present section, we introduce the ϕ-divergence, summarise earlier work on extending it to several distributions, and sketch a philosophy of information which our main theoretical results formally justify and support. In §2 we present the necessary technical tools we use; §3 presents the general "unconstrained" information measures (with no restriction on the model class); §4 presents the bridge between information measures and the (unconstrained) Bayes risk; §5 presents the constrained measures of information (where there is a restriction on the model class), as well as the generalisation of the "bridge" to this case; §6 concludes.
There are four appendices: Appendix A relates our definition of D-information to the classical variational representation of a (binary) ϕ-divergence. Appendix B shows how our measure of information is naturally viewed as an expected gauge function. Appendix C examines the different entropies induced by the F-information, showing how they too implicitly have a model class hidden inside their definition. Finally, Appendix D summarises earlier attempts to generalise ϕ-divergences to take account of a model class.¹

¹ Part II of the present paper (Williamson, 2023), to appear in due course, will contain an explanation of the relationship between our information processing theorems and the traditional inequalities (usually couched in terms of mutual information); the derivation of classical data processing theorems for divergences (with the same measure of information on either side of the inequality) from the results in part I; relationships to measures of informativity of observation channels; and relationships to existing results connecting information and estimation theory.

The ϕ-divergence

Suppose µ, ν are two probability distributions, with µ absolutely continuous with respect to ν, and let

    Φ def= {ϕ : R_{>0} → R ∪ {+∞} | ϕ is convex and ϕ(1) = 0}.    (1)

For ϕ ∈ Φ, the ϕ-divergence between µ and ν is defined as

    I_ϕ(µ, ν) def= ∫ ϕ(dµ/dν) dν.    (2)

Popular examples of ϕ-divergences include the Kullback-Leibler divergence (ϕ(t) = t log t) and the variational divergence (ϕ(t) = |t − 1|), among others; see (Reid and Williamson, 2011). There are two existing classes of extensions to binary ϕ-divergences: devising measures of information for more than two distributions, and restricting the implicit optimization in the variational form (see Appendix A) as a form of regularisation. We summarise work along the first of these lines in the next subsection, and the second in Appendix D after we have introduced the necessary concepts to make sense of these attempts.

Beyond Binary - "ϕ-divergences" for more than two distributions

Earlier attempts to extend ϕ-divergences beyond the case of two distributions include the ϕ-affinity between n > 2 distinct distributions; this is also known as the Matusita affinity (Matusita, 1967, 1971), the f-dissimilarity (Györfi and Nemetz, 1975, 1978), the generalised ϕ-divergence (Ginebra, 2007) or (on which we build in the present paper) D-divergences (Gushchin, 2008). One could conceive of these as "n-way distances" (Warrens, 2010), but most of the intuition about distances does not carry across, and so we will not adopt such an interpretation; in the body of the paper we refer to the objects simply as "measures of information."

Generalisations of particular divergences to several distributions include the information radius (Sibson, 1969)

    R(P_1, . . . , P_k) = (1/k) Σ_{i=1}^k KL(P_i, (P_1 + P_2 + · · · + P_k)/k),

where KL(P, Q) is the Kullback-Leibler divergence, and the average divergence (Sgarro, 1981)

    K(P_1, . . . , P_k) = (1/(k(k−1))) Σ_{i=1}^k Σ_{j=1}^k KL(P_i, P_j).

Some other approaches to generalising ϕ-divergences to more than two distributions are summarised by Basseville (2010). The general multi-distribution divergence has been used in hypothesis testing (Menéndez et al., 2005; Zografos, 1998). Györfi and Nemetz (1975) bounded the minimal probability of error in terms of the f-affinity; see also (Glick, 1973; Toussaint, 1978). These results are analogous to surrogate regret bounds (Reid and Williamson, 2011, section 7.1) because there is in fact an exact relationship between I_ϕ and the Bayes risk of an associated multiclass classification problem; see §4.
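To make these definitions concrete, the following Python sketch evaluates the ϕ-divergence (2) for discrete distributions, instantiated for the Kullback-Leibler and variational divergences, together with Sibson's information radius R; it assumes strictly positive probability vectors so that the density ratios are well defined. The function names are ours.

```python
# phi-divergence I_phi(mu, nu) = sum_x nu(x) * phi(mu(x)/nu(x)) for discrete
# distributions, plus Sibson's information radius built from KL.
import numpy as np

def phi_divergence(phi, mu, nu):
    mu, nu = np.asarray(mu, float), np.asarray(nu, float)
    t = mu / nu  # assumes nu(x) > 0 wherever mu(x) > 0
    return float(np.sum(nu * phi(t)))

kl = lambda t: t * np.log(t)    # Kullback-Leibler generator, phi(t) = t log t
tv = lambda t: np.abs(t - 1.0)  # variational divergence, phi(t) = |t - 1|

def information_radius(*ps):
    """R(P_1,...,P_k) = (1/k) sum_i KL(P_i || mean of the P_j)."""
    ps = [np.asarray(p, float) for p in ps]
    mean = sum(ps) / len(ps)
    return sum(phi_divergence(kl, p, mean) for p in ps) / len(ps)

p, q = [0.7, 0.2, 0.1], [0.4, 0.4, 0.2]
print(phi_divergence(kl, p, q), phi_divergence(tv, p, q))
print(information_radius(p, q))
```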
Multidistribution ϕ-divergences have also been used to extend rate-distortion theory (primarily as a technical means to get better bounds) (Zakai and Ziv, 1975) and to unify information theory with the second law of thermodynamics (Merhav, 2011). The estimation of these divergences has been studied by Morales, Pardo, and Zografos (1998). The connection to Bayes risk suggests alternate estimation schemes. Going in the opposite direction, it is worth noting that the entropy of a single distribution can be viewed as the ϕ-divergence between the given distribution and a reference (or "uniform") distribution (Torgersen, 1981); see also Appendix C.

Information is as Information Does

In developing a philosophy of information, Adriaans and Benthem (2008, page 20) adopted the slogan "No information without transformation!" They asked "what does information do for each process?" We reverse this to: "what does each process do to information?" We avoid an essentialist claim of "one true notion" of information, but do not feel it necessary to follow the example of Csiszár (1972) of eschewing the word "information" for the neologism "informativity." We believe that the elements of our field need to prove their mettle by their relationships. Barry Mazur (2008, section 3) observed that "mathematical objects [are] determined by the network of relationships they enjoy with all the other objects of their species" and proposed to "subjugate the role of the mathematical object to the role of its network of relationships -or, a further extreme -simply replace the mathematical object by this network".² One could argue that such systematic study of the elements and their fundamental transformations is essential to achieve the called-for transition of machine learning from alchemy to a mature science (Rahimi, 2017). We make a small step in this direction, focusing upon the transformation that measures of information of an experiment undergo when the experiment is observed via a noisy observation channel. This is a return to roots, since the very notion of Shannon information was motivated by communication over noisy channels (Shannon, 1948, 1949), and that of the Kullback-Leibler divergence was motivated by notions of sufficiency (Kullback and Leibler, 1951). That a sufficient statistic can be viewed as the output of a noisy observation channel is made precise in the general definition of sufficiency and approximate sufficiency due to LeCam (1964).

Our perspective is motivated by the largely forgotten conclusion of DeGroot (1962): that even if one is only seeking some vague sense of "information" in data, ultimately one will use this "information" through some act (else why bother?), and such acts incur a utility (or loss), which can be quantified.³ Thus any useful notion of information needs to take account of utility. Our general notion of information of an experiment is consistent with DeGroot's utilitarian premise; we suggest that it is the most general such concept consistent with the precepts of decision theory and statistical learning theory. This philosophy is made precise by our results showing the equivalence of the measures of information (which subsume most of those in the literature) and the Bayes risk of a statistical decision problem.
Significantly, this means that the choice of a measure of information is equivalent to the choice of a loss function (plus, potentially, the choice of a convex model class); thus any notion of information subsumed by our general measures really encodes the use to which one envisages the information being put, as DeGroot admonished 60 years ago.

³ Interestingly, DeGroot was motivated to extend the attempt of Lindley (1956) to quantify the "amount of information" in an experiment, but unlike Lindley, did not presume that this was necessarily Shannon information.

Technical Tools⁴

For an extended-real-valued function f on a space X with dual space X*, the Fenchel conjugate of f is f*(x*) def= sup_{x∈X} (⟨x, x*⟩ − f(x)). If f is proper, closed, and convex, it is equal to its biconjugate: f = (f*)*. The epigraph and hypograph of f are the sets

    epi(f) def= {(x, α) ∈ X × R | f(x) ≤ α} and hyp(f) def= {(x, α) ∈ X × R | f(x) ≥ α}.

The function f is closed and convex if and only if the set epi(f) (or equivalently hyp(−f)) is also. The subdifferential of f at x ∈ X is the set

    ∂f(x) def= {x* ∈ X* | f(y) ≥ f(x) + ⟨y − x, x*⟩ for all y ∈ X}.

The domain of the differential is the set dom ∂f def= {x ∈ X | ∂f(x) ≠ ∅}. A selection is a mapping ∇f : dom ∂f → X* that satisfies ∇f(x) ∈ ∂f(x) for all x ∈ dom ∂f, and it is commonly abbreviated to ∇f ∈ ∂f. If ∂f is a singleton, then ∂f corresponds to the classical differential, which we write Df. For f : X → R̄ and α ∈ R, the α below-level set of f is

    lev_{≤α} f def= {x ∈ X | f(x) ≤ α}.

If f : R → R then its perspective is the function f̃ : R × R → R given by f̃(x, y) = yf(x/y). The perspective f̃ is positively homogeneous and is convex whenever f is. Observe that f̃(x, 1) = f(x). The halfspace with normal 1_n (and zero offset) is H_{≤1_n} def= lev_{≤0} ⟨·, 1_n⟩.

⁴ See (Aliprantis and Border, 2006; Bauschke and Combettes, 2011; Hiriart-Urruty and Lemaréchal, 2001; Penot, 2012; Rockafellar, 1970). Since notation in the literature varies, we spell out our choice in full.

We use · for the Hadamard product: that is, if X ∋ f, g is a function space then f · g is the regular function product (f · g)(·) def= f(·)g(·); if X has dimension n < ∞ then the element-wise vector product is written f · g def= (f_1 g_1, . . . , f_n g_n). For S, T ⊆ X and x ∈ X, S + x def= {s + x | s ∈ S}, and S + T def= {s + t | s ∈ S, t ∈ T} (the Minkowski sum). To a set S ⊆ X we associate two functions: the S-support function

    σ_S(x*) def= sup_{s∈S} ⟨s, x*⟩,    (3)

and the S-indicator function

    ι_S(x) def= ∞⟦x ∉ S⟧,

where ⟦p⟧ = 1 if p is true and 0 otherwise, and we adopt the convention that ∞ · 0 = 0 (so ι_S vanishes on S). If S is closed and convex then the support function is the Fenchel conjugate of the indicator function and vice versa. The recession cone of S is the set

    rec S def= {d ∈ X | s + λd ∈ S for all s ∈ S and all λ ≥ 0}.

If S is convex then rec S is convex. If X is finite dimensional and S is bounded then rec S = {0}. The polar cone of S is the set

    S° def= {x* ∈ X* | ⟨s, x*⟩ ≤ 0 for all s ∈ S}.

The dual cone (negative polar cone) of S is the set

    −S° = {x* ∈ X* | ⟨s, x*⟩ ≥ 0 for all s ∈ S}.

The convex hull of S, written co S, is the set of all finite convex combinations of points of S; the closed convex hull of S is its closure, which we abbreviate as cl co S def= cl(co S).

For two measurable spaces (X, Σ_X) and (Y, Σ_Y) the notation f : (X, Σ_X) → (Y, Σ_Y) means that f is a measurable function with respect to the respective σ-algebras, which it is often convenient to abbreviate to f : X → (Y, Σ_Y). The Borel σ-algebra on a set X with some topology is B(X), and we write (X, B) def= (X, B(X)). The set of proper, closed, convex and measurable sets S ∈ Σ_X is K(X). The subcollection of these that recess in directions at most T ⊆ X is

    K_T(X) def= {S ∈ K(X) | rec S ⊆ T}.

Let P(X) be the set of probability measures on a measurable space (X, Σ_X). If X has dimension n < ∞ this is isomorphic to the set of vectors {p ∈ R^n | p_i ≥ 0, Σ_i p_i = 1}, and its relative interior rint P(X) is the subset of vectors for which p_i > 0 for each i ∈ [n]. If f : X → R and µ ∈ P(X), we write µf = µ(f) def= ∫ f dµ.
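Since the support function (3) is central to everything that follows, a tiny numerical illustration may help. For a polytope S the supremum defining σ_S is attained at a vertex, so σ_S can be evaluated by a maximum over the vertex list, as in the following sketch (the function name is ours).

```python
# Support function sigma_S(x) = sup_{s in S} <s, x>. For a polytope the
# supremum over S = conv(vertices) is attained at a vertex, so a max over
# the vertex list suffices.
import numpy as np

def support(vertices: np.ndarray, x: np.ndarray) -> float:
    """sigma_S(x) for S = conv(vertices); rows of `vertices` are points."""
    return float(np.max(vertices @ x))

square = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])  # vertices of a square
print(support(square, np.array([2.0, 0.5])))  # -> 2.5, attained at (1, 1)
```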
Conventionally a Markov kernel is a function M : Y × Σ_X → R which is Σ_Y-measurable in its first argument and a probability measure over X in its second. We use the notation of Çinlar (2011) to more compactly write M : Y ⇝ X. When Y has dimension n < ∞ we call a Markov kernel E : Y ⇝ X an experiment. It is convenient to stack the distributions E(1), . . . , E(n) induced by E into a vector of measures (one for each y ∈ Y), the notation for which we overload: E := (E_1, . . . , E_n). Note that while E is an experiment (Markov kernel), the E_i (i ∈ [n]) are measures. If µ is a measure that dominates each E_i, then the vector of Radon-Nikodym derivatives with respect to µ is dE/dµ := (dE_1/dµ, . . . , dE_n/dµ), which as a function maps X → R^n_{≥0}.⁵ An experiment E_tni : [n] ⇝ X is totally non-informative when E_1 = · · · = E_n (Torgersen, 1991). When X = Y = [n], E : Y ⇝ X can be represented by an n × n stochastic matrix.

5. While this overloading may appear overeager, it provides substantial simplification subsequently.

For the following definitions, fix measurable spaces (Ω_1, Σ_1) and (Ω_2, Σ_2). The measurable functions Ω_1 → Ω_2 are L⁰(Ω_1, Ω_2) and L⁰(Ω) := L⁰(Ω, R) refers to the real measurable functions. The signed measures on Ω are M(Ω); the subset of these which are probability measures is P(Ω). To a probability measure µ ∈ P(Ω) we associate the expectation functional L⁰(Ω) ∋ f ↦ µf = ∫ f dµ. There are two operators associated to, and conventionally overloaded with, an experiment E⁶: for f ∈ L⁰(X) the function Ef ∈ L⁰(Y) given by (Ef)(y) := ∫_X f(x) E(y, dx), and for µ ∈ P(Y) the measure µE ∈ P(X) given by (µE)(A) := ∫_Y E(y, A) µ(dy). The definitions above make it convenient to chain experiments: for E : X ⇝ Y and F : Y ⇝ Z, the composite EF : X ⇝ Z is (EF)(x, A) := ∫_Y F(y, A) E(x, dy).

6. Note the postfix notation for action of E on probability measures.

It is common in the information theory literature to write X → Y → Z to denote random variables X, Y and Z which form a Markov chain; that is, Z is independent of X when conditioned on Y. For our purposes however, it is more convenient to eschew the introduction of random variables, and to consider the kernels simply as mappings between spaces as defined above. Thus rather than writing a Markov chain in terms of the random variables X, Y and Z, we will write the "chain" as a string of experiments operating on the spaces X, Y and Z, as X ⇝ Y ⇝ Z.

Unconstrained Information Measures - D-information

In this section we introduce the "unconstrained" information measure I_D(E). The name is in contrast to the "constrained" family we introduce in §5. The unconstrained information measures subsume the classical ϕ-divergences and their n-ary generalisations (see §3.2).

D-information

For a set D ⊆ R^n and an experiment E : [n] ⇝ Ω, the D-information of E is

I_D(E) := ∫_Ω sup_{d∈D} ⟨d, (dE/dρ)(x)⟩ ρ(dx),   (6)

where ρ ∈ P(Ω) is a reference measure that dominates each of the E_i,⁷ and d = (d_1, . . . , d_n). The definition above was first proposed by Gushchin (2008) and is analogous to the approach used by Williamson (2014) and Williamson and Cranko (2022), where loss functions are defined in terms of a convex set, and which forms the basis of the bridge in §4.

7. It is always easy to find such a ρ; for example one may take ρ := (1/n) Σ_{i∈[n]} E_i.

Remark 2. The form of (6) indicates we can equivalently write

I_D(E) = ∫_Ω σ_D((dE/dρ)(x)) ρ(dx),   (7)

where dE/dρ := (dE_1/dρ, . . . , dE_n/dρ) is the vector of Radon-Nikodym derivatives, and σ_D is the support function of D (3). This suggests, using standard polar duality results, that the D-information can be viewed as an expected gauge function, a perspective developed in Appendix B.
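As a concrete illustration of (6) and (7), the following minimal numerical sketch (our own, not from the paper; it assumes a finite outcome space) takes D to be the polytope co{(1, −1), (−1, 1)}, whose support function on R²_{≥0} is σ_D(x) = |x_1 − x_2|, so the resulting D-information is the variational divergence. All function names are ours.

import numpy as np

def sigma_D(vertices, x):
    # support function of a polytope: the sup of <d, x> is attained at a vertex
    return max(float(np.dot(d, x)) for d in vertices)

def d_information(E, vertices):
    # E is an (n, m) array; row i is the pmf E_i on an m-point outcome space
    rho = E.mean(axis=0)                      # reference measure rho = (1/n) sum_i E_i
    dE_drho = E / rho                         # vector of Radon-Nikodym derivatives
    return sum(rho[x] * sigma_D(vertices, dE_drho[:, x]) for x in range(E.shape[1]))

E = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.2, 0.6]])
D_var = [np.array([1.0, -1.0]), np.array([-1.0, 1.0])]
print(d_information(E, D_var))               # 0.8
print(np.abs(E[0] - E[1]).sum())             # the variational divergence, also 0.8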
Observe that (6) places no requirements on the continuity of the distributions (E_y)_{y∈[n]} with respect to one another. Thus the D-information is more than just a multi-distribution ϕ-divergence: Proposition 4 below guarantees that I_{hyp(−ϕ*)} agrees with I_ϕ on all measures E_1, E_2 with E_1 ≪ E_2, and it is a natural extension to compare measures that do not have this absolute continuity condition.⁸ Most existing generalisations of ϕ-divergences to n > 2 (Duchi, Khosravi, and Ruan, 2018a; Garcia-Garcia and Williamson, 2012; Nemetz, 1975, 1978; Keziou, 2015; Matusita, 1971) are subsumed by D-information; the one exception (Birrell, Dupuis, et al., 2022) is discussed in Appendix D.

8. There are definitions of ϕ-divergences that hold in the general case (Liese and Miescke, 2007, p. 35). The approach we take further generalises to be applicable to comparisons of measures that are only finitely additive instead of countably additive, as explained by Gushchin (2008), whose work was a major inspiration for the present paper.

From ϕ-divergence to D-information

Before proceeding with a more thorough study of (6) we justify its introduction as a generalisation of the ϕ-divergences. It is convenient to slightly refine our definition of Φ as follows: Φ̃ is the subclass of proper, lower semicontinuous ϕ ∈ Φ that are finite on R_{>0}. This is a very mild refinement of Φ, and all ϕ used in the literature on ϕ-divergences are in fact contained in Φ̃. Observe that demanding ϕ be a proper function to R̄ defined on all of R_{>0} implies that R_{≥0} ⊆ cl(dom ϕ). Assuming lower semi-continuity is a mere convenience, since one can enforce it by taking closures, and, as we shall see, the information functionals will not change in this case since they are expressible in terms of support functions of the epigraphs of functions related to ϕ, which remain invariant under taking closures of the sets concerned. In any case, cl f and f coincide on rint dom f (Hiriart-Urruty and Lemaréchal, 2001, Proposition B.1.2.6). If we simply require that f(x) < ∞ for all x ∈ (0, ∞) then lower semicontinuity and the claim regarding the domain follow as logical consequences.

Suppose µ, ν ∈ P(Ω), with a common dominating measure ρ. Choose some ϕ ∈ Φ̃. Then the ϕ-divergence (2) has the following representation using the perspective function ϕ̃:⁹

I_ϕ(µ, ν) = ∫_Ω ϕ̃((dµ/dρ)(x), (dν/dρ)(x)) ρ(dx).   (9)

9. This observation is due to Gushchin (2008).

Equation (9) is symmetric in µ and ν, in contrast to (8), with any intrinsic asymmetry relegated to the choice of the sublinear function ϕ̃. By the same argument used in Remark 1, the choice of ρ does not matter. Observe that upon substituting the definition of the perspective into (9) we obtain the formula

I_ϕ(µ, ν) = ∫_Ω ϕ((dµ/dρ)(x) / (dν/dρ)(x)) (dν/dρ)(x) ρ(dx),

as recently observed in (Agrawal and Horel, 2021, Remark 19), and which of course remains invariant to the choice of ρ.

Remark 3. It is a common result in nonsmooth analysis (due to Hörmander (Penot, 2012, Corollary 1.81, p. 56)) that the mapping taking a set to its support function, D ↦ σ_D, is an injection from the family of closed convex subsets to the set of positively homogeneous functions that are null at zero. Thus it is natural, as well as meaningful for our subsequent analysis, to parameterise (9) by a convex set as in (6) or (7). That is, given ϕ, we will work with the convex set D ∈ K(R²) such that ϕ̃ = σ_D; an explicit formula for such a D in terms of ϕ is provided in Proposition 4 below. Since we will be considering n-ary extensions of I_ϕ, it is convenient to number the measure arguments and stack them into a vector E := (E_1, E_2), in which case the pair (E_1, E_2) may be interpreted, equivalently, as a binary experiment [2] ⇝ Ω.

Proposition 4. Suppose ϕ ∈ Φ̃ and let

D_ϕ := hyp(−ϕ*) = {(d_1, d_2) ∈ R² | d_2 ≤ −ϕ*(d_1)}.   (10)

Then σ_{D_ϕ} = ϕ̃ on R × R_{>0}, and consequently I_ϕ(E) = I_{D_ϕ}(E) for every binary experiment E = (E_1, E_2) with E_1 ≪ E_2.

Proof. The assumptions on ϕ ensure that it is closed. We have, for (x, y) ∈ R × R_{>0},

σ_{D_ϕ}(x, y) = sup_{d_2 ≤ −ϕ*(d_1)} (d_1 x + d_2 y) = sup_{d_1} (d_1 x − y ϕ*(d_1)) = y ϕ**(x/y) = y ϕ(x/y) = ϕ̃(x, y),

and the claim follows from (7) and (9).
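The following small sketch (ours; it assumes a finite space and ϕ(t) = t log t, the Kullback-Leibler generator) evaluates representation (9) and illustrates the claim that the choice of dominating measure ρ does not matter:

import numpy as np

def I_phi_perspective(mu, nu, rho):
    # representation (9): sum_x rho(x) * perspective-phi(dmu/drho, dnu/drho)(x),
    # where perspective-phi(a, b) = b * phi(a / b) and phi(t) = t * log(t)
    a, b = mu / rho, nu / rho
    return float(np.sum(rho * b * (a / b) * np.log(a / b)))

mu = np.array([0.6, 0.3, 0.1])
nu = np.array([0.3, 0.3, 0.4])
for rho in [(mu + nu) / 2, np.array([0.2, 0.3, 0.5])]:  # two different dominating measures
    print(I_phi_perspective(mu, nu, rho))               # identical values
print(float(np.sum(mu * np.log(mu / nu))))              # the classical KL(mu, nu)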
We also have the following converse result.

Proposition 6. Suppose D ∈ D² and define ϕ_D := σ_D(·, 1). Then ϕ_D ∈ Φ̃, and I_{ϕ_D}(E) = I_D(E) for every binary experiment E = (E_1, E_2) with E_1 ≪ E_2.

Proof. Support functions are convex and thus it is immediate that ϕ_D is too. We have, for y > 0, σ_D(x, y) = y σ_D(x/y, 1) = ϕ̃_D(x, y) by positive homogeneity, whence the claim follows from (7) and (9).

Since Proposition 4 shows every ϕ-divergence corresponds to a D-information, it is natural then to ask which D-informations correspond to ϕ-divergences. Similarly to Proposition 4 we may obtain, from any D ⊆ R², a convex, lower semicontinuous function ϕ_D : R → R̄ by the mapping D ↦ σ_D(·, 1). Ensuring that this function is finite on R_{≥0} and normalised appropriately to be consistent with (1) is more subtle. Since σ_D ≥ 0 (owing to the assumption 0 ∈ D), we must have inf σ_D = σ_D(1_n) = 0. Positive homogeneity of σ_D implies this holds along the ray {α 1_n | α > 0} too.

Thus the D-informations that correspond to a normalised ϕ-divergence (with ϕ strictly convex) are those strictly convex D ∈ D(R^n, R^n_{≤0}) which lie in the half space with outer normal vector 1_n and pass through the origin at their boundary. This normalisation corresponds to the well known fact that ϕ-divergences are insensitive to affine offsets:

Proposition 9. For ϕ ∈ Φ̃ and c ∈ R, I_{ϕ_c} = I_ϕ, where ϕ_c := ϕ + c(· − 1); at the level of sets this replaces D by D_c := D + {(c, −c)}.

Observe that transforming D to D_c corresponds to "sliding" D along the supporting hyperplane {x ∈ R² | ⟨x, 1_2⟩ = 0}, i.e. the boundary of H^≤_{1_2}.

Lemma 10. For any D ⊆ R^n, p ∈ R^n, and experiment E : [n] ⇝ Ω,

I_{D+{p}}(E) = I_D(E) + ⟨p, 1_n⟩.

Proof. From (7) we have

I_{D+{p}}(E) = ∫ σ_{D+{p}}(dE/dρ) dρ = ∫ (σ_D(dE/dρ) + ⟨p, dE/dρ⟩) dρ = I_D(E) + ⟨p, 1_n⟩,

where the second equality follows from additivity of support functions over Minkowski sums, and the fact that σ_{{p}}(x) = ⟨p, x⟩.

A special case of this result is when n = 2 and p = (c, −c), which corresponds to the situation of Proposition 9, showing that translating D in the manner of Lemma 10 corresponds to the classical result that an affine offset to ϕ does not change I_ϕ.

Remark 11. With this result it is clear that we can always canonically assume that for any D such that σ_D(1_n) = 0, we have 0_n ∈ bd D. To see this, suppose 0_n ∉ bd D, and denote by s the support point of D in direction 1_n (so ⟨s, 1_n⟩ = σ_D(1_n) = 0). Then using v = −s in the above lemma to determine D_v ensures I_{D_v} = I_D and that 0_n ∈ bd D_v. Requiring 0_n ∈ bd D and σ_D(1_n) = 0 corresponds, in the case that n = 2, to choosing the affine offset for ϕ such that ϕ is everywhere non-negative.

Proposition 12. I_D(E_tni) = 0 for every totally non-informative experiment E_tni if and only if σ_D(1_n) = 0.

Proof. Let D be such that I_D(E_tni) = 0 for all totally non-informative experiments E_tni. For such an experiment dE_tni/dρ = c · 1_n for some density c : X → R_+, so positive homogeneity gives I_D(E_tni) = ∫ σ_D(c(x) 1_n) ρ(dx) = σ_D(1_n). Hence σ_D(1_n) = 0. By definition of the support function, σ_D(1_n) = 0 means that the hyperplane {x | ⟨x, 1_n⟩ = 0} supports D, and thus D ⊆ H^≤_{1_n}.

In light of the above arguments, we define the class of such normalised D by¹¹

D^n := {D ∈ D(R^n, R^n_{≤0}) | σ_D(1_n) = 0} and D^n_0 := {D ∈ D^n | 0_n ∈ bd D}.

Observe that σ_D(1_n) = 0 and rec D = R^n_{≤0} together imply that D ⊆ lev_{≤0} ⟨·, 1_n⟩.
Given Remark 11, we could always restrict ourselves to D^n_0.

11. These sets are also called "comprehensive" ("downward" and convex); see (Martinez-Legaz, Rubinov, and Singer, 2002).

Although many of the results below hold for more general choices of D, one loses nothing (in terms of the expressive power of I_D) in restricting D to D^n or indeed D^n_0. In the case where n = 2, so Y = {1, 2}, for D ∈ D^n the corresponding function ϕ such that I_ϕ = I_D can be obtained as the mapping ϕ_D : x ↦ σ_D((x, 1)), and one is guaranteed that ϕ_D ∈ Φ̃ (Proposition 6). Some further observations on the relationship between D-information and ϕ-information are given in Remark 26.

Properties of D-information

The D-information is insensitive to certain operations on D: taking closed convex hulls, and taking Minkowski sums with the negative orthant.

Lemma 13. Suppose D ⊆ R^n and let E : [n] ⇝ Ω be an experiment. Then

I_D(E) = I_{cl co D}(E) = I_{D + R^n_{≤0}}(E).

Proof. Using some elementary properties of the support function (Auslender and Teboulle, 2003; Hiriart-Urruty and Lemaréchal, 2001), σ_D = σ_{cl co D}; appealing to Definition 7 this shows the first equality. In order to prove the second we use the fact that σ_C = ι_{C^+}, where C is a cone and C^+ is its dual cone (4). Thus

σ_{D + R^n_{≤0}} = σ_D + σ_{R^n_{≤0}} = σ_D + ι_{R^n_{≥0}},

where the last step is a consequence of (R^n_{≥0})^+ = R^n_{≥0} (Hiriart-Urruty and Lemaréchal, 2001, p. 49). Since the function dE/dρ maps into R^n_{≥0}, an appeal to the alternate definition (7) completes the proof.

Proposition 14. The D-information induces a quotient space on the closed convex sets, with D ∼ D′ precisely when cl(co D + R^n_{≤0}) = cl(co D′ + R^n_{≤0}). This quotient space is isomorphic to D(R^n, R^n_{≤0}).

Remark 15. Proposition 14 has a simple interpretation, since for all bounded subsets D ⊆ R^n, rec D = {0}; thus the equivalence relation applies to these in addition to any (unbounded) set that recesses in directions R ⊆ R^n_{≤0}. Thus D(R^n, R^n_{≤0}) is the natural parameter space for I_D.

Some ϕ-divergences (e.g. variational) are always bounded, and others (e.g. Kullback-Leibler) are not. There is a simple characterisation of when I_D is guaranteed to be bounded:

Proposition 16. I_D(E) < ∞ for every experiment E : [n] ⇝ Ω if and only if σ_D is finite on R^n_{≥0}.

If σ_D(x*) = ∞ for some x* ∈ R^n_{≥0}, then we can always choose E* such that (dE*/dρ)(z) = c x* for some c > 0 on a set of z of positive ρ-measure, and thus I_D(E*) = ∞. Furthermore, if I_D(E*) = ∞ for some E*, then it must be the case that for at least one z we have σ_D((dE*/dρ)(z)) = ∞.

Remark 17. We can express the Blackwell-Sherman-Stein theorem (Ginebra, 2007, section 3.2.2) in terms of I_D. Say one experiment E : [n] ⇝ X is better than F : [n] ⇝ X, and write E ≽ F, if there exists a Markov kernel T : X ⇝ X such that F = ET; that is, experiment F can be obtained from experiment E by applying some corruption kernel T. The theorem states:

E ≽ F ⟺ ∫_X f((dE/dρ)(x)) ρ(dx) ≥ ∫_X f((dF/dρ)(x)) ρ(dx) for all convex f : R^n_{≥0} → R̄.   (18)

(As usual, the choice of dominating measure ρ does not matter.) We now argue that we can replace f by σ_D with D ∈ D^n. Since (dE/dρ)(x), (dF/dρ)(x) ∈ R^n_{≥0} for all x, it suffices to ensure dom σ_D = R^n_{≥0}, which is guaranteed by the fact that rec D = R^n_{≤0}. Since f appears on both sides of (18), an additive offset is cancelled, and thus we can always subtract σ_D(1_n) from both sides, which is tantamount to assuming σ_D(1_n) = 0. Thus we can replace (18) by

E ≽ F ⟺ I_D(E) ≥ I_D(F) for all D ∈ D^n.

That is, E is better than F if and only if, for all D, the D-information of E is greater than or equal to the D-information of F; one cannot compare E and F in the absolute sense of ≽ by using only one measure of information.
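A quick numerical check of the "only if" direction of this equivalence (a sketch of ours, on finite spaces, not from the paper): garbling an experiment by a corruption kernel T never increases I_D, whatever polytope D we draw.

import numpy as np

rng = np.random.default_rng(0)

def d_information(E, vertices):
    # by 1-homogeneity of sigma_D the reference measure cancels pointwise
    return sum(max(float(d @ E[:, x]) for d in vertices) for x in range(E.shape[1]))

E = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])
T = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])      # corruption kernel on the outcome space
F = E @ T                            # the garbled experiment F = ET

for _ in range(5):
    D = [rng.normal(size=2) for _ in range(4)]          # a random polytope D
    print(d_information(E, D) >= d_information(F, D))   # True every time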
The Bridge between Information and Risk

Having introduced the D-information, in this section we show its connection to the Bayes risk, and present the corresponding information processing equality.

D-information and Bayes Risk

Classically, a loss function is a mapping ℓ : P([n]) × [n] → R̄_+, where the quantity ℓ(µ, y) is to be interpreted as the penalty incurred when predicting µ ∈ P([n]) under the occurrence of the event y ∈ [n]. A loss function is said to be proper if the expected loss under ν ∈ P([n]) is minimised by predicting ν itself, and strictly proper if this minimiser is unique. Considering a product space Ω × [n] and measures µ ∈ P(Ω × [n]), ν ∈ P([n]), we introduce two classical quantities, the Bayes risk

L(µ) := inf_{h ∈ L⁰(Ω, P([n]))} ∫_{Ω×[n]} ℓ(h(x), y) µ(d(x, y)),   (19)

and the conditional Bayes risk

L(ν) := inf_{q ∈ P([n])} Σ_{y∈[n]} ν_y ℓ(q, y).

These are related by

L(µ) = ∫_Ω L(µ_{Y|X=x}) µ_X(dx),

where µ_X is the law of X and µ_{Y|X} is the conditional distribution of Y given X. When µ is the joint measure induced by a prior π ∈ P([n]) and an experiment E : [n] ⇝ Ω, we write L(π, E) := L(µ). Since the Bayes risk and D-information can both be written in terms of a support function, it is unsurprising that there is a relationship between them, and in fact it is simple. In order to demonstrate this, we need some technical results first.

Lemma 19 ((Rockafellar and Wets, 2004, Theorem 14.60)). Suppose (Ω, Σ) is a measurable space and let F ⊆ L⁰(Ω, R^n) be decomposable relative to a sigma-finite measure ρ on Σ. Let ψ : Ω × R^n → R̄ be a normal integrand.¹⁴ Then

inf_{f∈F} ∫_Ω ψ(x, f(x)) ρ(dx) = ∫_Ω inf_{d∈R^n} ψ(x, d) ρ(dx),

provided the left-hand side is less than +∞.

14. The technical terms "normal integrand" and "decomposable" are defined by Rockafellar and Wets (2004, Definitions 14.27 and 14.59), to which we refer the reader for details.

Lemma 20. Let (Ω, Σ) be a measurable space, and ρ a sigma-finite measure on Σ. Let D ⊆ R^n be nonempty, closed and measurable, and let k : Ω × R^n → R̄ be such that k(·, d) is measurable for all d ∈ R^n and −k(x, ·) is convex and lower semi-continuous for all x ∈ Ω. Then

sup_{f ∈ L⁰(Ω,D)} ∫_Ω k(x, f(x)) ρ(dx) = ∫_Ω sup_{d∈D} k(x, d) ρ(dx).

Proof. In order to apply Lemma 19, let ψ(x, d) := ι_D(d) − k(x, d). Since ψ is the sum of the indicator function of a closed measurable set and an appropriately measurable, lower semicontinuous map, it is normal (Rockafellar and Wets, 2004, Proposition 14.39). The collection L⁰(Ω, R^n) is trivially decomposable. Therefore

sup_{f∈L⁰(Ω,D)} ∫ k(x, f(x)) ρ(dx) = −inf_{f∈L⁰(Ω,R^n)} ∫ ψ(x, f(x)) ρ(dx) = −∫ inf_{d∈R^n} ψ(x, d) ρ(dx) = ∫ sup_{d∈D} k(x, d) ρ(dx).

Proposition 22. Suppose E : [n] ⇝ Ω is an experiment and D ⊆ R^n is closed. Then

I_D(E) = sup_{f ∈ L⁰(Ω,D)} Σ_{i∈[n]} ∫_Ω f_i(x) E_i(dx).   (22)

Proof. The proposition follows from Lemma 20 with k(x, d) := ⟨(dE/dρ)(x), d⟩ and ρ := (1/n) Σ_{i∈[n]} E_i, along with the observation that since D is closed, it is also Borel measurable.

It will be convenient to write the prior π as a vector (π_1, . . . , π_n) ∈ R^n. We now present the relationship between D-information and the Bayes risk.¹⁵

Theorem 23. Suppose ℓ is a loss function with superprediction set spr(ℓ), E : [n] ⇝ Ω, and π ∈ P([n]). Let π · spr(ℓ) := {π · f | f ∈ spr(ℓ)} denote the image of spr(ℓ) under the Hadamard vector product π · f = (π_1 f_1, . . . , π_n f_n). Then

I_{−π·spr(ℓ)}(E) = −L(π, E).   (23)

Proof. Equation (23) is obtained from Lemma 21 and Proposition 22 as follows:

I_{−π·spr(ℓ)}(E) = sup_{f ∈ L⁰(Ω, −π·spr(ℓ))} Σ_{i∈[n]} ∫_Ω f_i dE_i = −inf_{g ∈ L⁰(Ω, spr(ℓ))} Σ_{i∈[n]} π_i ∫_Ω g_i dE_i = −L(π, E),

where the first equality is Proposition 22, the second substitutes f = −π · g, and the third is Lemma 21 (optimising over spr(ℓ)-valued functions is equivalent to optimising over ℓ ∘ h for h ∈ L⁰(Ω, P([n]))).

Remark 24. The relationship D = −π · spr(ℓ) is a generalisation of that developed for ϕ-divergences (n = 2) in (Reid and Williamson, 2011), as we now elucidate. Inverting the relationship we have spr(ℓ) = −(1/π) · D, where 1/π := (1/π_1, . . . , 1/π_n). It is elementary to verify this inversion.

15. This theorem (sans the geometric insight) was presented by Garcia-Garcia and Williamson (2012), and restated in a related form by Duchi, Khosravi, and Ruan (2018b). It both extends and simplifies the version for n = 2 presented by Reid and Williamson (2011), which itself extended beyond the symmetric (margin loss) case the version due to Nguyen, Wainwright, and Jordan (2009), which first appeared in (Nguyen, Wainwright, and Jordan, 2005), and which in turn extended the observations of Österreicher and Vajda (1993) and (Gutenbrunner, 1990).
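Theorem 23 is easy to check numerically. The following sketch (ours, on a finite outcome space) uses the 0-1 loss, representing spr(ℓ) by its extreme points 1_n − e_a:

import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 4                                    # n labels, m outcomes
E = rng.dirichlet(np.ones(m), size=n)          # row i is the pmf E_i
pi = np.array([0.5, 0.3, 0.2])                 # prior on the labels

# 0-1 loss: the point prediction a incurs the loss vector 1_n - e_a;
# these vectors are the extreme points of the superprediction set spr(l)
spr_vertices = [np.ones(n) - np.eye(n)[a] for a in range(n)]
D_vertices = [-pi * v for v in spr_vertices]   # D = -pi . spr(l)

joint = pi[:, None] * E                        # pi_i * E_i(x)
bayes_risk = sum(min(col.sum() - col[a] for a in range(n)) for col in joint.T)

I_D = sum(max(float(d @ E[:, x]) for d in D_vertices) for x in range(m))
print(bayes_risk, -I_D)                        # the two numbers agree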
Earlier attempts to connect measures of information to Bayes risks include Fano's inequality (Fano, 1961, Section 9.2; Polyanskiy and Wu, 2019, Section 5.3), and the inequalities derived by Pérez (1967) and Toussaint (1974, 1977, 1978). Other precursors are the generalised entropies of Dupuis et al. (2014), defined in terms of a Neyman-Pearson hypothesis testing problem (and thus equivalent to generalised variational divergence). In the binary case, with variational divergence and 0-1 loss, the bridge is classical (Devroye, Györfi, and Lugosi, 2013). There is now quite a literature on information-theoretic statistical inference based on divergences (Pardo, 2018); the bridge described in the present section suggests that such methods can be profitably viewed as a re-parametrisation of classical decision-theoretic methods based on expected losses. The relationship between measures of information and the Bayes risk was also observed in (Chatzikokolakis, Palamidessi, and Panangaden, 2008) for information security problems, and in (Alvim et al., 2012) for general information leakage problems.

The witness to the supremum in I_D

I_D is defined via a supremum. There is insight to be had by examining the function that attains it. Let ∇σ_D be a selection of ∂σ_D. Euler's homogeneous function theorem gives σ_D(x) = ⟨x, ∇σ_D(x)⟩, and the 1-homogeneity of σ_D implies ∂σ_D is 0-homogeneous (so for any c > 0, ∂σ_D(cx) = ∂σ_D(x)). We can thus determine the argmax in (6).

Proposition 25. Suppose E : [n] ⇝ Ω is an experiment and D ⊂ R^n. Let ρ be a measure that dominates each of the measures (E_y)_{y∈Y}. Then if σ_D is finite on R^n_{>0} there exists a selection ∇σ_D ∈ ∂σ_D over R^n_{>0}, and

I_D(E) = Σ_{i∈[n]} ∫_Ω (∇σ_D((dE/dρ)(x)))_i E_i(dx) = ∫_Ω ⟨∇σ_D((dE/dρ)(x)), (dE/dρ)(x)⟩ ρ(dx).

Note that the requirement is only that σ_D is finite on R^n_{>0}, not on R^n_{≥0}, which would exclude standard unbounded information measures such as Kullback-Leibler divergence.

Proof. We have

I_D(E) = ∫ σ_D(dE/dρ) dρ = ∫ sup_{d∈D} ⟨d, dE/dρ⟩ dρ = sup_{f∈L⁰(Ω,D)} ∫ ⟨f, dE/dρ⟩ dρ = ∫ ⟨∇σ_D(dE/dρ), dE/dρ⟩ dρ,

where in the fourth equality we apply Lemma 20 with k = ⟨·, ·⟩ and note that the supremum is attained by the selection f = ∇σ_D ∘ (dE/dρ). This proves the first equality since ∇σ_D is 0-homogeneous. Euler's homogeneous function theorem implies ⟨∇σ_D(x), x⟩ = σ_D(x). This shows the second equality.

Remark 26. It is instructive to evaluate the witness of the supremum in Proposition 25 in the case of Y = {1, 2}, in terms of the ϕ-divergence parameterisation of I_D. With D_ϕ as in (10), we have σ_{D_ϕ} = ϕ̃. Assume ϕ is differentiable, so g_ϕ := Dσ_{D_ϕ} = Dϕ̃ exists, and by direct calculation we obtain

Dϕ̃(x, y) = (ϕ′(x/y), ϕ(x/y) − (x/y) ϕ′(x/y)),

with the witness given by x ↦ Dϕ̃((dE_1/dρ)(x), (dE_2/dρ)(x)); substituting into Proposition 25 recovers

I_ϕ(E) = ∫ ϕ(dE_1/dE_2) dE_2,

which is the classical form of the ϕ-divergence (2).
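For the Kullback-Leibler set D_ϕ the claims above are easy to verify numerically (a sketch of ours; for ϕ(t) = t log t, σ_{D_ϕ}(x, y) simplifies to x log(x/y)):

import numpy as np

def sigma(x, y):                  # sigma_{D_phi}(x, y) = y * phi(x/y) = x * log(x/y)
    return x * np.log(x / y)

def grad_sigma(x, y):             # D perspective-phi = (phi'(x/y), phi(x/y) - (x/y) phi'(x/y))
    t = x / y
    return np.array([np.log(t) + 1.0, -t])

x, y = 0.7, 0.25
g = grad_sigma(x, y)
print(np.isclose(g @ np.array([x, y]), sigma(x, y)))   # Euler's identity holds
print(np.allclose(g, grad_sigma(3 * x, 3 * y)))        # the gradient is 0-homogeneous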
The Family of D-informations

In Theorem 23 we observed an interesting connection between the Bayes risk associated to a risk minimisation problem and the negative D-information associated with its negative superprediction set. There is also a similar asymptotic characterisation of the superprediction sets of positive loss functions. Proposition 28 is a special case of a much more general result stated for superprediction sets on general outcome spaces in (Cranko, 2021, Proposition 4.6 (a), p. 67). We include its short proof for completeness.

Proposition 28. Suppose ℓ : P([n]) → R^n_{≥0} is a loss function. Then rec(spr(ℓ)) = R^n_{≥0}.

Proof. We first use the property that A ⊆ B implies rec(A) ⊆ rec(B), together with ℓ(P([n])) ⊆ R^n_{≥0}: since spr(ℓ) = ℓ(P([n])) + R^n_{≥0} ⊆ R^n_{≥0}, we have rec(spr(ℓ)) ⊆ rec(R^n_{≥0}) = R^n_{≥0}. This shows rec(spr(ℓ)) ⊆ R^n_{≥0}. Next, using the associativity of the Minkowski sum,

spr(ℓ) + R^n_{≥0} = ℓ(P([n])) + (R^n_{≥0} + R^n_{≥0}) = ℓ(P([n])) + R^n_{≥0} = spr(ℓ),

which shows R^n_{≥0} ⊆ rec(spr(ℓ)), and completes the proof.

After observing that −rec(D) = rec(−D), Propositions 27 and 28 yield another characterisation of the connection between the D-information and Bayes risks with nonnegative proper loss functions, this time in terms of the asymptotic geometry of these sets. Although it may seem coincidental that, despite very different origins and motivating definitions, the sets spr(ℓ) and −D look very similar from afar, this relationship is not at all surprising when parameterising these functionals using a set, as we have done. The bilinearity of the expectation operator means that we are working with a pointwise infimum or supremum over linear forms, and thus, without loss of generality, we can replace the set by its closed convex hull. This explains the natural characterisation in terms of the support function (Remarks 2 and 18). Since both of these functionals operate on sets of probability measures, in order for them to be meaningful they should be sufficiently finite; this is the essence of the asymptotic characterisations in Propositions 27 and 28.

Remark 29. We have shown that the recession cone of spr(ℓ) is such that the induced D has the right recession cone for D-information, but what about normalisation? In the same way that there is some freedom in normalising D, we have freedom in normalising ℓ. In previous work (Vernet, Williamson, and Reid, 2016; Williamson, 2014; Williamson and Cranko, 2022) we have normalised proper losses ℓ such that ℓ(e_i) = 0 for i ∈ [n] (where e_i is the canonical unit vector). This implies that spr(ℓ) ⊂ R^n_{≥0}. For the present paper it is more convenient to normalise such that

ℓ(1_n/n) = 0_n and σ_{spr(ℓ)}(1_n/n) = 0.   (26)

The bridge from risks to information requires the specification of the prior π, which can be seen to effectively scale spr(ℓ) separately in each dimension. Of course the simplest case to consider is π = 1_n/n, in which case it follows immediately that if ℓ satisfies (26) then D := −π · spr(ℓ) satisfies σ_D(1_n) = 0 (whence D ⊆ lev_{≤0} ⟨·, 1_n⟩) and 0 ∈ bd D, and consequently D ∈ D^n. Given an ℓ that does not satisfy (26), it can be made to do so by translation and scaling. Thus the normalisation conditions we impose upon D can always be met by suitable adjustment of ℓ. Adopting the normalisation in (26) means that for π = 1_n/n, the statistical information of DeGroot (1962) is simply the negative Bayes risk, because σ_{spr(ℓ)}(1_n/n) = 0 implies the "prior Bayes risk" L(π, M) is zero; see (Reid and Williamson, 2011, Sections 4.6 and 4.7).

Remark 30. The bridge result (Theorem 23) implies that any means by which multiple loss functions are combined, by combining their superprediction sets, provides an analogous combination scheme for information measures, by combining D_i ∈ D^n, i ∈ [m]. The combination schemes in (Williamson and Cranko, 2022) based upon M-sums (Gardner, Hug, and Weil, 2013) suggest one can simply take M-sums of the D_i. This generalises the combination schemes proposed by Kůs (2003) and Kůs, Morales, and Vajda (2008).

D-Information Processing Equality

One of the most basic results in information theory is the information processing inequality (Cover and Thomas, 2012). It is often stated in terms of mutual information, but there is an equivalent version in terms of divergences (Polyanskiy and Wu, 2019). We defer until part II of the paper (Williamson, 2023) a detailed statement and examination of the connection between the two types, and indeed the connection with what we present below. Rather than an inequality, below we present an information processing equality, with, however, a different measure of information on either side of the equation. Also, the result below is for what in the machine learning community is called "label noise".
The traditional information processing inequality is for the situation of "attribute noise" and is treated in §5 below.

Proposition 31. Suppose E : [n] ⇝ Ω, R : [n] ⇝ [n], and D ⊆ R^n. Then

I_D(RE) = I_{R^T D}(E),   (27)

where R^T D := {R^T d | d ∈ D} uses the representation of R as a matrix.

Proof. Identifying R with its representation as a stochastic matrix, and writing dE/dρ for the vector (dE_1/dρ, . . . , dE_n/dρ), we have d(RE)/dρ = R (dE/dρ), and hence

σ_D(d(RE)/dρ) = sup_{d∈D} ⟨R (dE/dρ), d⟩ = sup_{d∈D} ⟨dE/dρ, R^T d⟩ = σ_{R^T D}(dE/dρ);

integrating against ρ and appealing to (7) completes the proof.

Remark 32. Observe that an n × n permutation matrix P can be thought of as a Markov kernel P : [n] ⇝ [n]. Let PD := {Pd | d ∈ D} and say that D ⊆ R^n is permutation invariant if for any such P, PD = D. Then Proposition 31 implies for such D that I_D(PE) = I_{P^T D}(E) = I_D(E), since P^T is also a permutation matrix. Thus, in this situation, I_D is permutation invariant. An equivalent, but less elegant, version of this observation was given in (Garcia-Garcia and Williamson, 2012). When n = 2, and D is parametrised by ϕ as in Proposition 4, this invariance corresponds to the requirement that for all x > 0, ϕ(x) = ϕ^⋄(x) := x ϕ(1/x) (the Csiszár conjugate of ϕ), which implies I_ϕ(P, Q) = I_ϕ(Q, P).

Example 33 (Label Noise). Let E : [n] ⇝ Ω be an experiment and R : [n] ⇝ [n] a Markov kernel. We can thus form the composite experiment RE : [n] ⇝ Ω. This corresponds to "label noise", that is, noise in the observations of Y. Instead of learning from (X, Y) one only gets to observe (X, Ỹ) for some corrupted version Ỹ of the true label Y. For example, when n = 2 and Y ∈ [2] one might have a label flip with probability p. This corresponds to R having the representation as the stochastic matrix

R = [ 1−p  p
      p  1−p ].

Then for D ⊆ R^n, Proposition 31 gives I_D(RE) = I_{R^T D}(E).

When n = 2 one can translate the result of Proposition 31 into the language of ϕ-divergences, in which form the result is less perspicuous than (27).

Corollary 34. Suppose I_ϕ is a ϕ-divergence and R : [2] ⇝ [2] is the Markov kernel parametrised as

R = [ r_1  1−r_1
      1−r_2  r_2 ].

Then I_ϕ(RE) = I_{ϕ_R}(E), where

ϕ_R(z) = ((1−r_2) z + r_2) ϕ( (r_1 z + 1 − r_1) / ((1−r_2) z + r_2) ).

Remark 35. Example 33 corresponds to previous work on loss correction, whereby learning with a given loss with noisy labels is equivalent to learning with a "corrected loss" with noiseless labels; see e.g. (Patrini et al., 2017; van Rooyen, Menon, and Williamson, 2015; van Rooyen and Williamson, 2018).
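A numerical sketch of Proposition 31 for the label-flip channel of Example 33 (our own code, on a finite space; D is taken to be the variational set co{(1, −1), (−1, 1)}):

import numpy as np

def d_information(E, vertices):
    # by 1-homogeneity of sigma_D the reference measure cancels pointwise
    return sum(max(float(d @ E[:, x]) for d in vertices) for x in range(E.shape[1]))

E = np.array([[0.6, 0.3, 0.1],
              [0.1, 0.4, 0.5]])
p = 0.2
R = np.array([[1 - p, p],
              [p, 1 - p]])                           # label-flip kernel
D = [np.array([1.0, -1.0]), np.array([-1.0, 1.0])]   # variational divergence
RTD = [R.T @ d for d in D]

print(d_information(R @ E, D))                       # I_D(RE)
print(d_information(E, RTD))                         # I_{R^T D}(E): the same number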
Constrained Information Measures - F-information

As we saw in §4.1, Theorem 23 shows how the D-information can be connected to risk minimisation. In practice one can never actually attain the Bayes risk, because with access only to a finite number of samples rather than the exact underlying distribution, one needs to restrict the hypothesis class (Vapnik, 1998) in order to make the optimisation in (19) well-posed (when using empirical measures). It is helpful to consider the formula for the (unconstrained) Bayes risk in (19), repeated below for convenience:

L(µ) = inf_{h ∈ L⁰(Ω, P([n]))} ∫_{Ω×[n]} ℓ(h(x), y) µ(d(x, y)).

Embracing the above viewpoint, we modify this by optimising over H ⊊ L⁰(Ω, P([n])), and call this restriction the constrained Bayes risk:

L_H(µ) := inf_{h ∈ H} ∫_{Ω×[n]} ℓ(h(x), y) µ(d(x, y)).   (28)

There is a slight redundancy in (28), since the function class H only appears via composition with the loss function ℓ. When viewed in terms of information rather than risk, (28) is precisely the measure of information which we now introduce.

F-information

Recall the expression for I_D(E) in (22) (swapping the integral and the sum for convenience in what follows):

I_D(E) = sup_{f ∈ L⁰(Ω,D)} ∫_Ω ⟨f(x), (dE/dρ)(x)⟩ ρ(dx).

If now we restrict the supremum to be over a set F ⊆ L⁰(Ω, D), the F-information of an experiment E : [n] ⇝ Ω is¹⁶

I_F(E) := sup_{f ∈ F} ∫_Ω ⟨f(x), (dE/dρ)(x)⟩ ρ(dx),   (29)

where ρ is an arbitrarily chosen reference measure.

16. There are a number of precursors of I_F, which are summarised in Appendix D.

With notation that is consistent with Theorem 23, for a prior π ∈ P([n]), a loss function ℓ, an experiment E, and a hypothesis class H ⊆ L⁰(Ω, P([n])), the F-information is related to the constrained Bayes risk (28) by I_F(E) = −L_H(π, E), with F = −π · (ℓ ∘ H) := {x ↦ −(π_1 ℓ(h(x), 1), . . . , π_n ℓ(h(x), n)) | h ∈ H}, that is, the composition of H with ℓ, scaled by π. (This is proved below in Theorem 37.)

Consider the collection L⁰(X, D) of measurable mappings from X to some D ∈ D^n. We call F ⊆ L⁰(X, D) D-ranged when every d ∈ D may be attained by f(x) for some f ∈ F and x ∈ X. Every D-ranged F is thus a collection of appropriately measurable mappings from X into D whose ranges jointly cover D. The maximal (by subset ordering) D-ranged F is simply L⁰(X, D), the set of all measurable mappings from X to D. Choosing smaller sets is equivalent to working with restricted hypothesis classes in normal statistical decision problems (an assertion we make precise below). The extra flexibility of working with such constrained function classes is necessary to capture the effects of attribute noise on information measures.

If D ∈ D(R^n, R^n_{≤0}) and F_D := L⁰(X, D), then it is apparent from (6) that I_{F_D} = I_D; for any other F ⊆ L⁰(X, D) we obviously have I_F ≤ I_D, since the supremum is further restricted. If furthermore D is normalised (i.e. D ∈ D^n) and F is D-ranged, then 0 ≤ I_F(E), as can be seen by considering F = C_D, the constant maps into D, whence I_{C_D}(E) = sup_{d∈D} ⟨d, 1_n⟩ = σ_D(1_n) = 0. Thus for D ∈ D^n and D-ranged F,

0 ≤ I_F(E) ≤ I_D(E).

The F-information is invariant under taking convex hulls and closures (Müller, 1997):

Proposition 36. For F ⊆ L⁰(X, D) with I_D(E) < ∞, I_F(E) = I_{co F}(E) = I_{cl co F}(E).

Proof. We need to justify the interchange of the order of summation at (31)-(32). The reordering can only fail if there are two subsequences, one diverging to −∞ and one to +∞, which cancel each other out. But this is impossible because I_F(E) ≤ I_D(E) < ∞, and thus there can be no terms that diverge to +∞ (even though it is possible that f_i(x) = −∞, such f would not be chosen by the supremum operation, and all the α_i ∈ [0, 1]). This proves the first equality. We can now assume F is convex. We need to show I_{cl F}(E) = I_F(E). The map f ↦ ∫ ⟨f, dE/dρ⟩ dρ is linear and thus continuous for any E. The supremum of a continuous real-valued function over the closure of a set is equal to the supremum over the set, which proves the second equality.

Thus there is no loss of generality in henceforth assuming that F is closed and convex, as has been observed in the special case when n = 2 and D = D_var (defined in Lemma 48) corresponding to "integral probability metrics", which are variants of variational divergence with a restricted function class (Müller, 1997). Equivalently, if F is not closed and convex, one can take the closed convex hull and not change the value of I_F(E) (nor indeed change the Rademacher complexity of F (Bartlett and Mendelson, 2002)). Since convex function classes enable fast rates of convergence (Mendelson and Williamson, 2002; van Erven et al., 2015) and optimization is in principle simpler, this is an appealing restriction, and one which is receiving practical attention in the form of infinitely wide neural networks (Ergen and Pilanci, 2021). If F is closed and convex then so is F(X).
F-Information Processing Equalities

The definition of F-information (29) implies an immediate result similar to Proposition 31.

Proposition 39. Suppose E : [n] ⇝ Ω, R : [n] ⇝ [n], and F ⊆ L⁰(Ω, R^n). Then

I_F(RE) = I_{R^T F}(E), where R^T F := {x ↦ R^T f(x) | f ∈ F}.

The additional generality of F-information yields another kind of information processing equality, one which is more aligned with the traditional formulation of information processing inequalities. Rather than the "processing" being on the labels (the Y in the usual terminology) as in Proposition 39, it is applied to the output (the X of the experiment).

Theorem 40. Suppose E : [n] ⇝ Ω, S : Ω ⇝ Ω′, and F ⊆ L⁰(Ω′, R^n_{≤0}). Then

I_F(ES) = I_{SF}(E),

where SF := {Sf | f ∈ F} and (Sf)(x) := ∫_{Ω′} f(x′) S(x, dx′) componentwise.

Proof. From the linearity of the integral, for each i ∈ [n] and all f ∈ F,

∫_{Ω′} f_i(x′) (E_i S)(dx′) = ∫_Ω ∫_{Ω′} f_i(x′) S(x, dx′) E_i(dx) = ∫_Ω (S f_i)(x) E_i(dx).

In the final equality we apply Tonelli's theorem (Fubini's theorem for sign-definite integrands) to exchange the order of integration. We can do so since, by assumption, all f ∈ F are non-positive, and for all x′ and i the measures S(x′, ·) and E(i, ·) are probability measures and thus σ-finite. Taking suprema over f ∈ F then yields I_F(ES) = I_{SF}(E).

Remark 42. More generally, the condition in Theorem 40 that F ⊆ L⁰(Ω′, R^n_{≤0}) can be relaxed whenever (ES)f = E(Sf). For example: 1. F ⊆ L⁰(Ω′, R^n_{≥0}), using Tonelli's theorem with −f; 2. F bounded, with ∥f∥ < ∞ for all f ∈ F.

Proof. Fix µ ∈ P(Ω₂). The normalising term is 1 because, observing that the integrand is nonnegative and S is a Markov kernel, we can apply Tonelli's theorem to obtain ∫∫ k(x, y) µ(dx) λ(dy) = ∫ µ(dx) = 1. By hypothesis ∥f∥ < ∞, which completes the proof.

Consider now the composite experiment ES : [n] ⇝ Ω. This corresponds to "attribute noise", that is, noise in the observations of X. For example, instead of learning from (X, Y) one only gets to observe (X + N, Y) for some independent noise random variable N. More general (non-additive) corruptions are possible, but this additive one will be of particular interest.

The Information Processing Equality in terms of Constrained Bayes Risk

We can express the information processing equality in terms of Bayes risks.

Corollary 44. Suppose ℓ : ∆_n → R^n_{≥0} is a continuous proper loss, π ∈ P([n]) a prior distribution on [n], and H ⊆ L⁰(X, P([n])) an hypothesis class. Then

I_{co(−π·(ℓ∘H))}(ES) = I_{co(−π·S(ℓ∘H))}(E);

that is, the constrained Bayes risk of the corrupted experiment ES under H equals the constrained Bayes risk of E under the class of smoothed loss compositions S(ℓ ∘ H).

Proof. Let F = co(−π · (ℓ ∘ H)). By combining Theorem 37 with Theorem 40 we have I_F(ES) = I_{SF}(E). The last equality is justified as follows. For any set A of functions and any Markov kernel S, we have S co A = co(SA), since S acts linearly on functions. Furthermore, for any v ∈ R^n and any set C of functions, S(v · C) = v · (SC), since S acts coordinatewise and commutes with multiplication by constant vectors. These two facts together imply SF = S co(−π · (ℓ ∘ H)) = co(S(−π · (ℓ ∘ H))) = co(−π · S(ℓ ∘ H)), and a second appeal to Theorem 37 concludes the proof.
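On finite spaces, Theorem 40 can be checked directly. In the sketch below (ours, not from the paper), the experiment, the channel and the function class are all random, and the functions are kept non-positive as the theorem requires:

import numpy as np

rng = np.random.default_rng(2)
n, m = 2, 5
E = rng.dirichlet(np.ones(m), size=n)          # experiment: n pmfs on m outcomes
S = rng.dirichlet(np.ones(m), size=m)          # observation channel, row-stochastic

F = [-rng.random((n, m)) for _ in range(6)]    # finite class of R^n_{<=0}-valued maps
SF = [f @ S.T for f in F]                      # (Sf)(x) = sum_{x'} S(x, x') f(x')

def I_F(E, Fclass):
    return max(float(np.sum(f * E)) for f in Fclass)   # sum_i sum_x f_i(x) E_i(x)

print(I_F(E @ S, F))                           # I_F(ES)
print(I_F(E, SF))                              # I_{SF}(E): the same number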
Remark 45. Kernel methods in machine learning (Schölkopf and Smola, 2001) are so named because of the kernel of the integral operator T_k : L² → L² given by

(T_k f)(x) := ∫ k(x, x′) f(x′) dx′.   (37)

One can view the usual hypothesis class in kernel ML methods as the image of the unit ball under this operator (Williamson, Smola, and Schölkopf, 2001). But Markov kernels can also be written in a similar form. As Çinlar (2011, pages 38 and 46) observes, we can express a Markov kernel K : Y ⇝ X as

K(y, dx) = k(y, x) ρ(dx),   (38)

where k is known as a kernel density relative to the reference measure ρ, and the operation of K on a function f : X → R can be written as

(Kf)(y) = ∫ k(y, x) f(x) ρ(dx).

Comparing (37) and (38) we see that the Markov kernel performs a similar smoothing operation to T_k. When one takes account of Theorem 40, one concludes that the choice of a kernel in a kernel learning machine is in effect a hypothesis about the type of noise the observations will be affected by. For example, using a Gaussian translation-invariant kernel is an inductive bias which implicitly assumes the X measurements are corrupted by additive Gaussian noise. (This last statement is perhaps misleading; we stress that it is ℓ ∘ H which is smoothed by the kernel K, not H itself. Understanding the effect of K directly on H seems challenging.)

Conclusion

There is no in-formation, only trans-formation. - Bruno Latour¹⁷

Motivated by the epigram at the beginning of the paper, we have used Grothendieck's "relative method", whereby one understands an object not by studying the object itself, but by studying its morphisms. We have seen that by construing information processing as a transformation on the type of information, rather than as a manipulation of the amount of some fixed type of information, one obtains new insights into the nature of information. In doing so, we formulated a substantial generalisation of information, which subsumes existing measures, including ϕ-divergences and MMD. The D- and F-informations also induce corresponding notions of entropy (see Appendix C). Their naturalness is manifest in the general bridge to Bayes risks and constrained Bayes risks. By working with the variational form in which we define them, we can readily determine the effect of noisy observations. We have shown that for both label noise and attribute noise, the effect of the noisy observations can be captured by a change of the measure of information used. This leads to information processing equalities instead of the traditional inequalities (which themselves are one of the basic results in information theory, underpinning the notion of statistical sufficiency). The new measures of information provide insight into the variational representation of ϕ-divergences, as well as a new interpretation of the choice of kernel in SVMs and MMD. The bridge results offer a way to avoid duplicate analytical work: for example, one does not need to separately analyse the estimation properties of statistical divergences (Sreekumar and Goldfeld, 2022); one can simply convert to the equivalent statistical decision problem, for which many results already exist. In light of the bridge result, one should hardly be surprised that the constrained variational representation of ϕ-divergence has generalization performance controlled by the Rademacher complexity of the discriminator set (Zhang et al., 2017).

The information processing equality for F-information generalises an insight developed by Bishop (1995) that the addition of noise (in training) is equivalent to a form of regularization.¹⁸ It goes beyond Bishop's result in that it applies to any "noise" (not necessarily additive) and explains the effect of "adding" noise precisely in terms of the effect on the hypothesis class. The information processing results in the paper differ from the classical ones in that they change the measure of information used. This is metaphorically changing the "ruler" used to measure information on either side of the noisy channel. The bridge between information and expected loss shows that there is no reason to expect there is a single canonical measure of information (as soon as one accepts there is no single canonical loss function). Taken as a whole, the results show that, at least for questions relating to prediction and learning, it makes no sense to talk of "the" information in one's data. While it is widely accepted that different problems demand different loss functions, it is also often assumed that Shannon information is the only measure of "information" available.¹⁹ For example, Rauh et al.
(2017) make much of the fact that although the worst Bayes risk (over all losses) of an experiment may be made worse after passing through a channel, particular measures of information may not be degraded at all. Given the bridge between risks and measures of information, this can be seen as simply a mistake about quantification; the Blackwell-Sherman-Stein (BSS) theorem (recall Remark 17) to which they appeal is stated in terms of either all loss functions or all measures of information. Similarly, in much recent work in ML, the choice of a particular measure of information is taken to be essentially one of convenience (in the way that one's choice of loss function ideally is), and not related to the underlying problem to be solved; see, e.g., (Terjék, 2021), which has further examples. The results of the paper show that choosing one's measure of information is literally equivalent to choosing one's loss function in a statistical decision problem, and thus is significant, consequential, and not a mere matter of convenience or convention.

18. Bishop's result is not quite the whole story, as explained by An (1996). But the general conclusion is correct: adding noise to the input data encourages the learned model to be smoother than it would have been otherwise; confer (Grandvalet, Canu, and Boucheron, 1997).
19. This is a point well acknowledged by information theorists: "[T]he fact that entropy has been proved in a meaningful sense to be the unique correct information measure for the purposes of communication does not prove that it is either unique or a correct measure to use in some other field in which no issue of encoding or other changes in representation arises" (Elias, 1983, page 500).

Appendix A. The ϕ-divergence and its Variational Representation

In this appendix we present some facts concerning the classical ϕ-divergences and their variational representation, and their relationship to our D- and F-informations.

A.1 Some examples of D_ϕ

When n = 2, we can compute some examples for classical ϕ-divergences; see Table 1. In all cases, ϕ*(0) = 0, and thus hyp(−ϕ*) ∈ D². (We adopt the convention that ⟦false⟧ · ∞ = 0.) Figure 2 illustrates D_ϕ and C_ϕ = (D_ϕ)° for three different ϕ (for such figures, it is helpful to use (−hyp ϕ*)° = lev_{≤1} ϕ̃).

It is apparent from the proof of the above that the asymmetry in the usual variational representation

I_ϕ(E_1, E_2) = sup_{g ∈ L⁰(X, R)} ( ∫ g dE_1 − ∫ ϕ*(g) dE_2 ),   (39)

whereby ϕ* appears in only one of the terms, arises from the choice of E_2 as the dominating measure and the parametrisation of D ∈ D² by ϕ ∈ Φ. Such a choice is problematic if E_2 does not dominate E_1, leading to less elegant general definitions being necessary for I_ϕ (Liese and Vajda, 2006, 2008). The one advantage of (39) over (6) when Y = [2] is that the optimisation is over R-valued functions rather than R²-valued functions. However, as seen in Section 3, the symmetric representation (6) has significant advantages in understanding the effect of the product of experiments (in the form of observation channels). When ϕ = ϕ_var := t ↦ |t − 1|, I_{ϕ_var} is known as the variational divergence, which is examined in detail in §A.3. Finally, the form of (39) suggests the variant

I_H(E_1, E_2) := sup_{g ∈ H} ( ∫ g dE_1 − ∫ ϕ*(g) dE_2 ),

where H ⊊ L⁰(X, R). The functional I_H is what is estimated in practice, by virtue of the choice of a suitable class over which to empirically optimise (39), often replacing the E_i by their empirical approximations Ê_i. An alternate way of expressing the general form of I_F(E) that is similar to the classical variational representation of a binary ϕ-divergence is given below.
Let H ⊆ L⁰(X, P([n])). For D ∈ D^n, assume there is a measurable selection ∇σ_D ∈ ∂σ_D (confer Proposition 25). We can thus write

I_{∇σ_D ∘ H}(E) := sup_{h ∈ H} ∫_X ⟨∇σ_D(h(x)), (dE/dρ)(x)⟩ ρ(dx).   (40)

This is a way to use classes of functions mapping to R^n in an elegant manner to define a restricted version of I_D. Observe that (40) is symmetric in the appearance of ∇σ_D, in a manner that (39) is not, but one needs to work with vector-valued functions h : X → R^n. Given a function class R ⊆ L⁰(X, R), one could induce H_R := {X ∋ x ↦ (r_1(x), . . . , r_n(x)) | r_i ∈ R for all i ∈ [n]}, allowing us to define I_{R,D}(E) := I_{∇σ_D ∘ H_R}(E).

Lemma 47. The Legendre-Fenchel conjugate of ϕ_Var is given by

ϕ*_Var(s) = s + ι_{[−1,+1]}(s).

If, for any x ∈ X, g(x) ∉ [−1, +1], then the second term in (39) will be infinite, which will push the whole value to −∞. Since the objective is linear and the constraint set convex, the supremum is attained at the boundary, and hence

I_{ϕ_Var}(E) = sup_{g : X→{−1,1}} ( ∫ g dE_1 − ∫ g dE_2 ) = 2 sup_{A ∈ Σ_X} |E_1(A) − E_2(A)|,   (42)

where the last step is shown in (Strasser, 1985). Let

H^≤_{v,r} := {x ∈ R^n | ⟨x, v⟩ ≤ r}   (41)

denote the negative halfspace with normal vector v and offset r. The set D_var can be written as in (43) below.

Lemma 48. With ϕ_Var = |· − 1|,

D_var := H^≤_{e_1,1} ∩ H^≤_{e_2,1} ∩ H^≤_{1_2,0}   (43)

satisfies I_{D_var} = I_{ϕ_Var}.

Proof. We have hyp(−ϕ*_Var) = {(d_1, d_2) | d_1 ∈ [−1, 1], d_2 ≤ −d_1}. Now Lemma 13 implies I_D(E) = I_{co D}(E) = I_{D+R²_{≤0}}(E), and hence we can take the convex hull of the above (and add the recession directions R²_{≤0}) to obtain {(d_1, d_2) | d_1 ≤ 1, d_2 ≤ 1, d_1 + d_2 ≤ 0}. We can thus more compactly write D_var as in (43).

Lemma 48 suggests the following generalisation, which we now take as a definition:

D^{(n)}_var := ⋂_{i∈[n]} H^≤_{e_i,1} ∩ H^≤_{1_n,0}.

Observe that D^{(n)}_var is an intersection of half spaces, and thus its support function is the same as the support function of the convex hull of its extreme points, which are the n vertices created. Denote the vertices v_j for j ∈ [n]. We have

{v_j} = {x ∈ R^n | ⟨x, e_i⟩ − 1 = 0 for all i ≠ j and ⟨x, 1_n⟩ = 0} = {x ∈ R^n | x_i = 1 for all i ≠ j and Σ_{k∈[n]} x_k = 0},

so (v_j)_i = 1 for i ≠ j and (v_j)_j = −(n−1). Thus for x ∈ R^n_{≥0}, using (Hiriart-Urruty and Lemaréchal, 2001, Theorem C.3.3.2 (ii)) we have σ_{co(∪_{j∈[n]} {v_j})} = sup_{j∈[n]} σ_{{v_j}}, and hence

σ_{D^{(n)}_var}(x) = sup_{j∈[n]} ( Σ_{i∈[n]} x_i − n x_j ).   (44)

We can now determine an explicit expression for I_{D^{(n)}_var}(E). Let (X̄_1, . . . , X̄_n) be a measurable partition of X (i.e. the X̄_i are measurable for i ∈ [n]) defined via

X̄_i := {x ∈ X | i = min argmin_{k∈[n]} (dE_k/dρ)(x)}.

(The additional min is to break ties.) It is immediate that this is indeed a partition of X, i.e. ∪_{k∈[n]} X̄_k = X and X̄_i ∩ X̄_j = ∅ for i ≠ j. Consequently

I_{D^{(n)}_var}(E) = ∫_X ( Σ_{i∈[n]} (dE_i/dρ)(x) − n min_{k∈[n]} (dE_k/dρ)(x) ) ρ(dx) = n − n Σ_{k∈[n]} E_k(X̄_k),   (45)

using the properties of the partition (X̄_1, . . . , X̄_n). Observe that choosing any other partition of X would result in a larger value of the second term in (45) and thus a smaller value for the overall expression. Thus if P_n(X) denotes the set of all measurable n-partitions of X, we can write

I_{D^{(n)}_var}(E) = sup_{(A_1,...,A_n) ∈ P_n(X)} ( n − n Σ_{k∈[n]} E_k(A_k) ).

When Y = [2], we obtain

I_{D^{(2)}_var}(E) = 2 sup_{A ∈ Σ_X} ( E_1(A) − E_2(A) ),

which can be recognised as being equivalent to (42).

Finally we observe a special case of (27) for D = D^{(n)}_var when S takes the particular symmetric form S_α, where the jth column of S*_α is s*_j = α e_j + ((1−α)/n) 1_n. When α = 1 this is the identity matrix, and for α ∈ [0, 1] it corresponds to the observation channel providing the correct label with probability α, and with probability 1 − α a label chosen at random from [n] (which could in fact be correct). The set S*_α D^{(n)}_var can be readily determined by exploiting the fact that we need only determine its support function σ_{S*_α D^{(n)}_var}(x) for x ∈ R^n_{≥0}. Thus we can exploit (44), and we need only compute (for j ∈ [n])

S*_α v_j = α v_j + ((1−α)/n) ⟨1_n, v_j⟩ 1_n = α v_j,

since ⟨1_n, v_j⟩ = 0. Thus σ_{S*_α D^{(n)}_var} = α σ_{D^{(n)}_var}, and so for any α ∈ [0, 1] and any n we have the homogeneous relationship

I_{D^{(n)}_var}(S_α E) = α · I_{D^{(n)}_var}(E),

which we note has the same measure of information on either side of the equality (analogous to the typical strong data processing inequalities one finds in the literature).
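The two expressions for I_{D^{(n)}_var} above are easily compared numerically (a sketch of ours on a finite space, checking the vertex form (44) against the partition form (45)):

import numpy as np

rng = np.random.default_rng(3)
n, m = 3, 6
E = rng.dirichlet(np.ones(m), size=n)          # n distributions on m outcomes

# vertex form (44): sigma(x) = max_j ( sum_i x_i - n * x_j )
vertex_form = sum(E[:, x].sum() - n * E[:, x].min() for x in range(m))

# partition form (45): assign each outcome to the label with the smallest density
labels = E.argmin(axis=0)                      # argmin with ties broken by index
partition_form = n - n * sum(E[labels[x], x] for x in range(m))

print(vertex_form, partition_form)             # identical values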
Appendix B. D-Information as an Expected Gauge Function

Classical binary information "divergences" are sometimes supposed to be "like" a distance (a metric). In this appendix we show that there is an element of truth in this supposition. Metrics (as a formal notion of "distance") are often (not always) induced by norms, and norms are particular examples of convex gauge functions (Minkowski functionals). In this appendix we show that it follows almost immediately from our definition of D-information that it is indeed an expected gauge function, albeit one where the associated "unit ball" of the gauge is neither symmetric nor compact. The restriction to D ∈ D^n allows an insightful representation of I_D making use of the classical polar duality of closed convex sets containing the origin. The conic hull of a set C ⊂ R^n is pos C := (0, ∞) · C. Given C ∈ K(R^n), the polar of C is defined by

C° := {x* ∈ R^n | ⟨x, x*⟩ ≤ 1 for all x ∈ C}.

We will make use of the following from (Rockafellar, 1970, Theorem 14.6):

Proposition 50. Suppose C, C° ∈ K(R^n) are a polar pair, both containing the origin. Then rec C, the largest cone contained in C, and cl pos C°, the smallest closed cone containing C°, are polar to each other.

Given C ∈ K(R^n), the gauge of C is defined by

γ_C(x) := inf {λ > 0 | x ∈ λC}.

Obviously, given the gauge γ_C one can recover C via C = lev_{≤1} γ_C = {x | γ_C(x) ≤ 1}. (If C is symmetric about the origin, then γ_C is a norm.) Let C^n denote the class of polars of sets in D^n, and C^n_0 the class of polars of sets in D^n_0.

Lemma 51. If D ∈ D^n then D° ∈ C^n, and if D ∈ D^n_0 then D° ∈ C^n_0.

Proof. If D is convex then so is D°. By Proposition 50, since 0 ∈ D, rec D is the largest cone contained in D and (rec D)° = cl pos C is the smallest cone containing C := D°. Thus when rec D = {x ∈ R^n | ⟨x, 1_n⟩ ≤ 0}, cl pos C = {α 1_n | α ≥ 0}. Regardless of the choice of D, we always have 0 ∈ bd D°. The final condition in the definition of C^n_0 follows since D ⊆ C implies D° ⊇ C°, and (lev_{≤0} ⟨·, 1_n⟩)° = pos 1_n.

Gauges and support functions are dual to each other in the polar sense (Hiriart-Urruty and Lemaréchal, 2001, Corollary C.3.2.5):

Lemma 52. For C ∈ K(R^n) with 0 ∈ C, γ_C = σ_{C°} and σ_C = γ_{C°}.

Proposition 53. For any D ∈ D^n, D° ∈ C^n, and for any E : [n] ⇝ X and any reference measure ρ,

I_D(E) = ∫_X γ_{D°}((dE/dρ)(x)) ρ(dx).   (46)

Proof. The first claim is just Lemma 51. The second claim follows by applying Lemma 52 pointwise.

Figure 3: The corresponding polars R*_r D_Hell for r = 1, 0.8, 0.6 (restricted to [0, 10]²), corresponding to the set-up as in Figure 1.

Observe that for any E : [2] ⇝ X, as r ↓ 0.5 the composition R_r E approaches the totally non-informative experiment E_tni, and I_{R*_r D} approaches what we might (oxymoronically) call the totally non-informative information measure I_tni = I_{D_tni}, where D_tni = {x ∈ R^n | ⟨x, 1_n⟩ ≤ 0} and C_tni = D°_tni = {α 1_n | α ≥ 0}. The name is justified since I_tni(E) = 0 for all experiments E.

Expressing I_D as an average of a gauge function as in (46) justifies the oft-made claim that divergences are "like" distances in some sense; the fact that D° is not symmetric is why it is merely "like". One can see that I_D is "gauging" the average degree to which the vector (dE/dρ)(x) is "close" to one of the canonical basis vectors e_i, i ∈ [n], since for D° ∈ C^n_0 we have γ_{D°}(e_i) > 0. Conversely, since cl pos D° = {α 1_n | α ≥ 0}, we always have γ_{D°}(1_n) = σ_D(1_n) = 0, corresponding to situations where (dE/dρ)(x) = 1_n, and consequently it being impossible to distinguish between the outcomes of the experiment at that x; in other words, a complete absence of "information." Some examples of polars of D are illustrated in Figure 3.

Appendix C. Entropies

In the classical definition of (differential) entropy, the reference measure is taken for granted as being Lebesgue measure, but we shall see it is an arbitrary choice, and the choice matters.²²
This perspective offers an insight into why the entropy is difficult to estimate: one is implicitly attempting to determine the Bayes risk for a statistical decision problem where the two class-conditional distributions are the given µ and the reference (uniform) measure υ, using a loss ℓ induced by ϕ as in Remark 24. This insight also offers an effective approach to estimating the entropy, as we now explain. The constrained entropy of µ relative to υ is defined similarly, and simply amounts to regularising the ϕ-entropy (where F(X) ⊆ D_ϕ). This immediately suggests ways to estimate the entropy of a random variable defined on X (especially when X is high dimensional): use the bridge between F-information and the H-constrained Bayes risk, and simply exploit the wide range of extant methods for solving binary class-probability estimation problems. That is, given a random sample {x_1, . . . , x_m} drawn iid from µ, estimate the entropy from the empirical measure µ_m(A) := (1/m) Σ_{i∈[m]} ⟦x_i ∈ A⟧ via H^υ_F(µ_m) = I_F(µ_m, υ). The estimate is regularised by the choice of F.

Observe that one can immediately define a generalised mutual information using I_F when n = 2: given two random variables Z and Y defined on X with joint distribution µ_ZY and marginal distributions µ_Z and µ_Y, define the experiment E_MI : [2] ⇝ X via E_MI(1, ·) := µ_ZY(·) and E_MI(2, ·) := (µ_Z × µ_Y)(·), and then define the F-Mutual Information between Z and Y as MI_F(Z; Y) := I_F(E_MI). While this seems more complex than the usual notion of mutual information, we observe that this is what is typically computed in practice, since one cannot ever find the Bayes optimal hypothesis implicit in the definition of the usual mutual information, but rather only optimises over a restricted model class.

22. This idea that unary properties are intrinsically relative to some implicit reference has been developed for the notion of Lorenz curves (Buscemi and Gour, 2017), themselves related to ROC curves (Schechtman and Schechtman, 2019), which are intimately related to certain families of I_D (Reid and Williamson, 2011, §6.1).
23. This is not a new idea; see (Chafai, 2004, page 329).

Given that entropy can be reduced to binary divergences relative to an arbitrarily chosen uniform measure, and further given the multitude of binary divergences that make decision-theoretic sense, axiomatic arguments for a single preferred entropy are less compelling, notwithstanding their mathematical elegance (Baez, Fritz, and Leinster, 2011). One can apply Theorem 40 to F-entropies where a given distribution µ is pushed through a Markov kernel T to give µT. Since H^υ_F(µ) = I_F(E^υ_µ), we have I_F(E^υ_µ T) = I_{T*F}(E^υ_µ) and hence H^υ_F(µT) = H^υ_{T*F}(µ).
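A minimal plug-in sketch of the suggested estimator (ours; it discretises X into bins rather than fitting a class-probability model, and uses ϕ(t) = t log t):

import numpy as np

rng = np.random.default_rng(4)
sample = rng.beta(2.0, 5.0, size=10_000)       # iid draws from an unknown mu on [0, 1]

bins = 20
counts, _ = np.histogram(sample, bins=bins, range=(0.0, 1.0))
mu_m = counts / counts.sum()                   # empirical measure on the bins
upsilon = np.full(bins, 1.0 / bins)            # the uniform reference measure

nz = mu_m > 0                                  # convention: 0 * log 0 = 0
H = float(np.sum(mu_m[nz] * np.log(mu_m[nz] / upsilon[nz])))
print(H)                                       # H^upsilon(mu_m) = I_phi(mu_m, upsilon)
print(np.log(bins) - H)                        # the familiar Shannon entropy of the binned mu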
Appendix D. Precursors of F-Information

There are several precursors²⁴ to our notion of F-information, including N-information (rediscovered as MMD), integral probability metrics, Moreau-Yosida ϕ-divergences and (f, Γ)-divergences, and in this appendix we briefly summarise them. The idea that one can view a model class as being the result of a rich class being "pushed through" a restrictive channel (what the information processing equality does in effect) was central to the calculations of covering numbers by Williamson, Smola, and Schölkopf (2001).

24. As we should well expect: "far from being odd or curious or remarkable, the pattern of independent multiple discoveries in science is in principle the dominant pattern" (Merton, 1961, page 477).

As can be seen from (42) in Appendix A, the classical binary variational divergence of E : {1, 2} ⇝ X can be written as

I_var(E) = sup_{f : X→([0,1],B)} (E_1 f − E_2 f).

When the supremum is restricted to be over F, a proper subset of {f : X → ([0, 1], B)}, these are known as integral probability metrics (IPMs) (Müller, 1997) or probability metrics with ζ-structure (Zolotarev, 1983), and extend the variational divergence by restricting the class of functions which are optimised over in its variational representation; see §A.2. Special cases of this include the Wasserstein distance (Villani, 2009). The classical IPMs are a way of constraining the function class one optimises over in the variational representation of the variational divergence. One can similarly restrict the class of functions in the variational representation of an arbitrary ϕ-divergence, as was suggested by Reid and Williamson (2011, page 796), who proposed considering

I_{ϕ,F}(P, Q) := sup_{ρ∈F} (E_P ρ − E_Q ϕ*(ρ)),

explored the particular case for ϕ(t) = |t − 1| and F being the unit ball in a reproducing kernel Hilbert space (Reid and Williamson, 2011, Appendix H), and posed the question of its relationship to a constrained Bayes risk also using the function class F (Reid and Williamson, 2011, page 799) (which is answered by the present paper). Xu et al. (2020) proposed a generalization of Shannon mutual information by restricting the class of functions optimised over in a variational representation, motivated slightly differently to the F-information of the present paper: they motivated their definition on computational grounds, and observed as a consequence that the estimation performance improves. (Note the brief discussion of F-mutual information in Appendix C.) Terjék (2021) regularised the optimisation for binary ϕ-divergences with a Wasserstein regulariser. More generally, Birrell, Dupuis, et al. (2022) considered a larger range of F for arbitrary ϕ. However, they necessarily only considered the binary ϕ-divergence, and because they used the classical variational representation in terms of the Legendre-Fenchel conjugate of ϕ, their formulas become quite complex compared to the development in the present paper. A recent comparison of IPMs and ϕ-divergences (Agrawal and Horel, 2021) appears to mix up two things: a comparison of loss functions, combined with a question of the approximation power of a model class.
2022-07-26T01:16:23.080Z
2022-07-25T00:00:00.000
{ "year": 2022, "sha1": "ee0a6996b6ff2dd79e81769a77d59510ca085352", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ee0a6996b6ff2dd79e81769a77d59510ca085352", "s2fieldsofstudy": [ "Mathematics", "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
56218564
pes2o/s2orc
v3-fos-license
Bayesian Inference in a Joint Model for Longitudinal and Time to Event Data with Gompertz Baseline Hazards

Longitudinal and time to event data are frequently encountered in many medical studies. Clinicians are more interested in how longitudinal outcomes influence the time to an event of interest. To study the association between longitudinal and time to event data, joint modeling approaches were found to be the most appropriate techniques for such data. These approaches involve the choice of the distribution of the survival times, for which in most cases authors prefer either the exponential or the Weibull distribution. However, these distributions have some shortcomings. In this paper, we propose an alternative joint modeling approach from a Bayesian perspective. We assume that survival times follow a Gompertz distribution. One of the advantages of the Gompertz distribution is that its cumulative distribution function has a closed-form solution, and it accommodates time-varying covariates. A Bayesian approach through a Gibbs sampling procedure was developed for parameter estimation and inference. We evaluate the finite-sample performance of the joint model through an extensive simulation study and apply the model to a real dataset to determine the association between markers (tumor sizes) and time to death among cancer patients without recurrence. Our analysis suggests that the proposed joint modeling approach performs well in terms of parameter estimation when the correlation between random intercepts and slopes is considered.

Introduction

In many clinical studies, longitudinal outcomes are collected alongside time to event data. For example, we collect repeated patient information such as heights, weights, blood pressures and many other variables over time, while at the same time we are also interested in the time to an event of interest (e.g., death). For such data, the focus is always on how longitudinal outcomes influence the time to an event. Approaches for analyzing these two processes separately have been extensively discussed in the literature. The most common methods used to analyze longitudinal outcomes are the random effects model proposed by Laird and Ware (1982), the linear mixed effects model of Verbeke (1997) and generalized linear models (Liang & Zeger, 1986; Zeger et al., 1988). On the other hand, the event time process is analyzed by making use of Cox proportional hazards models (Hougaard, 2012). However, analyzing the longitudinal and time to event processes separately has received much criticism, as it yields inefficient and biased results. Therefore, it is very important to take both processes into account in the analysis.

Over the past two decades, extensive efforts have been made on statistical methods that simultaneously analyze longitudinal outcomes and time to event data. These methods offer great advantages over analyzing each process separately. A comprehensive overview of joint modeling for longitudinal and time to event data was given in Tsiatis & Davidian (2004). More basic concepts and methods for joint models can be found in Ibrahim et al. (2010). In all these reviews, the two processes are jointly modeled by linking them together through a common latent structure; that is, the longitudinal and survival processes both share the same random effects. Extensions to earlier developments of the joint modeling framework have been proposed. More recent developments with applications include Sweeting & Thompson (2011), Rizopoulos et al. (2012), McCrink et al. (2013) and Yang et al.
(2016). Some authors have already proposed joint models for multiple longitudinal outcomes and repeated events of different outcomes (Chi & Ibrahim, 2006; Musero et al., 2015; Huang & ...).

Previous work on joint models for longitudinal outcomes and time to event data with time varying covariates considered linear mixed effects models for the longitudinal process and a proportional hazards model with a specified baseline function for the time to event process (Yu et al., 2006; Song et al., 2002). The most frequently used baseline functions for proportional hazards models assume that survival data follow an exponential or a Weibull distribution. Some authors suggested that a proportional hazards model with a Weibull baseline function may be very flexible when time dependent covariates are included in the model (Casellas, 2007). However, generating survival data from such a model with time varying covariates can be too complex. According to Austin (2013), one of the disadvantages of the Weibull proportional hazards model with time dependent covariates is that its cumulative distribution function cannot be derived in closed form. Therefore, to compute the cumulative incidence and the hazard function, one has to resort to numerical integration. On the other hand, the proportional hazards model with an exponential baseline function also has disadvantages, as it assumes a constant hazard. For these reasons, researchers wishing to generate survival times that involve time dependent covariates are advised to consider generating event times from a Gompertz distribution (Austin, 2012).

A proportional hazards model with a baseline function from the Gompertz distribution is more flexible in this respect, as one can compute the survival function without numerical integration (see the sketch below). The Gompertz distribution was first introduced by Gompertz (1825). It is described as one of the fundamental mathematical models that accurately represent the survival function based on the laws of mortality; that is, the force of mortality tends to increase exponentially over time. It has been extensively used as a growth model in many cancer assessments, and for these reasons the Gompertz distribution plays an important role in modeling human mortality. Early applications can be found in Ahuja & Nash (1967).
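To make the computational point concrete, the following minimal Python sketch (illustrative values only, not from the paper) checks the closed-form Gompertz cumulative hazard against direct numerical integration of the hazard, which is the route a model without a closed form would have to take:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative Gompertz baseline parameters (alpha > 0; gamma any real value)
alpha, gamma = 0.02, 1.2

def baseline_hazard(t):
    # lambda_0(t) = alpha * exp(gamma * t)
    return alpha * np.exp(gamma * t)

def cum_hazard_closed_form(t):
    # H_0(t) = (alpha / gamma) * (exp(gamma * t) - 1), available analytically
    return (alpha / gamma) * (np.exp(gamma * t) - 1.0)

for t in (0.5, 1.0, 2.0):
    numeric, _ = quad(baseline_hazard, 0.0, t)  # numerical-integration route
    print(f"t={t}: closed={cum_hazard_closed_form(t):.6f}, numeric={numeric:.6f}")
```

The two columns agree to numerical precision, but only the closed form remains cheap once the hazard must be inverted repeatedly to simulate event times.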
To the best of our knowledge, there has been no joint model for longitudinal outcomes and time to event data in which the proportional hazards model is assumed to have a Gompertz baseline hazard. Therefore, in this article, we propose such a model with time varying covariates within a Bayesian inference framework. Our joint model is made up of two submodels linked together by common random effects. The first submodel is a linear mixed effects model for the true and unobserved markers; the interrelationship between measurements and subject-specific effects is accounted for by a random intercept and slope. The second submodel is a proportional hazards model with a Gompertz baseline function. To estimate the parameters of the joint model, we use both R and WinBUGS (Ntzoufras, 2011). The rest of this article is organized as follows. In Section 2 we present the notation and formulation of the joint model. Section 3 presents estimation and inference, including the joint likelihood and Bayesian parameter estimation procedures such as prior and joint posterior specifications and model selection. In Section 4, we present simulation studies in order to assess the performance of the model. Section 5 presents the application of the model to a real dataset. The last section presents a discussion.

Longitudinal Sub-Model

Suppose we have observations for n subjects under study. Let y_i(t_ij) be the (n_i × 1) column vector of random variables representing the observed longitudinal outcomes for subject i measured at time points t_ij ∈ {t_i1, t_i2, ..., t_in_i}, where j = 1, 2, ..., n_i. Here n_i represents the number of repeated measurements for subject i, which varies among subjects. In practice, we may have missing observations for some subjects, because some subjects may decide to drop out of the study for reasons not related to the occurrence of the event of interest. Therefore, in this study, we assume that missing values in the longitudinal measurement trajectory are missing independently of the unobserved measurements.

To analyze the longitudinal process, we define the distribution of y_ij by a linear mixed effects (LME) model,

y_i(t_ij) = μ*_i(t_ij) + ε_ij,    (1)

where μ*_i(t_ij) = x_i(t_ij)'β_L + η_i(t_ij) is the true value of y_ij; x_i is the (n_i × p) design matrix of fixed effects, which includes possible time dependent covariates; β_L is the corresponding (p × 1) column vector of fixed effect coefficients (β_L,s); η_i(t_ij) = z_i(t_ij)'w_i, where z_i denotes the (q × 1) design matrix for the random effects and w_i ~ MVN(0, A); and ε_i is the (n_i × 1) column vector of residuals, representing the part of y_ij not accounted for by the model x_i(t_ij)'β_L + η_i(t_ij), with ε_ij ~ N(0, σ²I_{n_i}). It should be noted that A is the variance-covariance matrix in which the correlations among the repeated measurements within a subject are represented. The residuals and the random effects w_i are independent of each other. When the random intercept and slope are assumed uncorrelated, the variance-covariance matrix A is diagonal.
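A minimal Python sketch of this sub-model, assuming a random-intercept-and-slope specification; all numerical values here are illustrative rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

t_ij = np.array([0.0, 0.15, 0.30, 0.45, 0.60])     # visit times for subject i
x_i = np.column_stack([np.ones_like(t_ij),         # fixed-effects design X_i:
                       np.full_like(t_ij, 10.0),   # intercept, a covariate,
                       t_ij])                      # and time
z_i = np.column_stack([np.ones_like(t_ij), t_ij])  # random-effects design Z_i

beta_L = np.array([3.0, 0.5, 5.0])                 # fixed-effect coefficients
A = np.array([[0.5, 0.0], [0.0, 0.02]])            # cov. of (intercept, slope)
sigma2 = 0.5                                       # measurement-error variance

w_i = rng.multivariate_normal(np.zeros(2), A)      # w_i ~ MVN(0, A)
mu_star = x_i @ beta_L + z_i @ w_i                 # true trajectory mu*_i(t_ij)
y_i = mu_star + rng.normal(0.0, np.sqrt(sigma2), t_ij.size)  # observed y_ij
```

With a diagonal A as above, the random intercept and slope are uncorrelated; an off-diagonal entry would introduce the correlation considered later in the simulation study.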
Time to Event Sub-Model

We assume a proportional hazards model with a Gompertz baseline function. Let

h_i(t) = λ_0(t) exp(v_i'ξ + ψ μ*_i(t))    (2)

denote the hazard function for subject i at time t, where λ_0(t) = α exp(γt) is the baseline function with α > 0 and γ taking any real value. When γ > 0 or γ < 0, the hazard function is monotone (increasing or decreasing, respectively). On the other hand, when γ = 0, the hazard function is equivalent to the hazard function of an exponential distribution. v_i represents the vector of prognostic factors associated with the coefficient vector ξ, and ψ quantifies the strength of the association between the two processes. Furthermore, the cumulative hazard/incidence function is defined as follows:

H_i(t) = ∫_0^t h_i(s) ds.    (3)

When the linear predictor is constant in time, equation (3) can be simplified as follows:

H_i(t) = (α/γ)(exp(γt) − 1) exp(v_i'ξ + ψ μ*_i).    (4)

From equation (4), we deduce exp(γt) = 1 + γH_i(t)/(α exp(v_i'ξ + ψ μ*_i)). So, the inverse cumulative hazard function is given as

H_i^{-1}(h) = (1/γ) log(1 + γh/(α exp(v_i'ξ + ψ μ*_i))),    (5)

and the individual survival function is expressed as

S_i(t) = exp(−H_i(t)).    (6)

Consequently, the individual event times can be generated as

T_i = H_i^{-1}(−log(U)),

where T_i stands for the survival time for subject i and U is a random variable uniformly distributed with U ~ Uniform(0, 1).

The distribution function of the time to event based on the proportional hazards function is F_i(t) = 1 − S_i(t) = 1 − exp(−H_i(t)), and the probability density function of the time to event is

f_i(t) = h_i(t) exp(−H_i(t)).    (7)

3. Estimation and Inferences

The Joint Likelihood Function

Let C_i be the censoring time for subject i. Then T_i = min(T*_i, C_i) denotes the i-th subject's observation time, where T*_i is the true event time for that subject. Furthermore, we denote δ_i = I(T_i = T*_i), where I is the indicator function. It follows that δ_i is equal to 1 if T_i is an event time and 0 if T_i is right censored for subject i. Let φ = (β_L, α, ψ, γ, A, σ², ξ) denote the vector of all parameters to be estimated, and note that y_ij are the longitudinal outcomes for subject i measured at times t_ij (j = 1, 2, ..., n_i). Given the subject-specific random effects, the two processes are assumed to be independent of each other. Hence, the joint likelihood function of the longitudinal and time to event sub-models based on all observed data is

L(φ) = ∏_{i=1}^n ∫ [∏_{j=1}^{n_i} f(y_ij | w_i; φ)] f(T_i, δ_i | w_i; φ) f(w_i; φ) dw_i,    (11)

where f(y_ij | w_i; φ) is the probability density function of the longitudinal outcomes conditional on the random effects w_i, f(w_i; φ) is the probability density function of the random effects with q the dimension of the covariance matrix A, and f(T_i, δ_i | w_i; φ) = h_i(T_i)^{δ_i} exp(−H_i(T_i)) is the likelihood of the survival sub-part, a censored version of equation (7) in Section 2.

Bayesian Inferences

The joint likelihood function (11) cannot be maximized with the standard maximum likelihood procedure, since it involves integration of the longitudinal and survival components over the subject-specific random effects w_i, which is computationally demanding. The best way to overcome this difficulty is to make use of a Bayesian approach. Faucett and Thomas (1996) proposed a Bayesian Markov Chain Monte Carlo (MCMC) approach that simulates samples from the posterior distribution and estimates all the unknown parameters by using a non-informative prior p(φ).

Taking equation (11) and p(φ), we can define the joint posterior of φ as the product of the likelihood of the observed data and the priors,

p(φ | D_obs) = L(φ | D_obs) p(φ) / p(D_obs).

If the observed data are fixed, then p(D_obs) is a normalizing constant.
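The inversion above translates directly into code. A minimal Python sketch, assuming for simplicity a linear predictor that is constant in time (the time-varying case used in the simulation study is handled analogously):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, gamma = 0.02, 1.2                 # Gompertz baseline parameters

def gompertz_event_time(eta, u):
    # Solve H_i(T) = -log(u) with H_i(t) = (alpha/gamma)*(exp(gamma*t)-1)*exp(eta),
    # giving T = (1/gamma) * log(1 - gamma * log(u) / (alpha * exp(eta))).
    return np.log1p(-gamma * np.log(u) / (alpha * np.exp(eta))) / gamma

eta = rng.normal(0.0, 0.5, size=5)       # illustrative values of v_i'xi + psi*mu*_i
u = rng.uniform(size=5)                  # U ~ Uniform(0, 1)
print(gompertz_event_time(eta, u))       # event times, no root-finding required
```

Because the cumulative hazard is invertible analytically, no numerical root-finding or integration is needed, which is the practical payoff of the Gompertz choice.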
Therefore, this translates the joint posterior distribution of the parameter φ into

p(φ | D_obs) ∝ L(φ | D_obs) p(φ).

Due to computational complexity, the log joint posterior distribution is preferable. Hence,

log p(φ | D_obs) = log L(φ | D_obs) + log p(φ) + constant.    (17)

Full Conditional Distributions

In order to implement the Bayesian procedure, we need the full conditional distribution of each of the unknown parameters of the model. The Gibbs sampler can then be used to generate MCMC samples from the joint posterior density p(φ | D_obs). In the Bayesian framework, this procedure involves iteratively sampling each parameter from its full conditional distribution with the remaining components fixed at their current values. Each full conditional below is proportional to the product of the joint likelihood and the corresponding prior.

The full conditional distribution of the coefficients β_L of the linear mixed effects model is proportional to the likelihood times an independent normal prior, where μ_{β_L} and τ_{β_L} are the parameters of the prior of β_L. The full conditional distribution of ξ takes the same form, where μ_ξ and τ_ξ are the parameters of the independent normal prior of ξ. The full conditional distribution of γ takes the analogous form, where μ_γ and τ_γ are the parameters of the independent normal prior of γ. The full conditional distribution of the shape parameter α in the survival sub-model is proportional to the likelihood times a gamma prior, where a_0 and b_0 are the parameters of the independent gamma prior of α. The full conditional distribution of the association parameter ψ between the two processes is proportional to the likelihood times a normal prior, where μ_ψ and τ_ψ are the specified parameters of the independent normal prior of ψ. The full conditional distribution of the inverse variance 1/σ² is proportional to the likelihood times a gamma prior, where a_0 and b_0 are the parameters of the inverse gamma prior of σ². The full conditional distribution of the inverse variance-covariance matrix A^{-1} is proportional to the likelihood of the random effects times a Wishart prior, where ν_0 and A_0 are the parameters of the inverse Wishart prior of A. More simplified full conditional distributions can be derived in the same way as in Faucett & Thomas (1996) and Zhang et al. (2017).
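These full conditionals are generally not of standard form, so in practice each parameter can be updated with a Metropolis step inside the Gibbs sweep. The following Python sketch shows the pattern for a single parameter such as γ with its independent normal prior; the log-likelihood term is a placeholder, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_step(current, log_full_conditional, step_sd=0.1):
    # One random-walk Metropolis update targeting a nonstandard full conditional.
    proposal = current + rng.normal(0.0, step_sd)
    log_accept = log_full_conditional(proposal) - log_full_conditional(current)
    return proposal if np.log(rng.uniform()) < log_accept else current

mu_gamma, tau_gamma = 0.0, 0.01            # normal prior N(mu_gamma, 1/tau_gamma)

def log_cond_gamma(g):
    log_lik = -0.5 * g**2                  # placeholder for the survival log-likelihood
    log_prior = -0.5 * tau_gamma * (g - mu_gamma) ** 2
    return log_lik + log_prior

gamma_draws = []
gamma = 1.0
for _ in range(5000):                      # in a full sampler, every parameter in phi
    gamma = metropolis_step(gamma, log_cond_gamma)   # receives such an update per sweep
    gamma_draws.append(gamma)
```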
Model Selection

In this paper, we consider two forms of the random effects terms in the joint model: Case I, with correlated random intercepts and slopes, and Case II, with uncorrelated random intercepts and slopes. For model selection, we consider the Deviance Information Criterion (DIC) (Spiegelhalter et al., 2002), a Bayesian counterpart of the well-known Akaike information criterion (AIC) and Bayesian information criterion (BIC). The appeal of the DIC is that it uses the posterior distribution, which allows it to take into account the prior information of both the longitudinal and time to event sub-models. Consider φ, the collection of parameters in the joint model, and D_obs = {y_ij, T*_i, δ_i, x_i, v_i}, the observed data. Now specify the deviance function as

D(φ) = −2 Σ_{i=1}^n log f(y_ij, T*_i, δ_i, x_i, v_i; φ),

based on the likelihood in equation (11). Define D̄ = E[D(φ)] as the expectation of the deviance under the posterior, and φ̄ = E[φ] as the posterior means of the parameters. According to Spiegelhalter et al. (2002), the difference between the two measures, denoted by p_D = D̄ − D(φ̄), can be interpreted as the posterior estimate of the effective number of parameters, and it measures the complexity of the model. Adding it to the posterior mean deviance gives a measure of fit that is penalized for complexity:

DIC = D̄ + p_D.

Based on the DIC, the model with the smallest DIC value is considered to be the model that would best predict a replicated dataset with the same structure as the currently observed dataset. However, as stated by Geedipally et al. (2014), the model is penalized through D(φ̄), which decreases as the number of parameters in the model increases, and p_D, which compensates for this effect by favoring models with a smaller number of parameters. Therefore, it is very important to note that the way the model is parameterized will influence the Deviance Information Criterion (DIC) values.

Simulation Studies

In this section, we performed two sets of simulation studies using Markov Chain Monte Carlo (MCMC) in order to assess the performance of the proposed methodology.

Simulation Study 1

The data are generated from a joint model with a single longitudinal variable and a time to event variable. Each subject is expected to have a record of n_i = 5 biomarker values: one recorded at baseline, and thereafter 4 visits scheduled at equally spaced time points t_ij ∈ {0.00, 0.15, 0.30, 0.45, 0.60}. After this period, subjects are censored noninformatively. Specifically, we first generate data using a linear mixed effects model for the longitudinal sub-model,

y_i(t_ij) = μ*_i(t_ij) + ε_i(t_ij),

where μ*_i(t_ij) is the true unobserved biomarker value and ε_i(t_ij) ~ N(0, σ²), with σ² = 0.5 the measurement error variance. We then simulate the longitudinal data from a linear curve, μ*_i(t_ij) = β_0 + β_1 x_i + β_2 t_ij + w_i0 + w_i1 t_ij, where the vector of random effects w_i = (w_i0, w_i1)' is simulated from a multivariate normal distribution MVN(0, A), with variance-covariance matrix

A = [σ²_{w_i0}, ρσ_{w_i0}σ_{w_i1}; ρσ_{w_i0}σ_{w_i1}, σ²_{w_i1}] = [0.75², −0.02; −0.02, 0.15²].

Note that ρ = −0.2. We set the coefficients β = (β_0, β_1, β_2)' = (3.00, 0.50, 5.00)'.
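In code, the longitudinal generation step just described might look as follows (a Python sketch; the seed is arbitrary, and x_i ~ N(12, 4) is read as mean 12 and variance 4, i.e. standard deviation 2):

```python
import numpy as np

rng = np.random.default_rng(2024)
n = 200                                        # subjects (500 in the second setting)
t = np.array([0.00, 0.15, 0.30, 0.45, 0.60])   # baseline plus 4 scheduled visits
beta = np.array([3.00, 0.50, 5.00])            # (beta_0, beta_1, beta_2)
A = np.array([[0.75**2, -0.02],                # correlated random intercept/slope
              [-0.02, 0.15**2]])
sigma2 = 0.5                                   # measurement-error variance

x = rng.normal(12.0, 2.0, size=n)              # x_i ~ N(12, 4), sd = 2 (assumption)
w = rng.multivariate_normal(np.zeros(2), A, size=n)   # (w_i0, w_i1)

# mu*_i(t_ij) = beta0 + beta1*x_i + beta2*t_ij + w_i0 + w_i1*t_ij
mu_star = beta[0] + beta[1] * x[:, None] + beta[2] * t + w[:, [0]] + w[:, [1]] * t
y = mu_star + rng.normal(0.0, np.sqrt(sigma2), size=(n, t.size))
```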
For the survival sub-model, we have

h_i(t) = λ_0(t) exp(v_i'ξ + ψ μ*_i(t)),

where λ_0(t) = α exp(γt), with α = 0.02 and γ = 1.2; v_i is a vector of baseline covariates with associated coefficients ξ = (ξ_0, ξ_1)', with ξ_0 = −0.74 and ξ_1 = −0.015. The values of the baseline covariates were simulated as follows: x_i ~ N(12, 4) and v_i ~ Bin(1, 0.5). The parameter ψ was set at 0.2. Since our interest is in the time to event, we generate the vector of true event times T*_i by first simulating the survival probability U_i from Uniform(0, 1) for each subject and solving for T*_i from the equation

H_i(T*_i) = −log(U_i).

To obtain the cumulative hazard in closed form, we let a_i = β_0 + β_1 x_i + w_i0 and b_i = β_2 + w_i1, so that ψμ*_i(t) = ψ(a_i + b_i t). Hence, we have

H_i(t) = α exp(v_i'ξ + ψa_i) [exp((γ + ψb_i)t) − 1] / (γ + ψb_i).    (29)

Now, solving for T*_i from equation (29), we have

T*_i = (1/(γ + ψb_i)) log(1 − (γ + ψb_i) log(U_i) / (α exp(v_i'ξ + ψa_i))).    (30)

We draw the censoring time C_i from a uniform distribution Uniform(0.05, 8). Then we compute T_i = min(T*_i, C_i) and the event indicator δ_i = 1 if T*_i ≤ C_i and 0 otherwise. The censoring rate was recorded between 40% and 45%.

Simulation Study Results

The aim of the simulation study was to investigate the performance of the joint model. The results are summarized in Tables 1 and 2. We considered several quantities to determine the behavior of the estimators φ̂ by comparing them to the true φ, as follows.

1) The estimated bias: the difference between the average estimate over all simulations and the true parameter value, where a negative bias indicates an underestimation while a positive bias indicates an overestimation.

2) The root mean square error (RMSE): this measures the accuracy of the estimates. The lower the RMSE, the more accurate the estimates.

3) The standard error: S.E.(φ̂) = σ/√n, defined as the standard deviation divided by the square root of the sample size. This implies that the larger the sample size, the smaller the standard error.

4) The 95% coverage probability (CP): the proportion of the 1000 simulated data sets for which the 95% confidence interval φ̂ ± 1.96 × S.E.(φ̂) included the true value. The closer the outcome is to the nominal 0.95, the more accurate the estimates.

The summary statistics of the estimated regression coefficients are given in Tables 1 and 2, for n = 200 and n = 500 subjects, respectively. In each simulation, 1000 replications were performed. We can clearly see that the proposed methodology performed well in terms of parameter estimation. The biases are relatively small, and the coverage of the 95% credible intervals dwells around 0.95. Simulations with the larger sample size have smaller standard errors and, hence, smaller root mean square errors. Overall, better results were obtained with the joint model with correlated random effects; that is, the correlation between the random intercept and slope positively influences the estimates.

Data

In this section, we apply the joint model to the FFCD 2000-2005 multi-center phase III clinical trial dataset. Figure 2 shows the trace and density plots for the posterior marginal distributions of selected parameters. We clearly see that the MCMC chains of all parameters have converged to their target posterior distributions.
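A Python sketch of this event-time generation, implementing equation (30) with the stated parameter values; treating ξ_0 as an intercept in the log hazard is an assumption about the parameterization:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 200
alpha, gam, psi = 0.02, 1.2, 0.2
beta = np.array([3.00, 0.50, 5.00])
xi0, xi1 = -0.74, -0.015                       # xi0 used as an intercept (assumption)

x = rng.normal(12.0, 2.0, size=n)              # x_i ~ N(12, 4)
v = rng.binomial(1, 0.5, size=n)               # v_i ~ Bin(1, 0.5)
A = np.array([[0.75**2, -0.02], [-0.02, 0.15**2]])
w = rng.multivariate_normal(np.zeros(2), A, size=n)

a = beta[0] + beta[1] * x + w[:, 0]            # time-constant part of mu*_i(t)
b = beta[2] + w[:, 1]                          # time slope of mu*_i(t)
lin = xi0 + xi1 * v + psi * a                  # time-fixed part of the log hazard
rate = gam + psi * b                           # effective Gompertz rate gamma + psi*b_i

u = rng.uniform(size=n)                        # U_i ~ Uniform(0, 1)
T_star = np.log1p(-rate * np.log(u) / (alpha * np.exp(lin))) / rate   # equation (30)
C = rng.uniform(0.05, 8.0, size=n)             # noninformative censoring times
T = np.minimum(T_star, C)
delta = (T_star <= C).astype(int)              # event indicator delta_i
print(f"censoring rate: {1 - delta.mean():.2f}")
```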
Discussion

Joint modeling of longitudinal outcomes and time to event data has gained increasing popularity in the literature. However, when it comes to the choice of the baseline function in the survival sub-part, many authors assume that the survival times follow exponential or Weibull distributions. In this paper, we developed a joint model from a Bayesian perspective assuming that the proportional hazards model for the survival times has a Gompertz baseline hazard function. We think that generating survival times from a Gompertz distribution has advantages over the Weibull distribution, as its cumulative distribution function has a closed form solution, which makes it easier to simulate survival data in the presence of time varying covariates. We started by building separate models for each process and then linked them together through a common latent variable. Our model incorporates both time invariant (fixed) and time varying covariates, which allow the hazard of the outcome to change over time. On the other hand, the interrelationship between the markers was accounted for by subject-specific random effects.

Because fitting the joint likelihood function directly is computationally demanding, we proposed a Bayesian approach that estimates the parameters by simulating samples from the posterior distribution. Specifically, the Gibbs sampler was used for posterior inference, as it provides a convenient way to fit complex models. We conducted an extensive examination of the model parameter estimation through simulation studies (Simulation Study 1, correlated random effects; Simulation Study 2, uncorrelated random effects). The simulation results for both studies are presented by looking at several quantities: the bias, i.e., the difference between the average estimate over all simulations and the true parameter value; the S.E., the standard error of the estimates, which measures the accuracy of predictions; the RMSE, the square root of the mean squared error; and the CP, the coverage probability. The results from the two simulation studies indicated adequate performance of the joint model. They highlighted, however, some weaknesses of the model when small sample sizes were used.

There is ample additional work needed in the joint modeling framework; this paper covers only a small area of this fast growing field of research. This work can be extended further to accommodate multiple longitudinal outcomes and competing risks, as done by Musoro (2014). In the future, we plan to develop a joint model for longitudinal and time to event data assuming that survival times follow a generalized Gompertz distribution with three parameters (Haile et al., 2016).

In short, we have introduced a more flexible joint model for longitudinal and time to event data assuming that the survival times follow a Gompertz distribution. We further demonstrated that this model can easily be used in practice through a study, the FFCD 2000-2005 multi-center phase III clinical trial of patients diagnosed with metastatic colorectal cancer. In both the simulation and application studies, two cases (Case I with correlated random intercepts and slopes, and Case II with uncorrelated random intercepts and slopes) were considered. It is clear that our work contributes to this fascinating research area by making use of a more flexible methodology to develop a joint model.
[Panels: a) Correlated random intercepts and slopes (Case I); b) Uncorrelated random intercepts and slopes (Case II)]

The FFCD 2000-2005 multi-center phase III clinical trial enrolled patients diagnosed with metastatic colorectal cancer. The study was conducted between February 2002 and January 2007 in France by the Federation Francophone de Cancerologie Digestive (FFCD). The main aim of the study was to examine the efficacy of two treatments: a sequential arm (S) and a combination arm (C). We consider the datasets presented by Krol et al. (2016, 2017), in which 150 patients were randomly selected from the same clinical trial. The data contain the individual progression of disease, such as tumor size, times of new lesions (recurrent events), baseline covariates (age, WHO performance status, and previous resection; combination arm vs. sequential arm), and the time to death or the last observed time for right censored subjects. A total of 906 tumor size measurements were recorded at subject-specific follow-up times. During the study, 289 recurrences and 121 deaths were also recorded. We chose to model the longitudinal outcomes together with the time to death without recurrence. In our model, we included a total of 716 tumor size measurements from 41 deaths and 109 right censored subjects. The aim of this application is to examine the effects of the longitudinal dynamics and baseline covariates on subjects who died without experiencing any recurrence.

Figure 1. Individual profiles of the longitudinal measurements (right) and the Kaplan-Meier estimate of the survival function among patients with no recurrence (left).

Table 1. Simulation Study 1 results with correlated random effects, from 1000 replications with 200 and 500 subjects.
2018-12-15T18:41:05.112Z
2018-08-24T00:00:00.000
{ "year": 2018, "sha1": "cfb412833b0a73481cb01bfae32eb77a2ed2d7e6", "oa_license": "CCBY", "oa_url": "https://www.ccsenet.org/journal/index.php/mas/article/download/76821/42802", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "cfb412833b0a73481cb01bfae32eb77a2ed2d7e6", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Computer Science" ] }
244593153
pes2o/s2orc
v3-fos-license
Shared action: An existential phenomenological account Drawing on recent phenomenological discussions of collective intentionality and existential phenomenological accounts of agency, this article proposes a novel interpretation of shared action. First, I argue that we should understand action on the basis of how an environment pre-reflectively solicits agents to behave based on (a) the affordances or goals inflected by their abilities and dispositions and (b) their self-referential commitment to a project that is furthered by these affordances. Second, I show that this definition of action is sufficiently flexible to account for not only individual action (in which both (a) and (b) refer only to an individual) but also several distinct subtypes of shared action. My thesis is that behaviour counts as shared action if and only if it is caused by a solicitation in which either (a) the goals, or (b) the commitments, or both (a) goals and (b) commitments are joint, i.e., depend on several individuals. We thereby get three distinct subtypes of shared actions: (i) jointly coordinated individually committed action, (ii) individually coordinated jointly committed action, and (iii) jointly coordinated jointly committed action.

immediately obvious to the co-agents so that they are aware that they act together rather than individually. It is widely acknowledged that our capacity for shared action is a central condition of possibility of human civilisation at large (e.g. Searle, 1995, 2010; Tomasello, 2014; Tomasello & Carpenter, 2007), yet it is conceptually unclear what exactly shared action is and how it differs from individual action. One of the reasons for this incertitude is, or so I shall argue, that the dominant approaches to shared action and intentions (Bratman, 1999, 2013; Gilbert, 1990, 2013; Searle, 1990, 2010) presuppose an overly intellectualist model of action that is largely at odds with the phenomenology of action, i.e., with how minded agents typically understand themselves and what they do in everyday activities. Given that shared action requires that we are aware that we act together, we must get the phenomenology right. Recently, some work has been put forth that aims to correct the intellectualism of the dominant approaches (e.g. Schmid, 2014a, b, 2018; Zahavi, 2015a, b, 2018, 2019), but I will argue that these approaches do not go far enough in their phenomenological reinterpretations of shared agency. Instead, I will draw upon the model of agency advanced by existential phenomenologists like Heidegger (1962), Merleau-Ponty (2012), and Dreyfus (2014) to spell out the phenomenological structure of shared action. I argue that a specific form of agency, what I call pre-reflective agency, is best explained as the way in which an environment solicits us to act. As I use the terms, solicitations differ from affordances insofar as affordances can be inert. Solicitations, in contrast, are affordances that prompt actions because the relevant agent is committed to some underlying project that is furthered through these affordances. Extending this line of thought, I will argue that some solicitations prompt shared action. They do so either because they solicit several agents to cooperate (that is, to act on shared affordances) or because they solicit an agent or several agents to act to further a (joint) project (that is, to act due to a (joint) commitment).
I first outline some of the problems characteristic of contemporary approaches to shared action (Section 1), and then I suggest that these problems can be avoided if we construct our model of shared action on the account of pre-reflective action found in existential phenomenology rather than the standard account of reflective action (Section 2). Since, however, the phenomenology of action is typically formulated in individualistic terms, I combine it with the idea of plural pre-reflective self-awareness to show how solicitations can be given to a group rather than an individual (Section 3). I then analyse solicitations in terms of (a) affordances inflected by someone's abilities and dispositions and (b) someone's self-referential commitment to a project furthered by these affordances (or, in short, in terms of (a) goals and (b) commitments) (Section 4). Drawing on this analysis, I construct a phenomenologically plausible taxonomy of individual and shared actions that incorporates both teleological and normative elements of pre-reflective shared actions (Section 5).

1 What is shared action? Some preliminaries

I take it that a successful account of shared action must satisfy the following three conditions:²

(1) The plurality condition: Shared action requires multiple ontologically similar agents.

(2) The coalescence condition: Shared action requires that the plurality of agents form a collective.

(3) The awareness condition: Shared action requires that the involved agents are aware of what they are doing.

Footnote 2: A similar idea can be found in Searle (1990, p. 414) and is formulated as a list of desiderata in Mathieson (2005) and Walsh (2019). Some scholars assume that the coalescence condition and the awareness condition only obtain when there is common knowledge (or mutual belief) between the agents, but as Kirk Ludwig has argued, this is too demanding, since one can arguably engage in shared action with others even if one does not know or believe that others will do their part but, for instance, simply hopes that they will (Ludwig, 2016, pp. 219-221). I do not think that we can do away with these conditions, but I agree that the approaches criticised by Ludwig are too demanding. In the following, I will argue in favor of a non-intellectualist way of reconciling the coalescence condition and the awareness condition by appealing to the way in which some forms of pre-reflective action tacitly assume that others will do their part.

These conditions provide a good starting point because if we leave out one of the conditions, we contradict our basic intuition of what shared action is. The combination of plurality and awareness without coalescence wrongfully takes aggregated individual intentions, e.g. a group of people sitting on the bus minding their own business, to be a form of shared action. If we combine plurality and coalescence without the awareness condition, we wrongfully come to include many other activities than just actions. For instance, we might have a plurality of agents who have formed a collective (say, a book club), yet only some of their activity will count as shared action. It might be true for all members of the book club that they inadvertently shake their legs under the table, but this activity does not count as a shared action, since the agent or agents must be aware of what they are doing in a specific way in order for it to count as an action. Lastly, the coalescence condition and the awareness condition without the plurality condition lead to something like a hive mind, i.e., several discrete bodies linked together in a single consciousness.

Since aggregated individual intentions fail to fulfil the coalescence condition, and since the awareness condition requires that we locate whatever glue makes our individual actions coalesce into a single shared action immanently in the minds of the co-agents, it seems that any account of shared action must show that the intentions of the co-agents are somehow interdependent. What constitutes this interdependence? Let us take a closer look at two of the most influential accounts, Michael Bratman's and Margaret Gilbert's. Bratman proposes that this interdependence requires you and me to intend that we J together and that we are mutually responsive to each other by tracking each other's intentions and actions (cf. Bratman, 2013, pp. 78-84).
More specifically, Bratman argues that we intend J if and only if:

(1) (a) I intend that we J and (b) you intend that we J;

(2) I intend that we J in accordance with and because of (1)(a), (1)(b), and meshing subplans of (1)(a) and (1)(b); you intend that we J in accordance with and because of (1)(a), (1)(b), and meshing subplans of (1)(a) and (1)(b);

(3) (1) and (2) are common knowledge between us. (Bratman, 1999, p. 131)

This account is "reductive in spirit" (Bratman, 1999, p. 108) because it reduces shared actions and intentions to interdependent individual actions and intentions. These are interdependent because each agent has the collective intention as its object while being responsive to the other agent and while operating under conditions of common knowledge.

Margaret Gilbert argues against this reductionism that the coalescence condition can only be satisfied by a plural subject. A plural subject comes about when two or more people express their readiness to undertake a joint commitment, e.g. to go for a walk (Gilbert, 1990). This commits the individuals "to emulate as best they can a single body" espousing a goal (Gilbert, 2013, p. 33). The gist of Gilbert's argument is that once the relevant individuals express their readiness to form a plural subject and this is common knowledge between them, they each have a reason to behave in a specific way. Whereas Bratman takes a joint goal to suffice, Gilbert stresses that individuals only coalesce when they are tied together normatively. Recalling that they expressed their readiness to undertake the joint commitment (through, for instance, an explicit agreement), each member of the plural subject is entitled to rebuke others if they violate the joint commitment, and, in contrast to personal commitments, these commitments cannot be rescinded unilaterally. For instance, in walking together, each participant can blame the other for walking too fast, for not showing up on time, and so on.

On the face of it, these accounts are quite different, as they locate the coalescence in different elements of the intention. Bratman focuses mainly on the intentional object while Gilbert focuses on the intentional subject. In addition, they disagree on whether shared action is teleological (Bratman) or essentially normative (Gilbert). However, their accounts also have certain similarities by virtue of which, I contend, they both face three similar problems. The first problem, which I'll call the genetic problem, concerns the transition from individual intentions to collective intentions.
In Bratman's case, the individual intentions of (1)(a) and (1)(b) have "we J" as their intentional object, but this means that the individuals already possess an understanding of what they can do collectively prior to establishing the interdependence ((1)-(3)) that supposedly makes shared intentions possible (cf. Petersson, 2007).³ Gilbert, on her part, grounds collective intentionality in joint commitments and argues that joint commitments are generated when individuals communicate their readiness to undertake such a commitment. Some argue that communication is itself an instance of collective intentionality, and if this is the case, Gilbert's account leads to an infinite regress, where a joint commitment presupposes communication, which, in turn, presupposes a joint commitment, and so forth (cf. Schmid, 2009; Schweikard & Schmid, 2013). Thus, the transition from individual intentions to collective intentions constitutes a problem for both Bratman and Gilbert.

The second problem, which I call the taxonomy problem, concerns the question whether Bratman and Gilbert target the same phenomena. The disagreement is often described as a contradiction between theoretically incompatible positions, but perhaps Gilbert and Bratman simply describe different phenomena, e.g. normative vs. teleological types of interaction. If this is the case, the problem is no longer to provide one simple formula for all types of shared actions and intentions but rather to come up with a suitably nuanced taxonomy capable of integrating their respective target phenomena.

The third problem, the intellectualist problem, concerns how Bratman and Gilbert account for the awareness condition. They disagree on whether shared action requires that we normatively rely on or non-normatively predict the behaviour of others, but both argue that the awareness condition only obtains under conditions of common knowledge (e.g. Bratman, 2013, pp. 57-59; Gilbert, 1992, pp. 189-191, 2013). In addition, they both subscribe to a fairly standard model of agency according to which a piece of behaviour counts as action only if it is guided by certain occurrent mental states. For Gilbert, for instance, when joint commitments come into conflict with other desires on our part, we must actively remind ourselves of our obligation(s) to the other members of the plural subject. Some have questioned the adequacy of this model by distinguishing different kinds of self- and other-awareness. Phenomenologists in particular argue that an adequate understanding of shared actions and we-experiences in general requires that we cash out the awareness condition in pre-reflective terms (e.g. Schmid, 2014a, 2018; Walsh, 2019; Zahavi, 2015a, 2019). Similarly, it can be argued from an action-theoretic point of view that the relation between actions and mental states such as intentions, beliefs, and desires is far more elusive than Gilbert and Bratman assume. In this vein, existential phenomenologists like Heidegger (1962), Merleau-Ponty (2012), and Dreyfus (2014) claim that actions do not involve an awareness of certain mental states
and that the dominant approaches to the philosophy of action commit an intellectualist error that flies in the face of everyday experiences.

Footnote 3: For Bratman, to have an intention to do something is to plan to do it in the sense of settling on a goal and deliberating on the means to achieve it (Bratman, 2013, p. 15). In other words, there is no circularity in saying that we intend J only if you and I each intend that we J, since the instance of "we intend J" that appears in the analysandum refers to us having reflectively endorsed and undertaken (that is, us having planned to) J, while the "we J" that appears in the analysans refers to a joint activity without this reflective endorsement. Formulated in this way, Bratman clearly presupposes that we are already aware of possible joint activities prior to forming a full-blown shared plan. In emphasizing pre-reflective rather than reflective action (see next section), I want to pose the question: How are we aware of what we can do prior to our reflection or deliberation?

Pre-reflective and reflective action

In the philosophy of action, a form of intellectualism is often introduced by the need to distinguish mere bodily happenings from actions. It counts as an action if I raise my arm when dancing in a nightclub, but not when my arm is raised because someone else controls it through an implanted microchip. When discussing individual action, it is typically argued that bodily movement counts as action only if the movement is justified or caused by a reason, i.e., if it stands in a particular relation to certain mental states such as desires and beliefs. If we try to expand this conception of individual agency to also cover cases of shared agency, the number of mental states that must be entertained by the co-agents multiplies. For Bratman, for instance, the shared intention that we J involves not only that I desire that I do my part of J and that I believe that I can do so by undertaking certain subplans, but also that I intend that you do your part of J (e.g., that you have the appropriate desires and beliefs), and that this is common knowledge between us. Gilbert argues that it must be common knowledge between the participants in the plural subject that they are all similarly committed to espousing a goal and that they are all committed to taking the individual steps necessary to reach this goal. In order to be plurally committed, I must presumably know what the goal is, believe that certain steps will help us obtain that goal, be aware that I have an obligation to help achieve this goal, and I must know that my co-agents also have the relevant knowledge, beliefs, and awareness of their obligations, including knowledge about my knowledge, beliefs, and so on. In short, things quickly get extremely complicated, and there are reasons to question whether this model of agency provides a plausible explanation of all actions.

First, the resulting account of shared action, with its proliferation of mental states, seems to be overly demanding, since even young children are capable of engaging in shared action. Second, and even more fundamentally, it is questionable that we are consciously aware of the mental states that presumably guide our actions in the way that standard philosophy of action suggests. In this vein, phenomenologists have argued that we often engage in intentional activity without being aware of the desires and beliefs that supposedly distinguish our actions from mere bodily movements. As Heidegger notes, we often open doors without ever thinking about their handles (Heidegger, 1962, p. 96). Similarly, to take an example from Dreyfus (2014, p.
84), Larry Bird reports that he would often pass the basketball to his teammates and only realise that he had passed it a moment later. In both cases, the agents have no conscious representation of the reasons that cause or justify their actions, yet it seems highly implausible to equate their activity with the mere bodily movement of the kind that could have been induced by an implanted microchip. This suggests that there is an intermediary level between bodily happenings and the type of actions described in standard philosophy of action. Let us call this intermediary kind of activity pre-reflective action.

To get a first approximation of what pre-reflective action is, we can contrast it with bodily happenings, on the one hand, and reflective actions, on the other hand. Pre-reflective action is distinct from bodily happenings since it requires that we are aware of ourselves as the ones performing the action in question. Yet, in contrast to reflective action, pre-reflective action does not require that we consciously represent our desired goals, our beliefs about how to achieve them, and, in cases of shared action, our knowledge about our co-agents. In reflective actions, we are hence aware that our actions are guided by certain identifiable mental states. In contrast, pre-reflective (or "fluid") actions are, to borrow a formulation from Mark Wrathall, "experienced, not as the deliberative outcome of my aims and desires and beliefs, but as being drawn out of me directly and spontaneously by the particular features of the situation, without the mediation of occurrent mental or psychological states or acts" (Wrathall, 2014, p. 195). In pre-reflective action, I respond to the solicitations of my environment without reflecting on what I do. Rather than feeling that our mental states exercise control over our bodily movements, "we experience the situation as drawing the action out of us" (Dreyfus, 2014, p. 82).

As an intermediary activity, the concept of pre-reflective action might seem rather unstable. Coming from the direction of reflective action, we might ask what it is to "consciously represent" certain mental states. John Searle has, for instance, argued that an agent might have a representational attitude (that is, an attitude with identifiable conditions of satisfaction that can be stated propositionally) without, however, consciously thinking a linguistic propositional thought (Searle, 2001, p. 277f). On a more relaxed reading, one might thus argue that all it takes for behaviour to count as action is that the agent is able to declare what she is doing as well as the means necessary to do it. However, there is evidence that even this relaxed reading of reflective action does not do justice to many everyday activities. It is often reported by, for instance, expert athletes and musicians that they 'go into flow' in such a way that they cannot explicitly state the steps they undertake or the conditions of satisfaction that make them succeed (for discussions, see Dreyfus & Dreyfus, 1988; Høffding, 2019). As Dreyfus once put it, in pre-reflective action "my absorbed response must lower a tension without my knowing in advance how to reach equilibrium or what it would feel like to be there" (Dreyfus, 2014, p. 150). This suggests that some forms of action cannot be represented or subjected to reflection while we are performing them. But this opposition between action and representation is not only characteristic of the very moment of action.
Some forms of intentional activity seem to resist explication altogether. As a case in point, consider the phenomenon known as "the twisties," in which a gymnast suddenly forgets how to do a twist. This happened to the US gymnast Simone Biles during the 2020 Olympics in Tokyo. Presumably, the cause of the twisties is that the gymnast, perhaps due to the pressure of a big competition, comes to reflect on what is normally done pre-reflectively. As Biles later reported on social media, her mind and body were somehow out of sync, and from the reflective stance brought about by her sudden lack of confidence in her usual bodily and pre-reflective action, she could no longer "fathom" or "comprehend" what it was to do a twist. If this is correct, we would be hard-pressed to say that an expert performer like Biles would have "beliefs" about what it is to do a twist in anything but a metaphorical sense. For her, the intentional activity of doing a twist is disturbed by reflection, and even afterwards, when reflecting on what went wrong and how she usually does a twist, her pre-reflective action seems to resist reflection and explication altogether.

Similarly, it is quite plausible that many shared actions can only take place if they are not disturbed by conscious deliberation and reflection. Consider, for instance, two people dancing "freestyle" in a nightclub. Some of their movements are likely to be consciously represented, as when one dancer thinks to himself that in four beats, he will do a spin. Yet, most of their movements will be spontaneous and intuitive. When the dancers are "in the zone," they do not know how they place their limbs; they simply respond to the music and to each other fluidly and without thinking. Were one of them to reflect on their own movements or the movements of the other dancer, he would presumably feel out of sync not only with his own body but also with the other dancer and with the music. This constitutes an intersubjective version of the twisties.

Coming from the other direction, one might want to press the distinction between pre-reflective actions and bodily happenings. If, as we saw Wrathall claim above, pre-reflective action is "drawn out of me directly and spontaneously by the particular features of the situation," how is that any different from a mere reflex, e.g., when my lower leg kicks in response to the doctor tapping my patellar tendon? Dreyfus occasionally defends the extreme view that pre-reflective action lacks all self-awareness (1991, p. 67), but this, I believe, erases the distinction between bodily happenings and pre-reflective actions by making pre-reflective agents out to be a form of well-functioning zombies. In contrast, I will argue that the key to this question is that in pre-reflective action we have a special kind of awareness of ourselves as the ones performing the pre-reflective action, although we must be careful not to assume that this self-awareness must be explained in intellectually demanding terms such as those of desire, belief, and knowledge. In other words, if pre-reflective agents are not simply zombies, there must be some measure of success that is immanent to pre-reflective actions. The pre-reflective agent must be aware of him- or herself as successfully performing the relevant action. In the paradigm case, we must be aware not just that a bodily movement is caused by certain environmental features; instead, we must be aware of ourselves as those responding to a given solicitation.
To get a clearer view of this immanent measure of success and, especially, how it relates to shared and not just individual action, we must discuss the nature of pre-reflective self-awareness in more detail.

Self-awareness in action

Phenomenological theories of action tend to focus on individual pre-reflective action, so we need to show that pre-reflective attitudes can refer to groups and, thus, help us explain the phenomenon of shared action. In this regard, Hans Bernhard Schmid's account of plural self-awareness looks particularly promising. His view is, roughly, that an attitude is collective iff we are plurally self-aware of it as ours. The plural self-awareness thesis inscribes our coalescence into the very fabric of intentionality in a way that does not rely on us being thematically oriented towards each other or on us holding each other responsible in light of communicatively instituted commitments. In line with the phenomenological tradition, Schmid argues that self-awareness does not arise after a subject has reflected on itself but is rather an immanent feature of an intentional act, so that whenever the subject directs itself towards some object in the world it has an implicit awareness of itself as having that experience and being thus directed. In the case of plural self-awareness, we have a pre-reflective and non-thematic awareness that certain attitudes (e.g. perceptions) are ours, collectively (rather than mine, individually, or yours and mine, distributively) (Schmid, 2014a, p. 18). Consider, for instance, the difference between me watching a beautiful sunset while walking alone and us watching a beautiful sunset while walking together. Schmid's claim is that in the latter case we are plurally self-aware of watching the sunset together in a way that is phenomenally obvious to us and does not require that we reflect on each other's presence.

More specifically, Schmid argues that three features of our pre-reflective singular self-awareness can be translated into the plural: (i) In terms of ownership, plural self-awareness is "the basic way in which (…) collective intentions or beliefs are transparent to ourselves as ours." It is what "formally unifies our social mind" (Schmid, 2014a, p. 17). (ii) In terms of perspective, "[singular] [s]elf-awareness draws a distinction between the mind, as a formally unified whole, from the world" (Schmid, 2014a, p. 15), and, similarly, the group has "something like an integrated shared perspective" that involves an awareness of "the difference between how 'we,' together, look at things, and the things as they are" (Schmid, 2014a, p. 17). (iii) In terms of commitment, Schmid argues that both singular and plural self-awareness commit one to "minimal consistency" (Schmid, 2014a, p. 16). In the plural case, this becomes a "constant normative pressure for coherence between the attitudes of interacting individuals" (Schmid, 2014a, p. 18).⁴

The promise of the plural self-awareness thesis is that it attempts to combine the awareness condition and the coalescence condition in a pre-reflective way. Plural self-awareness seems to be compatible with the idea that some shared actions are pre-reflective because plural self-awareness enables us to have shared attitudes without us being thematically aware of our co-agents and without requiring intellectually demanding forms of common knowledge.
If, in pre-reflective action, I am aware of myself as being drawn to act by the situation, we might also occasionally be plurally self-aware that we are drawn to act by the situation. Further, Schmid suggests that plural self-awareness is irreducible to and perhaps even developmentally and explanatorily prior to singular self-awareness (Schmid, 2005, 2014a). This relates to what I called the genetic problem, namely, how to account for the emergence of collective intentions out of presumably basic individual intentions without presupposing that the relevant individuals are already capable of seeing the world from a shared perspective. Schmid considers this approach to be wrong-headed and rejects the assumption that collective intentions are somehow built out of individual intentions; instead, he claims that our capacity to see the world from a shared perspective is explanatorily basic.

Despite this promise, an ambiguity of Schmid's account throws doubt on the utility of the notion of plural self-awareness for conceptualising shared action. More specifically, Schmid glosses over the fact that the phenomenological tradition offers not one but two accounts of pre-reflective self-awareness. Roughly, the first type is associated with what is sometimes called "transcendental phenomenology," while the second type is associated with what is sometimes called "existential phenomenology."⁵ As it turns out, it matters a great deal on which type of singular self-awareness we choose to construct plural self-awareness.⁶

Atomistic singular self-awareness is Husserlian in spirit. It names the formal unity of the mind afforded by a transcendental subject that unites distinct experiences in a single stream of consciousness. On this view, self-awareness is not something added to the experience but is, rather, an intrinsic feature of the experience itself; it is what makes an experience an experience for me. Zahavi calls it the minimal or experiential self, by which he means to suggest that this type of self-awareness is formal insofar as it says nothing about the personal characteristics of an individual. This type of self-awareness is atomistic since it is not "constitutively dependent upon social interaction" (Zahavi, 2014, p. 95). Indeed, atomistic singular self-awareness is formal in the sense that it is entirely independent of whatever the subject directs itself towards; it is, rather, a permanent feature of the subject's experiential life.

Holistic singular self-awareness is the alternative type of self-awareness advocated by existential philosophers like Heidegger. For this reason, I will sometimes call it "the existential self." In contrast to the minimal self, the existential self targets the pre-reflective sense of self that is intrinsic to our practical engagement with the world and with other people. Like its atomistic counterpart, this self-awareness is given non-inferentially and non-observationally. Yet, holistic singular self-awareness is not formal. Rather, it is the sense of self that is intrinsically bound to how concrete situations appear to us in light of our everyday projects and engagement with other people. It is the pre-reflective self-awareness "reflected back to me" based on how the world solicits me to act.
Formally put, holistic singular self-awareness is the sense of self inherent to how a social and physical environment solicits actions based on (a) the affordances inflected by the individual's abilities and dispositions and (b) the individual's self-referential commitment to a project that is furthered through these affordances. In this definition, (a) designates that environments afford different things from different agents based on the agent's know-how. Affordances are, hence, neither objective nor subjective but a correlation between the objective relations available in the environment and the abilities and dispositions of the agent. (b) refers to the fact that not all affordances are salient. According to Heidegger, what accounts for this fact is how the agent's self-understanding ties in with his or her activities. For instance, teacher-affordances are salient to me if I am committed to the project of teaching. Heidegger calls this the "for-the-sake-of," thereby suggesting that in order for something to be significant or salient an agent must be doing it for the sake of some particular self-understanding (Heidegger, 1962, p. 114ff). For-the-sake-of relations tie agents to the affordances of their environment because the agent's practical self-awareness as this or that determines which set of in-order-to's, which practical possibilities, show up as salient rather than as mere affordances with no normative force. The agent must be self-referentially committed to some project for an environment to solicit actions. This is not necessarily a deeply personal type of commitment. Sometimes the commitment underlies trivial cases, like an agent being drawn to the chips in the buffet rather than the salad. Yet, this trivial solicitation can only get a grip on the agent if he or she is committed to some kind of project, say, the project of wanting to taste deliciously deep-fried food. In the words of Wrathall, "[w]hat makes me me and you you (…) is that each of us is, in virtue of our projects, a different way of 'polarizing' (…) those aspects of a situation that guide action." To be an agent is to be "a particular style of polarizing the affordances of a situation into particular solicitations to act" (Wrathall, 2017, p. 229). Although rather minimal, this kind of polarisation necessarily requires commitments because the agent cares about the activity in a way that can succeed or fail, e.g., if the chips turn out to be soggy and underseasoned. By themselves affordances are inert; they only become solicitations once someone cares about or commits to them. Borrowing a few terms from Steven Crowell, we might say that goals and affordances are "telic," while commitments are an "atelic" underpinning that renders these goals and affordances worthwhile to someone (Crowell, 2013, p. 273). Such polarising commitments are self-referential because they resist further explanation. I am drawn to the chips because I simply care about tasting them. "Self-referential" does not mean, however, that the agent deliberately chooses his or her commitments. To the contrary, our commitments are part and parcel of the solicitations. Indeed, in most cases, we barely take notice of our commitment, as we are too busy pursuing the teleological steps of our project (getting to the buffet, picking up a plate, and scooping up a handful of chips…).
Nonetheless, it makes sense to say that we are non-thematically aware of our commitments, since they are a constituent feature of the teleological steps that thematically occupy our attention and since they can be brought to the forefront of our attention if, for instance, our project fails. This type of self-awareness is holistic because it names a non-thematic awareness of oneself as normatively engaged with an environment consisting of worldly objects and other people. A non-thematic sense of self, as committed to this or that project or self-understanding, is reflected back to us by the solicitations that draw us in.

The distinction between atomistic and holistic self-awareness reveals two problems for the attempt to use Schmid's account of plural self-awareness to grasp the nature of pre-reflective shared action. First, since atomistic self-awareness is a permanent feature of our experiential life, it cannot help us identify the self-awareness necessary to distinguish pre-reflective action from bodily happenings. We cannot experience the failure of atomistic self-awareness, since atomistic self-awareness is a necessary condition for having an experience in the first place. Pre-reflective action implies an immanent measure of success, as I have argued, and since we cannot experience the success or failure of atomistic self-awareness, it cannot help us distinguish pre-reflective action from bodily happenings. If I see my arm soar into the air because it is triggered by the implanted microchip, this is still an experience for me. My atomistic self-awareness remains the same. From the perspective of holistic self-awareness, however, things look very different. On this account, I would not recognise the activity as mine if, for instance, I am unaware of any affordances in response to which it would make sense for me to raise my arm. Here, the activity would fail to satisfy one of the immanent measures of success characteristic of pre-reflective action, namely, condition (a) above. We can also imagine another case, akin to alien hand syndrome, where my left hand, when triggered by the microchip, gets a 'mind of its own' in the sense that it responds purposefully to affordances in my immediate environment (such as unbuttoning my shirt), but in this case, my activity does not count as pre-reflective action because the activity does not satisfy the other immanent measure of success, namely, condition (b), according to which I must be self-referentially committed to a project that is furthered through the affordances to which my activity responds. In this example, I simply do not recognise the purposes and responses of the alien hand as part of one of my projects and, thus, I am not aware of myself as successfully performing the activity in question.⁷

Second, Schmid fails to recognise that only one form of pre-reflective self-awareness can be pluralised in shared action. As noted above, atomistic self-awareness is a permanent feature of the subject's experiential life; yet, in accounting for pre-reflective shared action, we need to show how an environment occasionally prompts us while it, in other circumstances, prompts me to act in a certain way. The self-awareness intrinsic to shared action cannot, in other words, be formal in the sense described above, but must rather be "reflected back to us" from a specific engagement with the world. Schmid does, at times, acknowledge that social relations and plural self-awareness are transitory (Schmid, 2014a, p.
22), yet he seems to consider plural self-awareness to be analogous to atomistic singular self-awareness when he claims that singular self-awareness "establishes something like the formal unity of mind" and "plays the role of Kant's 'transcendental apperception'" (Schmid, 2014a, p. 15). Like Zahavi's minimal self, Schmid's singular self-awareness is the unity of a stream of consciousness or the immanence of consciousness to itself. According to this analogy, Schmid's plural self-awareness "formally unifies our social mind" (Schmid, 2014a, p. 17), that is, independently of whatever is experienced. I contend, on the other hand, that 'our social mind' must be unified by the solicitations that prompt us to respond. In short, my suggestion is, first, that holistic self-awareness helps us explain the nature of pre-reflective action and, second, that a plural version of holistic self-awareness will help us explain the nature of pre-reflective shared action. Extrapolating from the previous definition, we get the following (preliminary) definition of this type of plural self-awareness: Plural self-awareness is the sense of self inherent to how a social and physical environment solicits actions based on (a) the affordances inflected by a group's abilities and dispositions and (b) the group's self-referential commitment to a project that is furthered by these affordances. The upshot of this redefinition is that it retains the main advantage of Schmid's original proposal by not assuming shared action to involve the intellectually demanding representation of mental states and that it, in addition, allows us to account for the transience of plural self-awareness by way of our relations to other people and our environment while remaining true to the phenomenology of pre-reflective action.

Joint goals and joint commitments

I will now argue that the holistic model's way of tying together self-awareness and action provides a highly nuanced account of shared actions that effectively integrates both teleological and normative features of shared action. In the next section, I will spell this out in a taxonomy of individual and shared actions, but first we have to consider, in more detail, how the idea that the environment solicits actions from an agent can be translated from individual actions to shared actions. I suggested that a solicitation requires two elements: (a) the affordances inflected by the agent's (or agents') suite of abilities and dispositions and (b) the agent's (or agents') self-referential commitment to a project that is furthered through these affordances. With Heidegger we can also call these elements (a) in-order-to's and (b) for-the-sake-of's. When it comes to (a) affordances, we should first note that, for human agents, environmental affordances are inherently connected to the various relations that connect us to other people. To take a Heideggerian example, the hammer affords hammering because the craftsman has been commissioned by someone to make the product. Here the environment affords something in light of a backgrounded understanding of the practical possibilities of someone else. Similarly, when two people are present in the same immediate environment, each agent pre-reflectively tracks and responds to the behaviour of the other. For instance, I pre-reflectively step aside in order for you to pass me in the narrow hallway. Our immediate understanding of our environment is thus already saturated by our non-thematic understanding of what others can and will do.
This pre-reflective tracking and responsiveness will sometimes coalesce into joint affordances. In such cases, something appears as an affordance for us rather than just for me. This happens, for instance, when an environment affords something that I could not have done alone. Imagine, for instance, that you participate in the Black Lives Matter protest in The Centre in Bristol. A statue of the slave trader Edward Colston towers above this public space. You are enraged by this commemoration, and suddenly you see that someone has tied a rope around the statue. With a few of your fellow protesters, you start pulling the rope, ultimately toppling the statue. The environment solicits you to act together in light of the group's abilities and dispositions in a way that it simply would not do if you had walked past the statue on your own. It is more controversial whether (b) the commitments or for-the-sake-of's can be put in the plural. How can a group self-referentially commit to some of these affordances? It is often assumed that for-the-sake-of's are individual, but I want to make the case that self-referential commitments can be joint in the sense that my self-referential or atelic commitment to a project constitutively depends on your being similarly committed. 8 To see this, let's take an example from Heidegger that explicitly describes joint goals but which, with a bit of modification, can also shed light on joint commitments. Heidegger describes two campers, where one chops wood while the other peels potatoes: They are with each other-and not just because they are in the vicinity of each other. They are with one another, although they are occupied with different things, yet for the same purpose, namely, with the preparation of the meal and, further, with taking care of their stay in the cabin. (Heidegger, 1996, p. 91) On the face of it, this looks like Bratman's teleological account. The two campers engage in shared action because they intend the same goal, namely, the preparation of the meal and the stay at the cabin, and they have meshing subplans. For Heidegger, however, the two campers are oriented towards their joint goal pre-reflectively, whereas Bratman construes this as a deliberative process. Peeling potatoes is significant in order to make the meal, which is significant in order to stay at the cabin, but the campers do not actually think about their joint goal. It is simply part of the intentional background that guides their actions. If we imagine that one camper had a cold and cancelled, but the other camper went on the trip anyway, he could still unreflectively engage in chopping wood and thus his state of mind, understood internalistically, would remain the same. Yet, Heidegger would insist that without the tacit reference to his friend, the activity would no longer make sense in the same way since the non-thematic goal would no longer be a joint goal but now only an individual goal. However, we must also account for the for-the-sake-of that, ex hypothesi, affects how the environment of wood and potatoes solicits actions from the campers. Suppose that the campers are a father and his teenage son. Father and son have planned their camping trip a few weeks in advance but in the days before their departure, the teenage son becomes inexplicably moody. The son is conscientious and does not try to bail on the camping trip, although he complains a lot.
During the trip, he constantly listens to angry music with his headphones, and he keeps a gloomy look on his face while peeling the potatoes. Do father and son genuinely coalesce in shared action? They did, of course, coordinate their actions in pursuit of the joint goal of camping. However, another sense of the we seems missing. Despite their coordination and their joint goal, father and son are to some extent performing their tasks next to rather than with each other. Neither Bratman nor Gilbert sees any substantial difference between these two examples. Bratman would say that each intends that they go camping, that they have correctly meshing subplans, and that they operate under conditions of common knowledge. For Gilbert, the decisive part is that father and son constituted a plural subject when they expressed their initial readiness to go on the camping trip and that they emulated a single body in doing so. On this account, the attitude of the sulky teenager is beyond rebuke, and, tellingly, Gilbert maintains that joint commitments hold even under coercive circumstances (Gilbert, 1993). In contrast, I believe that there is a significant difference between the two cases and that the latter case misses a crucial feature of genuine joint action even though a joint goal is intended and achieved. In brief, the difference consists in how the father and son relate to their joint goal. What is similar between the two cases is the set of in-order-to's and what differs are the for-the-sake-of's. The happy campers have a joint goal and a joint commitment. Father and son go camping as an end-in-itself, as we might say with reference to Kant (cf. Heidegger, 1982, p. 170). They go camping for the sake of doing something together, and the affordances of the situation prompt them to act in a specific way only in light of this joint commitment. The father to the sulky teenager also intends to go camping for the sake of doing something with his son. Yet, the teenager does not share this commitment. He is motivated by a different for-the-sake-of than his father. Perhaps the son simply goes camping because he does not want to get blamed for cancelling the trip. In any case, the son pursues the joint goal in light of an individual rather than joint commitment. For the father, this means that his for-the-sake-of breaks down as it constitutively depends on being shared by the son, and this alters what the environment solicits from him. The possibility of lighting a fire is now less salient than, say, the possibility of going to bed early. The trip is a failure for the father, not because father and son did not carry out the joint goal that they had agreed upon, but because he tried to do something for the sake of doing something together with his son and, alas, his son did not share this commitment. 9 This shows that the existential joint commitment is not tantamount to a reflective endorsement; it is pre-reflective in the sense that it is an integral feature of how a shared environment solicits people to respond. It is the condition in light of which environmental affordances prompt us to act. As a commitment, it retains a normative element, however, since our project can succeed or fail in a way that is independent of the mere teleology of the action. The sulky teenager shows that the success or failure of shared action is not only measured by whether we achieve the goals that we aim for but also by whether others on whom our commitment depends turn out to be similarly committed.
In contrast to Schmid, who understands joint commitment as the "constant normative pressure for coherence between the attitudes of interacting individuals" (Schmid, 2014a, p. 18), the existential account of joint commitments does not concern coherence between attitudes as such but the fact that we sometimes care about things because we simply assume this care to be shared by others. 10 As the disappointed father might complain: "I just wanted us to do something together for once!". It is central to Gilbert's reflective concept of joint commitments that they provide us with obligations and entitlements. For her, joint commitments are the battleground on which we coerce others to do their parts by invoking the rights and duties that we conferred upon each other when we expressed our readiness to undertake a joint commitment. Gilbertian joint commitments are thus in no way opposed to reflection. In fact, they come most fully into view when we explicitly remind each other and ourselves that we are jointly committed to do something as a single body. Existential joint commitments are very different, for the father only feels the need to explicitly remind the son of their agreement to go camping because their existential joint commitment has already gone awry. When pre-reflective shared action succeeds, things go smoothly and we don't feel the need for overt normative exchanges. This need only arises because the campsite no longer solicits father and son to spend quality time with each other. Thus, when the father explicitly reminds his son-and perhaps himself-that they agreed to go camping and have a good time, their pre-reflective action has already been replaced by a reflective substitute in which we recall and represent our intentions, beliefs, common knowledge, obligations, and so on.

Footnote 9 (continued): It is important to distinguish between the joint possibility or goal that the individuals try to actualise, on the one hand, and whether they do so as a group or as individuals, on the other hand. I think, however, that it is misleading to say that the individuals must act for the sake of the group's wellbeing or flourishing since this seems to require a prolonged concern for the group and that we entertain certain beliefs about the desires and goals of the group that lie beyond the concrete goal currently being pursued. Instead, I propose that the joint for-the-sake-of requires that the individuals are committed to the project only if the others are similarly committed.

Footnote 10: To put the point differently, existential joint commitments do not concern the coherence or consistency of our attitudes due to the fact that when we act pre-reflectively, we do not question whether or not the attitudes of our co-agents cohere with our own. We simply act on the tacit assumption that they do. The question whether our attitudes do in fact cohere only arises when pre-reflective shared action breaks down and we enter a reflective mode. This means that each of us might experience something as a shared action to which we are jointly committed even if it later turns out that we were wrong to tacitly assume others to be thus committed. Joint commitments are intrinsic to the first-person perspective but fallible. As Heidegger once noted, in a passage where he uses 'decision' [Entscheidung] to refer to the for-the-sake-of: "no individual among you can in any manner ascertain about how any other individual has decided" (Heidegger, 2009, p. 51).
At this point of the camping trip, I imagine that things can go one of two ways. Either the reproach is successful and the son tells his father what has been bothering him, after which father and son can finally enjoy their trip. In this case, the shared action becomes, once again, pre-reflective. Or father and son sit in awkward silence for the rest of the night, deliberately forcing themselves to remain seated although the fire no longer solicits them to sit there, although the fire has lost its magic. In this case, the shared action remains reflective.

A taxonomy of individual and shared actions

I suggested earlier that something counts as shared action when an environment solicits behaviour based on (a) the affordances inflected by a group's abilities and dispositions and (b) the group's self-referential commitment to a project that is furthered by these affordances. We now see that the logical operator should not be a conjunction but an inclusive disjunction since (a) affordances and (b) self-referential commitments can be singular or plural independently of each other. We thus end up with a fourfold taxonomy of how an environment solicits actions that combines the goal orientation of Bratman's account and the normative dimension of Gilbert's in a single phenomenological framework. In the simplest case, both goal and commitment are singular: An environment solicits action based on (a) the affordances inflected by an individual's abilities and dispositions and (b) the individual's self-referential commitment to a project that is furthered by these affordances. The environment affords certain possibilities because of what the individual is able and disposed to do. These affordances are made into solicitations by the agent's commitment to actualise one rather than another possibility. For example, my laptop affords working since I know the password, know how to open up the manuscript file, and so on. This affordance is a solicitation because I try to be an academic. Of course, the solicitation depends on anonymous social institutions but does not refer directly to other people and is hence an individual action, or, more precisely, an individually coordinated individually committed action. Another possibility is the following: An environment solicits action based on (a) the affordances inflected by a group's abilities and dispositions and (b) an individual's self-referential commitment to a project that is furthered by these affordances. If we pluralise (a) the affordances but not (b) the commitment, we have what I'll call coordinated action or, technically, jointly coordinated individually committed action. My example with the sulky teenager falls into this category because the teenager acts in pursuit of a joint goal although he is committed to this goal as an individual. The teenager pursues a joint goal for his own sake.

Conclusion

I have argued that we should conceptualise action on the basis of how an environment solicits someone to behave based on (a) the affordances or goals inflected by their abilities and dispositions and (b) their self-referential commitment to a project that is furthered by these affordances. This definition of action is sufficiently flexible to account for not only individual action (in which both (a) and (b) refer only to an individual) but also several distinct subtypes of shared action.
Thus, behaviour counts as shared action if and only if it is prompted by a solicitation in which either (a) the goals, or (b) the commitments, or both (a) the goals and (b) the commitments are joint. We thereby get three distinct subtypes of shared actions: (i) jointly coordinated individually committed action, (ii) individually coordinated jointly committed action, and (iii) jointly coordinated jointly committed action. My account improves on existing accounts of shared action in several ways. First, in terms of the intellectualist problem, I have argued that we do not necessarily have to consciously represent the mental states of ourselves and others in order to act together. Instead, I have argued that some forms of shared action are pre-reflective in the sense that we are prompted to act by our immediate environment on the tacit assumption that others will also do so. This form of shared action does not require that we reflect upon (not even that we are capable of reflecting upon) the mental states of others but only that we pre-reflectively track and respond to their behaviour. Second, in terms of the taxonomy problem, my account covers both the teleology and the normativity of shared actions, showing how they interrelate and how they differ from each other. More specifically, I have argued that in solicitations there is both a teleological and a normative element and that each of these can refer either to the agent as an individual or to a group of which the agent is a part. Third, I have avoided the genetic problem by refusing to explain shared actions and intentions as a phenomenon that emerges out of individual actions and intentions. Rather, I have described individual action and (the three types of) shared action as different varieties of the same basic mechanism, namely, our pre-reflective responsiveness to our shared environment. 11
2021-10-19T15:09:58.685Z
2021-10-17T00:00:00.000
{ "year": 2021, "sha1": "401ed6a1a07506e1dbe0882aea8dba2b3b6acc4c", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s11097-021-09785-4.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c9f1fcc8cc810d4b46a7feed9089835358aced0a", "s2fieldsofstudy": [ "Philosophy" ], "extfieldsofstudy": [ "Psychology" ] }
225776929
pes2o/s2orc
v3-fos-license
Academic skills in the screenish era Our culture has begun to react to the implications of digital technology for our shared memory and individual privacy in the form of legislation like GDPR. But the culture of higher education---specifically the dominant model of knowledge generation and transmission---stays firmly rooted in the 'bookish' tradition. Honors education was started in response to the changing world in the wake of WWII. Its core principles of engaged learning and individualization of the student experience can lead higher education to embrace the new culture of digital technology-assisted knowledge generation and transmission.

Introduction

In response to the changing world after World War II, two American college presidents published books on the future of higher education in the US. The first, recognized by many honors professionals, was Frank Aydelotte's Breaking the Academic Lockstep. The second, perhaps less familiar, was Vitalizing Liberal Education by Algo Henderson, then President of Antioch College. We remember Aydelotte today as having inspired honors programs and colleges across the country. Henderson, on the other hand, is remembered for his insistence that the entire college environment be directed towards educational change in the student; his legacy includes student governance, internships and co-ops, and 'live-learn' communities. Both thinkers sought to revolutionize the educational system so that it could lead the technological and cultural changes of the post-war world. They both prized engaged learning and individualization of the collegiate experience. And they both accepted that their students would face a world that was different in kind to the one for which they were trained. Today, we face an analogous set of challenges: a new technological world-order and new forms of mass-education. But unlike Aydelotte and Henderson, we have largely ignored the cultural changes necessary to prepare students for a world unlike the one for which we were prepared. I believe that honors educators, thanks to the legacy of individualized learning celebrated by Aydelotte and Henderson, have the power to lead this new era, if they are willing to trust our students. Digital technology allows for perfect replication of information. Non-digital replicas always have some imperfection, noise, or "static" introduced in the replication process. Digital replication does not have this imperfection-a digital file is identical to all of its copies. This perfect replication of information implies a number of interrelated capabilities. First, by copying information from one physical device to another as they wear out, information is practically permanent. Second, storage of information is independent of the medium; a video can be stored on a hard drive alongside a text document. Third, information can be searched, classified, and compared quickly. And fourth, as networks of digital copying have increased in both speed and scope, information has become a distributed, not centralized, network. The larger culture has begun to understand these implications-the European Union's General Data Protection Regulation (commonly known as GDPR), for example, addresses the problem of practically permanent storage. But the academy has not shown many signs of adapting to this new world.

'Bookish' and 'screenish'

Most faculty probably envision their ideal life as surrounded by books-they are what Ivan Illich called 'bookish' (Illich, 1993).
In the UK, one does not "study" for a degree, one "reads" for that degree. Libraries frequently occupy the "academic heart" of a campus and have the appropriate architecture to match. Members of the academy today are the products of a European intellectual tradition dating back at least eight centuries that identifies "being educated" with being intimately familiar with books. This bookishness will not be true in our students' lifetimes, if it is even true now. Don't get me wrong-books are wonderful devices to store information. I love my books. But books have major shortcomings when compared to digital technology. Books do not replicate. The best version is always the original. They are limited to a small set of media: text, images, charts, tables. They are not easy to index or search. And proximity matters-to use them, one must be physically in the same location. Most of us in higher education-especially those of us old enough to have leadership positions-were trained in the bookish era, and, hence, we tend to think of knowledge using the model of books. Primary sources rule. Proximity matters-the closer one is to the authority on a subject matter, the better or more reliable one's knowledge is. Information retrieval is inefficient; therefore, there is great value in summary documents (textbooks) and lexical memorization. None of these are necessarily true of knowledge. They are extrapolations from the metaphor of books, yet hard-baked into the culture of academia. Current students are not bookish. They are, to coin an awkward word, "screenish." But this does not mean that the skills we teach have no place in the screenish world. These skills are, given the influence of social media on American politics, now more important than ever. But it does mean that instead of teaching our same old bookish ways, we should shift our metaphors about how knowledge works. An educated person in the screenish age must be able to navigate and utilize networks of interrelated bits of information, not just texts but blog posts, YouTube channels, and podcasts. In order to contribute to the American democratic society, the contemporary civic-minded American should understand how Wikipedia, snopes.com, and Reddit contribute to public discourse, and how 4chan and related sites seek to manipulate it. Consider critical thinking as an example. Most actual critical thinking instruction focuses on classic textual fallacies, including the reliability of experts. The newspaper opinion editorial is the typical example of public argumentation. Indeed, the Ennis-Weir critical thinking essay test (Ennis & Weir, 1995) asks students to analyze a fictional op-ed, and most of the "make-an-argument" and "break-an-argument" tasks in the Collegiate Learning Assessment (CLA) and the post-2016 version of the Scholastic Aptitude Test (SAT) are framed in similar ways. Critical thinking about texts requires validating sources and watching for distractions, non-sequiturs, and equivocations. Critical thinking for the digital age is similar, requiring an understanding of photoshopped images, deep fakes, and Russian trolls.

Changing skills

Information in the digital age is multi-modal, distributed, permanent, searchable, and fast. And most of us were taught the skills to succeed in a world where information was textual, centralized, limited, browsable but not searchable, and slow. Academic skills must change. Consider, by way of example, spelling. Recently, my mother found a box of class materials in her attic from when I was in 9th grade.
It contained the results of my "career-placement" test. As a child, I was-and to be honest, still am-a horrible speller. So, while my placement test recorded a 99th percentile in "Abstract Reasoning," I scored only a 40th percentile in spelling. As a result, all academic jobs were precluded from my inventory of potential future careers. In 1988, spelling was a requirement for a life of letters. When I entered college in 1992, spellcheck was something one ran after the paper was finished, as a final check before printing. The feedback loop between my misspelling and the correction offered by spellcheck was too long to teach me anything. MS Word introduced auto-spellcheck sometime around 1993, and my spelling skills quickly improved. By changing a skill that was considered a necessary requirement to one that could be achieved with the use of assistive technology, auto-spellcheck opened up academic careers to me and many like me. Today, requiring good spelling in a job description would be analogous to requiring perfect eyesight. Assistive technology is so ubiquitous that insisting on unassisted perfection would be prejudiced and unfair. Digital technology will, I suspect, do the same to many of the skills we treat as essential to our fields today. Students in science, technology, engineering and math (in the US, these are identified with the acronym "STEM") fields are often required to memorize huge lists of terminology. Assistive technologies for memory may well make this skill obsolete. In the "screenish" age, the skills of organizing and labeling information are far more important than mnemonic devices and flashcards.

Transformation

If higher education is to transform to meet the needs of this new era, it must, in the words of Aydelotte (1944), 'clarify its aims and improve its quality' (p. 7). The aim of education is always to prepare students for their era, not ours. And, as Henderson (1944) says, '[the student] has to be taught how to search for knowledge on his own, how to utilize this knowledge in the thinking process, and then how to apply the results of this thinking in life's activities for some individual and social purpose' (p. 113). Our students need to be prepared for the brave new screenish world, not the bookish world for which we were trained. They need the skills necessary to contribute to knowledge in a global distributed informational environment. They need to view knowledge as a distributed, multi-modal network. Knowledge in this model is not something one owns or possesses but rather something shared that one can retrieve quickly when necessary. Honors is already frequently structured to encourage students to learn on their own and to apply what they have learned. They are masters of the media of the screenish world. We are not. So be it. Let us use the honors tradition of allowing our students to shape their education to create a system of education for the future, not the past. This will require a cultural change among faculty and the leaders of honors-but no more so than the one envisioned by Aydelotte and Henderson. Educate the students individually for their benefit and embrace the models of knowledge that are relevant in their world, not ours.
2020-07-02T10:31:34.463Z
2020-06-27T00:00:00.000
{ "year": 2020, "sha1": "f7ba0eae00c349469c4540ce020aeb23b4820394", "oa_license": "CCBY", "oa_url": "https://jehc.eu/index.php/jehc/article/download/121/101", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "24466f0e2356e6678846ff6b545a0d1144c0bb77", "s2fieldsofstudy": [ "Education", "Computer Science" ], "extfieldsofstudy": [ "Sociology" ] }
269720750
pes2o/s2orc
v3-fos-license
Three-Way Alignment Improves Multiple Sequence Alignment of Highly Diverged Sequences: The standard approach for constructing a phylogenetic tree from a set of sequences consists of two key stages. First, a multiple sequence alignment (MSA) of the sequences is computed. The aligned data are then used to reconstruct the phylogenetic tree. The accuracy of the resulting tree heavily relies on the quality of the MSA. The quality of the popularly used progressive sequence alignment depends on a guide tree, which determines the order of aligning sequences. Most MSA methods use pairwise comparisons to generate a distance matrix and reconstruct the guide tree. However, when dealing with highly diverged sequences, constructing a good guide tree is challenging. In this work, we propose an alternative approach using three-way dynamic programming alignment to generate the distance matrix and the guide tree. This three-way alignment incorporates information from additional sequences to compute evolutionary distances more accurately. Using simulated datasets on symmetric and asymmetric trees, we compared MAFFT with its default guide tree against MAFFT with a guide tree produced using the three-way alignment. We found that (1) the three-way alignment can reconstruct better guide trees than those from the most accurate options of MAFFT, and (2) the better guide tree, on average, leads to more accurate phylogenetic reconstruction. However, the improvement over the L-INS-i option of MAFFT is small, attesting to the excellence of the alignment quality of MAFFT. Surprisingly, the two criteria for choosing the best MSA (phylogenetic accuracy and sum-of-pair score) conflict with each other.

Introduction

Inferring evolutionary relationships among various species from molecular data, such as protein, DNA, and RNA sequences, is a basic problem in evolutionary biology. Multiple sequence alignment (MSA) is a primary step in phylogenetic reconstruction, and the accuracy of MSA directly affects the accuracy of reconstructing a phylogeny, especially a deep phylogeny [1][2][3][4]. The most common method for aligning two sequences (pairwise sequence alignment, or PSA) is the Needleman-Wunsch algorithm [5], which is a dynamic programming (DP) approach that finds the optimal alignment based on a given scoring scheme [6][7][8]. Using DP in PSA is feasible in quadratic time and memory, which can even be reduced to a linear memory requirement [9]. The idea of using DP for sequence alignment can be easily extended to MSA. However, multi-dimensional DP is practically infeasible even for a small number of sequences, due to time and space complexity [10]. As an alternative, a progressive alignment approach was proposed for MSA [11] and implemented in many alignment tools, such as MAFFT [12], MUSCLE [13], T-COFFEE [14], and CLUSTAL-W [15]. Progressive alignment fundamentally simplifies the task of MSA by breaking it down into a series of pairwise and profile alignments. Progressive alignment involves constructing a guide tree to determine the order of aligning sequences and performing pairwise and profile alignment from the leaves of the guide tree to the root. The guide tree is usually constructed by using distance-based methods like UPGMA and neighbor joining (NJ) from a distance matrix obtained mainly from (1) k-tuple similarity or (2) PSA.
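To make the pairwise DP step concrete, here is a minimal sketch of the Needleman-Wunsch recursion in C (the language of the implementation referenced in the Supplementary Materials). It is illustrative only: it uses a linear gap penalty and arbitrary match/mismatch/gap values instead of the BLOSUM62 matrix and affine penalties employed below, and it returns just the optimal score, keeping two rolling rows for O(m) memory.

```c
/* Minimal Needleman-Wunsch global alignment score (linear gap penalty).
 * Sketch only: the scoring values are arbitrary, not the BLOSUM62/affine
 * scheme used in the paper. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MATCH     2
#define MISMATCH -1
#define GAP      -2

static int max3(int a, int b, int c) {
    int m = a > b ? a : b;
    return m > c ? m : c;
}

/* Fill the (n+1) x (m+1) DP table row by row: O(n*m) time, O(m) memory. */
static int nw_score(const char *a, const char *b) {
    size_t n = strlen(a), m = strlen(b);
    int *prev = malloc((m + 1) * sizeof *prev);
    int *curr = malloc((m + 1) * sizeof *curr);
    for (size_t j = 0; j <= m; j++) prev[j] = (int)j * GAP;
    for (size_t i = 1; i <= n; i++) {
        curr[0] = (int)i * GAP;
        for (size_t j = 1; j <= m; j++) {
            int s = (a[i - 1] == b[j - 1]) ? MATCH : MISMATCH;
            curr[j] = max3(prev[j - 1] + s,   /* align a[i-1] with b[j-1] */
                           prev[j] + GAP,     /* gap in b */
                           curr[j - 1] + GAP  /* gap in a */);
        }
        int *t = prev; prev = curr; curr = t; /* roll the rows */
    }
    int best = prev[m];
    free(prev); free(curr);
    return best;
}

int main(void) {
    printf("score = %d\n", nw_score("HEAGAWGHEE", "PAWHEAE"));
    return 0;
}
```

Recovering the alignment itself, not just its score, requires either storing the full table for a traceback or using a divide-and-conquer scheme such as Hirschberg's to stay within linear memory.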
Current MSA methods and tools perform well on aligning closely related sequences [16]. However, the performance of these methods decreases with sequence divergence [17][18][19]. Virtually everyone interested in deep phylogeny is looking for ways to improve MSA. Some have incorporated secondary structure to guide the sequence alignment [20][21][22][23], while others explored post-alignment refinement [1,17]. The improvement of MSA with these approaches remains limited. A guide tree is a crucial component in progressive alignment, and its accuracy affects the accuracy of the output alignment [24]. A few studies have shown that an inaccurate guide tree could be a major source of error in progressive sequence alignment [25,26]. Therefore, different strategies for improving guide trees have been proposed [24]. Two criteria have been used to evaluate the effect of guide trees on the accuracy of MSA generated by MAFFT and ClustalW, including (1) the sum-of-pair score (SPS), excluding shared gaps in pairwise comparisons, and (2) the accuracy of phylogenetic reconstruction [27,28]. The results indicate that the final SPS is affected little by the initial guide tree, but better guide trees significantly improve the accuracy of the reconstructed phylogenies. However, constructing an accurate guide tree is difficult for highly diverged sequences. Aligning three sequences using dynamic programming is expected to improve the alignment and, in particular, the estimated distances used to build the initial guide tree [14,15,29,30]. Three-dimensional dynamic programming (3D-DP), as formulated by Gotoh [31], represents an extension of the Needleman-Wunsch algorithm originally proposed for pairwise sequence alignment. Gotoh contributed to developing 3D-DP for an affine gap penalty. His approach to 3D-DP increases the time and space requirements to cubic complexity, which is not feasible for long sequences. The Carrillo-Lipman [32] algorithm was proposed to narrow down the search space within N-dimensional dynamic programming. The idea of this method is to combine the initial MSA with information from each pairwise alignment to define lower bounds for the two-dimensional projections of the optimal path. Consequently, this strategy enables us to focus solely on the cells within the N-dimensional lattice that satisfy these bounds. In this study, we aim to improve the accuracy of the guide tree, especially for highly diverged sequences, by using three-way alignment to measure the distance between sequences. The three-way alignment is expected to improve the three constituent pairwise alignments and distance estimations, leading to more accurate distance estimates and the resulting guide trees. We assess the performance of MAFFT [12] with two types of guide trees, one generated internally by MAFFT and the other based on three-way alignment.
Carrillo-Lipman Algorithm for Three Sequences

In this section, we restate the Carrillo-Lipman equations for three sequences [32]. Suppose we have three sequences, $s_1$, $s_2$, and $s_3$. The optimal alignment for these three sequences has the highest score based on the SPS criterion. Therefore, any other alignment has a lower score, leading to the following inequality:

$$S(\gamma^*) - S(\gamma^e) \geq 0, \qquad (1)$$

where $\gamma^*$ and $\gamma^e$ are the optimal and an arbitrary alignment, respectively. The SPS of a 3-way alignment is as follows:

$$S(\gamma) = S(\gamma_{12}) + S(\gamma_{13}) + S(\gamma_{23}), \qquad (2)$$

where $\gamma_{12}$ is the pairwise alignment of $s_1$ and $s_2$, $\gamma_{13}$ is the pairwise alignment of $s_1$ and $s_3$, and $\gamma_{23}$ is the pairwise alignment of $s_2$ and $s_3$. In other words, any of these pairwise alignments can be considered the projection of $\gamma^*$ on the three surfaces of 3D-DP. Applying (2) to $\gamma^*$ and combining it with (1) gives

$$S(\gamma_{12}) + S(\gamma_{13}) + S(\gamma_{23}) \geq S(\gamma^e). \qquad (3)$$

Based on Equation (3), we can write the following three inequalities for each projection of $\gamma^*$:

$$S(\gamma_{12}) \geq S(\gamma^e) - (S(\gamma_{13}) + S(\gamma_{23})),$$
$$S(\gamma_{13}) \geq S(\gamma^e) - (S(\gamma_{12}) + S(\gamma_{23})),$$
$$S(\gamma_{23}) \geq S(\gamma^e) - (S(\gamma_{12}) + S(\gamma_{13})). \qquad (4)$$

For each pair of sequences, we can find the optimal pairwise alignment, which has the highest SPS, so we can write the following:

$$S(\gamma^*_{12}) \geq S(\gamma_{12}), \quad S(\gamma^*_{13}) \geq S(\gamma_{13}), \quad S(\gamma^*_{23}) \geq S(\gamma_{23}), \qquad (5)$$

where $\gamma^*_{12}$, $\gamma^*_{13}$, and $\gamma^*_{23}$ are the optimal pairwise alignments. Using these three inequalities, we can rewrite Equation (4) as the following inequalities:

$$S(\gamma_{12}) \geq S(\gamma^e) - (S(\gamma^*_{13}) + S(\gamma^*_{23})) =: L_{12},$$
$$S(\gamma_{13}) \geq S(\gamma^e) - (S(\gamma^*_{12}) + S(\gamma^*_{23})) =: L_{13},$$
$$S(\gamma_{23}) \geq S(\gamma^e) - (S(\gamma^*_{12}) + S(\gamma^*_{13})) =: L_{23}. \qquad (6)$$

The Carrillo-Lipman algorithm defines the three boundaries above based on these inequalities: $L_{12}$ is the lower bound for the score of the pairwise alignment of $s_1$ and $s_2$, $L_{13}$ is the lower bound for the score of the pairwise alignment of $s_1$ and $s_3$, and $L_{23}$ is the lower bound for the score of the pairwise alignment of $s_2$ and $s_3$. In other words, $L_{12}$, $L_{13}$, and $L_{23}$ are the lower bounds for the measure of the projection of any 3-dimensional optimal path onto the planes determined by each pair of sequences. Then, when looking for $\gamma^*$, we need only consider those paths in the cube whose pairwise projections satisfy the related inequality.

Following the Carrillo-Lipman algorithm, we call the sets of paths whose projections on the planes $(s_1, s_2)$, $(s_1, s_3)$, and $(s_2, s_3)$ satisfy the inequalities (6) $X_{12}$, $X_{13}$, and $X_{23}$, respectively. Thus, the paths in the set

$$X = X_{12} \cap X_{13} \cap X_{23} \qquad (7)$$

are the only possible candidates to be an optimal path. To consider only the paths in $X$ means having to apply the dynamic programming procedure to find $\gamma^*$ only in a subregion $Y$ of the cube. Let $Y_{12}$, $Y_{13}$, and $Y_{23}$ be the sets of points whose projection on each plane satisfies the related bound. Therefore, the set is as follows:

$$Y = Y_{12} \cap Y_{13} \cap Y_{23}. \qquad (8)$$

This theory proves that it is unnecessary to apply the dynamic programming method to the entire cube; it suffices to consider just the subregion $Y$. For each pair of sequences, we use 2D dynamic programming to find the PSA score, which yields $\gamma^*_{12}$, $\gamma^*_{13}$, and $\gamma^*_{23}$ as required for calculating the lower bounds, applying the Carrillo-Lipman algorithm to all possible triplets. It is noteworthy that the performance of this method heavily relies on the initial alignment $\gamma^e$ used for identifying the lower bounds. To significantly reduce the search area, this alignment should closely approximate the optimal path. The time and space saved by this method are greater for highly similar sequences than for highly diverged sequences.
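As a small illustration of how the bounds in inequality (6) fit together, the following C fragment computes $L_{12}$, $L_{13}$, and $L_{23}$ from the three optimal pairwise scores and the score of a heuristic three-way alignment. The numbers in main are hypothetical; in practice the pairwise scores come from 2D dynamic programming and $S(\gamma^e)$ from the progressive alignment described above.

```c
/* Sketch: Carrillo-Lipman lower bounds for three sequences.
 * Inputs: s12, s13, s23 are optimal pairwise alignment scores
 * (e.g., from Needleman-Wunsch) and se is the score of any heuristic
 * three-way alignment (e.g., from progressive alignment). */
#include <stdio.h>

typedef struct { int L12, L13, L23; } cl_bounds;

static cl_bounds carrillo_lipman(int s12, int s13, int s23, int se) {
    cl_bounds b;
    b.L12 = se - (s13 + s23); /* inequality (6): S(g12) >= L12 */
    b.L13 = se - (s12 + s23);
    b.L23 = se - (s12 + s13);
    return b;
}

int main(void) {
    /* Hypothetical scores, for illustration only. */
    cl_bounds b = carrillo_lipman(35, 28, 30, 80);
    printf("L12 = %d, L13 = %d, L23 = %d\n", b.L12, b.L13, b.L23);
    return 0;
}
```

During the 3D fill, a cell can then be discarded as soon as the best score achievable through one of its pairwise projections falls below the corresponding bound.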
Three-Way Alignment Algorithm

Let A, B, and C represent three sequences, and let their lengths be denoted by n, m, and l, respectively. For three residues $A_i$, $B_j$, and $C_k$ at position (i, j, k), there are seven possible alignment configurations. $M(i, j, k)$ represents the best score when the three residues are aligned. $I_{xy}(i, j, k)$, $I_{xz}(i, j, k)$, and $I_{yz}(i, j, k)$ are the scores of introducing one gap in $C_k$, $B_j$, and $A_i$, respectively. Similarly, $I_x(i, j, k)$, $I_y(i, j, k)$, and $I_z(i, j, k)$ represent the scores for aligning a residue in $A_i$, $B_j$, and $C_k$ while introducing gaps in the other two sequences.

The 3-way alignment algorithm was formulated by Gotoh [31] for the affine gap penalty. By convention, the criterion for choosing the best alignment among all possible ones is equivalent to maximum parsimony, i.e., the alignment with the smallest alignment cost incurred by indels and mismatches is the best alignment. Expressed alternatively, the best alignment is the one with the highest alignment score as a function of matches and mismatches, as well as gap open and gap extension penalties. With three sequences, an aligned site with two residues in the first two sequences and a gap in the third sequence is interpreted as having a single change (a deletion in the third sequence), with $u_D$ representing the deletion cost. Similarly, an aligned site with a single residue in sequence 1 and a gap in the two other sequences is also interpreted as a single change, i.e., a single insertion in sequence 1, with $u_I$ representing this insertion cost. Gotoh [31] used $u_D = u_I = u$ in his alignment algorithm, with the implicit assumption that insertions and deletions occur equally frequently. This was also adopted by Huang [33]. However, Kruspe and Stadler [29] treated $u_D$ and $u_I$ differently. We defined Equations (10)-(16) in a similar way to those in [29], with a slight modification to facilitate the implementation of the Carrillo-Lipman algorithm. In these formulae, GO and GE are the gap open and gap extension penalties, and $S(\alpha, \beta)$ denotes the score of aligning two residues, which is determined using a scoring matrix such as PAM or BLOSUM. The score of aligning three residues is the sum-of-pair score (SPS), i.e., $S(A_i, B_j, C_k) = S(A_i, B_j) + S(A_i, C_k) + S(B_j, C_k)$.

The specification in Equations (10)-(16) carries some benefits in the context of the Carrillo-Lipman method described previously. Because we are searching for an optimal three-way alignment satisfying Gotoh's equations, we need to estimate $\gamma^e$, which is an arbitrary alignment of the three sequences A, B, and C. In our implementation, we estimated $\gamma^e$ using progressive alignment and used it in the Carrillo-Lipman equations. Therefore, we used 2GO for $(I_{xy}, I_{xz}, I_{yz})$ to be consistent with the calculation of SPS. An aligned site with two residues in two sequences and a gap in the third sequence is counted as two indel events in SPS (i.e., an indel between sequence 1 and sequence 3 and an indel between sequence 2 and sequence 3). Similarly, an aligned site with a residue in sequence 1 and a gap in sequence 2 and sequence 3 is also counted as two indel events in SPS. By using 2GO in $(I_{xy}, I_{xz}, I_{yz})$, we can estimate $\gamma^e$ based on the progressive alignment and use it in the Carrillo-Lipman equations. Equations (10)-(16) do not conflict with Gotoh's equations.
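To show the shape of the recursion, here is a deliberately simplified C sketch of the three-dimensional DP: it keeps a single score matrix with a linear cost per pairwise indel instead of the seven affine-gap matrices M, I_xy, I_xz, I_yz, I_x, I_y, I_z of Equations (10)-(16), and it omits the Carrillo-Lipman pruning and the traceback. Column scores follow the SPS convention just described (one gap in a column costs one substitution score plus two indels; two gaps cost two indels, since the gap-gap pair contributes nothing); the score values themselves are arbitrary.

```c
/* Sketch: three-way alignment score by 3D dynamic programming with a
 * linear gap cost per pairwise indel (not the affine scheme of
 * Equations (10)-(16)). Column score is the sum-of-pairs (SPS). */
#include <stdio.h>
#include <string.h>
#include <limits.h>

#define MATCH     2
#define MISMATCH -1
#define GAP      -2      /* cost of one pairwise indel */
#define NA       40      /* small fixed bound keeps the sketch simple */

static int sub(char x, char y) { return x == y ? MATCH : MISMATCH; }
static int D[NA][NA][NA];

static int align3(const char *a, const char *b, const char *c) {
    int n = (int)strlen(a), m = (int)strlen(b), l = (int)strlen(c);
    if (n >= NA || m >= NA || l >= NA) return INT_MIN; /* out of bounds */
    for (int i = 0; i <= n; i++)
    for (int j = 0; j <= m; j++)
    for (int k = 0; k <= l; k++) {
        if (!i && !j && !k) { D[0][0][0] = 0; continue; }
        int best = INT_MIN;
        /* seven possible configurations of the last alignment column */
        if (i && j && k) { int s = D[i-1][j-1][k-1] + sub(a[i-1], b[j-1])
                             + sub(a[i-1], c[k-1]) + sub(b[j-1], c[k-1]);
                           if (s > best) best = s; }
        if (i && j) { int s = D[i-1][j-1][k] + sub(a[i-1], b[j-1]) + 2*GAP;
                      if (s > best) best = s; }   /* gap in C */
        if (i && k) { int s = D[i-1][j][k-1] + sub(a[i-1], c[k-1]) + 2*GAP;
                      if (s > best) best = s; }   /* gap in B */
        if (j && k) { int s = D[i][j-1][k-1] + sub(b[j-1], c[k-1]) + 2*GAP;
                      if (s > best) best = s; }   /* gap in A */
        if (i) { int s = D[i-1][j][k] + 2*GAP; if (s > best) best = s; }
        if (j) { int s = D[i][j-1][k] + 2*GAP; if (s > best) best = s; }
        if (k) { int s = D[i][j][k-1] + 2*GAP; if (s > best) best = s; }
        D[i][j][k] = best;
    }
    return D[n][m][l];
}

int main(void) {
    printf("3-way score = %d\n", align3("GATTACA", "GATCA", "GTTACA"));
    return 0;
}
```

The affine version replaces the single matrix D with the seven matrices above so that gap openings and gap extensions can be charged differently along each pairwise projection.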
Similar to the PSA with the affine gap penalty function, we need to establish seven traceback matrices to reconstruct the optimal alignment once the scoring matrices are completed. The values within these matrices are determined during the forward procedure and are used in the subsequent traceback procedure.

Simulated Dataset

We generated our amino acid sequence datasets based on symmetric and asymmetric trees with 16 taxa (Figure 1) and 8 taxa (Figure 2). For the 16-taxa tree, two different sets of branch lengths were used to generate sequences with different levels of divergence, and 50 datasets were generated for each tree. We used the Alisim tool, provided by IQ-TREE [34], to produce aligned sequences with an average length of 500 for each tree. The Jones-Taylor-Thornton (JTT) substitution model [35] was used for all datasets. There are two types of amino acid substitution models. The first type is based on counting empirical substitutions from a large number of aligned protein sequences, with the hope that the resulting substitution model will be one-size-fits-all. The second type is derived from the maximum likelihood method based on a specific set of protein sequences (e.g., vertebrate mitochondrial proteins). They all specify the transition probabilities between amino acids given a branch length in a tree. An insertion/deletion rate of 0.05 was used for both the 16-taxa and 8-taxa trees. The power law distribution (POW) was used for the insertion/deletion size, with a = 2 and power = 100. Therefore, we have the following four datasets based on 16-taxa trees: (1) a symmetric tree and (2) a half-symmetric tree, with branch lengths specified in the left panel of Figure 1, and (3) an asymmetric
tree and (4) a half-asymmetric tree, with branch lengths specified in the right panel of Figure 1. We have the following two datasets based on 8-taxa trees: (1) a symmetric tree and (2) an asymmetric tree, with branch lengths specified in Figure 2. The simulated data and the C source code implementing the 3-way alignment are included in the Supplementary Materials.

Measuring the Distance Matrix and Constructing the Guide Tree

For each dataset, we aligned all possible triplets (56 and 560 triplets for the 8-taxa and 16-taxa topologies, respectively) using the Carrillo-Lipman algorithm with the BLOSUM62 matrix, a gap-open penalty of 10, and a gap-extension penalty of 2. Each pair of sequences exists in (n − 2) triplets. Therefore, to measure the distance between two sequences, we calculated the average over their distances in all those (n − 2) triplets. The final distance matrix contains the average distance of each sequence pair. We used the JTT model to measure evolutionary distances between each pair of sequences in the aligned triplets. The resulting distance matrices were then used as inputs to the NJ algorithm in the PHYLIP package to construct guide trees that would later be used in progressive multiple alignment.

Sequence Alignment with MAFFT

We compared the performance of MAFFT with the MAFFT-generated guide trees against the 3-way alignment guide trees. In this study, we assess the performance of three different algorithms of MAFFT, including FFT-NS-1, FFT-NS-2, and L-INS-i (which is the most accurate option in MAFFT). We used MAFFT defaults, except for specifying FFT-NS-1, FFT-NS-2, or L-INS-i. The FFT-NS-1 option measures distances based on the sharing of k-tuples between sequences (where k is typically 6). A guide tree is then reconstructed using UPGMA to guide the subsequent multiple alignment. FFT-NS-2 reconstructs a new guide tree using the alignment generated by FFT-NS-1 and realigns the sequences based on the new tree. We expect FFT-NS-2 to generate a more accurate alignment than FFT-NS-1 because of the recomputing of the guide tree. L-INS-i uses local alignments with the Smith-Waterman algorithm to generate a distance matrix instead of the k-tuple method. Moreover, it uses a new objective function combining the weighted sum-of-pair score (WSP) and a COFFEE-like score, which measures the consistency between the MSA and the PSAs [12].

Comparing the Accuracy of Phylogenetic Trees

MSAs generated in the previous step were used to construct phylogenetic trees using PhyML [36], with the option of simultaneously optimizing tree topology, branch lengths, and rates. These PhyML trees were then compared with the true tree for both the 16-taxa and 8-taxa trees, shown in Figures 1 and 2, through calculation of Robinson-Foulds distances (RFds) [37]. The RFd between the true tree and the reconstructed tree is taken as a proxy for phylogenetic accuracy, where RFd = 0 means that the two trees share the same topology, and larger RFd values are associated with inaccuracies. RFd values between trees were computed using the APE package [38] in R. Note that RFd only measures the topological difference between trees but not the differences in branch lengths. Thus, a reconstructed tree would be considered identical to the true tree when RFd = 0, even if the two trees differ in branch lengths.
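The triplet-averaging step described under "Measuring the Distance Matrix" above is simple enough to sketch directly. In the C fragment below, triplet_pair_dist is a hypothetical stand-in for the real estimator (in the paper, the JTT evolutionary distance computed from the Carrillo-Lipman alignment of the triplet); the sketch only shows how each pairwise distance is averaged over the (n − 2) triplets containing that pair.

```c
/* Sketch: average each pairwise distance over the (N - 2) triplets
 * that contain the pair, as described in the text. */
#include <stdio.h>

#define N 8  /* number of taxa, e.g., the 8-taxa trees */

/* Hypothetical stand-in for the per-triplet estimator: the distance
 * between taxa p and q estimated within the aligned triplet (p, q, r). */
static double triplet_pair_dist(int p, int q, int r) {
    return 1.0; /* placeholder value */
}

static void build_distance_matrix(double d[N][N]) {
    for (int p = 0; p < N; p++) {
        d[p][p] = 0.0;
        for (int q = p + 1; q < N; q++) {
            double sum = 0.0;
            for (int r = 0; r < N; r++)     /* the (N - 2) third taxa */
                if (r != p && r != q)
                    sum += triplet_pair_dist(p, q, r);
            d[p][q] = d[q][p] = sum / (N - 2);
        }
    }
}

int main(void) {
    double d[N][N];
    build_distance_matrix(d);
    printf("d[0][1] = %.3f\n", d[0][1]);
    return 0;
}
```

The resulting matrix is what gets handed to the NJ implementation in PHYLIP to produce the guide tree.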
Three-Way Alignment Tends to Generate Guide Trees Closer to the True Tree than Other Approaches

We first evaluated the two guide trees, one generated from the MAFFT L-INS-i option and the other using our three-way alignment (3-WAY in Table 1), by comparing them with the true tree (i.e., the tree used for sequence simulation). With the symmetric tree, both approaches recovered some true trees, but the three-way alignment approach recovered slightly more true trees (Table 1). Similarly, the RFd is greater for L-INS-i than it is for the three-way alignment approach. These results are consistent with our hypothesis that the three-way alignment approach would produce better guide trees. However, these differences are small and not statistically significant, given our sample size of 50 sets of simulated sequences (two-tailed paired-sample t-test, t = 1.1881, DF = 49, p = 0.2405, Table 1). Given the effect size, a sample of 140 would be needed to get a p value below 0.05. Notes for Table 1: (1) N_true: the number of correctly reconstructed trees (RFd = 0) using a method. (2) RFd: mean RFd from 50 simulated sets of sequences. (3) SE_RFd: standard error of RFd.

With the asymmetric tree, neither the L-INS-i approach nor the three-way alignment results in a guide tree that is identical to the true tree (Table 1). However, the difference in the RFd, similar to the results with symmetric trees, is in the expected direction, i.e., being smaller for the three-way alignment than for the L-INS-i approach. However, the difference between the two groups is not statistically significant given the sample size of 50 for each group (t = 1.1586, DF = 49, p = 0.2522).

Table 2 presents the result of the comparisons of guide trees for 16-taxa trees. There are four different 16-taxa trees, including half-symmetric, symmetric, half-asymmetric, and asymmetric trees, which are represented as H-S tree, S tree, H-AS tree, and AS tree, respectively, in Table 2. We compared the guide trees from four different approaches, including three-way alignment (3-WAY in Table 2) and the three MAFFT options (FFT-NS-1, FFT-NS-2, and L-INS-i), based on the simulated sequences. The guide trees reconstructed from k-tuple similarities (FFT-NS-1 and FFT-NS-2, with k = 6) are apparently much worse than those reconstructed from pairwise alignment (L-INS-i) or three-way alignment (Table 2). However, just as in Table 1, there is no significant difference between the last two approaches. The L-INS-i approach actually performed slightly better than the three-way alignment approach with the H-AS tree, recovering more true trees (41 versus 39) and having a smaller mean RFd (0.36 versus 0.52) than the three-way alignment approach (Table 2), although the difference is not statistically significant. The only difference reaching borderline significance involves the asymmetric tree (Table 2). The three-way alignment appears to produce a better guide tree, with an RFd nearly significantly smaller than that of the L-INS-i option (paired-sample t-test, t = 1.8448, DF = 49, p = 0.0711).

Table 2.
Quality of guide trees generated using three MAFFT options (FFT-NS-1, FFT-NS-2, L-INS-i) and the 3-way alignment (3-WAY), based on simulated amino acid sequences for the 16-taxa trees, including half-symmetric (H-S tree), symmetric (S tree), half-asymmetric (H-AS tree), and asymmetric (AS tree) trees. Other column labels are the same as in Table 1.

How will the difference in the guide tree affect the final phylogenetic reconstruction? We obtained an MSA from each of the three types of guide trees as follows: (1) the true tree used for sequence simulation, (2) the guide tree reconstructed by the L-INS-i approach, and (3) the guide tree from the three-way alignment (3-WAY in Table 3). These MSAs are then used to reconstruct phylogenies by PhyML. We expect the MSAs obtained with the true tree as the guide tree to recover true trees but are interested in whether the three-way alignment approach will outperform the L-INS-i approach. Using the true tree as the guide tree apparently increases the chance of the true tree being recovered through the aligned sequences, which is true for both the 8-taxa symmetric and asymmetric trees (Table 3). With the symmetric tree, the three-way alignment approach outperformed the L-INS-i approach, recovering more true trees and having a smaller mean RFd (Table 3). However, the difference is not statistically significant given the sample size of 50 for each group (two-tailed paired-sample t-test, t = 1.4289, DF = 49, p = 0.1594).

With the asymmetric tree, none of the 50 MSAs from the L-INS-i approach recovered a true tree, and neither did the three-way alignment approach (Table 3). However, the RFd is smaller for the three-way alignment approach (mean RFd = 5) than for the L-INS-i approach (RFd = 6). The difference is statistically significant based on a paired-sample t-test (t = 3.6293, DF = 49, p = 0.0007). This difference between the L-INS-i and the three-way alignment approach is also consistent with the results in Table 1.

We also performed the same comparison of phylogenetic results from the 16-taxa trees (Table 4). We compared the accuracy of the reconstructed phylogenetic trees from the FFT-NS-1, FFT-NS-2, L-INS-i, and three-way alignment methods. The results are similar to those in Table 2, i.e., the guide trees reconstructed from six-tuple similarities (FFT-NS-1 and FFT-NS-2) are worse than those reconstructed from pairwise alignment (L-INS-i) or three-way alignment (Table 4). When the true tree was used as the guide tree, the resulting MSA recovered the true tree, except in the case of the asymmetric tree (AS tree in Table 4). Thus, the true tree is indeed the best guide tree, although there are controversies on this seemingly self-evident statement, as we will discuss later.

Table 4. Result of comparing the reconstructed phylogenetic trees using PhyML and MSAs generated with FFT-NS-1, FFT-NS-2, L-INS-i, MAFFT using three-way alignment guide trees, and MAFFT with the true tree as the input guide tree, for 16-taxa trees, including half-symmetric, symmetric, half-asymmetric, and asymmetric trees. Column headings are the same as in Table 2.

For the half-symmetric tree (H-S tree) and half-asymmetric tree (H-AS tree), because of reduced sequence divergence, the true tree was recovered from most of the datasets. Even the FFT-NS-1 and FFT-NS-2 approaches perform well, recovering 90% and 94% of the true trees, respectively, in the H-S tree case and 80% and 81% in the H-AS case (Table 4).
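The significance claims here and throughout rest on two-tailed paired-sample t-tests over the 50 per-dataset RFd values. The t statistic is just the mean paired difference divided by its standard error, as in this C sketch (the sample values in main are invented for illustration; turning t into a p value requires the t distribution with n − 1 degrees of freedom, e.g., from a statistics library):

```c
/* Sketch: paired-sample t statistic for comparing the RFd values of two
 * methods evaluated on the same simulated datasets (DF = n - 1).
 * Compile with -lm for sqrt(). */
#include <stdio.h>
#include <math.h>

static double paired_t(const double *x, const double *y, int n) {
    double mean = 0.0;
    for (int i = 0; i < n; i++) mean += x[i] - y[i];
    mean /= n;
    double ss = 0.0;                       /* sum of squared deviations */
    for (int i = 0; i < n; i++) {
        double d = (x[i] - y[i]) - mean;
        ss += d * d;
    }
    double sd = sqrt(ss / (n - 1));        /* sample SD of differences */
    return mean / (sd / sqrt((double)n));  /* t with n - 1 DF */
}

int main(void) {
    /* Hypothetical RFd values for five datasets, illustration only. */
    double a[] = {2, 4, 2, 6, 4};  /* method 1 */
    double b[] = {2, 2, 0, 4, 4};  /* method 2 */
    printf("t = %.4f (DF = 4)\n", paired_t(a, b, 5));
    return 0;
}
```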
For the symmetric tree (S tree in Table 4), the FFT-NS-1 and FFT-NS-2 approaches recovered few true trees, but the L-INS-i and the three-way approaches recovered most of the true trees (Table 4). RFd is slightly smaller for the three-way alignment approach than it is for the L-INS-i approach, but the difference is not significant (paired-sample t-test, t = 0.7035, DF = 49, p = 0.2425). For the asymmetric tree (AS tree in Table 4), both the L-INS-i and the three-way alignment approaches recovered few true trees. The RFd is slightly smaller for the three-way alignment approach, but the difference is not significant (paired-sample t-test, t = 0.4928, DF = 49, p = 0.6244).

Accuracy of the Guide Tree Affects the Accuracy of the Final Tree from MSA

We evaluated the hypothesis that the quality of guide trees directly influences phylogenetic accuracy by examining the association in RFd between the guide tree and the final phylogenetic reconstruction from PhyML. For the 8-taxa tree, we combined results from the two simulations (symmetric and asymmetric trees) and the two types of guide trees (the L-INS-i and three-way alignment approaches), so that there are 200 guide trees and 200 PhyML trees from the resulting MSAs. There is a strong association in RFd between the guide tree and the PhyML-reconstructed final tree (Figure 3). When the guide tree has an identical topology to the true tree (RFd = 0 between the two), the resulting PhyML-reconstructed tree also tends to have the topology of the true tree; when the guide tree deviates much from the true tree, so does the resulting PhyML-reconstructed tree (Figure 3).

We have done the same for the 16-taxa trees (Figure 4), including the fast but inaccurate FFT-NS-1 and FFT-NS-2 options in MAFFT, in addition to the L-INS-i and three-way alignment approaches. For each of these approaches, we combined the results from four simulations (the symmetric and asymmetric trees and the half-symmetric and half-asymmetric trees). Thus, each sub-figure in Figure 4 includes 200 guide trees and 200 PhyML trees. It is clear that the two fast and inaccurate options (which generate guide trees from six-tuple similarities) produced poor guide trees (large RFd values), as well as poor final PhyML trees from the resulting MSAs (Figure 4A,B), relative to the L-INS-i approach that generated the guide tree from local pairwise alignment (Figure 4C) or to the three-way alignment approach (Figure 4D). However, for all four approaches,
Sum-of-Pair Score May Not Be a Good Criterion for Choosing the Best MSA
There are two criteria that can be used to evaluate the quality of an MSA. The first is phylogenetic accuracy, i.e., the MSA that results in the most accurate phylogenetic reconstruction is the best MSA. This criterion is conceptually fine but not computationally practical. Also, one can generally evaluate phylogenetic accuracy only for simulated sequences with a known true tree. The second criterion is the sum-of-pair score (SPS) or its variations, such as the weighted SPS [39][40][41]. This weighted SPS is used in the default option in MUSCLE and in the G-INS-i and L-INS-i options in MAFFT. The criterion is computationally practical and expected to be generally consistent with the first criterion. Our results in the previous section show that when an MSA is generated with the true tree as a guide tree, this MSA tends to result in the most accurate phylogenetic reconstruction. It is, therefore, interesting to know if an MSA generated with the true tree as a guide tree also leads to the highest SPS.

We compared the SPS from two types of MSAs, one generated using the true tree as the guide tree (the "trueTree" approach) and the other generated using the accurate L-INS-i option in MAFFT (the "L-INS-i" approach, which creates the guide tree based on local pairwise alignment). The input sequences were simulated with the symmetric and asymmetric trees as before, with an average sequence length of 500 amino acids. Each simulated data set generated two MSAs, one from the trueTree approach and the other from the L-INS-i approach. The two MSAs were also used for phylogenetic reconstruction using PhyML. When the true tree was used as a guide tree, the final PhyML tree was closer to the true tree (smaller RFd) than that of the L-INS-i approach (Figure 5A,C). This difference is highly significant based on a paired-sample t-test (p < 0.0001 for the data in Figure 5A,C).
Thus, when phylogenetic accuracy is used as the criterion, the MSA resulting from using the true tree as a guide tree is better than the MSA from the L-INS-i approach.
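As a point of reference, an unweighted sum-of-pair score can be computed as below. This is a simplified stand-in for the weighted SPS used by MUSCLE and MAFFT: a toy match/mismatch/gap scheme replaces a real substitution matrix such as BLOSUM62, and sequence weighting is omitted:

```python
from itertools import combinations

def pair_score(a: str, b: str) -> int:
    if a == "-" and b == "-":
        return 0                # gap-gap pairs are conventionally ignored
    if a == "-" or b == "-":
        return -4               # gap penalty (toy value)
    return 5 if a == b else -3  # toy match/mismatch scores

def sum_of_pairs(msa: list[str]) -> int:
    # Sum, over all columns and all sequence pairs, of the pairwise score.
    assert len({len(s) for s in msa}) == 1, "MSA rows must be equal length"
    return sum(pair_score(s1[col], s2[col])
               for col in range(len(msa[0]))
               for s1, s2 in combinations(msa, 2))

msa = ["MK-LV", "MKALV", "M--LV"]
print(sum_of_pairs(msa))
```

A higher score indicates more (and better-scoring) aligned residue pairs, which, as the results above show, does not necessarily coincide with higher phylogenetic accuracy.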
Surprisingly, the SPS is higher for the MSA from the L-INS-i approach than for the MSA obtained using the true tree as the guide tree (Figure 5B,D). This is consistent for both the asymmetric tree and the symmetric tree. The difference is highly significant based on a paired-sample t-test (p < 0.0001). This creates a conflict in choosing the best MSA. With phylogenetic accuracy as the criterion, the MSA from the trueTree approach is better; with the SPS as the criterion, the MSA from the L-INS-i approach is better.

To further confirm the results in Figure 5, we simulated longer sequences with an average length of 1500 amino acids according to the same symmetric and asymmetric trees. The same computation was repeated. For the asymmetric trees (Figure 6A), the trueTree approach (MSA obtained with the true tree as a guide tree) generated PhyML trees more similar to the true tree than those generated from the L-INS-i approach. This difference in RFd between the trueTree and the L-INS-i approaches is highly significant (paired-sample t-test, p < 0.0001). The longer sequence length allowed both the trueTree and the L-INS-i approaches to recover all symmetric true trees (Figure 6C). Thus, the criterion of phylogenetic accuracy still favors the trueTree approach over the L-INS-i approach. The relevant scatter plots for the datasets used in Figures 5 and 6 are provided in the Supplementary Materials.
In contrast, the SPS is higher for the MSA from the L-INS-i approach than for that of the trueTree approach (Figure 6B,D), which is consistent with the results in Figure 5. Thus, the SPS criterion tends to favor MSAs that do not generate the best tree. The conflict between the two criteria appears real.

Performance of the Three-Way Alignment on Benchmark Datasets
We performed a quick evaluation of the performance of the three-way alignment approach by using the BAliBASE [42] benchmark datasets of protein sequences. We selected 60 highly diverged reference alignments, including (1) the first 20 sets in RV11 (BB11001-BB11020), (2) 20 randomly chosen sets in RV30, and (3) 20 arbitrarily chosen sets from RV12 (BB12002-BB12006, BB12009, BB12010, BB12012-BB12024). These MSAs were corroborated with other information, such as protein structure, and may be considered the best approximation of the true alignment. From each of these 60 sets of protein sequences, we generated two additional alignments, one from MAFFT with the accurate L-INS-i option and the other from the three-way alignment approach. These three MSAs are referred to as BAliBase, L-INS-i, and three-way. From each alignment, a PhyML tree was built with the default LG model and the simultaneous optimization of tree topology, branch lengths, and rates. The three resulting trees were designated BAliBase, L-INS-i, and three-way, respectively. The BAliBase tree was taken as the best approximation of the true tree. The RFd value was calculated between the BAliBase tree and the L-INS-i tree and between the BAliBase tree and the three-way tree. The results are similar to those with simulated sequences. The mean RFd is 3.03333 between the BAliBase and L-INS-i trees and 2.66667 between the BAliBase and three-way trees. The difference is marginally significant based on a one-tailed paired-sample test (t = 1.6638, DF = 59, one-tailed p = 0.0507).
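The benchmark workflow can be scripted roughly as follows. This sketch is hedged: the input/output file names and the three-way guide tree file are placeholders, while the MAFFT invocation for L-INS-i (--localpair --maxiterate 1000), the --treein option for a user-supplied guide tree, and PhyML's -d/-m options follow the tools' documented command-line interfaces:

```python
import subprocess

# L-INS-i alignment (MAFFT's documented accurate option):
subprocess.run(["mafft", "--localpair", "--maxiterate", "1000", "seqs.fasta"],
               stdout=open("linsi.fasta", "w"), check=True)

# Alignment with a custom (e.g., three-way) guide tree, assuming the
# tree file is in MAFFT's expected --treein format:
subprocess.run(["mafft", "--treein", "threeway_guide.tree", "seqs.fasta"],
               stdout=open("threeway.fasta", "w"), check=True)

# Phylogeny under the LG model (PhyML expects PHYLIP-format input):
subprocess.run(["phyml", "--input", "linsi.phy", "-d", "aa", "-m", "LG"],
               check=True)
```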
Discussion
There are disagreements involving guide trees in progressive multiple sequence alignment. First, what is the best guide tree for progressive multiple sequence alignment? Second, how can we obtain the best guide tree? There are also disagreements on what criterion should be used in choosing the optimal MSA. If phylogenetic reconstruction is the ultimate goal, then phylogenetic accuracy obviously should be the ultimate criterion for choosing the best MSA. Given that this criterion cannot be practically used, does the SPS criterion serve as a good proxy? This study aims to address these questions, with a focus on highly diverged sequences that are hard to align.

Is the True Tree the Best Guide Tree for Progressive Multiple Sequence Alignment?
One would tend to assume that the true tree should be the best guide tree. However, this assumption conflicts with the principle that multiple sequence alignment should start with the most similar sequences and progress toward less similar sequences (R. C. Edgar, pers. comm.). This conflict is illustrated with the following true tree: ((S1:0.001,S2:0.1):0.001,(S3:0.001,S4:0.1):0.001). S1 and S3 are the most similar sequences, with a pairwise distance of only 0.004 (a numeric check of this example is sketched after this discussion). They should therefore be aligned first, following the principle stated above. However, the true tree would not allow S1 and S3 to be aligned first and would force S1 and S2 (or S3 and S4) to be aligned first. This is one of the reasons for widely used multiple sequence alignment programs, such as MAFFT [12] and MUSCLE [13], to use a modified version of UPGMA to reconstruct the guide tree, because UPGMA will cluster S1 and S3 together. Such a guide tree ensures that S1 and S3 would be aligned first. Will such a guide tree and the resulting MSA cause phylogenetic distortion in the final reconstructed tree? Our results, especially those in Table 3 and Figures 3 and 4, suggest that, if the accuracy of the final phylogeny is taken as the criterion, the true tree indeed is the best guide tree. Version 5 of MUSCLE [43] includes an ensemble of trees for exploring the consequence of the resulting MSA on phylogenetic reconstruction. This would help phylogeneticists appreciate the variation in guide trees and the resulting reconstructed phylogenies.

How to Obtain the Best Guide Tree?
If we agree that the true tree is the best guide tree, then how do we obtain a guide tree that is the best approximation of this true tree? In this research, we explore the potential of three-way alignment in improving the accuracy of the guide tree. Our results are consistent with the hypothesis that three-way alignment can produce better guide trees (exhibiting lower RFd with the true tree) compared to guide trees from PSA or k-tuple approaches, leading to improved MSAs and improved phylogenetic reconstruction based on those MSAs (Tables 1-4). Two lines of evidence were presented to support the conclusion that the guide tree from the three-way alignment (3-WAY) is better than that generated from the most accurate option in MAFFT (L-INS-i, which creates the initial guide tree from local pairwise alignment). First, the guide tree from the 3-WAY approach is closer to the true tree than that from the L-INS-i approach. Second, when the MSAs generated from the 3-WAY and L-INS-i guide trees were fed to PhyML for phylogenetic reconstruction, the MSA from the 3-WAY guide tree produced PhyML trees closer to the true tree than that from the L-INS-i approach.
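As the numeric check promised above: the patristic distances implied by ((S1:0.001,S2:0.1):0.001,(S3:0.001,S4:0.1):0.001) can be clustered with average linkage (UPGMA), which joins S1 and S3 first, unlike the true tree:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

labels = ["S1", "S2", "S3", "S4"]
# Patristic (path-length) distances read off the example tree;
# note d(S1, S3) = 0.001 + 0.001 + 0.001 + 0.001 = 0.004, the smallest pair.
D = np.array([
    [0.000, 0.101, 0.004, 0.103],
    [0.101, 0.000, 0.103, 0.202],
    [0.004, 0.103, 0.000, 0.101],
    [0.103, 0.202, 0.101, 0.000],
])
Z = linkage(squareform(D), method="average")  # UPGMA
print(Z)  # the first merge row joins leaves 0 and 2, i.e., S1 and S3
```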
While guide trees based on k-tuple similarities in MAFFT are poor, the guide tree from the L-INS-i option in MAFFT is very good, and three-way alignment may be useful only in the most challenging cases with extremely diverged sequences. Sequences simulated from our half-symmetric and half-asymmetric trees are comparable in divergence to many real homologous amino acid sequences, yet MAFFT performed well with these sequences. Only with the highly diverged sequences simulated from the asymmetric trees did MAFFT experience difficulties in generating a quality MSA (Tables 1-4).

Is Sum-of-Pair Score or Its Derivative a Good Criterion for Choosing the Best MSA?
The best MSA should produce the true tree, especially when the objective of sequence alignment is accurate phylogenetic reconstruction. However, phylogenetic accuracy cannot be used directly as a criterion because the true tree is unknown, except in simulated sequences. One would hope that the sum-of-pair score (SPS) or its variations, such as the weighted SPS, which is computationally practical and widely used as a criterion for choosing the best MSA, would be equivalent to the criterion of phylogenetic accuracy. In other words, the MSA with the highest SPS would also be the MSA that results in the most accurate phylogeny. Our results (Figures 5 and 6) suggest that this is not the case. From each set of our simulated sequences, two MSAs were produced, one with the true tree as the guide tree (trueTree) and the other using the guide tree from the L-INS-i approach (L-INS-i). When these two MSAs were fed to PhyML for phylogenetic reconstruction, the MSA from the trueTree approach produced trees more similar to the true tree than that from the L-INS-i approach. However, the latter has a significantly higher SPS than the former. Thus, the two criteria are inconsistent.

It is difficult to specify the time complexity of the three-way alignment with the Carrillo-Lipman algorithm. This algorithm for the three-way alignment includes three pairwise alignments, followed by a simplified three-way alignment that does not need to visit all cells in the cube. The time complexity for this last step is difficult to express because the time required depends on the nature of the three sequences. If the three sequences are nearly identical, then we have the best scenario, and the time required for this step would be almost linear. If the three sequences are highly diverged and differ much in length (i.e., many indel events), then the time requirement for this step would be similar to the plain three-way alignment using dynamic programming. Because we aim to improve the sequence alignment of highly diverged sequences, the time saved with the Carrillo-Lipman algorithm is not substantial.

One time-saving protocol is to first identify regions of consistency among the three pairwise alignments in each three-way alignment using the approach proposed by Gotoh [44]. The regions of consistency do not need three-way alignment. They can serve as anchors so that one only needs to do three-way alignment for the sequence segments between such anchors.

Conclusions
In progressive multiple sequence alignment, the quality of a guide tree affects the quality of the MSA and the quality of the subsequent phylogenetic reconstruction. The three-way alignment improves the quality of the guide trees and results in more accurate phylogenetic reconstruction. The two criteria for choosing the best MSA, phylogenetic accuracy and sum-of-pair score, conflict with each other.
Figure 1. The 16-taxa trees used for simulating sequences. The branch lengths from the leaf to each internal node are indicated by the scale above the tree. Trees referred to as symmetric and asymmetric trees use the top numbers of the scale. Trees referred to as half-symmetric and half-asymmetric trees use the bottom numbers of the scale.

Figure 2. Eight-taxa trees used for simulating sequences of high divergence. The scale indicates the branch lengths from the leaf to the internal nodes.

Figure 3. Relationship between RFd of a guide tree and RFd of the corresponding PhyML tree from the resulting MSA, based on the 8-taxa symmetric and asymmetric trees. A bubble plot was used because many points overlap each other. The relationship is highly significant (n = 200, r = 0.83143, p < 0.0001).

Figure 4. Relationship between RFd of guide trees and RFd of phylogenetic trees generated by PhyML, based on 16-taxa trees. (A) FFT-NS-1 approach in which the guide tree was generated from 6-tuple similarities. (B) FFT-NS-2 approach in which the guide tree is recomputed from the first round of multiple sequence alignment. (C) L-INS-i approach in which the guide tree is from local pairwise alignment. (D) Three-way alignment approach in which the guide tree was described in Section 2. A bubble plot was used because many points overlapped with each other.
Figure 5. Conflict between two criteria (phylogenetic accuracy and sum-of-pair score) in choosing the best MSA. Sequences with an average length of 500 are simulated with the 16-taxa symmetric and asymmetric trees. Two MSAs were produced from each set of sequences, one with the true tree as the guide tree (trueTree) and the other with the L-INS-i approach (L-INS-i), which generates a guide tree from local pairwise alignment. The sum-of-pair score was calculated for each MSA. PhyML was used for phylogenetic reconstruction for each MSA, and the Robinson-Foulds distance (RFd) was calculated between the true tree and the PhyML tree. The trueTree approach produced PhyML trees closer to the true tree than that of the L-INS-i approach for both the asymmetric tree (A) and symmetric tree (C). However, the L-INS-i approach produced MSAs with higher sum-of-pair scores than that of the trueTree approach, which is true for both the asymmetric tree (B) and symmetric tree (D).

Figure 6. Conflict between two criteria (phylogenetic accuracy and sum-of-pair score) in choosing the best MSA. Sequences with an average length of 1500 are simulated with the 16-taxa symmetric and asymmetric trees. Computations are the same as in Figure 5. The trueTree approach produced PhyML trees closer to the true tree than those of the L-INS-i approach for the asymmetric trees (A), and both approaches produced PhyML trees identical to the true tree for the symmetric tree (C). However, the L-INS-i approach produced MSAs with higher sum-of-pair scores than those of the trueTree approach, which is true for both the asymmetric tree (B) and symmetric tree (D).

Table 1. Result of comparing guide trees generated using 3-way alignment and L-INS-i methods based on simulated amino acid sequences on 8-taxa symmetric and asymmetric trees.

Table 3. Phylogenetic accuracy from different guide trees. Sequences were simulated for the 8-taxa symmetric and asymmetric trees. MSAs were generated (1) with the true tree (True Tree), (2) from the L-INS-i option (L-INS-i), and (3) from the 3-way alignment (3-WAY). Phylogenetic reconstruction was performed with PhyML. Other column headings are the same as in Table 2.
2024-05-12T15:14:19.098Z
2024-05-10T00:00:00.000
{ "year": 2024, "sha1": "c094a0979d9ec6b20efbb227e68071d4690776db", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1999-4893/17/5/205/pdf?version=1715332979", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "8a0f641e41bc32ede406fa1d67ae73b4255c809f", "s2fieldsofstudy": [ "Biology", "Computer Science" ], "extfieldsofstudy": [] }
21355307
pes2o/s2orc
v3-fos-license
Agnostic Informatics System of Systems: The Open ISoS Services Framework

The upward integration endeavor is making informatics systems (I-systems) increasingly complex. The modeling techniques, methodologies, development strategies, deployment and execution environments, maintenance and evolution, and governance, to mention just a few aspects, are turning the resulting (un)integrated informatics technology landscape into a vendor lock-in landscape. The relation between informatics science and engineering, on one side, and the automation of an organization's business or control processes, or the provisioning and adaptation of its services, on the other, has proven difficult to converge to a common understanding of clear computational responsibility borders. Existing approaches and standards fail to be complete with respect to establishing a landscape of informatics technology under a vendor-agnostic (lock-in free) model. In this context, this paper extends previous research by proposing an organization-level modularity framework aimed at formally identifying an agnostic and open informatics system of systems (ISoS). A definition of its components is provided, and a validation case study is discussed.

Introduction
The role and value of informatics science and engineering can be significantly improved if the gap between the technology landscape and the business processes domain is reduced [24]. Current informatics solutions are often difficult to substitute, paving the way for vendor lock-in cases, which weakens their value [27]. This problem has been studied in the context of multi-sectorial standards, network effects, and the impact of lock-in patterns in the informatics systems industry, often leading to the conclusion that "lock-ins are not in general avoidable" [8]. It is thus necessary to develop strategies to reduce such dependencies. The challenge cannot, however, be addressed exclusively from an informatics point of view. It requires a common strategy including the business and administration areas, in order to reach a common understanding of the complexity of integrating informatics systems in the enterprise/organization and of the need to induce competition among solution providers. One approach is the moderation of innovation strategies based on consensus agreements, associated with the consolidation of open specifications leading to a wider, coordinated, and complete suite of standards. A proposal of an open infrastructure as a facilitator to integrate legacy systems, developed under an open community, goes in this direction [29]. Nevertheless, the complexity of the challenge is demonstrated by the growing recognition that a holistic model for systems integration is lacking. The software engineering discipline, while key for the development of informatics systems (I-systems), has pursued various alternative development strategies. For instance, the agile methodologies are an evolution of the waterfall model, later converging to hybrid approaches, and more recently to the formalization of the OMG's Kernel and Language for Software Engineering Methods (Essence) specification [21]. Research work on software engineering feasibility discusses the management-driven decision to adopt an agile or plan-driven (waterfall) approach, guided by value creation [18]. However, even if a systematic approach is considered, the focus is on software development and not on how to integrate different I-systems and their components and to cope with the substitutability principle [24].
A first attempt to solve the lock-in problem was proposed with the Collaborative Enterprise Development Environment (CEDE) platform as a way to structure the software development landscape [24]. This paper presents and discusses an approach to reduce the above-mentioned vendor dependency risks and the gap between processes and the I-systems technology environment. The proposed approach is a step further in our previous research, aiming at contributing towards an open informatics systems modularity and framework for organizations. By the end of 2002, in the early days of the service-oriented paradigm (SOA), we formulated an autonomous system abstraction implementing services, which was then applied to the Intelligent Transport Systems Interoperability Bus (ITSIBus) [25]. Later, in 2011, we enhanced the modularity of the design based on the experience acquired with other projects (Horus and SINCRO) targeted at developing open architectures for nation-wide informatics systems, leading to the concept of the Cooperation Enabled System (CES) [22]. This paper proposes the formalization of the Informatics System of Systems (ISoS) framework as a conceptualization based on the CES modularity abstraction. The proposed I-system notion ranges from simple to complex entities made of CES elements and able to answer one or more requirements sets. As an application example, the enterprise collaboration network (ECoNet) [26] infrastructure and platform, operationalized by the enterprise collaboration manager (ECoM) I-system, is discussed as a validation of the ISoS adaptive integration framework.

Problem Domain and the State of Research
Achieving an effective organization-level agnostic ISoS technology landscape is an open problem without a known and well-founded approach. This problem is commonly addressed from two main research streams: (i) systems' development and operation cycles, and (ii) organizations' informatics systems architecture, in conjunction with processes and services models. A convergence of approaches is, however, needed. Establishing the foundations for integrated I-systems at the level of organizations (enterprises and others) requires, in fact, multidisciplinary research contributions. As such, the state of the art is at the level of "islands of automation", where different I-systems, developed under different specific industry cultures, are difficult to manage, integrate, and extend [15]. More recently, the collaborative dimension needed to support interactions between organizations has added further structuration requirements for the involved computational responsibilities. Besides interoperability requirements, the challenge is to reach an adaptive and cooperative system of systems whose components are provided by multiple suppliers. The need to cope with the evolution of systems requires the capability of smoothly replacing I-systems by other (new-generation) I-systems, which represents an even more complex challenge. A number of recent and ongoing initiatives have tried to contribute partial solutions to some of these challenges. For instance, the ISO/IEC/IEEE 42010:2011 systems and software engineering standardization proposal, an evolution of the IEEE 1471:2000 architectural description of software-intensive systems, embeds a systemic approach. The efforts to map existing enterprise architecture frameworks (e.g., Zachman Framework, TOGAF, RM-ODP, GERA, and ArchiMate) demonstrate a general concern on how to formalize enterprise informatics systems modeling [10].
From another direction, the Engineering Service Bus suggests an approach addressing the integration of heterogeneous engineering and modeling tools, contributing to resolving some technical and semantic gaps [3]. Another academic contribution is represented by the collaborative enterprise development environment (CEDE) platform [24], which focuses on the reduction of vendor dependencies regarding services development. The enterprise service bus (ESB) concept was introduced as an adaptation layer for the integration of monolithic enterprise systems. The model-driven data independence, efficiency and functional flexibility using feature-oriented software engineering (DIEFOS) approach is an example of the trend towards efficient model-driven adapter frameworks [15]. In a more recent initiative, and in line with the idea of microservices [13], a model for a mini enterprise application description (EA-Mini-Descriptions) was proposed. This is an interesting modeling strategy based on the OMG's MOF [20], establishing a layered meta-data modeling framework based on M0 (Run-Time Data), M1 (Architectural Model, Meta-Data), M2 (Integration Rules, Architectural Ontology, Architectural Meta-Model), and M3 (ArchiMate, OWL) [4]. Further developments of the microservice model, such as the Microservices Inner and Outer Architecture as defined in [19] and its relation to the Enterprise Services Architecture Reference Cube, seem promising, although requiring further clarification.

Modularity has been a research topic for a long time. For example, [11] applies modular systems theory to the SOA paradigm. Based on an empirical study, the same authors conclude that "Implementing new, dedicated decision-making bodies for SOA hampers organizations in achieving higher degrees of IT flexibility and reuse", pointing to the need for new decision-making and governance approaches regarding technological strategies in organizations. However, most existing research work does not include any discussion about the multi-supplier issue and its impact on the organizations' I-systems. As an exception, in [17] and [16] a mathematical model for the dynamics and modularity degree analysis of an elevator system is proposed and discussed. A related study discusses the fact that Airbus abandoned a proprietary modular cabinet from Honeywell, replacing it with the ARINC 600 open Integrated Modular Avionic (IMA), an open modularity specification that was applied in the design of the A380 airplane [5]. It is quite interesting that the main motivations for this move were to guarantee alternative suppliers for the same components and, as a side effect (also an important aspect), cost reduction. Also interesting is the fact that IMA was founded by Honeywell, the supplier that had been the only one offering the mentioned proprietary component. Nevertheless, in spite of these efforts from the research and practice communities, a well-founded strategy to deal with the growing complexity of I-systems is lacking.

The Collaborative Network Dimension. Beyond the intra-organization dimension (vertical integration), the integration of I-systems has to answer the growing number of interactions between informatics systems of business partners (horizontal integration along the value chain). Existing technological approaches to support Collaborative Networks (CNs) do not seem to address the needs of inter-systems collaboration properly.
Seen from the informatics science and engineering point of view, a CN [1] establishes a directed graph of nodes and edges, where nodes correspond to organizations with their own processes and specific technological culture. In practice, these graphs can involve a complex mesh of dedicated connections, based on different transport and payload message formats. In order to cope with this complexity, the grid community has suggested the grid infrastructure to support Virtual Organizations (VOs), which face "unique authentication, authorization, resource access, resource discovery, and other challenges" answered by the grid technology [7]. More recent work on cloud computing extends the idea, proposing an application-driven network (ADN) to establish quality-of-service links between business applications (our I-systems) under QoS constraints [28]. Nevertheless, the CN abstraction requires more than using distributed workstations to share and interchange resources [6]. Another initiative, the KeyVOMS server as a VO Management System (VOMS), suggests that application services share a common infrastructure to manage virtual organizations [12]. But one key problem is that no unified approach is able to cope with all requirements, making potential frameworks only partially successful; "there will always be tools, which are unique for specific use cases" [3]. In spite of the many efforts to establish a robust network infrastructure to support CNs, a major problem is the cost associated with instantiating and maintaining the services [6]. Therefore, an open framework is needed to induce a move from current proprietary approaches in enterprise systems (e.g., SAP, Oracle, IBM, and Microsoft) towards establishing a collaboration-oriented informatics system landscape with substitutable components. The proposed ISoS framework presented in the next section goes in this direction.

ISoS Model and Framework
In this paper, we formalize the open I-system of systems (ISoS), extending previous research on the Cooperation Enabled System (CES) [22]. The proposed framework is based on two main entities: (i) the I-system, as an organization-level autonomous computational responsibility under some business model, and (ii) the Cooperation Enabled System (CES), as an atomic component integrating an I-system. For a CES to be used in an organization's informatics landscape, it has to be integrated into an I-system. Therefore, we can say that the informatics environment of an organization is made of I-systems, which in turn are composites of CES. The CES model establishes an atomic modularity abstraction able to support substitutability. A CESx can be substituted by a CESy from a competing supplier if the services implemented by CESx are structurally and semantically equivalent to the services implemented by CESy. Moreover, the substituted and the substitute need to implement migration mechanisms able to recover current and historic state data. This requires that a CES implement specialized migration services to be called by the substitute when assuming the roles of the substituted CES. The substitution process might be complex enough to require human intervention. Nevertheless, the model assumes the development of standard mechanisms for each class of CES, making competing products substitutable. Therefore, a CES abstraction is defined as a tuple (I0, SA, MC), where: i. I0 is the entry point service for the self-awareness mechanism responsible for adaptability; ii. SA is the Self-Awareness element, following the CES definition; iii.
MC is a modular composite that can be based on CES (CESc) or another equivalent structure. If a CES composite, CESc = {CES0, CES1, …, CESN}, where N ≥ 0, and CES0 is the system CES, responsible for managing the composite, with its I0 as the entry point (self-awareness). To deal with legacy assets, the model does not impose a strict CES implementation. However, the SA(I0) entry point needs to conform to the service I0 of the CES model specification. This framework is adaptive, considering that implementations are free to adopt any existing competences, components, and technology assets. Only the availability of an equivalent I0 (awareness entry point) is mandatory for ISoS structural compliance. The openness of an I-system can range from fully open to closed, crossing possible hybrid situations, depending on the substitutability of its atoms (a CES or any other modularity framework), as long as the I-system complies with the ISoS mandatory specifications. The general structure of an I-system and its relation to the CES atom are depicted in Fig. 1.

Fig. 1. Model of an Informatics system (I-system)

The proposed I-system model is transparent regarding the adopted implementation technologies. An adaptive virtual execution environment, supported by the respective CES0, manages heterogeneity and execution location (cloud or on-premises). Furthermore, the model aims to simplify the integration of legacy I-systems by considering the respective CES0 as a wrapper. The framework aims at an integration of I-systems covering: (i) federated data sharing and data management (data lifecycle management; backups/recovery; historical data); (ii) unified authentication and role-based access control; (iii) unified administration of deployed I-systems; (iv) unification of the user interface, considering the participation of each I-system in user interactions; and (v) a unified security strategy for data privacy, data integrity, and (programmatic) access to computational services.

Definition 3 ISoS: An I-system of systems (ISoS) is defined as a tuple ISoS = (I0, SA, ISC), where: i. I0 is the entry point service, supporting the self-awareness mechanism, following the I0 service of a CES and an I-system; ii. the Self-Awareness (SA) follows the I-system and CES definitions; iii. the I-system composite (ISC) is defined as a set ISC = {I-system0, I-system1, …, I-systemM}, where M ≥ 0; for simplicity, the ISC is also represented by S = ISC.

Following the strategy adopted for an I-system, the minimal requirement for an organization to be considered conforming to the ISoS framework is to implement an equivalent I-system0 and the respective I0. The I-system0 plays an enterprise integration, coordination, operationalization, and mediation role. Through the I-system0, the proposed ISoS framework establishes an open adaptive coupling infrastructure (OACI) as a generic logical bus connecting the enterprise I-systems, as illustrated in Fig. 2. Compared with the practiced enterprise service bus (ESB), where one or more informatics systems mediate the required interconnections, the OACI is based on the simple I-system0, CES0, and I0 mechanisms to establish peer-to-peer adaptive interconnections among I-systems. Integration mediators (integration hubs) that establish additional dependencies are not required in the proposed ISoS framework. Every shared informatics capability has to be formalized under the I-system concept. As an example, one of the I-systems can be the ECoM if the ECoNet [26] collaborative platform is adopted, as depicted in Fig. 2.
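To make the layered definitions concrete, the tuples can be sketched as plain data structures. This is an illustrative Python sketch, not part of the ISoS specification: the SA element is folded into the I0 callable for brevity, and the lookup method and the capability dictionary returned by I0 are hypothetical:

```python
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CES:
    i0: Callable[[], dict]           # entry point of the self-awareness mechanism
    composite: list[CES] = field(default_factory=list)  # MC as a CES composite

@dataclass
class ISystem:
    ces: list[CES]                   # ces[0] plays the CES0 management role

    @property
    def i0(self) -> Callable[[], dict]:
        return self.ces[0].i0        # the I-system exposes CES0's I0

@dataclass
class ISoS:
    systems: list[ISystem]           # systems[0] is I-system0 (coordination role)

    def lookup(self, capability: str) -> ISystem | None:
        # Peers introspect implemented capabilities through each I-system's I0.
        for s in self.systems:
            if capability in s.i0().get("capabilities", []):
                return s
        return None
```

The single point of structural compliance is that every I-system answers through an I0-equivalent entry point; everything behind it is left to the implementation, which is what makes the coupling adaptive.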
The other I-systems can look up and obtain access credentials for the ECoM services from the organization's I-system0 (CES0, service I0). The ISoS framework makes it possible for the organization's informatics landscape to evolve into a coordinated composite of I-systems that are potentially substitutable if developed under open specifications. Therefore, the I-system0 is a kind of meta-system responsible for coordinating the remaining deployed I-systems. It is the responsibility of I-system0 to implement common governance functions, e.g., unified security, services discovery mechanisms, and user authentication and authorization. The I-system model is flexible enough to support component I-systems distributed across on-premises or cloud computational resources. Such flexibility is possible because the CES0 component is responsible for the management of the I-system composite as a consistent, unitary entity and computational responsibility (Fig. 3). An open I-system is also said to have all its CES under external modularity [24]. If not all CES are substitutable, the I-system is said to be partially open. It is closed if none is substitutable. In this case, the I-system is said to be developed under an internal modularity strategy. A CES under external modularity is said to be open.

Definition 5 Substitutability: An I-systemx (I-systemx ∈ S) is substitutable if ∃ y: I-systemy ≡ I-systemx. Substitutability is the capability of a CES or an I-system that makes it possible to replace it by an equivalent through a migration process. Substitutability can happen at two different levels: (i) the I-system level (substitutable CES), and (ii) the ISoS level (substitutable I-systems).

The proposed model was validated in the context of the European MIELE project, applied to port administration ecosystems to develop the logistics single window vision. This case is briefly described below.

The Logistics Single Window Collaborative Network. The Logistics Single Window (LSW) [2] and Port Community System (PCS) [14] research, funded by the European MIELE project, aimed at establishing a European-wide collaborative framework for door-to-door freight and logistics management [23], [26]. The number of connected stakeholders, the involved heterogeneity (processes and technology), and the complexity of business data and services exchanges establish a web of I-systems that is difficult to develop and maintain. The LSW services provided by business organizations interact through the ECoNet infrastructure (as depicted in Fig. 4) [26]. The LSW I-system offers transport and logistics services, or composites of services, involving a number of stakeholders participating in the door-to-door freight offerings. The I-system approach formalizes the current point-to-point model based on adapters for data interchanges, using a common and open infrastructure where adapters are formalized as collaboration contexts (CoC) [23]. However, organizations that have adopted neither ECoNet nor the ISoS framework can continue to use adapters, provided that their peers make the necessary changes to cope with legacy practices (see Fig. 5). The proposed I-system approach is adaptive, as it makes it possible for legacy environments to follow a progressive adoption of the proposed models (ISoS, ECoNet, and CES). The user-organizations need the suppliers that they trust to adopt these frameworks, a step commonly constrained by the need to acquire new competencies and to change products' lifecycle management processes.
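Returning to Definition 5, substitutability can be sketched as a structural-equivalence predicate followed by a migration step. The i0() introspection payload and the migrate_from operation are hypothetical names, building on the previous sketch:

```python
def substitutable(x, y) -> bool:
    # Structural equivalence is approximated here by matching service
    # signatures exposed through each I-system's I0 entry point.
    return set(x.i0().get("services", [])) == set(y.i0().get("services", []))

def substitute(isos, x, y) -> None:
    if not substitutable(x, y):
        raise ValueError("y is not structurally equivalent to x")
    y.migrate_from(x)  # recover current and historic state data from x
    isos.systems[isos.systems.index(x)] = y
```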
The adoption process can be accelerated if the specifications and reference implementations are developed under some open-source model. The advantage for user-organizations is the potential to reduce costs, resulting from the increased competitiveness induced by the substitutability principle of the adopted I-systems. As far as the structural dimension is concerned, the proposed models are flexible enough to accommodate different implementations. An I-system does not have to be implemented as a composite of CES. In fact, an I-system is a black box with a single well-known entry point, the (or an equivalent of the) I0 service of the CES0 component. What is important is that any peer can introspect the implemented functionalities and technology constraints for a dynamic coupling between I-systems. For simplicity, the sharing of functionalities implemented by different computational responsibilities (different suppliers) is restricted to I-systems. This means that if a CES component has value beyond a single I-system, its services can be made available through a new I-system. The example of a CES implementing an organization-wide persistence service configures a specialized I-system with that specific responsibility. For a user-organization to evolve to an agnostic (or dependence-free) informatics landscape, a semantics consolidation is necessary. We propose to develop reference models for I-systems targeted to specific application domains. Considering the need to promote the substitutability of LSW providers, the challenge is to develop a reference implementation (LSWreference) establishing common interfaces for all derived implementations (market LSW I-system product offerings). Furthermore, considering that different logistics stakeholders might adopt different LSW providers, the proposed model makes it possible for them to join a virtual collaboration context [26].

Impact on Existing Practices
The proposed ISoS challenges current practices, considering that it introduces an application-level modularity framework requiring a novel structuration of existing approaches. It promotes the adoption of open models and technical specifications whose products are verified through a conformance certification process. This means that existing market competition, based on unique product features or development services for specific I-systems, is expected to move towards standardized computational responsibilities capable of being substituted. This can, however, happen through a smooth changing process, without disturbance of complex operating legacy I-systems. In fact, the proposed framework makes possible a partial migration of existing I-systems, considering that no constraint exists on incorporating existing technologies. The ISoS framework considers an adaptive coupling among I-systems, making possible the convergence to patterned computational responsibilities. Such standard computational responsibilities, as I-systems, can even wrap legacy systems, in order to cope with the recognized difficulty for industry to change its development processes and technologies. Furthermore, a novel collaborative governance model is required, considering that there is a need for an integrated monitoring and maintenance management strategy. As I-systems tend to be more interdependent/cooperative, malfunction detection and diagnosis need to be performed by a unified I-system. Such an I-system shall be responsible for the first monitoring line and dispatch the maintenance responsibility to each I-system according to the identified problems.
Conclusions and Further Research
The informatics system of systems (ISoS) framework, in conjunction with the cooperation enabled system (CES), establishes an adaptive strategy for evolving organizations to dependency-free technology landscapes. The CES abstraction makes it possible for an informatics system (I-system) to adopt different implementations of an equivalent suite of computational capabilities, promoting in this way substitutability at the component level. The I-system, as a composite of CES or any agglomeration of computational capabilities, is the organization's modularity level able to make the technology landscape converge to cooperative and substitutable informatics systems. The proposed ISoS framework establishes a unique I-system, the I-system0, with the unique responsibility of coordinating and managing the other I-systems. A validation scenario considering the development of the logistics single window (LSW) concept was developed to make it possible for user organizations and other stakeholders to collaborate even if they subscribe to LSW services from different providers. This is made possible by adopting the ECoNet collaborative platform and its ECoM I-system, targeted at managing data exchanges under specific contexts and virtual collaboration groups (as multi-tenant collaboration domains). However, in spite of the demonstrated value, I-systems as products require further investments to gear the market towards the adoption of the ISoS framework. At the semantic level, the approach for future work is to develop an I-systems ontology establishing a sufficient set of reference I-systems and the respective reference implementations to support conformity certification processes. The strategy is in line with the EA-Mini-Descriptions [4]. It is also aligned with the Generic Enabler implementations as developed and maintained by the Future Internet Lab (FIWARE Lab) [30]. One main problem in getting such a sufficient set of I-system reference definitions is how to convince I-system developer companies to frame their products under the ISoS framework. Our approach is to invite public and private user-organizations to invest in reference implementations, on the premise that the cost reductions induced in subsequent acquisitions pay off the investments in research and development.
2018-04-03T05:35:25.061Z
2017-09-18T00:00:00.000
{ "year": 2017, "sha1": "c521d68aa8c10fae351bc9d1a681d0d352e50cc3", "oa_license": "CCBY", "oa_url": "https://hal.inria.fr/hal-01674890/file/455531_1_En_37_Chapter.pdf", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "c0aaa5b8597b65d7d798ec5d9739e04e8711e3cb", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
17716488
pes2o/s2orc
v3-fos-license
Rate and Temporal Coding Convey Multisensory Information in Primary Sensory Cortices

Abstract
Optimal behavior and survival result from integration of information across sensory systems. Modulation of network activity at the level of primary sensory cortices has been identified as a mechanism of cross-modal integration, yet its cellular substrate is still poorly understood. Here, we uncover the mechanisms by which individual neurons in primary somatosensory (S1) and visual (V1) cortices encode visual-tactile stimuli. For this, simultaneous extracellular recordings were performed from all layers of the S1 barrel field and V1 in Brown Norway rats in vivo, and units were clustered and assigned to pyramidal neurons (PYRs) and interneurons (INs). We show that visual-tactile stimulation modulates the firing rate of a relatively low fraction of neurons throughout all cortical layers. Generally, it augments the firing of INs and decreases the activity of PYRs. Moreover, bimodal stimulation shapes the timing of neuronal firing by strengthening the phase-coupling between neuronal discharge and theta-beta band network oscillations, as well as by modulating spiking onset. Sparse direct axonal projections between neurons in S1 and V1 seem to time the spike trains between the two cortical areas and, thus, may act as a substrate of cross-modal modulation. These results indicate that few cortical neurons mediate multisensory effects in primary sensory areas by directly encoding cross-modal information by their rate and timing of firing.

Introduction
Survival and appropriate behavior require constant integration of a multitude of sensory inputs from the environment. As a result, stimulus detection and reaction times improve (Gielen et al., 1983; Driver and Spence, 1998; Gleiss and Kayser, 2012). This combinatorial processing of information takes place not only in higher association areas, but also, as recently demonstrated, in putatively unisensory areas, such as primary sensory cortices (Ghazanfar and Schroeder, 2006; Macaluso, 2006; Lakatos et al., 2007; Driver and Noesselt, 2008; Kayser et al., 2008; Sieben et al., 2013). Modulation of network oscillations in their power and phase has been found to represent a powerful mechanism of cross-modal integration (Lakatos et al., 2007; Sieben et al., 2013). It seems to use as anatomic substrate not only thalamo-cortical projections (Zikopoulos and Barbas, 2007; Lakatos et al., 2009) but also direct axonal projections between primary sensory cortices that have been documented across multiple species and areas (Falchier et al., 2002; Budinger et al., 2006; Hall and Lomber, 2008; Sieben et al., 2013; Stehberg et al., 2014; Zingg et al., 2014). Despite augmenting evidence for the role of primary sensory cortices in multisensory processing, it is still largely unknown how individual neurons in these areas encode the information content of multiple senses. Most of the knowledge comes from the auditory system, where visual and/or tactile stimuli modulate the neuronal firing, in most cases by suppressing its discharge (Bizley et al., 2007; Kayser et al., 2008; Meredith and Allman, 2015). These effects critically depend on the precision of spike timing. Dissection of underlying microcircuits revealed that interareal synaptic inhibition augments the salience of relevant stimuli by degrading the potentially distracting sensory processing (Iurilli et al., 2012; Olcese et al., 2013; Ibrahim et al., 2016).
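The phase-coupling between neuronal discharge and theta-beta band oscillations referred to above is commonly quantified by bandpass-filtering the LFP, extracting its instantaneous phase via the Hilbert transform, and measuring the resultant vector length of the phases at spike times. The following is a hedged sketch with placeholder data; the band edges, filter order, and locking statistic used in the present study may differ:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 3200.0                                   # LFP sampling rate (Hz)
lfp = np.random.randn(int(10 * fs))           # placeholder signal, 10 s
spike_times = np.sort(np.random.uniform(0, 10, 200))  # seconds

# Bandpass 4-30 Hz (theta through beta), zero-phase filtering:
b, a = butter(3, [4 / (fs / 2), 30 / (fs / 2)], btype="band")
phase = np.angle(hilbert(filtfilt(b, a, lfp)))

# Phase of the oscillation at each spike time:
spike_phases = phase[(spike_times * fs).astype(int)]

# Resultant vector length: 0 = no locking, 1 = perfect phase locking.
plv = np.abs(np.mean(np.exp(1j * spike_phases)))
print(f"phase-locking value: {plv:.3f}")
```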
To which extent these cellular rules of multisensory integration are common for all primary sensory cortices is still largely unknown. Generally, neurons carry information about modality-specific sensory stimuli by using either a firing rate code (i.e., neurons modulate their action potential frequency when the "preferred" stimulus is presented) or a temporal code (i.e., sharpening of the coincidence of spiking) (Masuda and Aihara, 2007; Ainsworth et al., 2012). While the two coding mechanisms may occur separately (Roelfsema et al., 2004; Womelsdorf and Fries, 2006), in most experiments changes both in rate and in spike timing/correlation have been described (Biederlack et al., 2006; Zuo et al., 2015). Similar dual coding mechanisms by individual neurons might underlie multisensory communication at the level of primary sensory cortices, by which the salience of a certain stimulus and, thereby, its behavioral impact are augmented. To test this hypothesis, we focused on the cellular mechanisms underlying visual-somatosensory interactions at the level of primary sensory cortices. We provide electrophysiological and anatomic evidence that simultaneous visual and tactile stimuli modulate the firing rate and onset of a small population of cortical neurons. Moreover, cross-modal stimulation strengthens the phase-coupling of neuronal firing to network oscillations and the synchrony between spike trains. Anatomic evidence suggests that direct corticocortical axonal projections underlie these effects.

Materials and Methods
Extracellular recordings were performed with multi-site silicon probes (cf. Fig. 1B) whose recording sites spanned supragranular, granular, and infragranular layers and were labeled with DiI (1,1'-dioctadecyl-3,3,3',3'-tetramethylindocarbocyanine; Invitrogen) for postmortem reconstruction of their tracks in histologic sections. A silver wire was inserted into the cerebellum and served as ground and reference electrode. The body temperature of the animal was kept constant at 37°C during recording. The electrical activity was recorded at a sampling rate of 32 kHz using a multichannel extracellular amplifier (no gain, Digital Lynx 10S, Neuralynx) and the acquisition software Cheetah.

Sensory stimulation
Unimodal (either light flash or whisker deflection) or bimodal (simultaneous light flash and whisker deflection) stimuli were applied using a custom-made stimulation device as previously described (Sieben et al., 2013). Briefly, whiskers were stimulated by deflection through compressed-air-controlled roundline cylinders (RT/57110/M/10, Norgren) gated via solenoid valves (VCA, SMC Pneumatik). The device produced almost silent, nonelectrical stimulation with precise timing (0.013 ± 0.81 ms) that was constant over all trials/conditions. For full eye-field visual stimulation, 50-ms-long LED light flashes (300 lux) were used. For bimodal stimulation, whisker deflection and light flashes were applied in the same hemifield. The stimuli were randomly presented in three different stimulation conditions (unimodal tactile, unimodal visual, bimodal visual-tactile). Each type of stimulus was presented 100 times contralateral to the recording electrodes with an interstimulus interval of 6.5 ± 0.5 s. To achieve physically simultaneous stimulation of whiskers (valve-controlled whisker stimulation) and eyes (instantaneous light flash), the time delay of whisker stimulation was calculated to match the visual stimulation onset. The nonstimulated eye was covered with an aluminum foil patch.

Retrograde tracing and immunohistochemistry
Retrograde tracer injections were performed as previously described.
In brief, ketamine/xylazine-anesthetized rats were immobilized in a preformed mold fixed to the stereotaxic apparatus and received unilateral injections of the retrograde tracer Fluorogold (FG; Fluorochrome) in the S1 barrel field (2.4-2.6 mm posterior and 5.5-5.8 mm lateral to bregma) or V1 (6.9-7.1 mm posterior and 3.4-3.7 mm lateral to bregma). A total volume of 100 nl FG (5% in dH2O) was injected (30 nl/min) via a 26-G needle attached to a pump controller (Micro4, World Precision Instruments) at a cortical depth of 300 μm. The syringe was left in place for 3 min to ensure optimal diffusion of the tracer. The surgical opening was sealed with fibrin glue (Surgibond, SMI sutures) and postsurgery analgesic therapy was given (Meloxicam; 0.1-0.2 mg/kg). After a survival time of 4-8 d, the animals were deeply anesthetized with ketamine/xylazine and perfused transcardially with 4% paraformaldehyde (PFA). Brains were removed and postfixed in 4% PFA for 24 h. Coronal slices were sectioned at 50 μm and treated with PBS containing 0.2% Triton X-100 (Sigma-Aldrich), 10% normal bovine serum (Jackson ImmunoResearch), and 10% donkey serum (Millipore). The sections were incubated 2-4 d with a mouse monoclonal Alexa Fluor 488-conjugated antibody against NeuN (1:100, MAB377X, Millipore) and a rabbit polyclonal primary antibody against GABA (1:1000, #A2052, Sigma-Aldrich), followed by a 2-h incubation with an Alexa Fluor 568 donkey anti-rabbit IgG secondary antibody (1:1000, A10042, Invitrogen). Fluorescent images were obtained with an Axioskop 2 Mot microscope (Zeiss) equipped with a fluorescence camera. For quantification of retrogradely backlabeled cells, five 50-μm-thick sections spanning S1 and V1 were selected and regions of interest (ROIs; height: 150 μm, width: 300 μm) were defined using ImageJ software. FG- and GABA-positive neurons were counted within each ROI and normalized to the number of NeuN-positive cells detected within supragranular, granular, and infragranular layers.

Data analysis
Data were imported and analyzed offline using custom-written tools in Matlab software version R2013B (MathWorks). For antialiasing, the signal was bandpass filtered (0.1 Hz and 5 kHz) by the Neuralynx recording system. A third-order Butterworth filter was applied. LFP data were down-sampled by a factor of 10.
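For concreteness, the preprocessing chain described above can be sketched in a few lines of Matlab; rawSignal is a hypothetical variable holding one recorded channel, and the anti-aliasing cutoff is our assumption, since only the filter order and the down-sampling factor are specified here.

```matlab
% A minimal sketch of the LFP preprocessing described above.
fs  = 32000;              % acquisition sampling rate (Hz)
dsf = 10;                 % down-sampling factor (32 kHz -> 3.2 kHz)
fc  = 0.9 * (fs/dsf)/2;   % assumed anti-aliasing cutoff just below the new Nyquist
[b, a] = butter(3, fc/(fs/2), 'low');   % third-order Butterworth
lfp = filtfilt(b, a, rawSignal);        % zero-phase filtering preserves timing
lfp = lfp(1:dsf:end);                   % down-sample by a factor of 10
```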
Spike sorting and cluster analysis
The position of recording sites over layers was confirmed by electrophysiological (i.e., reversal of the evoked potential between supragranular and granular layers) and histologic (i.e., granular cell body layer) landmarks. Recording sites within the transition between cortical layers were not considered for analysis. The raw signal was high-pass filtered (>400 Hz). The threshold for detecting MUA was set individually at 25-30 μV. The stored signals were sorted offline depending on waveform shape using spike sorting software (Plexon). A group of similar waveforms was considered as being generated by a single neuron if it defined a discrete cluster in a 2D/3D space and exhibited a clear refractory period (>1 ms) in the interspike interval histogram. The quality of separation between identified clusters was assessed by four different statistical measurements: the classical parametric F statistic of multivariate ANOVA (MANOVA), the J3 and PseudoF (PsF) statistics, and the Davies-Bouldin validity index (DB) (Davies and Bouldin, 1979; Späth, 1980). The values of statistical testing ranged between 8.55623e-007 and 0.1 for MANOVA, 0.85 and 11.19 for J3, 554 and 12833 for PsF, and 0.19 and 3.45 for DB. A total number of 262 units were clustered in S1, whereas a total number of 246 units were identified in V1. Approximately one to three units per recording site could be detected in each cortical layer (supragranular, granular, infragranular). To classify the units into pyramidal neurons (PYRs) or interneurons (INs), four characteristic features of the extracted waveforms were used: (1) spike duration, (2) spike after-hyperpolarization duration, (3) spike end slope, and (4) spike trough-to-peak duration. The feature values of all units over all layers were used for principal component (PC) analysis. For S1 units, the first three PCs accounted for 99.5% of the variance (PC1 66.5%, PC2 22.3%, PC3 10.7%), while for units in V1, the first three PCs accounted for 99.0% of the variance (PC1 76.1%, PC2 15.0%, PC3 8.7%). A k-means cluster algorithm (k = 2) was applied to the first three PCs to classify the sorted units.
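A minimal Matlab sketch of this waveform-based classification is given below; F is assumed to be a units-by-4 matrix containing the four features listed above, and the feature standardization and the narrow-spiking convention for labeling INs are our own choices rather than documented steps.

```matlab
% A minimal sketch of the waveform-based PYR/IN split.
Fz = zscore(F);                              % put features on a common scale (assumption)
[~, score] = pca(Fz);                        % principal component analysis
pc123  = score(:, 1:3);                      % first three PCs (~99% of the variance here)
labels = kmeans(pc123, 2, 'Replicates', 20); % k-means with k = 2
% Narrow-spiking cluster (shorter trough-to-peak) taken as putative INs.
ttp  = F(:, 4);
isIN = labels == (1 + (mean(ttp(labels == 2)) < mean(ttp(labels == 1))));
```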
Firing rate
MUA and single-unit activity (SUA) were calculated before and after the stimulus (MUA: ±1 s, 10 ms bin size; SUA: ±1 s, 3 ms bin size) and summed over trials. Units were considered responsive if the stimulus-induced firing response was significantly modified, i.e., if it exceeded 1.96 times the SD [95% confidence interval (CI)] of the spontaneous firing rate averaged 1-0.9 s before the stimulus. To categorize the units that displayed a significant change in firing, their rate of discharge was calculated as the number of spikes during the first time interval (0-80 ms) after modality-specific unimodal stimulation (i.e., tactile stimulation for units in S1 or visual stimulation for units in V1) and compared with the spiking response to unimodal but modality-unspecific stimulation (i.e., tactile stimulation for units in V1 or visual stimulation for units in S1) as well as to bimodal stimulation (i.e., visual-tactile stimulation for units in S1 or V1). The latency of SUA was measured using the average first-spike latency across trials. The MUA peak of firing was obtained by narrowing the bin size to 1/sampling rate and subsequently detecting the bin with the maximum firing rate.

Multisensory interactions
In line with previously established criteria (Kayser et al., 2008), single units were classified into five groups according to their responsiveness within the first 100 ms to unimodal and bimodal stimulation (see Figure 2E, i and ii): (1) unimodal, (2) cross-modal, (3) additive multisensory, (4) nonadditive multisensory, and (5) nonresponsive neurons. Units that significantly changed their firing only after unimodal stimulation were classified as unimodal neurons (i.e., tactile stimulation for neurons in S1, visual stimulation for neurons in V1). Units that significantly changed their firing only after cross-modal stimulation were classified as cross-modal neurons (i.e., visual stimulation for neurons in S1, tactile stimulation for neurons in V1). Units that did not significantly modify their firing after any stimulation type were classified as nonresponsive neurons. Units were regarded as multisensory neurons if they either responded to all stimulation types (unimodal, cross-modal, and bimodal) or when the response to the bimodal stimulus was significantly different compared with that to the unimodal stimulus. This bimodal modulation was quantified as previously described (Kayser et al., 2008). First, we determined whether the bimodal response was significantly enhanced or suppressed compared with the unimodal stimulation. The strength of enhancement or suppression was quantified using the enhancement index

enhancement = (bimodal − unimodal) / (unimodal + bimodal) × 100,

where bimodal and unimodal correspond to the maximal bimodal and unimodal firing responses, respectively. This measure informs about the strength of the enhancement or suppression effect. The maximal unimodal response always corresponded to the modality of the area where neural activity was measured (i.e., tactile response for measurements in S1, visual response for measurements in V1). To determine whether this bimodal response modulation was equal to an additive summation of the unimodal stimulations or corresponded to a supra- or subadditive effect, a bootstrapping method was applied in a second step (Kayser et al., 2008; Felch et al., 2016). For this, we generated a matrix of the sums of spikes over all possible unimodal trial-by-trial combinations (100 tactile stimulations × 100 visual stimulations). We repeatedly drew 100 samples from this matrix 10,000 times in randomized order with replacement. From these 10,000 samples, we created a population mean of the firing rate against which we compared the observed firing rate of each neuron after bimodal stimulation by computing the z-score. The deviation from additivity was quantified using the additivity index

additivity = [bimodal − (unimodal + cross-modal)] / [bimodal + (unimodal + cross-modal)] × 100,

where unimodal, cross-modal, and bimodal reflect the tactile, visual, and visual-tactile (S1) as well as the visual, tactile, and visual-tactile (V1) responses. Neurons that showed a significant effect in the additivity index by deviating from the generated normal distribution (p < 0.05) were regarded as nonadditive multisensory. Positive or negative additivity values correspond to supra- or subadditive effects, respectively. In contrast, units were classified as additive multisensory if they showed significant firing changes in response to all types of stimulation but the additivity index did not reach significance.
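The bootstrap test for (non-)additivity described above can be sketched as follows for a single unit; tac, vis, and bimObs are hypothetical variables holding the per-trial unimodal spike counts and the observed mean bimodal response, and the bootstrap mean is used as a proxy for the additive (unimodal + cross-modal) prediction.

```matlab
% A minimal sketch of the bootstrap additivity test for one unit.
sumMat = bsxfun(@plus, tac(:), vis(:)');    % all 100 x 100 trial-by-trial sums
nBoot  = 10000;
bootMeans = zeros(nBoot, 1);
for k = 1:nBoot
    idx = randi(numel(sumMat), 100, 1);     % 100 samples with replacement
    bootMeans(k) = mean(sumMat(idx));
end
z = (bimObs - mean(bootMeans)) / std(bootMeans);   % observed vs additive prediction
p = 2 * (1 - normcdf(abs(z)));                     % p < 0.05 -> nonadditive
% Sign of the additivity index separates supra- (>0) from subadditive (<0);
% mean(bootMeans) stands in for (unimodal + cross-modal) here.
additivity = (bimObs - mean(bootMeans)) / (bimObs + mean(bootMeans)) * 100;
```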
Spike synchrony
Cross-correlation between spike trains in S1 and V1 after bimodal stimulation was used as a measure of synchrony and calculated using the Matlab function xcorr (5 ms bin size, 3 ms step size, time lag ±1 s) with V1 firing as reference. The cross-correlation values between S1 and V1 after bimodal visual-tactile stimulation were corrected for spurious coherence by subtracting the cross-correlation values between S1 spike trains after unimodal tactile stimulation and V1 spike trains after unimodal visual stimulation. Unimodal tactile and unimodal visual stimulations were presented at different time points during the stimulation paradigm and hence should not show any correlation of firing. All PYRs and INs of all classified neuronal groups (unimodal, cross-modal, additive multisensory, nonadditive multisensory) with a significant firing response to stimulation were included in the cross-correlation analysis. Only pairs of neurons with significant cross-correlation values (3.29 SD / 99.9% CI threshold) for at least 10 consecutive bins were considered for analysis. A Gaussian smoothing filter was applied to the 1D signal array.

Phase coupling analysis
The intercortical phase and strength of locking between the spiking of clustered units and network oscillations were assessed using a previously described algorithm (Siapas et al., 2005; Brockmann et al., 2011). For this, the raw LFP signal was bandpass filtered (4-12, 12-30, and 30-100 Hz) using a third-order Butterworth filter preserving phase information. Subsequently, a Hilbert transform was applied to the filtered signal. If the firing of a neuron is modulated by oscillations within a specific frequency band, then its phase over the oscillatory cycle is not uniformly distributed. A phase of zero refers to the peak and a phase of ±π to the trough of the cycle. The coupling between spikes and network oscillations was tested for significance using the Rayleigh test for nonuniformity. The spike trains were converted into a sequence of unit-length vectors oriented by the phase of their corresponding spikes. The value of Rayleigh's Z statistic indicates the strength of phase coupling (or degree of nonuniformity) between unit events and field potential and was computed as

Z = nR²,

where n is the number of spikes and R denotes the mean resultant vector (MRV) length of the given phase series. The probability that the null hypothesis of sample uniformity holds is approximated by P = e^(−Z); for n > 50, this approximation is adequate (Fisher, 1993). Only neurons that showed a significant degree of phase locking were considered for analysis. Their MRV length (locking strength) as well as their mean direction (preferred phase of locking) were calculated. The phase locking of spikes to oscillatory activity was confirmed using the pairwise phase consistency (PPC) measure, which is independent of the number of trials or spikes (Vinck et al., 2010; Tamura et al., 2016). For this, the average pairwise circular distance D was calculated as

D = [2 / (N(N − 1))] Σ_{j=1}^{N−1} Σ_{k=j+1}^{N} d(θ_j, θ_k),

where d(θ_j, θ_k) is the absolute angular distance between the phases θ_j and θ_k of LFP samples assigned to contemporaneous spikes, and N is the number of spikes. The PPC results from the normalization of D as follows:

PPC = 1 − 2D/π.

PPC = 1 indicates complete phase consistency, whereas lack of phase locking leads to PPC = 0. Negative values of PPC correspond to uniformly distributed spikes.
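The phase-coupling measures defined above reduce to a few lines of Matlab for one unit and one frequency band; lfp, fsLfp, and spkIdx are hypothetical variables (the LFP trace, its sampling rate in Hz, and spike times as sample indices), and the quadratic-time PPC loop is written for clarity rather than speed.

```matlab
% A minimal sketch of the spike-LFP phase-coupling measures (4-12 Hz band).
[b, a] = butter(3, [4 12]/(fsLfp/2), 'bandpass');  % third-order Butterworth
phi = angle(hilbert(filtfilt(b, a, lfp)));         % instantaneous phase, zero lag
sp  = phi(spkIdx);                                 % phase at each spike
n   = numel(sp);
R   = abs(mean(exp(1i*sp)));        % mean resultant vector (MRV) length
mu  = angle(mean(exp(1i*sp)));      % preferred phase of locking
Z   = n * R^2;                      % Rayleigh statistic
P   = exp(-Z);                      % adequate for n > 50 (Fisher, 1993)
% Pairwise phase consistency: D is the mean absolute pairwise circular
% distance of the spike phases; PPC = 1 - 2*D/pi.
D = 0;
for j = 1:n-1
    for k = j+1:n
        D = D + abs(angle(exp(1i*(sp(j) - sp(k)))));   % wrapped distance
    end
end
D   = D / (n*(n-1)/2);
PPC = 1 - 2*D/pi;
```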
Statistics
Statistical analyses were performed using Matlab or IBM SPSS Statistics version 22.0 (IBM). Gaussian distribution of the data was assessed using the Kolmogorov-Smirnov test. Normally distributed data were tested for significant differences (*p < 0.05, **p < 0.01, and ***p < 0.001) using the unpaired t test. Data that did not follow a Gaussian distribution were tested with the Wilcoxon signed-rank test for paired data or with the Mann-Whitney U test for nonpaired data. Count data were analyzed with the two-proportion z test. Nonuniformity of circular data was assessed using the Rayleigh test. Significant differences in the preferred phase of neuronal firing relative to oscillatory activity were assessed using the nonparametric circ_cm test of the Matlab circular statistics toolbox (Berens, 2009). Data are shown as mean ± SEM.

Results
Cross-modal stimulation modulates the firing rates of neuronal subpopulations in primary somatosensory and visual cortices
To elucidate the cellular mechanisms of multisensory integration, we first assessed the population and individual firing rates after uni- and bimodal stimulation in supragranular (S), granular (G), and infragranular (I) layers of S1 and V1 (Fig. 1A,C). For this, we examined the MUA recorded at multiple sites over the cortical depth (Fig. 1B) in lightly urethane-anesthetized Brown Norway rats. The good visual acuity of pigmented Brown Norway rats makes them well suited for testing visual-somatosensory processing. By conducting the entire investigation under sleep-like conditions mimicked by urethane anesthesia (Clement et al., 2008; Bitzenhofer et al., 2015), we avoided interference from spontaneous whisking and from the impact of the alert state, both of which modulate cross-modal interactions. However, the processing mechanisms identified here may differ from those taking place in the awake state, since sleep-like conditions have been shown to increase single-unit response variability to bimodal stimulation (Populin, 2005). In S1, contralateral whisker stimulation led to a strong increase of MUA peaking at 32.20 ± 10.64 Hz in the S layer, 14.53 ± 4.39 Hz in the G layer, and 14.48 ± 3.69 Hz in the I layer after 14.7, 12.29, and 15.05 ms from stimulus onset, respectively. This increase was followed by a nonsignificant decrease and later by a long-lasting, low-magnitude augmentation of firing when compared with baseline (Fig. 1D, i). In V1, contralateral light stimulation caused broad peaks of augmented MUA after ~70.84 ms in all cortical layers (Fig. 1D, ii). Bimodal stimulation similarly changed the firing rates in S1 and V1. Despite a small decrease, no significant differences were detected when compared with the spiking dynamics after unimodal stimulation in S1 (S: 27.59 ± 9.71 Hz, p_a = 0.31; G: 12.37 ± 3.02 Hz, p_b = 0.16; I: 13.58 ± 4.36 Hz, p_c = 0.4; Fig. 1D, i) and V1 (S: 8.5 ± 4.05 Hz, p_d = 0.5; G: 9.60 ± 4.24 Hz, p_e = 0.19; I: 7.17 ± 2.93 Hz, p_f = 0.3; Fig. 1D, ii). Thus, population activity in primary sensory cortices was not significantly changed by bimodal visual-tactile stimulation when compared with unimodal modality-specific stimulation. To investigate whether bimodal stimulation influences the firing of individual neurons in S1 and V1, we performed cluster analysis of MUA followed by classification of units into PYRs or INs according to four major spike waveform features (see Materials and Methods; Fig. 2A-D). By these means, 81% of S1 neurons (212 out of 262 neurons) and 83% of V1 neurons (204 out of 245) were identified as putative PYRs. The low fraction of spiking shapes assigned to INs (S1 over all layers: 15.83 ± 0.05%, V1 over all layers: 14.29 ± 0.04%) is in agreement with previous anatomic and functional studies (Somogyi et al., 1998; Gupta et al., 2000; Markram et al., 2004; Rudy et al., 2011). The recorded PYRs and INs were similarly distributed over S1 and V1 layers (S layer: 10 PYRs and 1 IN in S1, 13 PYRs and 1 IN in V1; G layer: 93 PYRs and 32 INs in S1, 78 PYRs and 22 INs in V1; I layer: 109 PYRs and 16 INs in S1, 113 PYRs and 18 INs in V1). In line with MUA dynamics over time, the temporal organization of pyramidal and interneuronal firing in S1 after whisker deflection differed from the firing induced by light stimulation in V1. While in S1 the firing of both PYRs and INs increased during the first 40 ms after the stimulus and significantly decreased during the subsequent 40 ms, in V1 the pyramidal and interneuronal discharge first increased for 40-80 ms after the stimulus and then remained constant for ~200 ms thereafter (Fig. 2E). These temporal differences in unisensory responses are in line with previous results (Wang et al., 2008; Ghoshal et al., 2011) and reflect preprocessing differences along the anatomic pathways (Petersen, 2007; Cruz-Martín et al., 2014). As expected, for both S1 and V1, the shortest response onset was detected in the G layers.
Previous studies showed that neurons in putatively unisensory primary cortices respond heterogeneously to uni- and bimodal stimulation (Wallace et al., 2004; Meredith and Allman, 2015). Correspondingly, we analyzed in detail the firing patterns of individual neurons in S1 and V1 after tactile, visual, and bimodal (i.e., visual-tactile) stimulation (Fig. 3). According to their responsiveness to these three types of stimuli, the neurons were classified into five groups: (1) unimodal neurons (i.e., responsive to unimodal stimulation only), (2) cross-modal neurons (i.e., responsive to unimodal stimulation of the opponent modality), (3) additive multisensory neurons (responsive to unimodal, cross-modal, and bimodal stimulation with no significant difference between unimodal and bimodal stimulation, or responsive to unimodal, cross-modal, and bimodal stimulation where the response to bimodal stimulation significantly differed from unimodal stimulation but was not supra- or subadditive when compared with the arithmetic sum of unimodal and cross-modal stimulation responses), (4) nonadditive multisensory neurons (responsive to unimodal, cross-modal, and bimodal stimulation; the response to bimodal stimulation significantly differed from unimodal stimulation, being supra- or subadditive when compared with the arithmetic sum of unimodal and cross-modal stimulation responses), and (5) nonresponsive neurons (i.e., responsive to none of the stimulations).

Figure 1. MUA evoked in S1 and V1 by uni- and bimodal stimulation. A, Schematic drawing displaying the protocol for sensory stimulation via whisker deflections and/or light flashes as well as the location of extracellular recordings in S1 and V1 of Brown Norway rats. B, Schematic drawing of a 16-site silicon probe spanning the cortical layers (S = supragranular, G = granular, I = infragranular). Red-filled recording sites at transitions between cortical layers were not considered for analysis. C, (i) Digital photomontage reconstructing the position of all recording sites (white dots) of a DiI-labeled probe in S1. (ii) Same as (i) for V1. D, (i) Line graphs displaying MUA in supragranular (S), granular (G), and infragranular (I) layers after uni- and bimodal stimulation. The stimulus is marked by the gray dotted line. Insets show the peaks of MUA after stimulation at higher magnification. (ii) Same as (i) for V1. Note that unimodal visual or unimodal tactile stimulation did not evoke responses in S1 or V1, respectively.

Figure 2. Classification of single units according to their electrophysiological phenotype. A, Example of clustered action potential waveforms (i) of three neurons recorded in S1 granular layers (ii-iv). B, Schematic drawing of a spike waveform and of the features that were used for classification of units into PYRs or INs. C, (i) Two-dimensional scatter plot of feature vectors in the PC space spanned by the first two PCs, with k-means assignment of class membership (blue = PYR, red = IN) of S1 units. (ii) Same as (i) for V1 units. D, Example waveforms of a classified PYR (top) and IN (bottom) in the G layer of S1 and V1. E, (i) Raster plot depicting spike trains recorded from PYRs (blue) and INs (red) in supragranular (S), granular (G), and infragranular (I) layers of S1 after bimodal stimulation (gray arrow and dotted line). Each line corresponds to one trial and each dot to one spike. The temporal organization of spiking patterns was used for identifying time windows for further analyses [before stimulus; early stimulus-induced response: 0-40 ms (S1), 0-80 ms (V1); late stimulus-induced response: 80-500 ms after stimulus (S1 and V1)]. Insets (gray boxes) correspond to the time intervals that were used for SUA quantification leading to the classification of neurons into unimodal, cross-modal, additive multisensory, nonadditive multisensory, and unresponsive. (ii) Same as (i) for V1.
Figure 3. Classification of neurons in S1 and V1 according to their spiking response to uni- and bimodal stimulation. A, Pie charts quantifying the distribution of unimodal, cross-modal, additive multisensory, nonadditive multisensory, and unresponsive PYRs (triangles) and INs (circles) in supragranular, granular, and infragranular layers of S1. Numbers inside the pie charts indicate the total count of neurons in that class. B, Same as A for V1.

Multisensory additive neurons had the highest prevalence across layers in S1 and V1 (S1 - S: 50% PYRs, 0% INs; G: 43% PYRs, 44% INs; I: 32% PYRs, 56% INs; V1 - S: 47% PYRs, 100% INs; G: 50% PYRs, 45% INs; I: 41% PYRs, 56% INs). In most cases, they displayed a significant but similar firing change in response to unimodal, cross-modal, and bimodal stimulation. These neurons exert their multisensory effects at the subthreshold level. Only a few neurons in the multisensory additive group displayed a significant difference between unimodal and bimodal stimulation (Fig. 3A,B). These neurons were distributed only across the G and I layers. To examine the modulation of firing rates of additive multisensory and nonadditive multisensory neurons during cross-modal processing in more detail, we compared SUA after unimodal and bimodal stimulation (Fig. 4). In S1, the firing of nonadditive multisensory INs in the S layer (p_g = 0.03) and, to a lower extent, of the additive multisensory neurons in the G layer (p_h < 0.01) significantly increased after bimodal stimulation (Fig. 4A; Table 1). In contrast, PYRs in the G layer that were classified as nonadditive multisensory neurons decreased their firing after bimodal stimulation (p_i < 0.001; Fig. 4A; Table 1). The firing rates of neurons located in the I layer were similarly modified after uni- versus bimodal stimulation. In the G layer of V1, the firing of nonadditive multisensory neurons was significantly decreased after bimodal stimulation, the most prominent effects being detected for INs (p_j < 0.001) (Fig. 4B; Table 1). In the I layer of V1, PYRs assigned to the nonadditive multisensory neurons decreased their firing (p_k < 0.001), whereas the INs belonging to the same group significantly augmented their firing (p_l < 0.001; Fig. 4B; Table 1). These results indicate that, even if bimodal stimulation did not significantly modulate population firing rates monitored by MUA, it changed the firing rate of discrete groups of individual multisensory neurons. In S1, the most prominent changes were detected in the S and G layers, with a strong attenuation of pyramidal firing and an augmentation of interneuronal firing. A similar response pattern was detected in V1, yet neurons across all layers responded within the first 80 ms. While the categorization of neuronal subpopulations into the previously described five groups was made according to their immediate response to uni- versus bimodal stimulation, additive as well as nonadditive multisensory neurons showed stimulus-induced changes of firing at later time points (i.e., 80-500 ms) as well (Fig. 4C,D; Table 1).
These late effects of variable magnitude across areas, layers, and cell types (PYRs vs INs) most likely had a polysynaptic origin and consisted in most cases of a decrease of neuronal firing after bimodal stimulation when compared with unimodal stimulation. Taken together, these results reveal complex patterns of firing rate modulation by visual-tactile stimulation in a relatively small fraction of the many multisensory neurons in primary sensory cortices (Fig. 4E). Overall, the INs increased their firing rate, whereas PYRs mainly decreased their firing rate within the first 80 ms after the stimulus. Thus, broad depression of stimulus-induced excitatory neuronal activity after bimodal stimulation is accompanied by enhanced firing of a sparse number of additive and nonadditive multisensory INs.

Cross-modal stimulation modulates firing latencies and phase-coupling in an area- and cell type-specific manner
Not only the firing rate but also the timing of neuronal discharges has been proposed to contribute to multisensory processing and improve behavioral performance (Bizley et al., 2007; Rowland et al., 2007; Chabrol et al., 2015). On the one hand, bimodal stimulation may modify the firing latency and, hence, the delay between sensory- and motor-related activation, with consequences for behavioral performance. On the other hand, bimodal stimulation may alter the locking time and strength of neuronal firing to the phase of network oscillations. To decide which temporal coding strategy serves the processing of visual-tactile stimuli, we first compared the spiking latencies of cells identified as PYRs and INs in the S layer (n = 15 cells, n = 2 cells), G layer (n = 140 cells, n = 45 cells), and I layer (n = 165 cells, n = 31 cells) of S1 and V1, respectively, after bimodal and unimodal stimulation. To avoid the influence of firing rate on the measured first-spike latency, we separately analyzed the spiking onset of additive multisensory neurons that showed only subthreshold multisensory responses (i.e., significantly responded to unimodal tactile, unimodal visual, and bimodal stimulation but with no significant difference between the unimodal and bimodal stimulation). In the G layer of S1, the latency of the first spike of subthreshold multisensory PYRs significantly decreased after bimodal stimulation when compared with unimodal stimulation (p_m < 0.01) (Fig. 5A; Table 2). In contrast, the latency of subthreshold pyramidal firing in the I layers increased after bimodal stimulation (p_n < 0.001; Fig. 5A; Tables 2 and 3). In the G layers of V1, subthreshold INs responded faster after bimodal stimulation when compared with unimodal stimulation (p_o < 0.001; Fig. 5D; Table 2). Over all layers of S1 and V1, INs (S1: 20.45 ± 1.86 ms; V1: 56.8 ± 2.86 ms) and PYRs (S1: 23.45 ± 1.03 ms; V1: 57.58 ± 3.32 ms) did not significantly differ in their firing onset (p_p = 0.52). The similar response timing of PYRs and INs is in line with previous studies showing that thalamic relay cells target both excitatory and inhibitory L4 neurons (Kloc and Maffei, 2014; Yu et al., 2016). These results indicate that visual-tactile stimulation modulates the onset of neuronal firing in primary sensory cortices. Second, we investigated whether bimodal stimulation affects the temporal coupling between individual neuronal spiking and network oscillations. For this, we analyzed the locking of spikes to the phase of LFP oscillations recorded in the ipsilateral V1 and S1, respectively, by calculating the MRV and confirming the results by PPC analysis (Fig. 6A-D).
Similar to previous studies (Harris et al., 2016), the low number of clustered neurons in the S layers of S1 and V1 precluded reliable assessment of their phase locking; therefore, only spiking from neurons in G and I layers that significantly locked to V1 4-100 Hz network oscillations was considered. The proportion of phase-locked neurons tended to decrease with increasing frequency from 4-12 Hz (S1 - PYR unimodal: 40.59%, bimodal: 58.42%; IN unimodal: 43.75%, bimodal: 77.08%; V1 - PYR unimodal: 10.47%, bimodal: 42.93%; IN unimodal: 5.00%, bimodal: 47.50%) to 30-100 Hz (S1 - PYR unimodal: 27%, bimodal: 58%; IN unimodal: 41%, bimodal: 28%; V1 - PYR unimodal: 6%, bimodal: 34%; IN unimodal: 0%, bimodal: 23%; Fig. 6A,C, ii). Similar effects were found for the I layers (Fig. 6B,D, ii). In S1, the responses of PYRs and INs to bimodal stimulation occurred around the peak of the oscillatory theta cycle, whereas after unimodal stimulation they were concentrated at the trough (p_q < 0.05; Fig. 6A, i and iii). In addition, bimodal stimulation augmented the number of phase-locked PYRs and INs in the G layer (theta - PYR: from 28% to 54%, p_r < 0.001; IN: from 38% to 75%, p_s < 0.05; beta - PYR: from 27% to 57%, p_t < 0.001; IN: from 34% to 69%, p_u < 0.01; Fig. 6A, ii). Correspondingly, the magnitude of the MRV significantly increased for both cell types for theta (p_v < 0.001, p_w < 0.001) as well as beta oscillations (p_x < 0.001, p_y < 0.001; Fig. 6A, iv). A higher number of phase-locked PYRs and INs after bimodal stimulation was additionally detected in the I layer (theta - PYR: from 49% to 62%, IN: from 56% to 81%; beta - PYR: from 56% to 65%, IN: from 69% to 82%), yet their firing was less precisely timed by the theta-beta oscillatory cycle and, correspondingly, the magnitude of the MRV was lower when compared with that calculated for the G layer (Fig. 6B, i-iv). In V1, the phase locking of PYRs and INs to theta-beta oscillations significantly augmented after bimodal stimulation, with both the number of phase-locked cells (theta: PYR p_z < 0.001, IN p_aa < 0.001; beta: PYR p_bb < 0.001, IN p_cc < 0.01) and the strength of the MRV for PYRs being increased (theta: p_dd < 0.001, beta: p_ee < 0.01; Fig. 6C, i-iv, D, i-iv). The bimodally induced strengthening of phase locking between spikes and theta-beta phase was confirmed by PPC analysis (Fig. 6A, v, through D, v). While the augmentation of spike-LFP synchrony may result from the previously reported phase reset of network oscillations, modulation of spike timing also occurs in the absence of such a phase reset and is dependent on LFP frequency and neuronal type.

Figure 4. Bar diagrams showing the firing rate of additive (green) and nonadditive (blue) multisensory PYRs (triangle) and INs (circle) in supragranular, granular, and infragranular layers of S1 during the first 40 ms after unimodal (solid bar) and bimodal (striped bar) stimulus. B, Same as A for V1. C, Bar diagrams showing the firing rate of additive (green) and nonadditive (blue) multisensory PYRs (triangle) and INs (circle) in supragranular, granular, and infragranular layers of S1 during 80-500 ms after unimodal (solid bar) and bimodal (striped bar) stimulus. D, Same as C for V1. E, Schematic diagram displaying the modulation of PYRs and INs in supragranular, granular, and infragranular layers by bimodal stimuli. Lower transparency corresponds to rate increase, whereas higher transparency codes rate decrease. Bimodal stimulation is marked by gray arrow and dotted line. [The opening of the panel A legend was lost in extraction and has been reconstructed from the parallel wording of panel C.]
These data indicate that visual-tactile stimulation modulates not only the rate but also the timing of pyramidal and interneuronal firing in primary sensory cortices.

A subset of glutamatergic neurons establishes direct bidirectional connections between S1 and V1
To elucidate the anatomic substrate of visual-tactile processing, we assessed the patterns of direct connectivity between S1 and V1. We previously showed that corticocortical axonal projections may account for the cross-modal phase reset of network oscillations (Sieben et al., 2013). Here, we investigated the role of corticocortical connections in the cross-modal modulation of neuronal firing by quantifying the layer- and cell type-specific distribution of projections between S1 and V1. For this, we injected small amounts of the retrograde tracer FG, which has a high resistance to fading (Schmued and Fallon, 1986), into S1 (n = 7 rats) or V1 (n = 7 rats), taking special care that the tracer covered all layers without exceeding the cortical area (Fig. 7A,C). The spatial confinement of the injection was verified by back labeling in the corresponding first-order thalamic nuclei of S1 (ventral posteromedial nucleus) and V1 (lateral geniculate nucleus), respectively. Confirming our previous results, bright fluorescence back labeling of parent cell bodies feedforwardly projecting to the S1 barrel field or to V1 was detected when FG was injected into V1 or S1, respectively. We quantified in detail the layer distribution of neurons in one primary sensory area that directly project to the other one. Relative to the density of cells positive for the neuronal marker NeuN, only a small fraction of neurons contributes to corticocortical coupling. In S1, the highest density was detected in the S layer (1.28 ± 0.33%) and I layer (2.36 ± 0.37%), whereas only 0.3 ± 0.13% of neurons in the G layer were retrogradely stained (Fig. 7A,B). In V1, the distribution was similar, with the highest density of FG-positive neurons in the S layer (1.53 ± 0.27%) and I layer (2.81 ± 0.76%; Fig. 7C,D). Costaining against GABA revealed sparse reciprocal GABAergic connections between primary sensory cortices in deep cortical layers. To uncover the contribution of the sparse corticocortical projections to the timing of neuronal firing in S1 and V1, we calculated the coupling strength and delay between spike trains in one cortical area in relationship to the other area after bimodal stimulation (Fig. 7E,F). The low number and firing rate of clustered units simultaneously recorded in the S layers of S1 and V1 precluded the analysis of their temporal spike correlations. Cross-correlation analysis for spike trains recorded in the G layers of S1 and V1 identified significantly correlated trains, yet their number was very low (G: 12 out of 492, I: 24 out of 552). The analysis of the cell-type specificity of spike-train coupling equally confirmed the anatomic data. The majority of significantly temporally correlated neurons were PYRs projecting onto PYRs (G layer: 75%, I layer: 92% of correlated pairs). The delay between spike trains simultaneously recorded in both areas after bimodal stimulation gave first insights into the directionality of neuronal interactions between S1 and V1 during multisensory processing. At the level of the G layers, 82% of PYRs in S1 fired 12.5 ± 0.82 ms before V1 neurons, whereas only 2 out of 11 PYRs fired shortly (6 ± 2.82 ms) after V1 neurons. Similarly, at the level of the I layer, the firing of the majority of S1 neurons (22 out of 24) preceded V1 discharges. Thus, bimodal stimulation leads few S1 neurons to drive the firing of intercortically connected V1 neurons.
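For reference, the corrected cross-correlation underlying this directionality analysis can be sketched as follows for one S1-V1 pair; the binned spike-count variable names and the mean subtraction are our assumptions, and the significance thresholding (3.29 SD for at least 10 consecutive bins) is only indicated in a comment.

```matlab
% A minimal sketch of the corrected S1-V1 spike-train cross-correlation,
% assuming 5 ms binned spike counts: s1Bi, v1Bi (bimodal trials) and
% s1Tac, v1Vis (the respective unimodal trials) -- hypothetical names.
maxLag = 200;                                   % +/- 1 s at 5 ms bins
ccBi  = xcorr(s1Bi  - mean(s1Bi),  v1Bi  - mean(v1Bi),  maxLag);
ccUni = xcorr(s1Tac - mean(s1Tac), v1Vis - mean(v1Vis), maxLag);
cc = ccBi - ccUni;                              % correct for spurious coupling
g  = gausswin(5) / sum(gausswin(5));            % Gaussian smoothing filter
cc = conv(cc, g, 'same');
lags = (-maxLag:maxLag) * 5;                    % lag axis in ms, V1 as reference
% Significance: values exceeding 3.29 SD (99.9% CI) of a shuffled baseline
% for at least 10 consecutive bins (baseline estimation not shown here).
```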
Taken together, these results indicate that the processing of visual-tactile information in the primary sensory cortices S1 and V1 involves coordinated and directed firing of a small fraction of neurons, mainly PYRs, via direct corticocortical axonal projections.

Discussion
Cross-modal modulation of neuronal assemblies in primary sensory cortices is necessary for multisensory processing. The present study provides insights into the cellular substrate of visual-tactile interactions by testing how individual neurons in S1 and V1 convey cross-modal stimuli in activity patterns along corticocortical axonal projections. We demonstrate that (1) in both S1 and V1, a small fraction of PYRs and INs respond to cross-modal stimuli; (2) visual-tactile stimulation augments the firing of INs and decreases the firing of PYRs (the most prominent effects were detected in S1 supragranular and granular layers and V1 granular and infragranular layers); (3) visual-tactile stimuli modulate the firing latency and sharpen the phase locking of both PYRs and INs to theta-beta band network oscillations; and (4) the synchrony of spike trains coupling S1 and V1 via direct but sparse intercortical axonal projections increases after cross-modal stimulation. New experimental findings of the last years have profoundly challenged the traditional view of multisensory processing. Originally, it was assumed that the integration of inputs from different senses follows hierarchically organized pathways and mainly involves higher cortical areas and some subcortical nuclei (Meredith and Stein, 1983; Stein and Stanford, 2008; Reig and Silberberg, 2014). Accumulating experimental evidence, however, has documented cross-modal activation in primary sensory cortices, which traditionally have been considered sensory specific (Ghazanfar and Schroeder, 2006). At the network level, sensory systems seem to share similar mechanisms of cross-modal integration. Modality-unspecific stimuli cause a phase reset of ongoing spontaneous network oscillations, facilitating that modality-specific stimuli arrive during the same oscillatory phase (Lakatos et al., 2007; Sieben et al., 2013). As a result, the processing efficiency of stimuli augments (Fries et al., 2001). At the level of single neurons, most of the knowledge originates from studies in the auditory system. It has been reported that only a small number of individual neurons in the primary auditory cortex modify their spiking pattern in response to visual and tactile stimuli (Kayser et al., 2008; Kayser et al., 2010; Meredith and Allman, 2015).
Figure 6. Modulation of spike-LFP coupling by uni- and bimodal stimulation. A, (i) Polar plots depicting the MRV of S1 phase-locked PYRs (triangle) and INs (circle) to 4-12 Hz (cyan), 12-30 Hz (magenta), and 30-100 Hz (orange) V1 network oscillations after unimodal (solid line) and bimodal stimulation (dashed line). Significant phase differences between uni- and bimodal conditions are indicated by * (color coded to the frequency range of V1 oscillations). (ii) Bar diagrams displaying the fraction of PYRs (triangles) and INs (circles) significantly phase-locked to 4-12, 12-30, and 30-100 Hz oscillations before (left) and after (right) bimodal stimulation of the categorized neurons (black = unimodal, gray = cross-modal, green = additive multisensory, blue = nonadditive multisensory). (iii) Circle plot depicting the phase locking of unimodal (black), cross-modal (gray), additive multisensory (green), and nonadditive multisensory (blue) PYRs (triangle) and INs (circle) in S1 after unimodal (right) and bimodal stimulation (left) to V1 theta oscillations. (iv) Bar diagram displaying the MRV of S1 PYRs (triangle) and INs (circle) locked to V1 4-12 Hz (cyan), 12-30 Hz (magenta), and 30-100 Hz (orange) network oscillations. (v) Same as (iv) for the PPC measure. (vi) Diagram depicting the V1 theta oscillations onto which the S1 neurons depicted in (iii) are locked. B, Same as A for S1 infragranular neurons phase-locked to V1 infragranular oscillations. C, Same as A for V1 granular neurons phase-locked to S1 granular oscillations. D, Same as C for V1 infragranular neurons phase-locked to S1 infragranular oscillations.

Figure 7. Anatomic and functional characterization of direct S1-V1 connectivity. A, Fluorescence microscopy image displaying the injection site of the retrograde tracer FG covering all V1 layers in a 50-μm-thick cortical slice (top) and retrogradely labeled neurons in S1 (bottom). B, (i) Photograph depicting retrogradely labeled neurons over S1 layers (S, G, I) after FG injection into V1, costained against GABA in a 50-μm-thick cortical slice. Inset, a GABA- and FG-positive (red) neuron and a GABA-negative, FG-positive neuron (white). (ii) Bar diagram displaying the fraction of FG-, GABA-, and FG + GABA-positive neurons in S1 after FG injection in V1. C, Same as A for injection in S1 and retrogradely labeled neurons in V1. D, Same as B for V1. E, Line plot displaying the cross-correlation of simultaneously recorded spike trains in the granular layer of S1 and V1 after bimodal stimulation. The cross-correlation after unimodal stimulation was subtracted to correct for spurious coupling. Gray lines correspond to PYR-PYR correlation, whereas cyan and magenta lines indicate PYR-IN and IN-PYR coupling, respectively. F, Same as E for infragranular layers.

In the present study, we demonstrate that cross-modal modulation of neuronal firing takes place in S1 and V1 as well. Only a small fraction of both PYRs and INs changed their firing rate and, consequently, no significant differences could be detected at the population level. Detailed analysis of firing after uni- and bimodal stimulation enabled further categorization of these neurons and identification of subtle area-, layer-, and cell type-specific differences. In line with anatomic data on information processing along the sensory tract, with direct thalamic inputs to both granular and infragranular layers (Meyer et al., 2010; Feldmeyer, 2012; Constantinople and Bruno, 2013), the largest proportion of unimodal PYRs and INs was detected in S1 and V1 granular and infragranular layers. In S1, the majority of PYRs and INs responded to both visual and tactile stimuli, yet their firing rate was mostly similar under all stimulation conditions. Only few neurons responded differently to bi- versus unimodal stimulation. The high prevalence of additive multisensory neurons confirms previous data showing that multisensory interactions in rodents are modulatory (Ghoshal et al., 2011). In V1, the distribution of PYRs and INs across the different classes was similar to S1. In S1, the co-occurrence of visual and tactile stimulation augmented interneuronal firing and decreased pyramidal firing in supragranular and granular layers.
These effects are in line with the cross-modal suppression of neuronal firing described for visual-auditory and visual-tactile stimuli (Kayser et al., 2008; Sieben et al., 2013; Meredith and Allman, 2015). In V1, the decrease of pyramidal firing and increase of interneuronal firing were present in the infragranular layer, whereas an overall suppression of spiking was observed in the granular layer. Overall, these differences may indicate that subtle differences in the circuits entrained in cross-modal processing are specific for each primary sensory cortex, although the overall coding scheme is similar. Complementary to changes in spiking rate, the temporal pattern of pyramidal and interneuronal firing was modulated. The spiking response latency decreased for PYRs in S1 as well as for INs in V1, in a similar way as previously reported for higher multisensory areas such as the superior colliculus (Rowland et al., 2007). By these means, the delay between sensory and motor activation equally decreases, improving behavioral performance (Frens et al., 1995; Goldring et al., 1996). Two scenarios might account for spike timing modulation in S1 before the onset of stimulus-induced firing in V1. First, visual-tactile information might either already be integrated at the thalamic level, from where it is fed forward to the neocortex, or S1 might receive inputs from matching or cross-modal first-order thalamic nuclei VPM and LGN (Cappe et al., 2009). It has recently been shown that visual stimulation excites VPM neurons, their firing onset preceding unimodal responses in S1 or V1 (Allen et al., 2016). In line with these findings, it was previously shown that network effects in S1 are visually modulated well before the onset of visually evoked responses in V1 (Sieben et al., 2013). Besides integrative processes at the subcortical, most likely thalamic, level, a second scenario implies the existence of cross-modal neurons in primary sensory areas that modulate spike timing in putatively unimodal sensory areas after bimodal stimulation. It has been demonstrated that sensory areas represent a heterogeneous pool of neurons responding not only to modality-specific stimuli but also to cross-modal inputs (Wallace et al., 2004; Meredith and Allman, 2015). The present data suggest that the same multifaceted neuronal responses occur in primary sensory cortices, which have already been shown to code complex information such as reward or locomotion (Niell and Stryker, 2010). Besides the timing modulation of stimulus-induced firing, the number of phase-locked cells and the locking strength between pyramidal/interneuronal firing and theta-beta network oscillations increased across all layers in S1 and V1. Sharpening of spike timing seems to be a ubiquitous and efficient mechanism of multisensory processing (Bizley et al., 2007; Lakatos et al., 2007; Kayser et al., 2008). It correlates with the ability of cross-modal stimuli to reset the phase of theta-beta band network oscillations in primary somatosensory, visual, and auditory cortices (Lakatos et al., 2007; Sieben et al., 2013). Single-cell responses to sensory stimuli are under the direct control of network states, such as anesthesia or sleep (Fontanini and Katz, 2008). Vice versa, the behavioral state affects circuit computations as well as long-range corticocortical interactions (Massimini et al., 2005; Ferrarelli et al., 2010; Fu et al., 2014; Kuchibhotla et al., 2017).
The effects described here might have been modulated by urethane anesthesia, which influences excitatory and inhibitory neuronal firing (Hara and Harris, 2002) and, consequently, might differ from multisensory interactions in the awake state (Iurilli et al., 2012; Ibrahim et al., 2016). Furthermore, it has been shown that urethane anesthesia strengthens thalamic burst firing, which, in contrast to the tonic firing mode, inhibits the transmission of sensory information to the cortex (Huh and Cho, 2013). By these means, the flow of unisensory and multisensory information to primary sensory cortex might be restricted under urethane anesthesia. At very high doses, urethane has been shown to evoke cross-modal responses in primary sensory cortices (Land et al., 2012) as well as to increase the prevalence of cross-modal neurons in modality-specific cortices (Lissek et al., 2016). While studies exploring multisensory processing in anesthetized animals have found bimodal enhancement effects (Meredith and Stein, 1983; King and Palmer, 1985), no bimodal enhancement but rather depressive effects on neuronal firing were found in the awake state (Populin and Yin, 2002). However, these disadvantages of sleep-like/anesthetized conditions should be weighed against the confounding effects of the awake state (e.g., attention, diverse brain states) on multisensory processing at the cortical level. In light of the presented findings, two major questions concerning multisensory processing in primary sensory cortices need to be addressed. First, which circuits underlie the rate and temporal coding of multisensory information? Anatomic investigations revealed the existence of direct axonal projections between primary sensory cortices, although their density is very low (Sieben et al., 2013; Henschke et al., 2014). The analysis of layer-specific connectivity between S1 and V1 revealed that intercortically projecting neurons are mainly located in the infragranular layers of S1 and V1 and are almost absent at the level of the granular layer. This connectivity pattern differs from that reported for V1-A1 (Ibrahim et al., 2016). Corresponding to the distribution of projections, more V1-S1 spike trains synchronized their firing in the infragranular than in the granular layer. Most intercortically connected neurons are PYRs and only very few are INs, confirming previous anatomic investigations on the distribution of long-range GABAergic connectivity (Tamamaki and Tomioka, 2010). Despite their low number, these neurons seem to have a high impact on multisensory processing. Hyperpolarization in supra- and infragranular layers of V1 was detected when A1 was activated by noise and most likely results from intercortical activation of infragranular neurons and a subsequent local infragranular-to-supragranular inhibition (Iurilli et al., 2012; Ibrahim et al., 2016). Direct corticocortical connections might not be the only source of cross-modal inputs to modality-specific sensory areas. Primary sensory areas are located at the interface between subcortical thalamic relay stations and higher sensory areas. Feedback influences from multisensory convergence zones at the border of two sensory-specific cortices might send cross-modal inputs to primary sensory areas (Driver and Noesselt, 2008). In addition, direct connections have been identified between primary sensory cortices and association cortices in monkeys (Rockland and Ojima, 2003) and rats (Paperna and Malach, 1991; Sreenivasan et al., 2017).
In particular, the rat V1 has reciprocal connections with the temporal association cortex and extrastriatal areas (Miller and Vogt, 1984; Vaudano et al., 1991; Wang and Burkhalter, 2007; Laramée et al., 2011) that in turn send outputs to primary auditory areas (Smith et al., 2010). Whether similar connections exist between S1 and higher visual areas remains to be elucidated. However, it is questionable whether polysynaptic loops from one primary sensory area to another via higher sensory areas can account for the fast multisensory effects described here. Besides the connectivity between primary sensory areas and higher sensory areas, sensory information might also be integrated already at the level of the first-order thalamus, from where it is fed forward to primary sensory cortices. Cross-modal neurons have recently been detected in VPM, and audiovisual processing effects have been described in the medial geniculate body (Komura et al., 2005). Future investigations need to assess to which extent a similar wiring scheme and synaptic interactions account for visual-tactile processing. The second question with high relevance for understanding multisensory processing is to which extent rate and temporal codes complement each other in carrying cross-modal information. It has been suggested that rate changes in single neurons code for the discrete properties of sensory stimuli, whereas a temporal code tags the relatedness of firing modulation to form a broader percept (Singer, 2009). While rate and temporal codes may act independently, the majority of studies proposed their dual action as key to information representation (Masuda and Aihara, 2007; Ainsworth et al., 2012). Synchronization of neural assemblies can be obtained using one or the other coding strategy. Coding by firing rate requires not only a large number of spikes and neurons but also homogeneous cell populations internally connected by equal weights. Therefore, its influence on assembly activity is rather limited and needs to be complemented by temporal coding, which ensures that stimuli are timed to the optimal phase of network oscillations and thus gain increased salience. Our data revealed that both codes, rate and temporal, act simultaneously and underlie the communication between S1 and V1. Interestingly, the communication between S1 and V1 spike trains seems to be directed during cross-modal stimulation. In particular, simultaneously recorded PYRs in the infragranular layer of S1 fire shortly before V1 neurons, suggesting that they drive the entrainment via monosynaptic projections. In light of the present findings, we propose that cross-modal influences on early somatosensory or visual processing should improve the perception of tactile and visual stimuli. Neuronal firing precisely timed to neuronal rhythms facilitates the transfer of information (Salinas and Sejnowski, 2001). While very few behavioral investigations have addressed S1-V1 interactions, experimental evidence from the auditory system supports this hypothesis (Alais and Cass, 2010; Gleiss and Kayser, 2012). Precise targeting and manipulation of the neurons involved in interareal corticocortical communication will be required in the future to understand the behavioral readout of their activity codes in multisensory perception.
Passive decoy state quantum key distribution with practical light sources

Abstract
Decoy states have been proven to be a very useful method for significantly enhancing the performance of quantum key distribution systems with practical light sources. While active modulation of the intensity of the laser pulses is an effective way of preparing decoy states in principle, in practice passive preparation might be desirable in some scenarios. Typical passive schemes involve parametric down-conversion. More recently, it has been shown that phase randomized weak coherent pulses (WCP) can also be used for the same purpose [M. Curty et al., Opt. Lett. 34, 3238 (2009)]. This proposal requires only linear optics together with a simple threshold photon detector, which shows the practical feasibility of the method. Most importantly, the resulting secret key rate is comparable to the one delivered by an active decoy state setup with an infinite number of decoy settings. In this paper we extend these results, now showing specifically the analysis for other practical scenarios with different light sources and photo-detectors. In particular, we consider sources emitting thermal states, phase randomized WCP, and strong coherent light in combination with several types of photo-detectors, like, for instance, threshold photon detectors, photon number resolving detectors, and classical photo-detectors. Our analysis includes as well the effect that the detection inefficiencies and the noise in the form of dark counts shown by current threshold detectors might have on the final secret key rate. Moreover, we provide estimations of the effects that statistical fluctuations due to a finite data size can have in practical implementations.

I. INTRODUCTION
Quantum key distribution (QKD) is the first quantum information task to reach the commercial market, offering efficient and user-friendly cryptographic systems that provide an unprecedented level of security [1]. It allows two distant parties (typically called Alice and Bob) to establish a secure secret key despite the computational and technological power of an eavesdropper (Eve) who interferes with the signals [2]. This secret key is the essential ingredient of the one-time pad or Vernam cipher [3], the only known encryption method that can deliver information-theoretically secure communications. Practical implementations of QKD are usually based on the transmission of phase randomized weak coherent pulses (WCP) with a typical average photon number of 0.1 or higher [4]. These states can be easily prepared using only standard semiconductor lasers and calibrated attenuators. The main drawback of these systems, however, arises from the fact that some signals may contain more than one photon prepared in the same quantum state. When this effect is combined with the considerable attenuation introduced by the quantum channel (about 0.2 dB/km), it opens an important security loophole. Eve can perform, for instance, the so-called Photon Number Splitting attack on the multi-photon pulses [5]. This attack provides her with full information about the part of the key generated with the multi-photon signals, without causing any disturbance in the signal polarization. As a result, it turns out that the standard BB84 protocol [6] with phase randomized WCP can deliver a key generation rate of order O(η²), where η denotes the transmission efficiency of the quantum channel [7,8].
This poor performance contrasts with the one expected from a QKD scheme using a single photon source, where the key generation rate scales linearly with η. A significant improvement of the achievable secret key rate can be obtained if the original hardware is slightly modified. For instance, one can use the so-called decoy state method [9,10,11,12], which can basically reach the performance of single photon sources. The essential idea behind decoy state QKD with phase randomized WCP is quite simple: Alice varies, independently and randomly, the mean photon number of each signal state she sends to Bob by employing different intensity settings. This is typically realized by means of a variable optical attenuator (VOA) together with a random number generator. Eve does not know a priori the mean photon number of each signal state sent by Alice. This means that her eavesdropping strategy can only depend on the actual photon number of these signals, but not on the particular intensity setting used to generate them. From the measurement results corresponding to different intensity settings, the legitimate users can obtain a better estimation of the behavior of the quantum channel. This fact translates into an enhancement of the resulting secret key rate. The decoy state technique has been successfully implemented in several recent experiments [13], which show the practical feasibility of this method. While active modulation of the intensity of the pulses suffices to perform decoy state QKD in principle, in practice passive preparation might be desirable in some scenarios. For instance, in those experimental setups operating at high transmission rates. Passive schemes might also be more resistant to side channel attacks than active systems. For example, if the VOA which changes the intensity of Alice's pulses is not properly designed, it may happen that some physical parameters of the pulses emitted by the sender depend on the particular setting selected. This fact could open a security loophole in the active schemes. Known passive schemes rely typically on the use of a parametric down-conversion (PDC) source together with a photon detector [14,15,16]. The main idea behind these proposals comes from the photon number correlations that exist between the two output modes of a PDC source. By measuring the photon number distribution of one output mode it is possible to infer the photon number statistics of the other mode. In particular, Ref. [14] considers the case where Alice measures one of the output modes by means of a time multiplexed detector (TMD) which provides photon number resolution capabilities [17]; Ref. [15] analyzes the scenario where the detector used by Alice is just a simple threshold detector, while the authors of Ref. [16] generalize the ideas introduced by Mauerer et al. in Ref. [14] to QKD setups using triggered PDC sources. All these schemes nearly reach the performance of a single photon source. More recently, it has been shown that phase randomized WCP can also be used for the same purpose [18]. That is, one does not need a non-linear optics network preparing entangled states. The crucial requirement of a passive decoy state setup is to obtain correlations between the photon number statistics of different signals; hence it is sufficient that these correlations are classical. The main contribution of Ref. 
[18] is rather simple: when two phase randomized coherent states interfere at a beam splitter (BS), the photon number statistics of the outcome signals are classically correlated. This effect contrasts with the one expected from the interference of two pure coherent states with a fixed phase relation at a BS. In this last case, it is well known that the photon number statistics of the outcome signals are just the product of two Poissonian distributions. Now the idea is similar to that of Refs. [14,15,16]: by measuring one of the two outcome signals of the BS, the conditional photon number distribution of the other signal varies depending on the result obtained [18]. In the asymptotic limit of an infinitely long experiment, it turns out that the secret key rate provided by such a passive scheme is similar to the one delivered by an active decoy state setup with infinite decoy settings [18]. A similar result can also be obtained when Alice uses heralded single-photon sources showing non-Poissonian photon number statistics [19].

In this paper we extend the results presented in Ref. [18], now showing specifically the analysis for other practical scenarios with different light sources and photo-detectors. In particular, we consider sources emitting thermal states and phase randomized WCP in combination with threshold detectors and photon number resolving (PNR) detectors. In the case of threshold detectors, we include as well the effect that detection inefficiencies and dark counts present in current measurement devices might have on the final secret key rate. For simplicity, these measurement imperfections were not considered in Ref. [18]. On the other hand, PNR detectors allow us to obtain ultimate lower bounds on the maximal performance that can be expected at all from this kind of passive setup. We also present a passive scheme that employs strong coherent light and does not require the use of single photon detectors; instead it can operate with a simpler classical photo-detector. This fact makes this setup especially interesting from an experimental point of view. Finally, we provide an estimation of the effects that statistical fluctuations due to a finite data size can have in practical implementations.

The paper is organized as follows. In Sec. II we review very briefly the concept of decoy state QKD. Next, in Sec. III we present a simple model to characterize the behavior of a typical quantum channel. This model will be relevant later on, when we evaluate the performance of the different passive schemes that we present in the following sections. Our starting point is the basic passive decoy state setup introduced in Ref. [18]. This scheme is explained very briefly in Sec. IV. Then, in Sec. V we analyze its security when Alice uses a source of thermal light. Sec. VI and Sec. VII consider the case where Alice employs a source of coherent light. First, Sec. VI investigates the scenario where the states prepared by Alice are phase randomized WCP. Then, Sec. VII presents a passive decoy state scheme that uses strong coherent light. In Sec. VIII we discuss the effects of statistical fluctuations. Finally, Sec. IX concludes the paper with a summary.

II. DECOY STATE QKD

In decoy state QKD Alice prepares mixtures of Fock states with different photon number statistics and sends these states to Bob [9,10,11,12]. The photon number distribution of each signal state is chosen, independently and at random, from a set of possible predetermined settings.
Let p^l_n denote the conditional probability that a signal state prepared by Alice contains n photons given that she selected setting l, with l ∈ {0, . . . , m}. For instance, if Alice employs a source of phase randomized WCP then p^l_n = e^{−μ_l} μ_l^n / n!, and she varies the mean photon number (intensity) μ_l of each signal. Assuming that Alice has chosen setting l, such states can be described as

ρ_l = Σ_{n=0}^∞ p^l_n |n⟩⟨n|,  (1)

where |n⟩ denote Fock states with n photons. The gain Q_l corresponding to setting l, i.e., the probability that Bob obtains a click in his measurement apparatus when Alice sends him a signal state prepared with setting l, can be written as

Q_l = Σ_{n=0}^∞ p^l_n Y_n,  (2)

where Y_n denotes the yield of an n-photon signal, i.e., the conditional probability of a detection event on Bob's side given that Alice transmitted an n-photon state. Similarly, the quantum bit error rate (QBER) associated to setting l, which we shall denote as E_l, is given by

E_l Q_l = Σ_{n=0}^∞ p^l_n Y_n e_n,  (3)

with e_n representing the error rate of an n-photon signal. Now the main idea of decoy state QKD is very simple. From the observed data Q_l and E_l, together with the knowledge of the photon number distributions p^l_n, Alice and Bob can estimate the value of the unknown parameters Y_n and e_n just by solving the set of linear equations given by Eqs. (2)-(3). For instance, in the general scenario where Alice employs an infinite number of possible decoy settings, she can estimate any finite number of parameters Y_n and e_n with arbitrary precision. On the other hand, if Alice and Bob are only interested in the value of a few probabilities (typically Y_0, Y_1, and e_1), then they can estimate them by means of only a few different decoy settings [10,11,12]. In this paper we shall consider that Alice and Bob treat each decoy setting separately, and they distill secret key from all of them. We use the security analysis presented in Ref. [10], which combines the results provided by Gottesman-Lo-Lütkenhaus-Preskill (GLLP) in Ref. [8] (see also Ref. [20]) with the decoy state method. Specifically, the secret key rate formula can be written as

R ≥ Σ_{l=0}^m R_l,  (4)

where R_l satisfies

R_l = q {−Q_l f(E_l) H_2(E_l) + p^l_0 Y_0 + p^l_1 Y_1 [1 − H_2(e_1)]}.  (5)

The parameter q is the efficiency of the protocol (q = 1/2 for the standard BB84 protocol [6], and q ≈ 1 for its efficient version [21]); f(E_l) is the efficiency of the error correction protocol as a function of the error rate E_l [22], typically f(E_l) ≥ 1 with Shannon limit f(E_l) = 1; e_1 denotes the single photon error rate; and

H_2(x) = −x log₂(x) − (1 − x) log₂(1 − x)

is the binary Shannon entropy function. To apply the secret key rate formula given by Eq. (5) one needs to solve Eqs. (2)-(3) in order to estimate the quantities Y_0, Y_1, and e_1. For that, we shall use the procedure proposed in Ref. [12]. This method requires that the probabilities p^l_n satisfy certain conditions. It is important to emphasize, however, that the estimation technique presented in Ref. [12] only constitutes a possible example of a finite setting estimation procedure and no optimality statement is given. In principle, many other estimation methods are also available for this purpose, like, for instance, linear programming tools [23], which might result in sharper, or for the purpose of QKD better, bounds on the considered probabilities.
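To make this estimation logic concrete, the following Python sketch (an illustration, not part of the original analysis) evaluates Eqs. (2), (3) and (5) for a single setting, assuming that exact yields and error rates are available; in practice these quantities are only bounded. The toy channel values and the choice f = 1.22 for the error-correction efficiency are assumptions for this example.

```python
import numpy as np
from scipy.stats import poisson

def h2(x):
    """Binary Shannon entropy H_2(x)."""
    x = np.clip(x, 1e-15, 1 - 1e-15)
    return -x * np.log2(x) - (1 - x) * np.log2(1 - x)

def gain_and_qber(p, Y, e):
    """Eqs. (2)-(3): Q_l = sum_n p_n Y_n and E_l Q_l = sum_n p_n Y_n e_n."""
    Q = np.dot(p, Y)
    return Q, np.dot(p, Y * e) / Q

def rate_term(p, Y, e, q=1.0, f=1.22):
    """R_l of Eq. (5), evaluated here with exact Y_0, Y_1 and e_1."""
    Q, E = gain_and_qber(p, Y, e)
    return q * (p[0] * Y[0] + p[1] * Y[1] * (1 - h2(e[1])) - Q * f * h2(E))

# toy example: a Poissonian setting (mu = 0.5) over a lossy channel
n = np.arange(30)
eta, Y0, e_d, e0 = 0.1, 1e-5, 0.033, 0.5        # assumed illustrative values
Y = Y0 + (1 - Y0) * (1 - (1 - eta) ** n)        # yields, cf. Sec. III below
e = (e0 * Y0 + e_d * (1 - (1 - eta) ** n)) / Y  # error rates, cf. Sec. III below
print(rate_term(poisson.pmf(n, 0.5), Y, e))
```

With an infinite number of decoy settings the exact Y_n and e_n used above are recoverable; with finitely many settings the same code applies once Y_1 and e_1 are replaced by their estimated bounds.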
III. CHANNEL MODEL

In this section we present a simple model to describe the behavior of a typical quantum channel. This model will be relevant later on, when we evaluate the performance of the passive decoy state setups that we present in the following sections. In particular, we shall consider the channel model used in Refs. [10,12]. This model reproduces the normal behavior of a quantum channel, i.e., in the absence of eavesdropping. Note, however, that the results presented in this paper can also be applied to any other quantum channel, as they only depend on the observed gains Q_l and error rates E_l.

A. Yield

There are two main factors that contribute to the yield of an n-photon signal: the background rate Y_0, and the signal states sent by Alice. Usually Y_0 is, to a good approximation, independent of the signal detection. This parameter depends mainly on the dark count rate of Bob's detection apparatus, together with other background contributions like, for instance, stray light coming from timing pulses which are not completely filtered out in reception. In the scenario considered, the yields Y_n can be expressed as [10,12]

Y_n = Y_0 + (1 − Y_0)[1 − (1 − η_sys)^n],  (6)

where η_sys represents the overall transmittance of the system. This quantity can be written as

η_sys = η_channel η_Bob,  (7)

where η_channel is the transmittance of the quantum channel, and η_Bob denotes the overall transmittance of Bob's detection apparatus. That is, η_Bob includes the transmittance of any optical component within Bob's measurement device and the detector efficiency. The parameter η_channel can be related to a transmission distance d measured in km for the given QKD scheme as

η_channel = 10^{−αd/10},  (8)

where α represents the loss coefficient of the channel (e.g., an optical fiber) measured in dB/km.

B. Quantum bit error rate

The n-photon error rate e_n is given by [10,12]

e_n = {e_0 Y_0 + e_d [1 − (1 − η_sys)^n]} / Y_n,  (9)

where e_d is the probability that a signal hits the wrong detector on Bob's side due to the misalignment in the quantum channel and in his detection setup. For simplicity, here we assume that e_d is a constant independent of the distance. Moreover, from now on we shall consider that the background is random, i.e., e_0 = 1/2.
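This model translates directly into code. The sketch below evaluates Eqs. (6)-(9); the default parameters (α = 0.21 dB/km, η_Bob = 0.045, Y_0 = 1.7 × 10⁻⁶, e_d = 3.3%) follow the Gobby-Yuan-Shields experiment commonly used in such simulations and should be treated as illustrative assumptions rather than values fixed by this paper.

```python
import numpy as np

def channel_model(d_km, alpha=0.21, eta_bob=0.045, Y0=1.7e-6,
                  e_d=0.033, e0=0.5, n_max=25):
    """Yields Y_n and error rates e_n of Eqs. (6)-(9) for a fiber of d_km."""
    eta_channel = 10.0 ** (-alpha * d_km / 10.0)   # Eq. (8)
    eta_sys = eta_channel * eta_bob                # Eq. (7)
    n = np.arange(n_max + 1)
    hit = 1.0 - (1.0 - eta_sys) ** n               # prob. that >=1 photon arrives
    Y = Y0 + (1.0 - Y0) * hit                      # Eq. (6)
    e = (e0 * Y0 + e_d * hit) / Y                  # Eq. (9)
    return Y, e

Y, e = channel_model(20.0)   # e.g., a 20 km fiber link
```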
IV. PASSIVE DECOY STATE QKD SETUP

The basic setup is rather simple [18]. It is illustrated in Fig. 1. Suppose two Fock diagonal states,

ρ = Σ_{n=0}^∞ p_n |n⟩⟨n| and σ = Σ_{n=0}^∞ r_n |n⟩⟨n|,  (10)

interfere at a BS of transmittance t. If the probabilities p_n and r_n are properly selected, then it turns out that the photon number distributions of the two outcome signals can be classically correlated. By measuring the signal state in mode b, therefore, the conditional photon number statistics of the signal state in mode a vary depending on the result obtained. In the following sections we analyze the setup represented in Fig. 1 for different light sources and photo-detectors. We start by considering a simple source of thermal states. Afterwards, we investigate more practical sources of coherent light.

V. THERMAL LIGHT

Suppose that the signal state ρ which appears in Fig. 1 is a thermal state of mean photon number μ. Such a state can be written as

ρ = Σ_{n=0}^∞ [μ^n / (1 + μ)^{n+1}] |n⟩⟨n|,  (11)

and let σ be a vacuum state. In this scenario, the joint probability of having n photons in output mode a and m photons in output mode b (see Fig. 1) has the form

p_{n,m} = C(n + m, n) t^n (1 − t)^m μ^{n+m} / (1 + μ)^{n+m+1},  (12)

where C(n + m, n) denotes the binomial coefficient. That is, depending on the result of Alice's measurement in mode b, the conditional photon number distribution of the signals in mode a varies. Whenever Alice ignores the result of her measurement, the total probability of finding n photons in mode a can be expressed as

p^t_n = (μt)^n / (1 + μt)^{n+1}.  (13)

Next, we consider the case where Alice uses a threshold detector to measure mode b.

A. Threshold detector

Such a detector can be characterized by a positive operator valued measure (POVM) which contains two elements, F_vac and F_click, given by [24]

F_vac = (1 − ǫ) Σ_{n=0}^∞ (1 − η_d)^n |n⟩⟨n|,  F_click = 1 − F_vac.  (14)

The parameter η_d denotes the detection efficiency of the detector, and ǫ represents its probability of having a dark count. Equation (14) assumes that ǫ is, to a good approximation, independent of the incoming signals. The outcome of F_vac corresponds to "no click" in the detector, while the operator F_click gives precisely one detection "click", which means at least one photon is detected. The joint probability for seeing n photons in mode a and no click in the threshold detector, which we shall denote as p̄^c_n, has the form

p̄^c_n = (1 − ǫ) (μt)^n / r^{n+1},  (15)

with the parameter r given by

r = 1 + μt + μ(1 − t)η_d.  (16)

If the detector produces a click, the joint probability of finding n photons in mode a is given by

p^c_n = p^t_n − p̄^c_n.  (17)

[Figure 2: Conditional photon number statistics of the outcome signal in mode a, q̄^c_n (black) versus q^c_n (grey), when ρ is given by Eq. (11) and σ is a vacuum state, for μ = 1 and t = 1/2. Two situations are shown: (A) a perfect threshold photon detector, i.e., ǫ = 0 and η_d = 1, and (B) ǫ = 3.2 × 10⁻⁷ and η_d = 0.12, corresponding to the experiment reported by Gobby et al. in Ref. [25].]

Figure 2 shows the conditional photon number statistics of the outcome signal in mode a depending on the result of the threshold detector (click and no click): q^c_n = p^c_n / (1 − N_th) and q̄^c_n = p̄^c_n / N_th, with

N_th = Σ_{n=0}^∞ p̄^c_n = (1 − ǫ) / [1 + μ(1 − t)η_d].  (18)
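The closed forms of Eqs. (12) and (15)-(16) are easy to verify numerically. The following sketch (an illustration, not part of the original analysis) checks the no-click statistics against a direct sum over the photon number in the measured mode:

```python
from math import comb

def p_joint(mu, t, n, m):
    """Eq. (12): thermal state (mean mu) plus vacuum on a BS of transmittance t."""
    return comb(n + m, n) * t**n * (1 - t)**m * mu**(n + m) / (1 + mu)**(n + m + 1)

def p_noclick(mu, t, eta_d, eps, n):
    """Eq. (15): n photons in mode a and no click, with r as in Eq. (16)."""
    r = 1 + mu * t + mu * (1 - t) * eta_d
    return (1 - eps) * (mu * t)**n / r**(n + 1)

# cross-check: a no-click event on m photons has probability (1-eps)(1-eta_d)^m
mu, t, eta_d, eps, n = 1.0, 0.5, 0.12, 3.2e-7, 2
direct = (1 - eps) * sum(p_joint(mu, t, n, m) * (1 - eta_d)**m for m in range(500))
assert abs(direct - p_noclick(mu, t, eta_d, eps, n)) < 1e-12
```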
B. Lower bound on the secret key rate

We consider that Alice and Bob distill secret key both from click and no click events. The calculations to estimate the yields Y_0 and Y_1, together with the single photon error rate e_1, are included in Appendix A. For simulation purposes we use the channel model described in Sec. III. After substituting Eqs. (6)-(9) into the gain and QBER formulas, one obtains closed-form expressions for the parameters Q^c̄, E^c̄, Q^t, and E^t. The resulting lower bound on the secret key rate is illustrated in Fig. 3.

[Figure 3: Lower bound on the secret key rate R for the setup of Fig. 1 with two intensity settings. The signal state ρ is given by Eq. (11), and σ is a vacuum state. Two scenarios are shown: (A) a perfect threshold detector, i.e., ǫ = 0 and η_d = 1, and (B) ǫ = 3.2 × 10⁻⁷ and η_d = 0.12 [25]. Both cases provide approximately the same final key rate and cannot be distinguished with the resolution of the figure (dashed line). The solid line represents a lower bound on R when Alice employs a PNR detector instead of a threshold detector (see Appendix B 1).]

In these simulations we assume that q = 1 and f(E^c) = f(E^c̄) = 1.22. These data are used as well for simulation purposes in the following sections. We study two different scenarios: (A) a perfect threshold detector, i.e., ǫ = 0 and η_d = 1, and (B) ǫ = 3.2 × 10⁻⁷ and η_d = 0.12 [25]. In both cases we find that the values of the mean photon number μ and the transmittance t which maximize the secret key rate formula are quite similar and almost constant with the distance. In particular, μ is quite strong (around 200 in the simulation), while t is quite weak (around 10⁻³). This result is not surprising. When μ ≫ 1 and t ≪ 1, Alice's threshold detector produces a click most of the time. Then, in the few occasions where Alice actually does not see a click in her measurement device, she can be quite confident that the signal state that goes to Bob is quite weak. Note that in this scenario the conditional photon number statistics q̄^c_n satisfy q̄^c_0 ≈ 1 and q̄^c_{n≥1} ≈ 0. Similarly to the one weak decoy state protocol proposed in Ref. [12], this fact allows Alice and Bob to obtain an accurate estimation of Y_1 and e_1, which translates into an enhancement of the achievable secret key rate and distance. The cutoff point where the secret key rate drops to zero is l ≈ 126 km. One can improve the resulting secret key rate further by using a passive scheme with more intensity settings. For instance, Alice may employ a PNR detector instead of a threshold detector, or she could use several threshold detectors in combination with beam splitters. In this context, see also Ref. [16]. Figure 3 also illustrates this last scenario, for the case where Alice uses a PNR detector (solid line). As expected, it turns out that now the legitimate users can estimate the actual value of the relevant parameters Y_0, Y_1, and e_1 with arbitrary precision (see Appendix B 1). The cutoff point where the secret key rate drops to zero is l ≈ 147 km. This result shows that the performance of the passive setup represented in Fig. 1 with a threshold detector is already close to the best performance that can be achieved at all with such a scheme and the security analysis provided in Refs. [8,20].

VI. WEAK COHERENT LIGHT

Suppose now that the signal states ρ and σ which appear in Fig. 1 are two phase randomized WCP emitted by a pulsed laser source. That is,

ρ = e^{−μ_1} Σ_{n=0}^∞ (μ_1^n / n!) |n⟩⟨n| and σ = e^{−μ_2} Σ_{n=0}^∞ (μ_2^n / n!) |n⟩⟨n|,  (20)

with μ_1 and μ_2 denoting, respectively, the mean photon number of the two signals. In this scenario, the joint probability of having n photons in output mode a and m photons in output mode b can be written as [18]

p_{n,m} = (1/2π) ∫₀^{2π} e^{−υ} [(γ + ξ cos θ)^n / n!] [(υ − γ − ξ cos θ)^m / m!] dθ,  (21)

where the parameters υ, γ, and ξ are given by

υ = μ_1 + μ_2,  γ = tμ_1 + (1 − t)μ_2,  ξ = 2√[t(1 − t)μ_1μ_2].  (22)

This result differs from the one expected from the interference of two pure coherent states with fixed phase relation, |√μ_1 e^{iφ_1}⟩ and |√μ_2 e^{iφ_2}⟩, at a BS of transmittance t. In this last case, p_{n,m} is just the product of two Poissonian distributions. Whenever Alice ignores the result of her measurement in mode b, the probability of finding n photons in mode a can be expressed as

p^t_n = (1/2π) ∫₀^{2π} e^{−(γ + ξ cos θ)} [(γ + ξ cos θ)^n / n!] dθ,  (23)

which turns out to be a non-Poissonian probability distribution [18]. Let us now consider the case where Alice uses a threshold detector to measure output mode b.

A. Threshold detector

The analysis is completely analogous to the one presented in Sec. V A. In particular, the joint probability for seeing n photons in mode a and no click in the threshold detector has now the form

p̄^c_n = (1 − ǫ) (1/2π) ∫₀^{2π} e^{−(γ + ξ cos θ)} [(γ + ξ cos θ)^n / n!] e^{−η_d(υ − γ − ξ cos θ)} dθ.  (24)

On the other hand, if the detector produces a click, the joint probability of finding n photons in mode a is given by Eq. (17).

[Figure 4: Conditional photon number statistics q^c_n (black) versus q̄^c_n (grey) when the signal states ρ and σ are two phase randomized WCP given by Eq. (20), for μ_1 = μ_2 = 1 and t = 1/2. Two situations are shown: (A) a perfect threshold photon detector, i.e., ǫ = 0 and η_d = 1 [18], and (B) ǫ = 3.2 × 10⁻⁷ and η_d = 0.12, corresponding to the experiment reported by Gobby et al. in Ref. [25]. Cases C and D compare q^c_n (black) with a Poissonian distribution of the same mean photon number for the two scenarios described above (perfect and imperfect threshold photon detector).]

Figure 4 (Cases A and B) shows the conditional photon number statistics of the outcome signal in mode a depending on the result of the detector (click and no click): q^c_n = p^c_n/(1 − N_w) and q̄^c_n = p̄^c_n/N_w, with

N_w = Σ_{n=0}^∞ p̄^c_n = (1 − ǫ) e^{−η_d(υ − γ)} I_0(η_d ξ),  (25)

where I_q(z) represents the modified Bessel function of the first kind [26]. This function is defined as [26]

I_q(z) = (1/2πi) ∮ e^{(z/2)(t + 1/t)} t^{−q−1} dt.

Figure 4 also includes a comparison between q^c_n and a Poissonian distribution of the same mean photon number (Cases C and D). Both distributions, q^c_n and q̄^c_n, are non-Poissonian.
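These statistics can also be generated numerically by averaging the Poissonian outputs over the random relative phase, which is precisely how the classical correlations arise. A minimal sketch, assuming only the standard beam-splitter transformation of coherent states, together with a cross-check of the closed form of N_w:

```python
import numpy as np
from scipy.special import iv
from scipy.stats import poisson

def wcp_statistics(mu1, mu2, t, eta_d=1.0, eps=0.0, n_max=8, K=4096):
    """Phase-averaged p^t_n (Eq. (23)) and no-click statistics (Eq. (24))
    for two phase randomized WCP interfering at a BS of transmittance t."""
    theta = np.linspace(0.0, 2.0 * np.pi, K, endpoint=False)
    beat = 2.0 * np.sqrt(t * (1 - t) * mu1 * mu2) * np.cos(theta)
    nu_a = t * mu1 + (1 - t) * mu2 + beat          # mean photon number, mode a
    nu_b = (1 - t) * mu1 + t * mu2 - beat          # mean photon number, mode b
    n = np.arange(n_max + 1)
    Pa = poisson.pmf(n[:, None], nu_a[None, :])    # Poissonian at each fixed phase
    p_t = Pa.mean(axis=1)                          # marginal: non-Poissonian
    p_nc = (1 - eps) * (Pa * np.exp(-eta_d * nu_b)).mean(axis=1)
    return p_t, p_nc

# verify N_w of Eq. (25) against its Bessel closed form
mu1, mu2, t, eta_d, eps = 1.0, 1.0, 0.5, 0.12, 3.2e-7
p_t, p_nc = wcp_statistics(mu1, mu2, t, eta_d, eps, n_max=40)
gamma, ups = t * mu1 + (1 - t) * mu2, mu1 + mu2
xi = 2.0 * np.sqrt(t * (1 - t) * mu1 * mu2)
N_w = (1 - eps) * np.exp(-eta_d * (ups - gamma)) * iv(0, eta_d * xi)
assert abs(p_nc.sum() - N_w) < 1e-6
```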
B. Lower bound on the secret key rate

To apply the secret key rate formula given by Eq. (5), with l ∈ {c, c̄}, we need to estimate the quantities Y_0, Y_1, and e_1. For that, we follow the same procedure explained in Appendix A. This method requires that p^t_n and p̄^c_n satisfy certain conditions, which we confirmed numerically. As a result, it turns out that the bounds given by Eqs. (A10)-(A16) are also valid in this scenario. The only relevant statistics to evaluate Eqs. (A10)-(A16) are p^t_n and p̄^c_n, with n = 0, 1, 2. These probabilities can be obtained by solving Eqs. (23)-(24). They are given in Appendix C. Note that p^c_n can be directly calculated from these two statistics by means of Eq. (17). After substituting Eqs. (6)-(9) into the gain and QBER formulas, one obtains closed-form expressions for the gains and QBERs, which can be written compactly in terms of a parameter ω. The resulting lower bound on the secret key rate is illustrated in Fig. 5. We assume that t = 1/2, i.e., we consider a simple 50:50 BS. Again, we study two different situations: (A) ǫ = 0 and η_d = 1 [18], and (B) ǫ = 3.2 × 10⁻⁷ and η_d = 0.12 [25]. In both cases the optimal values of the intensities μ_1 and μ_2 are almost constant with the distance. One of them is quite weak (around 10⁻⁴), while the other one is around 0.5. The reason for this result can be understood as follows. When the intensity of one of the signals is really weak, the output photon number distributions in mode a are always close to a Poissonian distribution (for click and no click events). This distribution is narrower than the one arising when both μ_1 and μ_2 are of the same order of magnitude. In this case, a better estimation of Y_1 and e_1 can be derived, and this fact translates into a higher secret key rate. It must be emphasized, however, that from an experimental point of view this solution might not be optimal, especially since in this scenario the two output distributions p^c_n and p̄^c_n might be too close to each other to be distinguished in practice. This effect could be especially relevant when one considers statistical fluctuations due to finite data size (see Sec. VIII). For instance, small fluctuations in a practical system could overwhelm the tiny difference between the decoy state and the signal state in this case. Figure 5 includes as well the secret key rate of an active asymptotic decoy state QKD system with infinite decoy settings [10]. The cutoff points where the secret key rate drops to zero are l ≈ 128 km (passive setup with two intensity settings) and l ≈ 147 km (active asymptotic setup).

[Figure 5: Lower bound on the secret key rate R for the setup of Fig. 1 with two intensity settings. The signal states ρ and σ are two phase randomized WCP given by Eq. (20); the transmittance of the BS is t = 1/2. Two scenarios are shown: (A) ǫ = 0 and η_d = 1 [18] (i.e., a perfect threshold photon detector), and (B) ǫ = 3.2 × 10⁻⁷ and η_d = 0.12 [25]. Both cases provide approximately the same final key rate and cannot be distinguished with the resolution of the figure (dashed line). The solid line represents a lower bound on R for an active asymptotic decoy state system with infinite decoy settings [10]. This last result coincides approximately with the case where Alice employs a PNR detector (see Appendix B 2); the secret key rates in both scenarios cannot be distinguished with the resolution of the figure.]

From these results we see that the performance of the passive scheme with a threshold detector is comparable to the active one, thus showing the practical interest of the passive setup. Like in Sec. V, one can improve the performance of the passive scheme further by using more intensity settings. The case where Alice uses a PNR detector is analyzed in Appendix B 2. The result is also shown in Fig. 5. It reproduces approximately the behavior of the asymptotic active setup, and the secret key rates in both scenarios cannot be distinguished with the resolution of this figure (solid line). This result is not surprising, since in both situations (passive and active) we apply Eq. (5) with the actual values of the parameters Y_0, Y_1, and e_1. The only difference between these two setups arises from the photon number distribution of the signal states that go to Bob. In particular, while in the passive scheme the relevant statistics are given by Eq. (B9), in the active setup these statistics have the form given by Eq. (B12).
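As an aside, the contour-integral definition of I_q(z) quoted above reduces, for integer q, to a real integral over the phase, which is easy to verify against standard library routines. A small sanity check, not part of the original derivation:

```python
import numpy as np
from scipy.special import iv
from scipy.integrate import quad

def iv_num(q, z):
    """I_q(z) for integer q: (1/pi) * int_0^pi exp(z cos u) cos(q u) du."""
    val, _ = quad(lambda u: np.exp(z * np.cos(u)) * np.cos(q * u), 0.0, np.pi)
    return val / np.pi

for q, z in [(0, 0.5), (1, 2.0), (3, 4.0)]:
    assert abs(iv_num(q, z) - iv(q, z)) < 1e-10
```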
C. Alternative implementation scheme

The passive setup illustrated in Fig. 1 requires that Alice employs two independent sources of signal states. This fact might become especially relevant when she uses phase randomized WCP, since in this situation none of the signal states entering the BS can be the vacuum state. Otherwise, the photon number distributions of the output signals in mode a and mode b would be statistically independent. Alternatively to the passive scheme shown in Fig. 1, Alice could as well employ, for instance, the scheme illustrated in Fig. 6. This setup has only one laser diode, but follows a spirit similar to the original scheme in Fig. 1, where a photo-detector is used to measure the output signals in mode b. It includes, however, an intensity modulator (IM) to block either all the even or all the odd pulses in mode a. This requires, therefore, an active control of the functioning of the IM, but note that no random number generator is needed here. The main reason for blocking half of the pulses in mode a is to suppress possible correlations between them. That is, the action of the IM guarantees that the signal states that go to Bob are tensor products of mixtures of Fock states. Then, one can directly apply the security analysis provided in Refs. [8,10,20]. Thanks to the one-pulse delay introduced by one arm of the interferometer, together with a proper selection of the transmittance t_1, it can be shown that both setups in Fig. 1 and Fig. 6 are completely equivalent, except for the resulting secret key rate. More precisely, the secret key rate in the active scheme is half that of the passive setup, since half of the pulses are now discarded.

VII. STRONG COHERENT LIGHT

Let us now consider the passive decoy state setup illustrated in Fig. 7. This scheme presents two main differences with respect to the passive system analyzed in Sec. VI. In particular, the mean photon number (intensity) of the signal states ρ and σ is now very high; for instance, ≈ 10⁸ photons. This fact allows Alice to use a simple classical photo-detector to measure the pulses in mode b, which makes this scheme especially suited for experimental implementations. Moreover, it has an additional BS of transmittance t_2 to attenuate the signal states in mode a and bring them to the QKD regime. Due to the high intensity of the input signal states ρ and σ, we can describe the action of the first BS in Fig. 7 by means of a classical model. Specifically, let I_1 (I_2) represent the intensity of the input state ρ (σ), and let I_a(θ) [I_b(θ)] be the intensity of the output pulses in mode a (b). Here the angle θ is just the relative phase between the two input states,

θ = φ_1 − φ_2,

where φ_1 (φ_2) denotes the phase of the signal ρ (σ). Like in Sec. VI, we assume that these phases are uniformly distributed between 0 and 2π for each pair of input states.
This can be achieved, for instance, if Alice uses two pulsed laser sources to prepare the signals ρ and σ. With this notation, we have that I_a(θ) and I_b(θ) can be expressed as

I_a(θ) = t_1 I_1 + r_1 I_2 + 2√(t_1 r_1 I_1 I_2) cos θ,
I_b(θ) = r_1 I_1 + t_1 I_2 − 2√(t_1 r_1 I_1 I_2) cos θ,

where t_1 denotes the transmittance of the BS, and r_1 = 1 − t_1.

A. Classical threshold detector

For simplicity, we shall consider that Alice uses a perfect classical threshold detector to measure the pulses in mode b. For each incoming signal, this device tells her whether its intensity is below or above a certain threshold value I_M that satisfies I_b(π) > I_M > I_b(0). That is, the value of I_M lies between the minimal and maximal possible values of the intensity of the pulses in mode b. Note, however, that the analysis presented in this section can be straightforwardly adapted to cover also the case of an imperfect classical threshold detector, or a classical photo-detector with several threshold settings. Figure 8 shows a graphical representation of I_b(θ) versus the angle θ, together with the threshold value I_M. The angle θ_th which satisfies I_b(θ_th) = I_M is given by

θ_th = arccos{[r_1 I_1 + t_1 I_2 − I_M] / [2√(t_1 r_1 I_1 I_2)]}.  (31)

[Figure 8: Intensity I_b(θ) of the pulses in output mode b (see Fig. 7) versus the angle θ. I_M represents the threshold value of the classical threshold detector, and θ_th is its associated threshold angle.]

Whenever the classical threshold detector provides Alice with an intensity value below I_M, the unnormalized signal states in mode c are obtained by averaging the attenuated signal, of mean photon number t_2 I_a(θ), over the corresponding range of the phase. This means, in particular, that the joint probability of finding n photons in mode c and an intensity value below I_M in mode b is given by

p^{<I_M}_n = (1/2π) ∫_{|θ|<θ_th} e^{−t_2 I_a(θ)} [t_2 I_a(θ)]^n / n! dθ,

and, similarly, p^{>I_M}_n is given by the same integral taken over the complementary range |θ| > θ_th.

B. Lower bound on the secret key rate

Again, to apply the secret key rate formula given by Eq. (5), with l ∈ {< I_M, > I_M}, we need to estimate the quantities Y_0, Y_1, and e_1. Once more, we follow the procedure explained in Appendix A. We confirmed numerically that the probabilities p^{<I_M}_n and p^{>I_M}_n satisfy the conditions required to use this technique. As a result, it turns out that the bounds given by Eqs. (A10)-(A16) are also valid in this scenario. For simplicity, we impose I_1 = I_2 = I_M ≡ I. This means that θ_th = π/2. The relevant statistics p^{<I_M}_n and p^{>I_M}_n, with n = 0, 1, 2, are calculated in Appendix D. After substituting Eqs. (6)-(9) into the gain and QBER formulas, one obtains closed-form expressions for the gains and QBERs; they can be written compactly in terms of a parameter κ and the modified Struve function L_q(z) [27], defined by Eq. (D2). The resulting lower bound on the secret key rate is illustrated in Fig. 10. We study two different situations: (A) we impose t_1 = 1/2, i.e., we consider a simple 50:50 BS, and we optimize the parameter κ, and (B) we optimize both quantities, t_1 and κ. In both scenarios the optimal values of the parameters are almost constant with the distance. In the first case κ is around 0.2, while in the second case we obtain that t_1 and κ are, respectively, around 0.06 and 0.25. The cutoff point where the secret key rate drops to zero is l ≈ 132 km both in case A and case B. These results seem to indicate that this passive scheme can offer a better performance than the passive setups analyzed in Sec. V and in Sec. VI with a threshold photon detector. This fact arises mainly from the probability distributions p^{<I_M}_n and p^{>I_M}_n, which, in this scenario, approach a Poissonian distribution when t_2 is sufficiently small.
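The classical description makes this setup straightforward to simulate. The sketch below is illustrative only; it assumes, as stated above, that the attenuated pulses in mode c are Poissonian at each fixed phase, and computes the unnormalized statistics p^{<I_M}_n and p^{>I_M}_n by averaging over the uniformly distributed phase:

```python
import numpy as np
from scipy.stats import poisson

def split_statistics(I1, I2, t1, t2, IM, n_max=6, K=200000):
    """Unnormalized p_n^{<IM} and p_n^{>IM} for the strong-light setup."""
    r1 = 1.0 - t1
    theta = np.linspace(0.0, 2.0 * np.pi, K, endpoint=False)
    beat = 2.0 * np.sqrt(t1 * r1 * I1 * I2) * np.cos(theta)
    Ia = t1 * I1 + r1 * I2 + beat          # intensity in mode a
    Ib = r1 * I1 + t1 * I2 - beat          # intensity in mode b
    below = Ib < IM                        # classical threshold outcome
    n = np.arange(n_max + 1)
    P = poisson.pmf(n[:, None], (t2 * Ia)[None, :])   # photon statistics, mode c
    return P[:, below].sum(axis=1) / K, P[:, ~below].sum(axis=1) / K

# with I1 = I2 = IM the threshold angle of Eq. (31) is pi/2
I = 1.0e8
p_below, p_above = split_statistics(I, I, 0.5, 2.0e-9, I)
```

Here the attenuation t_2 = 2 × 10⁻⁹ is an assumed illustrative value chosen so that mode c carries well below one photon on average, i.e., the QKD regime.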
Again, one can improve the performance of this system even further just by using more threshold settings in the classical threshold detector. Moreover, from an experimental point of view, this configuration might be more feasible than using PNR detectors. To conclude this section, let us mention that, like in Sec. VI C, Alice could as well employ, for instance, the alternative active scheme illustrated in Fig. 11. This setup has only one pulsed laser source, but includes an intensity modulator (IM) to block either all the even or all the odd pulses in mode c. The argumentation here is exactly the same as in Sec. VI C, and we omit it for simplicity. The resulting secret key rate in the active scheme is half that of the passive setup.

VIII. STATISTICAL FLUCTUATIONS

In this section, we discuss briefly the effect that finite data size in real life experiments might have on the final secret key rate. For that, we follow the statistical fluctuation analysis presented in Ref. [12]. This procedure is based on standard error analysis. That is, we shall assume that all the variables which are measured in the experiment fluctuate around their asymptotic values. Our main objective here is to obtain a lower bound on the secret key rate formula given by Eq. (5) under statistical fluctuations. For that, we make the following assumptions:

1. Alice and Bob know the photon number statistics of the source well and we do not consider their fluctuations directly. Intuitively speaking, these fluctuations are included in the parameters measuring the gains and QBERs.

2. Alice and Bob use a real upper bound on the single photon error rate e_1, thus no fluctuations have to be considered for this parameter. In particular, we use the fact that the number of errors within the single photon states cannot be greater than the total number of errors.

3. Alice and Bob use a standard error analysis procedure to deal with the fluctuations of the variables which are measured.

To illustrate our results, we focus on the passive decoy state setup introduced in Sec. VI. Note, however, that a similar analysis can also be applied to the other passive schemes presented in this paper.

A. Active decoy state QKD

In order to make a fair comparison between the active and the passive decoy state QKD setups with two intensity settings, from now on we shall consider an active scheme with only one decoy state [12]. In this last case, the quantities Y_1 and e_1 can be bounded as

Y_1 ≥ [μ / (μν − ν²)] {Q_ν e^ν − Q_μ e^μ (ν²/μ²) − [(μ² − ν²)/μ²] Y_0},
e_1 ≤ (E_ν Q_ν e^ν − e_0 Y_0) / (Y_1 ν),  (38)

where μ (ν) denotes the mean photon number of a signal (decoy) state, Q_μ (Q_ν) and E_μ (E_ν) represent, respectively, its associated gain and QBER, and Y_0 is a free parameter. Using the channel model described in Sec. III, we find that these parameters can be written as

Q_μ = Y_0 + (1 − Y_0)(1 − e^{−η_sys μ}),  E_μ Q_μ = e_0 Y_0 + e_d (1 − e^{−η_sys μ}),  (39)

and analogously for the decoy setting ν. If we now apply a standard error analysis to these quantities, we obtain that their deviations from the theoretical values are given by

Δ_{Q_μ} = u_α √(Q_μ/N_μ),  Δ_{Q_ν} = u_α √(Q_ν/N_ν),
Δ_{E_μ} = u_α √[2E_μ/(N_μ Q_μ)],  Δ_{E_ν} = u_α √[2E_ν/(N_ν Q_ν)],  (40)

where N_μ (N_ν) denotes the number of signal (weak decoy) pulses sent by Alice, and u_α represents the number of standard deviations from the central values. That is, the total number of pulses emitted by the source is just given by N = N_μ + N_ν. Roughly speaking, this means, for instance, that the gain of the signal states lies in the interval Q_μ ± Δ_{Q_μ} except with small probability, and similarly for the other quantities defined in Eq. (39). For example, if we select u_α = 10, then the corresponding confidence interval is 1 − 1.5 × 10⁻²³, which we use later on for simulation purposes. For simplicity, here we have assumed that Alice and Bob use the standard BB84 protocol, i.e., they keep only half of their raw bits (due to the basis sift). This is the reason for the factor 2 which appears in the last two expressions of Eq. (40). In this context, see also Ref. [28] for a discussion on the optimal value of the parameter q.
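A compact numerical version of the one-decoy bounds of Eq. (38) and the deviations of Eq. (40) is given below. It is a sketch under the same assumptions (Poissonian source statistics and standard error analysis) rather than a complete key-rate evaluation:

```python
import numpy as np

def one_decoy_bounds(Q_mu, E_mu, Q_nu, E_nu, mu, nu, Y0, e0=0.5):
    """Eq. (38): lower bound on Y_1 and upper bound on e_1, for nu < mu."""
    Y1_L = mu / (mu * nu - nu**2) * (Q_nu * np.exp(nu)
                                     - Q_mu * np.exp(mu) * nu**2 / mu**2
                                     - (mu**2 - nu**2) / mu**2 * Y0)
    e1_U = (E_nu * Q_nu * np.exp(nu) - e0 * Y0) / (Y1_L * nu)
    return Y1_L, e1_U

def deviations(Q, E, N, u_alpha=10.0):
    """Eq. (40): standard-error deviations of a gain and its QBER over N pulses;
    the factor 2 in the QBER term accounts for the basis sift."""
    return u_alpha * np.sqrt(Q / N), u_alpha * np.sqrt(2.0 * E / (N * Q))
```

A worst-case finite-size evaluation then feeds the shifted observables, e.g. Q_ν − Δ_{Q_ν} and E_ν + Δ_{E_ν}, into one_decoy_bounds before computing the rate.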
B. The background Y_0

The bounds given by Eq. (38) depend on the unknown parameter Y_0. When a vacuum decoy state is applied, the value of Y_0 can be estimated. Alternatively, one can also derive a lower bound on Y_1 and an upper bound on e_1 which do not depend on Y_0. Specifically, from Eqs. (2)-(3) one can derive the two inequalities given by Eqs. (41)-(42), which relate the measured data to Y_1 and e_1. The gains Q_μ and Q_ν, together with the QBERs E_μ and E_ν, are directly measured in the experiment, and their statistical fluctuations are given by Eq. (40). These inequalities can be expressed in terms of two auxiliary parameters, A and B, which can be obtained directly from the variables measured in the experiment; combining Eqs. (41)-(42) then yields an explicit bound. Moreover, if one considers the secret key rate formula given by Eq. (5) as a function of the free parameter e_1, then one should select an upper bound on e_1, which gives a value (which may not be a bound) for Y_1 as in Eq. (45), where the expression for e_1^U comes from solving the two inequalities given by Eqs. (41)-(42). Again, using a standard error analysis procedure, we find that the deviations of the parameters A and B from their theoretical values can be written in terms of two coefficients, c_1 and c_2, together with the deviations of the gains and the QBERs given by Eq. (40). For simplicity, we assume now that A and B are statistically independent. Thus, the statistical deviation of the crucial term Y_1[1 − H_2(e_1)] in the secret key formula follows from Eq. (48).

[Figure 12: Secret key rates with and without statistical fluctuations. In all passive setups the transmittance of the BS is t = 1/2 and ǫ = 0. The data size (total number of pulses emitted by Alice) is N = 6 × 10⁹, and the confidence interval for statistical fluctuations is ten standard deviations (i.e., 1 − 1.5 × 10⁻²³).]

From Eqs. (40), (46) and (48) one can directly calculate the final secret key rate with statistical fluctuations for an active decoy state setup with only one decoy state [12]. The result is illustrated in Fig. 12 (dashed line). Here we use again the experimental data reported by Gobby et al. in Ref. [25]. Moreover, we pick the data size (total number of pulses emitted by Alice) to be N = 6 × 10⁹. We calculate the optimal values of μ and ν for each fiber length numerically. It turns out that both parameters are almost constant with the distance. One of them is weak (it varies between 0.03 and 0.06), while the other is around 0.48. This figure includes as well the resulting secret key rate for the same setup without considering statistical fluctuations (thick solid line). The cutoff points where the secret key rate drops to zero are l ≈ 129.5 km (active setup with statistical fluctuations) and l ≈ 147 km (active setup without statistical fluctuations). From these results we see that the performance of this active scheme is quite robust against statistical fluctuations.

C. Passive decoy state QKD

The analysis is completely analogous to the previous section. Specifically, the parameters A and B take the analogous forms in terms of the passive observables, while Eq. (45) is still valid in this scenario.
The deviations of A and B have the same structure as before. On the other hand, the deviations of the gains and the QBERs can now be written in the corresponding form, where N_c̄ denotes the number of pulses for which Alice obtained no click in her threshold detector, and N is the total number of pulses emitted by the source. The deviation of the term Y_1[1 − H_2(e_1)] is again given by Eq. (48). The secret key rate for the passive decoy state scheme with WCP introduced in Sec. VI, with two intensity settings and considering statistical fluctuations, is illustrated in Fig. 12. We assume that t = 1/2, i.e., we consider a simple 50:50 BS, and ǫ = 0. The data size is equal to the one of the previous section, i.e., N = 6 × 10⁹. We study two different situations depending on the efficiency of Alice's threshold detector: η_d = 1 (thin solid line), and η_d = 0.4 (dash-dotted line). In both cases the optimal values of the intensities μ_1 and μ_2 are almost constant with the distance. One of them is weak (it varies between 0.1 and 0.17), while the other is around 0.5. Figure 12 includes as well the resulting secret key rate for the same setup with η_d = 1 and without considering statistical fluctuations (dotted line). The cutoff points where the secret key rate drops to zero are l ≈ 53 km (passive setup with statistical fluctuations and η_d = 0.4), l ≈ 80 km (passive setup with statistical fluctuations and η_d = 1), and l ≈ 128 km (passive setup without statistical fluctuations, see Sec. VI). From these results we see that the performance of the passive schemes introduced in Sec. VI (with statistical fluctuations) depends on the actual value of the efficiency η_d. In particular, when Alice's detector efficiency is low, the photon number statistics of the signal states that go to Bob (conditioned on Alice's detection) become close to each other. This effect becomes especially relevant when one considers statistical fluctuations due to finite data size. In this last case, small fluctuations can easily cover the difference between the signal states associated, respectively, with click and no click events on Alice's threshold detector. As a result, the achievable secret key rate and distance decrease.

IX. CONCLUSION

In this paper we have extended the results presented in Ref. [18], now showing specifically the analysis for other practical scenarios with different light sources and photo-detectors. In particular, we have considered sources emitting thermal states and phase randomized WCP in combination with threshold detectors and photon number resolving (PNR) detectors. In the case of threshold detectors, we have included as well the effect that detection inefficiencies and dark counts present in current measurement devices might have on the final secret key rate. For simplicity, these measurement imperfections were not considered in the original proposal. On the other hand, PNR detectors have allowed us to obtain ultimate lower bounds on the maximal performance that can be expected at all from this kind of passive setup. We have also presented a passive scheme that employs strong coherent light and does not require the use of single photon detectors; instead it can operate with a simpler classical photo-detector. This fact makes this setup especially interesting from an experimental point of view. Finally, we have provided an estimation of the effects that statistical fluctuations due to a finite data size can have in practical implementations.

since, as we have seen above, p^t_2 p̄^c_1 − p̄^c_2 p^t_1 ≥ 0.
After a short calculation, it turns out that Eq. (A7) can be further simplified to both for l = c and l =c. Finally, from the definition of the probabilities p t n and pc n given by Eqs. (13)-(15), we find that which is greater or equal than zero for all n ≤ 1, and negative otherwise. Note that the first term on the r.h.s. of Eq. (A9) is always greater or equal than zero, and the sign of the second term depends on the value of n, since r ≥ 1 + µt ≥ 1. We obtain, therefore, that for all l ∈ {c,c}, and where Y u 0 denotes an upper bound on the background rate Y 0 . This parameter can be calculated from Eq.(3). In particular, we have that and similarly for the product QcEc. We find Upper bound on e1 For this, we proceed as follows: where the inequality condition comes from the fact that p t n pc 0 − pc n p t 0 = (1 − ǫ)(µt) n (1 + µt)r 1 (1 + µt) n − 1 r n ≥ 0, (A14) for all n ≥ 1. From Eq. (A13) we obtain, therefore, that e 1 is upper bounded by (pc where Y L 1 is given by Eq. (A5) with the parameter Y 0 replaced by Y u 0 . On the other hand, note that Eq.(3) also provides a simple upper bound on e 1 . Specifically, and similarly for the product QcEc. Putting all these conditions together, we find that where Y L 0 represents a lower bound on the background rate Y 0 . To calculate this parameter we use the following inequality: since, as we have seen above, p t 1 pc n − pc 1 p t n ≤ 0 for all n ≥ 2. From Eq. (A17) we obtain, therefore, that In this Appendix we study the case where Alice uses a perfect PNR detector to measure the signal states in mode b. The main goal of this analysis is to obtain an ultimate lower bound on the secret key rate that can be achieved at all with the passive decoy state setups introduced in Sec. V and Sec. VI, in combination with the security analysis provided in Refs. [8,20]. A perfect PNR detector can be characterized by a POVM which contains an infinite number of elements, with m = 0, 1, . . . , ∞. The outcome of F m corresponds to the detection of m photons in mode b. Thermal light Let us begin by considering the passive scheme analyzed in Sec. V with Alice using a PNR detector. Whenever she finds m photons in mode b, then the joint probability distribution of having n photons in mode a is just when Alice uses a PNR detector, ρ is given by Eq. (11), and σ is a vacuum state: p 0 n (black), p 1 n (grey), and p 2 n (white). We consider that µ = 1, t = 1/2, and n ≤ 5. given by Eq. (12). Figure 13 shows the conditional photon number statistics in mode a given that mode b contains exactly m photons: p m n = p n,m /N m , with In this scenario, it turns out that Alice and Bob can always estimate any finite number of yields Y n and error rates e n with arbitrary precision. In particular, they can obtain the actual values of the parameters Y 0 , Y 1 , and e 1 . To see this, let Q m denote the overall gain of the signal states sent to Bob when mode b contains exactly m photons, and let the parameters X m and V n be defined as (B3) With this notation, and using the definition of p n,m given by Eq. (12), we find that Eq. (2) can be rewritten as That is, the coefficient matrix of the system of linear equations given by Eq. (B4) for all possible values of m is a symmetric Pascal matrix [29]. This matrix has determinant equal to one and, therefore, in principle can always be inverted [29]. Then, from the knowledge of the coefficients V n , the legitimate users can directly obtain the values of the yields Y n by means of Eq. (B3). 
A similar argument can also be used to show that Alice and Bob can obtain as well the values of e n . After substituting Eqs. : Conditional photon number distribution in mode a when Alice uses a PNR detector: p 0 n (black), p 1 n (grey), and p 2 n (white). The signal states ρ and σ in Fig. 1 are two phase randomized WCP given by Eq. (20). We consider that µ1 = µ2 = 1, t = 1/2, and n ≤ 5. In order to evaluate Eq. (5) we need to find the probabilities p 0,m and p 1,m for all m. From Eq. (12) we have that these parameters can be expressed as (B6) The resulting lower bound on the secret key rate is illustrated in Fig. 3 (solid line). The optimal values of the parameters µ and t are quite constant with the distance. Specifically, in this figure we choose µ around 18.5 and t around 0.02. Weak coherent light Let us now consider the passive scheme illustrated in Sec. VI with Alice using a PNR detector. Whenever her detector finds m photons in mode b, the joint probability distribution of having n photons in mode a is given by Eq. (21). Figure 14 shows the conditional photon number statistics in mode a given that mode b contains exactly m photons: p m n = p n,m /N m , with To show that the experimental observations associated to different outcomes of the PNR detector allow Alice and Bob to obtain the values of the parameters Y 0 , Y 1 , and e 1 with arbitrary precision, one could follow the same procedure explained in Appendix B 1. That is, one could try to prove that the determinant of the coefficient matrices associated to the systems of linear equations given by Eqs. (2)-(3) is different from zero also in this scenario. For simplicity, here we have confirmed this statement only numerically.
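The invertibility claim used above is easy to check explicitly. A small sketch (illustration only) builds the symmetric Pascal matrix of Eq. (B4), confirms its unit determinant, and solves the linear system that returns the coefficients V_n from observed quantities X_m; the X_m values below are random stand-ins for measured data:

```python
import numpy as np
from math import comb

def pascal(N):
    """Symmetric Pascal matrix P_{nm} = C(n+m, n), cf. Eq. (B4)."""
    return np.array([[comb(n + m, n) for m in range(N)] for n in range(N)], float)

P = pascal(6)
assert round(np.linalg.det(P)) == 1          # unit determinant: always invertible
X = np.random.default_rng(0).random(6)       # stand-in for the measured X_m
V = np.linalg.solve(P, X)                    # recover V_n, hence the yields Y_n
assert np.allclose(P @ V, X)
```

In practice only a truncated system is solved, so the recovered yields are approximate; the matrix's rapidly growing condition number also limits how many outcomes m can usefully be included.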
NMPylation and de-NMPylation of SARS-CoV-2 nsp9 by the NiRAN domain Abstract The catalytic subunit of SARS-CoV-2 RNA-dependent RNA polymerase (RdRp) contains two active sites that catalyze nucleotidyl-monophosphate transfer (NMPylation). Mechanistic studies and drug discovery have focused on RNA synthesis by the highly conserved RdRp. The second active site, which resides in a Nidovirus RdRp-Associated Nucleotidyl transferase (NiRAN) domain, is poorly characterized, but both catalytic reactions are essential for viral replication. One study showed that NiRAN transfers NMP to the first residue of RNA-binding protein nsp9; another reported a structure of nsp9 containing two additional N-terminal residues bound to the NiRAN active site but observed NMP transfer to RNA instead. We show that SARS-CoV-2 RdRp NMPylates the native but not the extended nsp9. Substitutions of the invariant NiRAN residues abolish NMPylation, whereas substitution of a catalytic RdRp Asp residue does not. NMPylation can utilize diverse nucleotide triphosphates, including remdesivir triphosphate, is reversible in the presence of pyrophosphate, and is inhibited by nucleotide analogs and bisphosphonates, suggesting a path for rational design of NiRAN inhibitors. We reconcile these and existing findings using a new model in which nsp9 remodels both active sites to alternately support initiation of RNA synthesis by RdRp or subsequent capping of the product RNA by the NiRAN domain. INTRODUCTION Coronaviruses (CoVs) are single-stranded positive-sense (+) RNA viruses that constitute the Coronaviridae family in the order Nidovirales (1). CoVs cause many respiratory and gastrointestinal infections in humans, from mild common colds to severe respiratory diseases, including the ongoing COVID-19 pandemic (2,3). Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the etiological agent of COVID-19, is the third zoonotic CoV to have caused a major disease outbreak in humans in the last two decades (4). The prevalence of CoVs in animal reservoirs argues that future viral pandemics are all but certain (3,5) and makes advance preparations imperative. While the availability of effective vaccines against SARS-CoV-2 has been a game changer for the current COVID-19 pandemic, broad-spectrum antiviral drugs are needed to protect unvaccinated and immunocompromised individuals and to buy time needed for the development of new vaccines at the onset of the next viral epidemic. CoVs have very large (approximately 30 kb) genomes that encode non-structural proteins (nsps) required for viral gene expression and replication. Upon infecting human cells, the SARS-CoV-2 RNA genome is translated to produce a long polyprotein that is cleaved into nsps 1 through 16 by the viral protease nsp5 (6). The catalytic subunit of RNA-dependent RNA polymerase (RdRp), nsp12, stands out as the only protein that is present in all RNA viruses (7) and is therefore an attractive target for broad-spectrum antivirals (8). The high degree of conservation of the RdRp structure and its key catalytic elements (7,9) encouraged efforts to repurpose existing antivirals, such as remdesivir (10,11) and favipiravir (12), for the treatment of COVID-19 (13). However, these pursuits have not yet produced an effective clinical treatment, suggesting that a detailed mechanistic analysis of the viral replication cycle may be required to identify the best points for intervention. 
SARS-CoV-2 RdRp holoenzyme is a four-subunit complex of the catalytic nsp12 and accessory nsp7 and nsp8 proteins (14)(15)(16), nsp12•7•8 2 ( Figure 1A). The holoenzyme binds to two copies of the superfamily 1 helicase nsp13 (17) and associates with a proofreading exonuclease nsp14, capping enzymes, and other proteins to form large multi-subunit replication-transcription complex (RTC) that mediates synthesis and modification of viral RNAs (6). The nsp12 subunit also contains a second catalytic module, an N-terminal 250-residue Nidovirus RdRp-Associated Nucleotidyl transferase (NiRAN) domain (9). The NiRAN domain displays significant sequence divergence as compared to RdRp, with only four conserved motifs (preA N , A N , B N and C N ) comprising the NiRAN signature (18). The NiRAN domain is present in all nidoviruses but has no homologs in other RNA viruses and, together with the nsp13 helicase (HELD) domain, is a genetic marker for the order Nidovirales (18). As first shown with an equine arteritis virus (EAV) enzyme from the Arteriviridae family of nidoviruses, RdRp self-NMPylates in vitro with a clear preference for UTP as a substrate and the Mn 2+ ion as a cofactor (18). Such an activity is common among AMPylases, which frequently transfer AMP to their autoinhibitory domains (19). Substitutions of several conserved NiRAN residues abolish nucleotidyl transfer in vitro and abrogate EAV replication in cell culture to the same extent as do substitutions of the catalytic RdRp residues (18). These results demonstrate that the Ni-RAN domain plays a critical role in the viral life cycle and thus is a valid target of antiviral drug discovery. Subsequent studies of two viruses from Coronaviridae, HCoV-229E and SARS-CoV-2, led to similar conclusions (20). The location of the NiRAN nucleotidyl transfer site is well established, but the identity of the NMP acceptor remains debated. Single-particle cryogenic electron microscopy (cryoEM) studies of SARS-CoV-2 RdRp (17,21) revealed nucleotides bound to nsp12 residues shown to be required for self-NMPylation in vitro (18). A finding that NiRAN active site is structurally homologous to that of a protein pseudokinase, selenoprotein O/SelO (16,17,22), supported the proposed role of NiRAN in covalent NMPylation of protein targets (18). Consistently, Ziebuhr and colleagues recently showed that HCoV-229E and SARS-CoV-2 nsp12s efficiently transfer NMPs to nsp9 (20), a small (113 residues) RNA-binding protein that is essential for viral replication (23)(24)(25). nsp9 and nsp12 modifications shared the requirements for NTP substrates, metal cofactors, and NiRAN residues, arguing that both reactions utilize similar mechanisms. Mass spectrometry identified the primary amine of the N-terminal Asn, which is conserved among CoVs ( Figure 1B), as a site of nsp9 modification (20). Mutational analysis revealed that (i) the Asn2 residue was critical for modification; (ii) Asn1 could be substituted with Ala or Ser with a modest loss of reactivity; and (iii) the presence of even one additional N-terminal Ala residue abolished nsp9 NMPylation (20). In support of the essential role of its NMPylation, these nsp9 substitutions had parallel effects on NMP transfer in vitro and on viral replication (20). Furthermore, in a cryoEM structure of nsp9 bound to the SARS-CoV-2 RdRp-helicase complex (21), the nsp9 Asn1 is adjacent to the NiRAN active site ( Figure 1A). Strikingly, however, Yan et al. 
did not detect nsp9 modification and instead observed GMP transfer to RNA, which they proposed represents a key early step in the capping pathway (21). The lack of nsp9 reactivity is most likely explained by the presence of two additional, non-native residues, Gly and Ser, at the N-terminus of the recombinant nsp9 used to obtain the structure. While these residues were not modeled in PDB: 7CYQ, the GSNNELSPVALR tryptic peptide was identified by mass-spectrometry analysis and the density for the Gly(-2)/Ser(-1) residues is discernible in the EM map (Figure 1A). In the presence of these additional residues, the cognate NMPylation site, the N1 amine, is eliminated. Different metal ion cofactors, protein tags, or other reaction variables could also explain discrepancies in the observed nsp12 catalytic properties. Our findings that noncognate NTPs and nucleoside analogs modulate RdRp activity suggested that the RdRp and NiRAN active sites (hereafter referred to as AS1 and AS2, respectively) could be allosterically connected (26). Testing this hypothesis necessitates parallel assays of both nucleotidyl transfer activities and in turn requires using cognate NiRAN substrates. In agreement with the HCoV-229E studies by Ziebuhr and colleagues (20), we show that SARS-CoV-2 nsp12 efficiently NMPylates nsp9 that has the native N-terminus, but not an nsp9 variant that bears two additional N-terminal residues. Substitutions of the invariant NiRAN residues abolished nsp9 NMPylation, whereas substitution of a catalytic RdRp residue, Asp760, did not. We found that NMPylation proceeds equally efficiently with Mg²⁺ and Mn²⁺, is largely insensitive to the identity of the natural NTP, and can utilize nucleotide analogs such as remdesivir triphosphate. We also show that NMPylation is reversible in the presence of pyrophosphate. Nucleotide analogs that lack the triphosphate moiety and pyrophosphate analogs, bisphosphonates, inhibit the forward reaction, suggesting a starting point for the identification of NiRAN inhibitors.

Construction of expression vectors

Plasmids used in this study are shown in Supplementary Table S1. The SARS-CoV-2 nsp7/8/9/12 genes were codon-optimized for expression in Escherichia coli, synthesized by GenScript, and subcloned into standard pET-derived expression vectors under control of the T7 gene 10 promoter and lac repressor, as described previously (26). Derivative plasmids were constructed by standard molecular biology approaches with restriction and modification enzymes from New England Biolabs. DNA oligonucleotides for vector construction and sequencing were obtained from Millipore Sigma, and synthetic DNA fragments for Gibson Assembly from IDT. The sequences of all plasmids were confirmed by Sanger sequencing at the Genomics Shared Resource Facility (The Ohio State University). All plasmids were deposited to Addgene.

[Figure 1. (A) The RdRp-helicase complex bound to a non-native nsp9 with a two-residue extension at the N-terminus (21). Left: overall structure of the complex (PDB: 7CYQ; the nsp13 helicase is not shown). Proteins are shown as colored molecular surfaces (as shown in the key) and RNA as black cartoon; the color coding corresponds to the figures throughout this manuscript unless otherwise specified. Right: zoom in on the active site of the NiRAN domain (AS2) with GDP-Mg²⁺ (lime carbon atoms and magenta sphere, respectively). Side chains of key conserved residues from the pre-Aₙ, Bₙ and Cₙ motifs that were substituted in this work are shown as sticks. Four N-terminal residues of nsp9 (GSNN) are shown; the cryoEM difference density for the Gly and Ser residues from EMDB: 30504 is shown as gray mesh. Structural figures were prepared with Coot (61), UCSF ChimeraX 1.2 and the PyMOL Molecular Graphics System, version 2.4.1, Schrodinger, LLC. (B) Conservation of residues at the N-terminus of nsp9 and the four conserved NiRAN motifs in alpha-, beta-, gamma- and deltacoronavirus genera. (C) Mutations in the NiRAN active site and the nsp9 N-terminal GS extension abolish NMP transfer, but Mn²⁺ is dispensable. NMPylation efficiency was compared to that observed with the wild-type nsp12 in the presence of 1 mM Mg²⁺ (set at 1) and is shown as mean ± SD (n = 3); nd, no signal detected above the background. (D) NMPylation of nsp9 is not inhibited by an excess of the artificially extended GSN-nsp9. NMPylation efficiency was compared to that observed with N-nsp9 present at 1.5-fold molar excess over nsp12 in the absence of GSN-nsp9 (set at 1) and is shown as mean ± SD (n = 3).]

De-NMPylation

To determine whether PPi can reverse the NMPylation reaction, 0.5 μM nsp12, 5 μM nsp9, 25 μM GTP and 10 μCi [α-³²P]-GTP were incubated in NMPylation buffer for 15 min, then 0.5 mM PPi was added. To determine which active site of nsp12 is responsible for the de-NMPylation activity, 0.5 μM His-tagged nsp12, 20 μM nsp9, 50 μM GTP, and 10 μCi [α-³²P]-GTP were incubated in NMPylation buffer (2 mM DTT in the buffer was replaced by 2 mM β-mercaptoethanol) for 20 min. Dynabeads (ThermoFisher, Cat#10103D) were added to remove the His-tagged nsp12, followed by the addition of 0.5 μM nsp12 variants and 0.5 mM PPi. Samples were quenched at the indicated time points and analyzed by electrophoresis.

Inhibition by bisphosphonates

0.5 μM nsp12 and 5 μM nsp9 were incubated with different concentrations of Risedronate (Sigma-Aldrich, Cat#PHR1888) or Foscarnet (Sigma-Aldrich, Cat#PHR1436) in the NMPylation buffer for 5 min, then 25 μM GTP and 10 μCi [α-³²P]-GTP were added to start the reaction. Reactions were performed for 10 min.

RNA extension and cleavage

An RNA oligonucleotide (5′-UUUUCAUGCUACGCGUAGUUUUCUACGCG-3′; 4N) with Cyanine 5.5 at the 5′-end was obtained from Millipore Sigma (USA); this RNA hairpin serves as both the primer and the template (15). The RNA scaffold was annealed in 20 mM HEPES, pH 7.5, 50 mM KCl by heating to 75°C and then gradually cooling to 4°C. To test RdRp activity, reactions were carried out for 20 min at 37°C with 500 nM nsp12 variants, 1 μM nsp7, 1.5 μM nsp8, 250 nM RNA and 250 μM NTPs in the transcription buffer (20 mM HEPES, pH 7.5, 15 mM KCl, 5% glycerol, 1 mM MgCl₂, 2 mM DTT). For pyrophosphorolysis, holo RdRp was preincubated with the RNA scaffold at 37°C for 5 min in the transcription buffer; then the indicated combinations of PPi and NTPs were added. Reactions were stopped by adding 2× stop buffer (8 M urea, 20 mM EDTA, 1× TBE, 0.2% bromophenol blue).

Sample analysis

Protein samples were heated for 5 min at 95°C and separated by electrophoresis in NuPAGE™ 4-12% gels (ThermoFisher, Cat# NP0329BOX). RNA samples were heated for 2.5 min at 95°C and separated by electrophoresis in denaturing 9% acrylamide (19:1) gels (7 M urea, 0.5× TBE). The gels were visualized and quantified using a Typhoon FLA9000 (GE Healthcare) and ImageQuant. All assays were carried out in triplicate. The means and standard deviations (SD) were calculated in Excel (Microsoft).
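The normalization used for the gel data is simple enough to sketch. The following Python snippet is a minimal illustration, not the authors' actual pipeline (which used ImageQuant and Excel), of how triplicate band intensities might be scaled to the wild-type/Mg²⁺ reference and summarized as mean ± SD; all intensity values are hypothetical.

import statistics

# Hypothetical triplicate band intensities (arbitrary units) from a
# [alpha-32P]-GMP transfer gel; none of these numbers come from the paper.
raw = {
    "WT + Mg2+": [1520.0, 1475.0, 1602.0],   # reference condition, defined as 1
    "WT + Mn2+": [1490.0, 1555.0, 1510.0],
    "K50A":      [18.0, 22.0, 15.0],         # NiRAN pre-An motif mutant
    "D760A":     [1610.0, 1580.0, 1655.0],   # RdRp (AS1) mutant
}

# The mean intensity of the reference condition defines "efficiency = 1".
ref_mean = statistics.mean(raw["WT + Mg2+"])

for condition, values in raw.items():
    normalized = [v / ref_mean for v in values]  # per-replicate fold vs. WT/Mg2+
    mean = statistics.mean(normalized)
    sd = statistics.stdev(normalized)            # sample SD, n = 3
    print(f"{condition:>10s}: {mean:.2f} ± {sd:.2f} (n = {len(values)})")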
NMPylation requires the native N-terminus of SARS-CoV-2 nsp9

Impressive progress in the structural studies of the SARS-CoV-2 transcription machinery, reviewed in (9), far outpaces its functional analysis. The presence in nsp12 of two active sites that utilize the same NTP substrates complicates mechanistic analyses of RdRp, yet also provides a means to assess the overall 'quality' of a newly purified nsp12 variant (Supplementary Figure S1). A substitution that leads to large defects in both nucleotidyl transfer activities likely triggers gross protein misfolding, because AS1 and AS2 are located very far apart and substitutions that abolish catalysis in one active site do not have reciprocal effects on the other (20,26). Using their cognate substrates is essential for the analysis of both nucleotidyl transfer activities, and identical conditions should be employed if possible. In our experiments, we used standard solution conditions that support efficient RNA synthesis (Supplementary Figure S1), [α-³²P]-GTP, which supports efficient NMPylation (18,20), as the NMP donor, and SARS-CoV-2 nsps containing native N- and C-termini (shown in Supplementary Figure S2). The low efficiency of nsp12 self-NMPylation and its dependence on non-physiological concentrations of the Mn²⁺ ion (18,20,30) suggested to us that this reaction may be fortuitous. Furthermore, a recent study identified multiple sites of NMPylation in nsp7 and nsp12 using mass spectrometry (30). By contrast, the nearly complete, although still Mn-dependent, modification of nsp9 and additional functional data reported by Slanina et al. strongly support a model in which nsp9 is a genuine NiRAN target (20). We first assayed NMP transfer by nsp12 alone using two nsp9 proteins: a variant with the native N-terminus (N-nsp9; confirmed by MS analysis; see below) and an extended nsp9 with two additional residues at positions -1 and -2 (GSN-nsp9), identical to that used in (21). These recombinant proteins were produced by cleavage of tagged nsp9 precursors by Ubiquitin-like-specific protease 1 (Ulp1) and Tobacco Etch Virus (TEV) proteases, respectively. We observed efficient GMP transfer to N-nsp9 but not to GSN-nsp9 by the wild-type (WT) nsp12 (Figure 1C). NMPylation was abrogated by substitutions of conserved NiRAN residues (K50A in preAₙ, R116A in Bₙ and D218A in Cₙ; Figure 1B) that inhibit viral replication in cell culture (20), but not by the D760A substitution in AS1. As expected, the NiRAN mutants did not affect RNA synthesis, whereas the D760A variant was inactive (Supplementary Figure S1). The Y129A substitution at the NiRAN/RdRp domain interface modestly reduced both activities (Figure 1C and Supplementary Figure S1), suggesting that the mutant protein could have an altered fold. We conclude that, as shown for the HCoV-229E system (20), SARS-CoV-2 nsp9 that has the native N-terminus is NMPylated by AS2. Under our reaction conditions (25 μM GTP, 10 min incubation at 37°C, 10-fold molar excess of nsp9), approximately 25% of nsp9 was GMPylated (Supplementary Figure S3A). This corresponds to 2.5 molecules of GMP-nsp9 per molecule of nsp12, the expected ratio of the proteins generated during translation of the viral genome (31). The modification efficiency could be affected in the presence of other RTC components and physiological solutes and is expected to increase dramatically at physiological NTP concentrations: the fraction of GMP labeling of SARS-CoV nsp7 + 8 (at 6 mM Mn²⁺) increased from 0.01% at 0.2 μM GTP to 1% at 200 μM GTP (30).
Our findings and those of Slanina et al. (20) underscore the potential importance of native termini to protein function. The N- and C-termini are commonly modified to include purification tags, an approach that is justified when these ends are phylogenetically variable and make no functional interactions. However, in the context of CoV protein maturation, the 'correct' ends are generated upon proteolytic cleavage of the polyproteins by the viral protease. Given that the very first residue of nsp9 is the target of NMPylation, the potential importance of the identity of this residue is obvious, as is the prudence of preserving its native identity in experiments unless and until that identity is proven to be unimportant.

NMPylation occurs in the presence of the nsp7/8 cofactors and does not require Mn²⁺

To match the previously published conditions, we carried out GMPylation assays with nsp12 alone. To ascertain that this activity is preserved in the context of the transcribing RdRp holoenzyme (nsp12•7•8₂), we repeated our assays in the presence of nsp7, nsp8, and an RNA scaffold under conditions that support robust RNA extension by SARS-CoV-2 RdRp (9,26). Our results demonstrate comparable nsp9 modification by nsp12 alone or as part of an active transcription complex (Supplementary Figure S3B). Unlike that of its structural homolog SelO (22), the NiRAN domain's activity was thought to be dependent on the Mn²⁺ ion, at least for the EAV and HCoV-229E RdRps (18,20). Surprisingly, we observed equally efficient GMPylation in the presence of 1 mM Mg²⁺ or 1 mM Mn²⁺ (Figure 1C) (32). Mg²⁺ is the major cellular cofactor in electrophilic catalysis, in part due to its superior bioavailability and environmental abundance (32). Although Mn²⁺ can also function as the cofactor for the nucleotidyl transfer reaction of diverse nucleic acid polymerases, Mn²⁺ binding alters the active site geometry (32) to promote base misincorporation (33) and other inefficient reactions (34). The Mn²⁺ ion overrides a requirement for a canonical signal to fortuitously activate cyclic GMP-AMP [cGAMP] synthase (35) and can resuscitate a catalytically compromised RNA polymerase II (36). Although Mn²⁺ and Mg²⁺ can support similar octahedral coordination in the active site (32), Mn²⁺ has also been observed to form a strikingly different network, independent of the catalytic triad residues (35). Consistent with the Mn²⁺-induced gain-of-function, we observed GMP transfer to BSA in the presence of 1 mM Mn²⁺ (Supplementary Figure S4), and Mn²⁺-dependent modifications of the accessory nsp7 and nsp8 subunits have been reported (30,37). These transfer reactions are very inefficient, <<1% at low GTP concentrations (30), when compared to NMPylation of nsp9 (Supplementary Figure S3A), and could be easily mistaken for the 'cognate' modification when observed in the absence of a real target. These large differences in transfer efficiencies could explain why we did not detect GMP transfer to nsp7 or nsp8 with either metal cofactor. We do not know why only Mn²⁺-dependent NMP transfer was observed with the EAV and HCoV-229E RdRps, since we used very similar reaction conditions (18,20). We speculate that differences in RdRp folding could explain the Mn²⁺ dependence. CoV RdRps are highly dynamic enzymes that undergo large conformational changes during the transcription cycle (38) and can become misfolded during expression in heterologous hosts (26).
In particular, the NiRAN domain has been captured in different conformational states in cryoEM structures and becomes more ordered upon ligand binding to the active site (9,14,17,39,40).

nsp9 modification is not required for its release from nsp12

Enzymes that mediate protein NMPylation frequently have low affinity for their targets, necessitating covalent linkage of enzyme:substrate complexes for structural analysis (41). The modified nsp9 appears to be released from SARS-CoV-2 RdRp (Supplementary Figure S3A), and multi-round NMPylation of HCoV-229E nsp9 has also been reported (20). The formation of an apparently stable complex between SARS-CoV-2 RdRp and the modification-resistant GSN-nsp9 visualized by cryoEM (21) raises the possibility that NMPylation is a prerequisite for nsp9 release. To test this idea, we used competition between the native N-nsp9 and the extended GSN-nsp9 variant (Figure 1C). We found that preincubation of nsp12 with a three-fold molar excess of GSN-nsp9 only slightly inhibited modification of N-nsp9 (Figure 1D). Although it is possible that NMPylation alters nsp9 affinity for RdRp, a possibility that we intend to evaluate in the future, we conclude that free nsp9 is in dynamic equilibrium with the nsp9•12 complex regardless of the presence of the N-terminal nucleotide moiety. The role(s) of nsp9 in the viral life cycle remains to be elucidated. Substitutions of nsp9 residues that interact with nsp12 (21) abolish viral replication (24), an effect that has been attributed to a loss of the nsp9 dimerization observed in structural studies (23,25) but is at least equally likely to be due to the loss of nsp9 binding to RdRp. The essentiality of nsp9 modification for viral replication (20) makes NMPylation a valid target for inhibition. Our results indicate that peptidomimetic compounds that resemble the nsp9 N-terminus are unlikely to serve as efficient inhibitors of NMPylation. However, substrate analogs that bind to AS2 may either interfere with NMP transfer to nsp9 or lead to modified but non-functional nsp9.

SARS-CoV-2 NiRAN can bind diverse nucleotides

Previous studies of NMPylation revealed differences in substrate utilization by different RdRps, which could be expected based on the significant sequence divergence of the NiRAN domains (Figure 1B) that reflects the long evolutionary history of Nidovirales (42). For example, the His75 residue in the Aₙ motif, which contacts ADP•AlF₃ in the SARS-CoV-2 RdRp-helicase structure (17), is represented by Cys in the HCoV-229E and by Val in the EAV NiRAN domains. EAV RdRp displayed a strong preference for UTP, followed by GTP, and ATP and CTP were barely used (18), while HCoV-229E RdRp utilized all NTPs with a preference for UTP (20). Structures of SARS-CoV-2 transcription complexes with NiRAN-bound nucleotides do not reveal any base-specific contacts (Figure 2A), suggesting that all NTPs would be used as substrates for NMPylation, and direct transfer of GMP and UMP to protein has been demonstrated by mass spectrometry (30). Consistently, competition experiments in which [α-³²P]-GMP transfer to nsp9 was assayed in the presence of cold NTPs show that while GTP and UTP are marginally more effective competitors, the differences among all NTPs are small (Figure 2B). Thus, unlike the EAV and HCoV-229E RdRps, the SARS-CoV-2 NiRAN does not appear to have a strong substrate preference. The assay design may also contribute to the observed discrepancies.
Commercial radiolabeled NTP preparations contain impurities that compromise some sensitive assays, in contrast to the high-purity NTPs (see Methods) that we use for all in vitro transcription experiments. Using competition of highly purified NTPs against the same radiolabeled NTP substrate minimizes concerns about the variable purity of four different [α-³²P]-NTPs and also reduces the cost. These results suggest that nsp9 modification in vivo will be controlled by the relative abundance of natural NTPs and the stabilities of the NMP adducts. Furthermore, it is possible that synthetic nucleoside triphosphates, such as the ATP analog remdesivir triphosphate (RTP) or the GTP analog AT-9010 that binds to AS2 (37), could transfer the NMP moiety to nsp9, whereas other analogs may act solely as competitive inhibitors. To test this idea, we used several nucleotide analogs as competitors of nsp9 GMPylation. We found that GDP, GMP, ITP (inosine triphosphate), and GMPCPP efficiently competed with [α-³²P]-GMP transfer to nsp9, whereas ppGpp was less effective (Figure 2C). Unexpectedly, we also observed that the NMPylation reaction was strongly inhibited when inorganic pyrophosphate (PPi) was present along with the GTP substrate (Figure 2C, last lane). Surprisingly, and in contrast to ATP (Figure 2B), we found that RTP was a relatively poor competitor (Figure 2C). Analysis of RNA synthesis by SARS-CoV-2 RdRp demonstrated that RTP binds to AS1 with much higher affinity than ATP and is a better substrate than ATP (43). Why does RTP fail to efficiently compete with GTP during NMPylation? In RTP, a cyano group is attached to the 1′ position of the ATP ribose sugar; while the cyano group does not interfere with RMP incorporation into the nascent RNA, it clashes with the Ser861 residue in nsp12 after RdRp adds three more nucleotides downstream of RMP, leading to a temporary stall during RNA chain extension (11,44). When remdesivir diphosphate is modeled in place of ADP in the structure of SARS-CoV-2 RdRp with the NiRAN-bound ADP•AlF₃ (Figure 2A, right), the cyano group at the 1′ position clashes with His75, potentially explaining why RTP is a poor competitor of the NMPylation reaction. In these experiments, an apparent reduction of [α-³²P]-GMP transfer to nsp9 can be due to competitive inhibition of GTP binding (e.g. by GDP or GMP) or to nsp9 modification by an NTP analog (e.g. by ITP or RTP). To evaluate the second possibility, we used mass spectrometry. Our results show that, as reported by Slanina et al. (20), the N-terminus of nsp9 is modified by GMP (Supplementary Figure S5). We also observed NMPylation in the presence of ITP and RTP (Supplementary Figure S5). Although at present we cannot determine the efficiency of nsp9 modification by either nucleotide, our findings suggest that non-natural NTPs can be utilized as NiRAN substrates. In turn, this raises the possibility that antiviral nucleoside analogs have the potential to interfere with the yet-to-be-determined function of nsp9 in viral replication.

Pyrophosphate promotes the removal of GMP from nsp9

The nucleotidyl transfer reactions of AS1 and AS2 generate two products: PPi, in each case, and either a one-nucleotide-extended RNA or NMP-nsp9, respectively. The reverse reaction, pyrophosphorolysis, is unfavorable at physiological concentrations of NTPs and PPi, but is commonly used to evaluate the translocation register of multi-subunit DNA-dependent RNA polymerases (45), and has also been observed in RdRps (46,47).
RNA polymerases behave as thermal ratchets that oscillate between the pre- and post-translocated registers on the template (48). This motion is rectified by binding of the incoming substrate NTP, which binds in the acceptor site (Figure 3A) and locks the post-translocated state, or of PPi, which induces cleavage of the 3′-terminal nucleotide in the product site when the enzyme is in the pre-translocated register (Figure 3A). PPi cleavage leads to shortening of the nascent RNA by one nucleotide and subsequent backward translocation, sometimes in several successive steps (49). The nascent RNA cleavage typically requires superphysiological concentrations of PPi because the transcription elongation complex is biased toward the post-translocated state at most template positions, for bacterial RNA polymerases and SARS-CoV-2 RdRp alike (38,48). Consistently, we observed that scaffold-assembled SARS-CoV-2 complexes were relatively resistant to pyrophosphorolysis even in the absence of NTPs (Figure 3A), a result that is comparable to those obtained with hepatitis C virus (HCV) RdRp (46,47). Interestingly, we observed non-canonical PPi-induced RNA cleavage by two nucleotides in a fraction of complexes, reminiscent of 'reverse pyrophosphorolysis' by noncognate NTP substrates in HCV RdRp, which also generates a 2-nt cleavage product (46). Similar to the results obtained for the HCV enzyme (46), when PPi was present in 200-fold molar excess over NTPs, the polymerization reaction was favored and no cleavage was apparent (Figure 3A). Unlike some RNA polymerases, e.g. HCV RdRp (46) and E. coli RNA polymerase (34), which cleave the nascent RNA in the presence of noncognate NTPs, SARS-CoV-2 RdRp did not (Supplementary Figure S6). However, we observed that PPi efficiently inhibited nsp9 NMPylation (Figure 2C), consistent with the finding that PPi binds to AS2 in the SARS-CoV-2 RdRp/favipiravir complex (39). This inhibition could be due to PPi competition with the substrate GTP, direct reversal of the NMPylation reaction (pyrophosphorolysis), or hydrolysis assisted by the PPi-bound Mg²⁺ ion. In cellular RNA polymerases, diverse small molecules and accessory proteins can deliver Mg²⁺ to the active site to stimulate cleavage of the nascent RNA (48). To test if PPi can de-NMPylate [³²P]-GMP-nsp9, we preincubated nsp9 with nsp12 prior to the addition of PPi (or water).

[Figure 3. (A) The active site (marked by the position of the catalytic Mg²⁺; magenta sphere) consists of two sub-sites: the P-site (product; yellow) and the A-site (acceptor; cyan). Following nucleotide addition, the pRNA 3′ end is bound in the A-site in the pre-translocated state. Upon forward translocation, the 3′ end moves to the P-site and the incoming substrate NTP (blue) can bind to the A-site through base pairing with the acceptor base of the RNA template strand (black). The pre-translocated state is sensitive to pyrophosphorolysis, which generates the NTP product and a one-nt-shortened pRNA. Bottom: the SARS-CoV-2 RdRp transcription complex assembled on the 5′ Cy5.5-labeled hairpin, which comprises both the template and the product RNAs, is completely resistant to PPi even in the absence of NTPs. (B) GMP removal from nsp9 in the presence of PPi. (C) The handover assay, in which pre-GMPylated nsp9 is incubated with WT or mutant nsp12 variants. Signal intensity was compared to that observed with nsp9 incubated with buffer (set at 1) and is shown as mean ± SD (n = 3).]
In the presence of 0.5 mM PPi, we observed rapid disappearance of the labeled nsp9 (Figure 3B), indicating that NMPylation is reversible; similar results were obtained with the nsp12•7•8₂ holoenzyme (Supplementary Figure S7). In nsp12, two active sites mediate NMP transfer. A model in which NMPylated nsp9 serves as a primer for RNA synthesis implies that nsp9 binds to AS1 and positions the NMP for extension (20). Thus, both active sites could in principle mediate the PPi-driven de-NMPylation. To evaluate the contribution of each active site, we carried out a 'handover' assay, in which the histidine-tagged nsp12 used to NMPylate nsp9 was subsequently removed, and another, untagged nsp12 was added post facto (Figure 3C). We found that the WT and D760A nsp12s mediated de-NMPylation, whereas the D218A enzyme did not (Figure 3C), ruling out an essential contribution of AS1 to the reversal of nsp9 modification. Interestingly, while D760A is more efficient in NMPylating nsp9 (Figure 1C), it was slightly less efficient in the reverse direction. Thus, we cannot preclude the possibility of some involvement of AS1 in de-NMPylation, but the difference between the WT and D760A was not statistically significant (P = 0.16), necessitating a more detailed analysis with additional variants of AS1 and AS2 residues. Only a few examples of de-AMPylation are known, and most utilize different catalytic domains, either in the same or in different proteins (19). An example in which the same Fic domain mediates AMPylation and de-AMPylation of BiP, a major ER chaperone required for protein homeostasis in metazoans, has recently been reported (50). However, de-AMPylation releases AMP, not ATP, showing that the FicD active site has both AMP transferase and phosphodiesterase activities (50). To elucidate the mechanism of the 'reverse' reaction catalyzed by the NiRAN domain, we analyzed the products of PPi-induced de-GMPylation of nsp9 using thin-layer chromatography (Supplementary Figure S8). Our results suggest that de-GMPylation generates GTP and is therefore a true reversal of the forward reaction. However, the product pattern is complicated owing to an intrinsic nucleotide hydrolysis activity of nsp12 (Supplementary Figure S8), also observed by Yan et al. (21). Future experiments will be required to reveal the detailed mechanisms of all catalytic reactions catalyzed by RdRp.

Bisphosphonates inhibit nsp9 modification

Strong inhibition of the NMPylation reaction by PPi (Figure 3B) suggests that similar ligands that bind to AS2 (Figure 4A) may competitively inhibit nsp9 modification or trigger its reversal. To evaluate this possibility, we used chemically stable PPi analogs, bisphosphonates. We chose two FDA-approved compounds, Foscarnet (Fos) and Risedronate (Ris), as representative non-nitrogenous and nitrogenous bisphosphonates, respectively. Fos inhibits viral DNA polymerases, including HIV reverse transcriptase (51,52), and is used for the treatment of infections caused by viruses in Herpesviridae. Ris is broadly used to treat diseases associated with bone loss, such as osteoporosis (53). We show that Fos and Ris inhibit nsp9 NMPylation, although less efficiently than PPi (Figure 4B). While PPi reduced NMPylation more than ten-fold when present at 50 μM, only two-fold inhibition was achieved at 0.75 mM of either bisphosphonate (Figure 4B). These results suggest that while PPi actively promotes de-NMPylation, bisphosphonates may act solely as competitive inhibitors of the forward reaction.
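As a rough way to put these fold-inhibition values on a common scale, one can convert them into apparent inhibition constants under the simplest assumption of purely competitive inhibition with GTP well below its K_m, so that v/v₀ ≈ 1/(1 + [I]/K_i). This model and the numbers below are our own back-of-envelope estimates, not values reported in the text:

\[ K_i \approx \frac{[I]}{v_0/v - 1}, \qquad K_i^{\mathrm{PP_i}} \lesssim \frac{50\ \mu\mathrm{M}}{10-1} \approx 6\ \mu\mathrm{M}, \qquad K_i^{\mathrm{Fos/Ris}} \approx \frac{750\ \mu\mathrm{M}}{2-1} \approx 750\ \mu\mathrm{M} . \]

On this crude estimate, the bisphosphonates engage AS2 roughly two orders of magnitude more weakly than PPi itself, which is consistent with their modest inhibition in Figure 4B.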
Indeed, unlike PPi, neither compound induced the removal of the GMP moiety from nsp9 (Figure 4C). We propose that bisphosphonates could be explored as inhibitors of NiRAN-mediated NMPylation; while neither of the two compounds tested was a potent inhibitor, many bisphosphonates are available or can be made to support structure-guided drug discovery.

Roles and targets of vital NiRAN NMPylation

The NiRAN domain is essential for the replication of several human respiratory viruses, including the alphacoronavirus HCoV-229E, which causes the common cold, and the betacoronaviruses SARS-CoV (18) and SARS-CoV-2 (20). The nucleotidylation activity of the NiRAN domain, which lacks any sequence homologs, was initially suggested by elegant bioinformatics analysis and confirmed by a proof-of-principle demonstration that EAV RdRp was capable of self-NMPylation (16-18). Structural similarities between SelO and NiRAN (16,17), further strengthened by the identification of nsp9 as a NiRAN target among HCoV-229E proteins (20), argue that the NiRAN domain is a protein NMPylase. In their pioneering study suggesting and confirming the existence of the NiRAN domain, Lehmann et al. postulated three potential roles of NiRAN-mediated NMPylation in the nidoviral replicative cycle (18). One possible role is that of an RNA ligase, although the identity of the substrates, and indeed the step itself, remains entirely hypothetical to date. Another is that of a guanylyltransferase (GTase) involved in 'capping' the 5′-end of transcribed RNA. Such capping is essential for viral replication and successful host infection, and all enzymes involved in the capping pathway, save the GTase, had already been identified years previously. The third possibility is that it serves as a protein 'primer' of RNA synthesis, by covalently binding a nucleotide and, following its extension to a dinucleotide, delivering it to the 3′-end of the viral RNA template. Such priming is widely used across viral families (54). In discussing each of these putative roles, the authors noted that the sum total of the structural, functional, and phylogenetic evidence then available, including their own findings, could not be entirely reconciled with any single role, much less definitively preclude the two others. Several recent studies have been less hesitant, assigning to NiRAN exactly one of these roles. First, Yan et al. posited that NiRAN performs a 'capping' role. In support of this assignment, they cited primarily structural arguments based on a cryoEM snapshot of an RdRp-helicase complex in which the N-terminus of nsp9 was observed deep within the NiRAN active site, where it contacted a bound GDP molecule in a conformation stabilized by base-stacking with the His75 residue (21). They reasoned therefore that nsp9 must be either the target of NiRAN NMPylation or a competitive inhibitor of it, concluding the latter since their functional assays detected the formation of capped RNA but not the NMPylation of nsp9. Second, Slanina et al. posited instead that NiRAN NMPylates nsp9, which then serves as a primer of RNA synthesis (20). Their functional studies provided direct evidence of NMPylation of nsp9 and NiRAN mediation thereof, with mutational and phylogenetic data supporting the additional conclusions that this NMPylation requires a free N-terminus and allows little variation within the N-terminal tripeptide (Figure 1B).
In particular, the indispensability both of Asn2 for nsp9 NMPylation in vitro and of NiRAN activity for viral replication provided a profound and elegant explanation of why Asn2 is the only invariant residue across all nsp9 homologs (20).

Passing the baton: a speculative but integrative model

How can such findings be reconciled with one another, let alone with preceding or succeeding findings, including our own? Our results unequivocally demonstrate the importance of the native N-terminus of nsp9 for its NMPylation (Figure 1C), and thus we concur with Slanina et al. in arguing that the failure of Yan et al. to observe any such NMPylation is entirely due to their use of an artificially extended nsp9. The conclusion put forward by the latter, that NiRAN must therefore cap 5′ pRNA, and do so directly, is thus unfounded. However, if NiRAN is not the GTase 'missing link' in the capping pathway, no obvious candidate for this essential function remains. As expected from the lack of sequence-specific contacts between the nucleotide base and NiRAN residues (17,20), we found that all natural NTPs compete with GTP (Figure 2B), suggesting that nsp9 can be modified by diverse nucleotides (including remdesivir monophosphate, Supplementary Figure S5), and their respective cellular abundances will largely determine the identity of the adduct. However, it is possible that AS2 specificity may be 'tuned' in the presence of other RTC components. We also show that, unlike RNA chain synthesis, nsp9 modification is readily reversible in the presence of PPi (Figure 4B and Supplementary Figure S8) and that nsp9 interactions with AS2 are highly dynamic, i.e., NMPylated nsp9 released from nsp12 can be handed over to another enzyme for de-NMPylation (Figure 3C). Finally, we show that ligands that bind to AS2, including nucleoside mono- and diphosphates (Figure 2C) and bisphosphonates (Figure 4B), inhibit nsp9 NMPylation. Taken together, these results strongly argue for NMPylation of nsp9 at NiRAN AS2. If so, to what end? nsp9 binds RNA, with no apparent sequence specificity (23,25), and nsp12 (21), but it is not clear how Asn1 modification would affect either interaction: the residues thought to bind RNA are far away from Asn1 (23,25), and our results are inconsistent with any significant thermodynamic contribution of the NMPylation of nsp9 to its binding to nsp12 (Figure 1D). Rather, nsp9 appears to be ideally suited to deliver NMP to secondary acceptors: the NMP moiety is attached to the primary amine of the N-terminal Asn1 (20) located at the end of a flexible N-terminal tail, and protein-N-NMP linkages are common in nucleotidyl transferases that catalyze ligation and capping reactions (55,56). Therefore, we envision an essential role for NMPylated nsp9 in both priming and capping (Figure 5), perhaps as vital to the outcome as a baton passed between runners in a race. First, nsp9 is NMPylated by the NiRAN domain at AS2 and then dissociates from nsp12. Second, NMP-nsp9 binds to AS1 and serves as a primer for RNA synthesis; although nsp9 is not known to bind to specific RNA sequences (23,25), it is possible that, when bound to an RTC, NMP-nsp9 recognizes a specific sequence/structure in the viral RNA to direct precise initiation. It is likely that different RdRp complexes synthesize the (+) and (-) RNA strands, complicating this analysis; priming of (-) strand synthesis by NMP-nsp8 has recently been proposed (37).
Third, as the nascent pRNA chain grows and is displaced from the template RNA, pRNA-nsp9 rebinds to AS2 and a second nucleotidyl transfer reaction takes place to cap the pRNA, releasing the unmodified nsp9 and resetting the cycle.

Avenues and implications for future research

We admit that this is a very speculative model and propose it to provoke investigation rather than to provide concrete answers. Mechanistic studies of SARS-CoV-2 RTCs are in their infancy, and future experiments will be needed to elucidate various aspects of its function and regulation. However, we argue that this model is a worthy starting point for several avenues of future research. Below we give several reasons for this claim and answer some anticipated objections. First, such a capping mechanism is not unprecedented, for an analogous one has been described in rhabdoviruses, such as vesicular stomatitis virus (VSV). VSV encodes a giant 2100-residue L protein, which contains RdRp, nucleotidyl transferase, and methyl transferase modules (54). Via a covalent (L-histidyl-Nε2)-pRNA intermediate, L transfers the pRNA moiety to GDP to yield GpppA-RNA (54). Can NiRAN use GDP as an acceptor? We show that GDP competes with GTP during nsp9 NMPylation (Figure 2C), and the concentration of GDP in infected cells may be sufficient (57). While very little is known about the NiRAN catalytic mechanism, other AMPylases possess surprising catalytic diversity: in addition to NMP, Fic proteins can transfer phosphocholine and phosphate to their targets (58,59). Second, nsp9 can be more than just a passive delivery vehicle for pRNA. Capping enzymes are composed of a nucleotidyl transfer domain fused to a distal OB-fold domain (55), suggesting that the nsp9 OB-fold domain (23,25) may cooperate with the NiRAN domain during pRNA capping, remodeling AS2. For example, the Gre and TFIIS transcription factors, which reactivate arrested RNA polymerases in all domains of life, deliver the second catalytic Mg²⁺ ion to the active site to switch it into an RNA cleavage mode (48). Such remodeling of AS2, possibly in partnership with other components of the RTC, might mediate sequence specificity of nsp9 NMPylation in order to prime the initiation of positive- versus negative-polarity RNA. Third, we also admit that neither we nor others have definitively precluded all other possible protein targets of NiRAN. Enzymes that catalyze post-translational protein modifications, including AMPylation, have broad specificities: SelO was found to transfer biotin-AMP to a number of targets, including common control substrates, and only some cellular targets of SelO are thought to be genuine (22). The high efficiency of NMP transfer to nsp9 (Supplementary Figure S3A), the conservation of the nsp9 N-terminus (Figure 1B), and the essentiality of the N-terminal nsp9 residues for viral replication (19) all argue that nsp9 is a true protein target of NiRAN. In addition to nsp9, NiRAN could also modify some other viral or host proteins, complicating the extension of in vitro results to the viral replicative cycle or to the infection process as a whole. An interesting question, prompted by the RdRp self-NMPylation observed in several studies (18,20,30), is whether the NiRAN domain is autoinhibited in the absence of a cognate substrate, a common feature among AMPylating enzymes (19). A conformational change upon substrate binding would trigger displacement of the autoinhibitory module, a target of self-NMPylation, making the active site accessible.
Our observation that nsp12 GMPylates BSA only in the presence of both Mn²⁺ and nsp9 (Supplementary Figure S4), together with the lack of BSA GMPylation reported by Conti et al. (30), is consistent with this idea. Fourth, our model has at minimum the virtue of not merely reconciling various seemingly contradictory findings, but also suggesting how they might be integrated into a more holistic understanding of the role of NiRAN, in concert with RNA and protein factors, in the entire nidoviral replicative cycle. We recently showed that over-optimization of the SARS-CoV-2 RdRp coding sequence to replace rare codons in a heterologous expression platform can lead to an inactive enzyme (26). We also showed that AS1 and AS2 not only share substrates and inhibitors but also 'crosstalk' via an allosteric pathway, and therefore that drug discovery and functional studies are myopic to focus exclusively on AS1 and unjustified in judging it to be the cause of all observed effects. Similarly, a recent study found that whereas all existing cryoEM structures of SARS-CoV-2 RdRp modeled nsp12 as chelating Zn centers, the physiological cofactors are in fact Fe-S clusters, which become replaced by Zn²⁺ ions in the aerobic conditions in which proteins are typically purified (60). Furthermore, such clusters were found to be essential, for their disassembly via oxidative degradation inhibited both RdRp activity and viral replication. This result, obtained using a well-characterized nitroxide, suggests a potentially rich vein of COVID-19 therapeutics that might have been completely overlooked had the suitability of the cryoEM structural preparations not been properly questioned. Both these previous results and those presented here clearly demonstrate the inherent dangers in using reductionist approaches to draw conclusions about more complex and holistic systems, such as viral replicative cycles and processes of host infection. Such approaches have advantages for quickly yielding insights into a narrow and well-defined question, and so it is wholly understandable why they are particularly attractive for research into systems like SARS-CoV-2 RdRp, where pressing concerns motivate researchers to obtain practical results as rapidly as possible. On the other hand, such frenetic research can easily outpace the self-correction normally occurring in science, suggesting that wherever possible, researchers should strive to holistically validate reductionist findings (e.g. verifying replication of mutant viruses in cell culture) and clearly communicate aspects of research methods that might be expected to restrict the applicability of their results.

DATA AVAILABILITY

All data that support the findings of this study are available from the corresponding author upon request.
Low expression of circulating microRNA-328 is associated with poor prognosis in patients with acute myeloid leukemia

Background: Dysregulation of circulating miR-328 has been identified in several tumors and is associated with the prognosis of patients. However, the expression pattern of miR-328 and its impact on prognosis have not yet been studied in acute myeloid leukemia (AML). The purpose of this study is to investigate the expression status of miR-328 and its clinical significance in AML patients. Methods: RNA was extracted from the plasma of 176 patients with newly diagnosed AML and 70 healthy volunteers. miR-328 expression was examined by real-time quantitative PCR. The association of circulating miR-328 expression with clinicopathological factors and prognosis of AML patients was statistically analyzed. Results: The expression of miR-328 was significantly downregulated in AML patients (median value 22.99, range 3.63-242.0) compared with healthy controls (median value 89.17, range 12.05-397.7; P < 0.001), and miR-328 expression was markedly increased in patients after treatment compared with before (23.40 ± 1.76 vs. 46.61 ± 3.83, P < 0.001). Moreover, low levels of miR-328 were associated with a higher white blood cell count and BM blast count (P = 0.026 and P = 0.003, respectively), and lower hemoglobin and platelet counts (P = 0.004 and P = 0.022, respectively). Patients with low miR-328 expression had relatively poor overall survival (P = 0.022) and shorter relapse-free survival (P = 0.008) compared with those with high miR-328 expression. In addition, low miR-328 expression was an independent prognostic factor for both OS (P = 0.017) and RFS (P = 0.023). Conclusions: Circulating miR-328 downregulation is a common event and is associated with poor clinical outcome in AML patients.

Background

Acute myeloid leukemia (AML), the most common type of acute leukemia in adults, is a clonal disorder caused by an accumulation and differentiation arrest of myeloid blasts in the bone marrow and blood. The pathologic mechanism of AML can be largely explained by cytogenetic aberrations, acquired mutations and dysregulated gene expression [1,2]. Based on cytogenetic information, AML patients are classified into three risk-based categories: favorable, intermediate, and poor, with 5-year overall survival (OS) rates of 55%, 24%-42%, and 11%, respectively [3]. Treatment of AML has dramatically improved over the past several decades, with improvements in risk assessment, post-remission chemotherapy and hematopoietic stem-cell transplantation. However, the cause of AML is not yet fully understood. Therefore, early and accurate diagnosis of AML is essential for an optimal treatment outcome and may deeply improve the prognosis of patients with AML. MicroRNAs (miRNAs) are a class of non-coding small RNAs of ~22 nucleotides that regulate the expression of target genes at the post-transcriptional level [4]. MicroRNAs function by directly binding to their potential target sites in the 3′ untranslated regions (3′UTRs) of specific target mRNAs, resulting in the repression of mRNA translation or the degradation of target mRNAs [5]. Since the discovery of the first miRNAs, these small genes have added a new layer of complexity to the regulation of normal and pathological cell functions. Recent studies have indicated a key role of miRNAs in biological processes including cell proliferation, differentiation, and apoptosis, as well as in cancers and cardiovascular diseases [5,6].
Currently, aberrant expression of miRNAs appears to be a common characteristic of hematological malignancies, including leukemias [7,8]. Dysregulation of single miRNAs such as miR-212 [9], miR-124-1 [10], miR-181 [11] and let-7a-3 [12] has been found to be associated with the outcome of AML patients. Recently, it has been reported that miRNAs are present in serum or plasma in a stable and reproducible fashion, and the unique expression patterns of serum or plasma miRNAs can be used as a new class of effective biomarkers for various diseases [13-15]. MiR-328, known as a tumor suppressor, is involved in cancer development and progression [16,17]. MiR-328 was reported to be downregulated in chronic myelogenous leukemia blasts and in glioblastoma tissues. However, a previous report found that peripheral blood miR-328 expression was upregulated in non-small cell lung cancer (NSCLC) patients [18]. Wang et al. found that plasma miR-328 concentrations were significantly elevated in acute myocardial infarction (AMI) patients compared with control subjects [19]. However, to the best of our knowledge, no previous reports exist concerning the expression status of circulating miR-328, its prognostic value, or the role of this miRNA in AML. Thus, the aim of the present study was to investigate the correlation of circulating miR-328 with clinicopathological features as well as the prognosis of patients with AML. Our findings may provide a better understanding of the roles and clinical implications of circulating miR-328 in the development and progression of AML.

Patients and follow-up

From February 2010 to September 2014, 176 newly diagnosed de novo AML patients from the Department of Hematology at Tangdu Hospital of Fourth Military Medical University were enrolled in this study; there were 86 males and 90 females, with a median age of 39.7 (range 16.2-67.6) years. 70 unrelated healthy adult donors were recruited as controls; all the control subjects were matched with the patient population in terms of age and sex. None of these controls had previously been diagnosed with any type of malignancy or other benign disease. AML patients were diagnosed according to standard diagnostic methods including cytomorphological, cytochemical, immunological and cytogenetic evaluation. The diagnosis and classification of AML patients were based on the French-American-British (FAB) and World Health Organization (WHO) criteria, combined with immunophenotyping and cytogenetic analysis [20-23]. 124 patients received standard induction chemotherapy consisting of 1 or 2 courses of daunorubicin (45 mg/m² daily for 3 days) combined with cytarabine (100 mg/m²) by a 7-day continuous intravenous infusion. AML complete remission (CR) was defined as a normocellular BM containing less than 5% blasts and showing evidence of normal maturation of other marrow elements, a neutrophil count of 1 × 10⁹/L, and a platelet count of 100 × 10⁹/L. 76 patients achieved CR and were then given high- or medium-dose cytarabine-based chemotherapy for consolidation according to their physical condition. Patients were followed up for a median of 26 months (range 5-51 months); patients without death or relapse by the time of last follow-up were censored on that date. Overall survival (OS) was defined as the time from the diagnosis of AML to death from any cause. Relapse-free survival (RFS) was defined as the time between the achievement of complete remission and hematological relapse or death.
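Since censoring matters for both endpoints, here is a minimal sketch (our own illustration, with made-up dates) of how an OS time and its event flag can be derived from the definitions above:

# Deriving an overall-survival time and event flag from the definitions above.
# All dates are invented for illustration.
from datetime import date

diagnosis      = date(2011, 3, 2)
death          = None                  # None -> alive at last follow-up
last_follow_up = date(2013, 5, 20)

# Event = 1 if the patient died; otherwise censor at last follow-up.
event = 1 if death is not None else 0
end   = death if death is not None else last_follow_up
os_months = (end - diagnosis).days / 30.44   # average month length

print(f"OS = {os_months:.1f} months, event = {event}")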
This study was approved by the Ethics Committee Board of Tangdu Hospital of Fourth Military Medical University. Informed consent was obtained from each participant according to the committee's regulations. Details of the clinical characteristics of the patients are provided in Table 1.

Plasma collection and RNA extraction

Blood samples were collected in EDTA-K2 tubes and processed within 1 h of collection. Cell- and nucleic acid-free plasma was isolated from all blood samples using a 2-step centrifugation protocol (3000 g for 10 min and 12000 g for 5 min, all at 4°C). The supernatant was transferred to RNase/DNase-free tubes and stored at −80°C. The plasma was first spiked with miScript miRNA mimic SV40 (Qiagen, Hilden, Germany; 2 μM, 1 μl per 100 μl plasma). Total RNA was isolated from the plasma using TRI Reagent BD (MRC, USA) according to the manufacturer's instructions and dissolved in 20 μl of RNase-free water. RNA sample concentration was quantified on a NanoDrop ND-2000 (Thermo Fisher Scientific, USA). RNA quality was generally checked by the A260/A280 and A260/A230 ratios, and RNA integrity was assessed by electrophoresis through denaturing agarose gels.

qRT-PCR analysis of plasma miR-328

Total RNA (1 μg) from each sample was converted into cDNA using the PrimeScript RT reagent kit with gDNA Eraser (TaKaRa, Japan) and a miRNA-specific stem-loop RT primer or SV40 primers (Applied Biosystems, USA). Briefly, the reverse transcription reaction was performed in a 20 μL mixture containing 10 μL of genomic DNA elimination reaction solution, 4 μL 5× PrimeScript Buffer, 1 μL PrimeScript RT Enzyme Mix, 1 μL stem-loop RT primer or SV40 primers, and 2 μL RNase-free water. For synthesis of cDNA, the reaction mixture was incubated at 42°C for 15 min and 85°C for 5 s, and then held at 4°C. Quantitative reverse transcriptase polymerase chain reaction (qRT-PCR) was performed on an ABI 7500 Fast real-time PCR system (Applied Biosystems, USA) using SYBR Premix Ex Taq. Ct values greater than 36 were considered as not expressed. Resultant miRNA levels were normalized to the spiked-in SV40. The relative expression level of miR-328 was calculated as 2^(−ΔCt), where ΔCt = Ct(miR-328) − Ct(spiked-in SV40) [24]. Fold changes in miR-328 were calculated using the 2^(−ΔΔCt) method [25]. Each sample was analyzed in triplicate and the mean expression level was calculated.

Statistical analysis

Statistical analysis was performed with SPSS 16.0 for Windows (SPSS, Chicago, IL). Continuous data are presented as mean ± SD or median with interquartile range. Categorical variables are presented as counts and percentages. The Mann-Whitney U-test was used to evaluate the significance of the difference in miR-328 expression between AML patients and healthy controls. The paired t-test was used to evaluate the difference in miR-328 expression before and after chemotherapy. Chi-square analysis or Fisher's exact test was performed to evaluate differences in categorical variables. Univariate logistic regression analyses of the association with the risk of survival and relapse in AML were first tested for miR-328 expression, age, gender and other clinical characteristics, and those factors were then included in a second, multivariate analysis. Survival curves were plotted using the Kaplan-Meier method, and differences were tested using the log-rank test. Differences were considered statistically significant when the P value was less than 0.05.
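Both calculations described in this section, the 2^(−ΔCt)/2^(−ΔΔCt) quantification and the Kaplan-Meier/log-rank comparison, are easy to sketch in Python. The snippet below is a minimal illustration with invented numbers, using the third-party lifelines package for the survival part; it is not the authors' SPSS analysis.

# Minimal sketches of the two calculations above; all numbers are invented.
# The survival part uses the `lifelines` package (pip install lifelines).
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# --- 2^(-dCt) relative quantification against the spiked-in SV40 ---
def rel_expression(ct_target: float, ct_reference: float) -> float:
    """2^(-dCt): target level relative to the spiked-in reference."""
    return 2.0 ** (-(ct_target - ct_reference))

patient = rel_expression(ct_target=31.2, ct_reference=24.8)  # hypothetical Ct values
control = rel_expression(ct_target=28.9, ct_reference=24.7)
fold_change = patient / control                              # the 2^(-ddCt) fold change
print(f"patient 2^-dCt = {patient:.4f}, fold change vs. control = {fold_change:.3f}")

# --- Kaplan-Meier curves and log-rank test for two expression groups ---
# Hypothetical follow-up times (months) and event flags (1 = event, 0 = censored).
low_t,  low_e  = [5, 9, 12, 14, 20, 26, 33], [1, 1, 1, 0, 1, 1, 0]
high_t, high_e = [11, 18, 24, 30, 38, 45, 51], [1, 0, 1, 0, 0, 1, 0]

kmf = KaplanMeierFitter()
kmf.fit(low_t, event_observed=low_e, label="low miR-328")
ax = kmf.plot_survival_function()
kmf.fit(high_t, event_observed=high_e, label="high miR-328")
kmf.plot_survival_function(ax=ax)

result = logrank_test(low_t, high_t, event_observed_A=low_e, event_observed_B=high_e)
print(f"log-rank P = {result.p_value:.3f}")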
MiR-328 was downregulated in AML patients

miR-328 expression levels were measured in plasma samples from patients with AML and healthy controls by qRT-PCR. As shown in Fig. 1a, plasma miR-328 levels were significantly lower in AML patients (median 22.99) than in healthy controls (median 89.17; P < 0.001).

Correlations between the levels of miR-328 and clinicopathological factors in AML patients

To identify the clinical relevance of miR-328 expression in AML patients, correlations between miR-328 expression and clinicopathological parameters were examined. AML patients expressing miR-328 at levels below the mean expression level (33.1) were assigned to the low-expression group (mean expression value 20.87, n = 125), and those with expression above the mean were assigned to the high-expression group (mean expression value 63.03, n = 51). As shown in Table 1, low levels of miR-328 were associated with higher white blood cell and BM blast counts (P = 0.026 and P = 0.003, respectively), and with lower hemoglobin and platelet counts (P = 0.004 and P = 0.022, respectively). However, other clinical characteristics, including age (P = 0.997), gender (P = 0.847), FAB subtype (P = 0.909), WHO classification (P = 0.074) and karyotype classification (P = 0.570), were not directly related to low miR-328 levels.

Association between miR-328 expression and clinical outcomes of AML patients

To investigate the prognostic impact of low miR-328 expression in AML, survival analysis was performed in all 176 cases. There were no differences in OS or RFS between the two groups (P = 0.137 and P = 0.339, data not shown). Among the 176 cases, 124 patients received standard induction chemotherapy. The CR rate after two cycles of chemotherapy was 44.0% (55/125) in the low-expression group, compared with 41.2% (21/51) in the high-expression group (P = 0.861); there was no significant difference between the two groups. Moreover, the OS of the 124 AML patients with high miR-328 expression was shorter than that of those with low expression, but the difference was not statistically significant (P = 0.176). However, among those who obtained CR, the overall survival and relapse-free survival curves for the high-miR-328 group (n = 21) and the low-miR-328 group (n = 55) are shown in Fig. 2. Patients with low miR-328 expression showed significantly poorer overall survival (P = 0.022, Fig. 2a) and shorter relapse-free survival (P = 0.008, Fig. 2b) than those with high miR-328 expression. Univariate analyses showed that a higher white blood cell count (P = 0.004), lower hemoglobin (P = 0.009), lower platelet count (P = 0.017), higher BM blast count (P = 0.012) and lower miR-328 level (P = 0.009) were significantly associated with OS (Table 2), while a higher white blood cell count (P = 0.009), lower hemoglobin (P = 0.04) and lower miR-328 level (P = 0.01) were found to be prognostic factors for RFS (Table 2). Furthermore, multivariate Cox regression analysis revealed that low miR-328 expression was an independent prognostic factor for both OS and RFS (Table 2).

Discussion

Nowadays, it is becoming evident that aberrant expression patterns of microRNAs are a common characteristic of hematological malignancies including leukemias, and some of them can be a valuable tool for the diagnosis and prognosis of human cancer [8,26]. Recently, it has been reported that microRNAs circulate in serum/plasma. Additionally, microRNAs such as miR-134 [19], miR-218 [27,28], miR-150 and miR-324 [29] in human serum or plasma have been shown to have much stronger stability than high-molecular-weight RNA due to their resistance to RNase digestion [15].
These findings make microRNAs potentially non-invasive tools for cancer diagnosis using blood samples [15]. The present study has confirmed, for the first time, that plasma miR-328 may serve as a useful diagnostic and prognostic biomarker for patients with AML. MiR-328 has been suggested to be a tumor suppressor, targeting the proto-oncogene serine/threonine-protein kinase PIM1 and the translational regulator protein hnRNP E2 [26]. Eiring et al. reported that miR-328 is downregulated in chronic myelogenous leukemia blasts, and low expression of miR-328 in CML is associated with progression to the blast crisis phase of the disease [16]. Wu et al. observed that miR-328 expression is decreased in high-grade gliomas and is associated with worse survival in primary glioblastoma [17]. However, miR-328 has also been found expressed at high levels in several cancers. Ulivi et al. reported that circulating miR-328 expression was significantly higher in non-small cell lung cancer (NSCLC) patients than in healthy donors [18]. Wang et al. found that plasma miR-328 concentrations were significantly elevated in acute myocardial infarction (AMI) patients compared with control subjects [19]. In our research, the plasma concentration of miR-328 was markedly downregulated in patients with newly diagnosed AML compared with healthy controls. Moreover, the expression of miR-328 was significantly elevated after chemotherapy when patients achieved CR, suggesting that the expression of miR-328 tracks tumor burden. Our results were consistent with other studies regarding CML and glioblastoma [16,17], indicating that miR-328 plays an essential role in the origin and/or progression of AML. MiR-328 is proposed as a suppressor gene because its expression is decreased in several types of cancers and it mediates proliferation, invasion and metastasis of cancer cells. It has been demonstrated that enforced expression of miR-328 can remarkably attenuate glioma cell proliferation, invasion and migration [30]. MiR-328 can also inhibit epithelial-mesenchymal transition (EMT) via targeting CD44 [31]. These findings indicate that miR-328 plays a direct role in the modulation of cancer progression and may be useful as a novel prognostic or progression marker for cancer. In the current study, we found that downregulation of miR-328 in AML patients was significantly associated with higher WBC and BM blast counts, and lower HGB and PLT counts, which represent more aggressive clinicopathological features. In addition, AML patients with low miR-328 expression tended to have poorer OS and RFS than those with high miR-328 expression, indicating that the expression of miR-328 has an important value in AML prognosis classification. In a logistic regression analysis, an association was observed between miR-328 expression and the risk of both death and relapse in AML patients: patients with low expression of miR-328 had a higher risk of death (OS, P = 0.009) and relapse (RFS, P = 0.017) than patients with high miR-328 expression. In addition, multivariate analyses showed that low miR-328 expression is an independent predictor for OS (HR = 2.67, 95% CI 1.12-4.73; P = 0.017) and RFS (HR = 1.914, 95% CI 1.01-3.27; P = 0.023) in AML patients, in agreement with recent findings in glioblastoma [17].
Taken together, our results suggest that circulating miR-328 may function as a suppressor gene in the development of AML and may have an adverse effect on prognosis in a subset of AML patients.

Conclusion

In summary, our study offers evidence, for the first time, that circulating miR-328 is downregulated in AML patients, and that a lower miR-328 level is closely associated with distinct clinical and biological characteristics in AML patients. Furthermore, a lower miR-328 level is an independent poor prognostic factor for OS and RFS. However, the precise molecular mechanisms by which miR-328 is downregulated in AML need further investigation.
$^{6}$He + $\alpha$ clustering in $^{10}$Be

In a kinematically complete measurement of the $^{7}$Li($^{7}$Li,$\alpha$$^{6}$He)$^4$He reaction at $E_{i}$ = 8 MeV it was observed that the $^{10}$Be excited states at 9.6 and 10.2 MeV decay by $^{6}$He emission. The state at 10.2 MeV may be a member of a rotational band based on the 6.18 MeV 0$^+$ state.

...the only way in which the ⁶He+α+n final state can be reached. The ¹¹Be highly excited states can also decay into the ⁶He+⁵He and α+⁷He channels, with subsequent disintegration of the neutron-unstable ⁵He and ⁷He nuclei. Taking into account the experimental conditions in the ¹¹Li decay measurements, the involvement of the ¹⁰Be states and their α+⁶He decay cannot be unambiguously claimed. Another indication of a possible ⁶He+α decay of the ¹⁰Be states came from studies of the ⁷Li(⁷Li,⁶He)⁸Be reaction [3]. The ⁶He spectra could not be explained exclusively by contributions from the sequential processes through different ⁸Be states. A broad structure in the total ⁹Be(n,α)⁶He reaction cross section [4], centered around 9.6 MeV in ¹⁰Be, may be another indication of the α+⁶He clustering of the states in this region. Nuclei in the middle of the 1p shell exhibit a collective nature. Although the independent particle model can account for many features of these nuclei, there are many exceptions, such as enhanced electromagnetic transitions, large quadrupole moments, 'unexpected' low-lying non-normal-parity states (like the 1/2⁺ ground state in ¹¹Be), large rms radii and α-decay widths, etc. Some of these properties can be easily explained by the cluster structure of the nuclei. There have been several theoretical studies of the structure of the A = 10 nuclei. Gabr and Hackenbroich [5] chose the cluster functions of ¹⁰B to belong to the spatial symmetry [442], which corresponds to an ⁸Be core [44] plus an extra deuteron, or a ⁶Li core [42] plus an extra α-cluster. The intercluster relative motions were represented by a small number of Gaussian functions. Only positive-parity states at low excitations were determined. In a two-α-particle-plus-dinucleon cluster model by Nishioka [6], both ¹⁰Be and ¹⁰B states were calculated. The (1₃⁺,0), (0₂⁺,1), and (2₃⁺,1) ¹⁰B level energies were reproduced, which had not been the case in any other model investigation. These states were found to have a well-developed ⁶Li(g.s.)+α or ⁶Li(0⁺,1)+α cluster structure, respectively. In another investigation, the 10-nucleon system was studied with the multiconfiguration and multichannel resonating group method [7]. The model space employed was spanned by the α+⁶Li, d+⁸Be, d+⁸Be*, and α+⁶Li* cluster configurations, with ⁶Li* and ⁸Be* being the rotational excited states of ⁶Li and ⁸Be having d+α and α+α cluster structure and L = 2. Bound and resonant levels obtained in this way correspond fairly well to the known low-lying states. However, in all these theoretical studies the states at higher excitations (> 9 MeV) were not investigated. These states can easily be reached by ⁷Li+⁷Li reactions.
The $^{7}$Li($^{7}$Li,α$^{6}$He)$^{4}$He reaction was chosen for the search for the α+$^{6}$He cluster states in $^{10}$Be for the following reasons: i) it has a high positive Q-value (7.37 MeV), allowing measurements at low energies; ii) only the well-known $^{8}$Be states (0, 3.0, and 11.4 MeV), together with those from $^{10}$Be, can contribute to the coincident spectra; iii) the complex nature of the reaction at low energies should be more suitable for the excitation of these special states than some "simple" reactions like (d,p), (α,$^{3}$He), etc.

Experiment – An 8 MeV $^{7}$Li beam of ~3 particle-nA from the Ruder Bošković Institute EN Tandem Van de Graaff accelerator was used to bombard isotopically enriched $^{7}$LiF targets (100–320 µg/cm$^2$) evaporated on a thin carbon backing. Reaction products were observed with two solid-state detector telescopes, each consisting of a thin ∆E detector (9 µm) and a thick (280 µm) rectangular position-sensitive detector (PSD). Each PSD covered an angle of 12° on the horizontal and 1.5° on the vertical axis, with a horizontal angular resolution better than 0.3°. The telescopes were positioned on opposite sides of the beam. The measurements were performed for several setting angles between 40° and 65°. The particle energy, "position" and energy-loss pulses were recorded by a data acquisition system [8]. From these measured values the energy-momentum (EP) plots as well as Q-value spectra were made [9]. Other details on the experiment and analysis can be found in [10].
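The kinematically complete reconstruction sketched above (two particles detected, the third recovered from momentum conservation, with Q-values and pair relative energies built from the measured energies and angles) can be illustrated with a short nonrelativistic Python sketch. The masses and the α separation energy are rounded, and the example detected energies and angles are hypothetical illustration values, not the experimental calibration:

```python
import numpy as np

# Approximate nuclear masses (MeV/c^2); values rounded for illustration.
M7LI, M4HE, M6HE = 6533.8, 3727.4, 5605.5
Q_GS = 7.37          # ground-state Q-value of 7Li(7Li, a 6He)4He, MeV
S_ALPHA_10BE = 7.41  # approximate alpha + 6He separation energy of 10Be, MeV

def momentum(E, m, theta_deg):
    """Nonrelativistic lab momentum vector (MeV/c) in the reaction plane."""
    p = np.sqrt(2.0 * m * E)
    th = np.radians(theta_deg)
    return p * np.array([np.sin(th), np.cos(th)])  # (transverse, beam axis)

def kinetic(p, m):
    return np.dot(p, p) / (2.0 * m)

# Example event: alpha and 6He detected at +/-45 deg with assumed energies.
E_beam = 8.0
p_beam = momentum(E_beam, M7LI, 0.0)
p_a = momentum(6.0, M4HE, +45.0)   # detected alpha (hypothetical energy)
p_h = momentum(4.0, M6HE, -45.0)   # detected 6He (hypothetical energy)

p_rec = p_beam - p_a - p_h          # undetected 4He from momentum conservation
E_rec = kinetic(p_rec, M4HE)
Q = kinetic(p_a, M4HE) + kinetic(p_h, M6HE) + E_rec - E_beam

# Relative energy of the alpha-6He pair -> 10Be excitation energy
mu = M4HE * M6HE / (M4HE + M6HE)    # reduced mass of the pair
v_rel = p_a / M4HE - p_h / M6HE     # relative velocity (units of c)
E13 = 0.5 * mu * np.dot(v_rel, v_rel)
E_x = E13 + S_ALPHA_10BE
print(f"Q = {Q:.2f} MeV, E13 = {E13:.2f} MeV, Ex(10Be) = {E_x:.2f} MeV")
```

Events whose Q matches the ground-state value (7.37 MeV) select the three-body channel, and the $^{10}$Be excitation energy then follows from the α–$^{6}$He relative energy plus the α separation energy.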
The "background" is due to the sequential processes through the 5 He and 13 C states from the same 7 Li( 7 Li,α 9 Be)n reaction as well as to the reactions on 19 F. (The low energy cut was made in order to avoid random coincidences caused by the 7 Li elastic scattering). Fig. 3 shows the results of the experiment: the 10 Be excitation energy spectra from the 7 Li( 7 Li,α 6 He) 4 He and 7 Li( 7 Li,α 9 Be)n reactions. The uncertainties of peak positions and the resolution in excitation energy were estimated by the peaks corresponding to the ( 7 Li,α) reactions to bound states of 10 Be. Their values were found to be <100 keV and 250 keV, respectively. In the α-9 Be spectrum two structures are visible. The first one corresponds to a doublet of 10 Be states (9.27 and 9.64 MeV) and the second one to a state at 10.57 MeV, all of them previously observed in different processes. Until recently the value of 9.4 MeV was quoted for the energy of the second member of the doublet. The present measurements support recent findings from the study of the 7 Li(α,p) 10 Be reaction [11] that the energy is somewhat higher, i. e. 9.64 MeV. This state also decays into α+ 6 He channel, which supports previous claims about the involvement of this state in one of the final stages of the 11 Li decay. The other spectrum, 6 He-α and α-α coincidencies, has a distinctive peak centered at 10.2 MeV. The width of this state is less than 400 keV. There hasn't been any mention of a state at this energy except for the 7 Li(α,p) 10 Be measurement [11]. A double peaked structure in this energy region can also be seen in an α-particle spectrum from the 7 Li( 7 Li,α) 10 Be reaction measured at 30.3 MeV [12]. This state does not decay into n+ 9 Be channel, which explains very well why it was not observed in any neutron transfer reaction on 9 Be [13]. One can also mention that its energy coincides with the threshold for 10 Be disintegration into two 5 He g.s. . It is interesting to note here that the proton angular distributions from the 7 Li(α,p) 10 Be reaction for the 10.2 and 11.8 MeV states are almost identical in shape [11]. If the 0 + and 2 + states. These two states, both in 10 Be and 10 B, are known to have well developed 6 He+α and 6 Li(0 + ,1)+α cluster structure, respectively (see e. g. [6]). Following the sequence of these states in 10 Be (6.18, 7.54, 10.2 MeV) and the first two in 10 B (7.56, 8.89 MeV) one may expect that the 10 B level at 11.5 MeV, the only well established state between 10.9 and 12.5 MeV, is the third member of this band. Small energy separation between these states would then imply a large moment of inertia, i. e. they would be very extended nuclear systems. Because the state at 10.2 MeV decays by emission of 6 He, the well established two-neutron halo nucleus (see e. g. [14]), and because other two states (6.18 and 7.54 MeV) have also the α+ 6 He structure, one may ask what is the relation between these states and the two-neutron halo states, which may be expected in 10 Be close to the 2n emission threshold (8.48 MeV). To conclude: the existence of the poorly known 10 Be state at 10.2 MeV is confirmed. This state decays into the α+ 6 He channel, but not into n+ 9 Be. Together with other two states (0 + at 6.18 MeV and 2 + at 7.54 MeV) it may make a rotational band of a very
Sequence and phylogenetic analysis of novel porcine parvovirus 7 isolates from pigs in Guangxi, China

Parvoviruses are a diverse group of viruses that infect a wide range of animals and humans. In recent years, advances in molecular techniques have resulted in the identification of several novel parvoviruses in swine. In this study, porcine parvovirus 7 (PPV7) isolates from clinical samples collected in Guangxi, China, were examined to understand their molecular epidemiology and co-infection with porcine circovirus type 2 (PCV2). Among the 385 pig serum samples, 105 were positive for PPV7, representing a 27.3% positive detection rate. The co-infection rate of PPV7 and PCV2 was 17.4% (67/385). Compared with the reference strains, we noted 93.9%–97.9% similarity in the NS1 gene and 87.4%–95.0% similarity in the cap gene. Interestingly, compared with the reference strains, sixteen of the PPV7 strains in this study contained an additional 3 to 15 nucleotides in the middle of the cap gene. The Cap protein of fourteen strains therefore encoded 474 amino acids, and that of the other two strains 470 amino acids, whereas the Cap protein of the reference strain PPV7 isolate 42 encodes 469 amino acids. This is the first report of sequence variation within the cap gene confirming an increase in the number of amino acids in the Cap protein of PPV7. Our findings provide new insight into the prevalence of PPV7 in swine in Guangxi, China, as well as sequence data and phylogenetic analysis of these novel PPV7 isolates.

Introduction

The family Parvoviridae is classified into two subfamilies, Parvovirinae and Densovirinae, whose hosts are vertebrates and arthropods, respectively [1,2]. Most members of the subfamily Parvovirinae cause only mild clinical symptoms, but a small number are causative agents of important diseases, for example, goose parvovirus (geese: gosling plague), porcine parvovirus 1 (pigs: mainly reproductive disorders) and parvovirus B19 (humans: infectious erythema) [2,3]. Parvoviruses are small, non-enveloped viruses with a single-stranded linear DNA genome of approximately 4–6 kb [2]. The genome contains two major open reading frames (ORFs) [4]. ORF1 encodes non-structural (NS) proteins involved in viral replication, while ORF2 encodes structural (Cap) proteins [5]. An additional ORF, ORF3, encodes nuclear phosphoproteins (NP) and is located between ORF1 and ORF2; it is characteristic of members of the Bocaparvovirus genus [6,7].

PPV1 is one of the major causative agents of reproductive failure syndromes in pigs and is characterized by infertility, mummified foetuses, early embryonic death, and stillbirths [12]. This virus is also known to contribute to the development of porcine circovirus-associated disease (PCVAD) [13,14]. PPV6 was first identified in aborted pig foetuses in China in 2014 and was subsequently reported in co-infections with porcine reproductive and respiratory syndrome virus (PRRSV) in the USA [15,16]. The impact of other PPVs on pig health remains unknown. However, recent research has indicated an association of PPV2, PPV4 and PPV6 with PCV2 infection [8,14]. Furthermore, the presence of PPV4 and PPV6 was detected in foetal tissues [15]. PPV is considered to be a co-factor of PCV2, and concurrent infection with PCV2 and PPV increases disease and lesion severity compared to mono-infection with PCV2 [17,18].
Previous studies have reported PPV3 and PCV2 co-infections in Chinese swine populations and PPV2 and PPV4 co-infection in wild boars in Europe [19]. Recent studies report that PPV7 has been found in porcine populations in at least three countries: America, Poland and Korea [11,20,21]. In China, PPV7 was first reported in Guangdong and Anhui provinces in 2017 [22,23]. Interestingly, the PPV7 prevalence of 65.5% on PCV2-positive farms was significantly higher than on PCV2-negative farms, indicating that PPV7 might be associated with PCV2 infection [23]. The purpose of this study was to evaluate the prevalence and diversity of PPV7 in Guangxi, China. The availability of novel porcine parvoviruses allowed us to conduct a comprehensive genetic evolution analysis based on the NS1 and Cap proteins and to examine the diversification of these novel viruses.

DNA extraction and polymerase chain reaction (PCR)

Total DNA was isolated from tissue samples using the TIANamp Genomic DNA Kit (Tiangen Biotech, China). Four primer pairs were designed based on the reference sequence of isolate 42 (GenBank No. KU563733), and published primers and protocols were used to detect PCV2 and PPV6.

Phylogenetic analysis

Sequences were assembled using SeqMan software (DNASTAR Inc., Madison, Wisconsin, USA) and aligned using MegAlign (DNASTAR Inc., Madison, Wisconsin, USA) with the Clustal W alignment method for genomic similarity analysis. The phylogenetic tree was calculated using the maximum likelihood method (LG+G+I model) with 1,000 bootstrap replicates and constructed on the aligned data set using the MEGA7 program.

Detection of PPV7 and PCV2

PPV7 was detected in the six cities. The positive rates of PPV7 and PCV2 in these samples were 27.3% (105/385) and 36.4% (140/385), respectively. The co-infection rate of PPV7 and PCV2 was 17.4% (67/385). Interestingly, the positive rate of PPV7 ranged from 16.3 to 33.3%, with the highest rate recorded in Liuzhou and the lowest in Yulin (Table 2).

Multiple sequence alignment and phylogenetic analysis

Seventeen nearly complete PPV7 genome sequences were amplified by PCR. The two major ORFs, ORF1 (encoding NS1) and ORF2 (encoding Cap), were identified in the 17 sample sequences. Based on nucleotide similarity analysis of the complete coding region, the 17 sequences shared 94.1%–100% similarity, with 94.8%–100% similarity in the NS1 gene and 90.3%–100% similarity in the cap gene. In addition, the 17 sample sequences shared 93.9%–97.9% similarity in NS1 and 87.4%–95.0% similarity in the cap gene compared with the reference strain. Of note, the PPV7 cap gene has a reported length of 1410 nt or 1401 nt; in this study, however, 14 strains with a cap region of 1425 nt and two sequences (Gx28 and Gx44) with a cap length of 1413 nt were identified. Only one strain (Gx47) was found to have a cap gene with a length of 1410 nt. Based on these findings, the sequences in our study contained an additional 3 to 15 nucleotides in the middle of the cap gene (Fig 2). The Ca²⁺ binding loop (YXGXG) is present in the capsid proteins of PPV1, PPV2, PPV3 and PPV5 [2,9], the amino acid sequence of the Ca²⁺ binding loop is "YXGXR" in PPV6 [15], and Ca²⁺ binding loops are absent in PPV4. In this study, the conserved amino acid sequence of the Ca²⁺ binding loop in PPV7 is the "YXGXXG" motif, rather than the "YXGXR" or "YXGXG" motif found in other parvoviruses (Fig 3).
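The percent-identity figures above come from pairwise comparisons of aligned sequences. A minimal sketch of such a computation is shown below; it assumes pre-aligned input (e.g., from Clustal W) and is not the MegAlign implementation used in the study, and the example fragments are invented:

```python
def pairwise_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity between two aligned sequences, ignoring gap columns."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must come from the same alignment")
    compared = matches = 0
    for x, y in zip(seq_a.upper(), seq_b.upper()):
        if x == "-" or y == "-":
            continue  # skip columns where either sequence has a gap
        compared += 1
        matches += (x == y)
    return 100.0 * matches / compared

# Toy example (hypothetical fragments, not the Gx isolates):
print(round(pairwise_identity("ATGGCG-TACGT", "ATGACGATACGT"), 1))  # 90.9
```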
On the other hand, a single amino acid mutation was present at position 304 (Y to N) in the VP1 protein of all PPV7 strains; therefore, the catalytic residues (HDXXY) of the putative secretory phospholipase A2 (PLA2) are lacking in PPV7 [9]. To better understand the genetic relationship between the strains identified in this study, a phylogenetic tree was constructed using the maximum likelihood method, comparing the NS1 amino acid sequences from our strains and 33 reference strains of Parvoviridae family members downloaded from GenBank. Phylogenetic analysis of the NS1 amino acid sequences revealed that all strains used in this study fell on the same branch as PPV7 isolate 42, with all strains belonging to the Chapparvovirus genus (Fig 4).

Discussion

A high level of PCV2 and PPV co-infection in pigs is common in most pig-producing countries [8]. Previous reports revealed that the prevalence of PPV1 ranges from 25.8% to 71.88% [8,17,24]. PPV6 was reported in co-infections with multiple viruses and associated with abortion in pregnant sows [13,14]. Recently, a new porcine parvovirus species, PPV7, was discovered in rectal swabs from adult pigs [11] and subsequently in Poland and Korea [20,21]. In addition, this virus has become prevalent in Guangdong and Anhui provinces in China [23]. PPV2 and PCV2 are commonly present together with PPV7. In this study, we noted a higher PPV7 prevalence in serum samples than in other studies.

PPV7 is 4103 nt in length and contains two major ORFs encoding proteins of 672 and 469 amino acids [11]. In this study, we noted that the majority of the isolates contained additional nucleotides in the middle of the cap gene. Sequence comparison revealed that within nucleotide residues 541–557 at the 5' end of the cap gene, 14 strains had an additional 15 nucleotides, while two strains had an additional three nucleotides, leading to five additional amino acids (within residues 181–186) or one additional amino acid (within residues 181–182). This increased number of amino acids may affect the structure and function of the protein; the influence of this change on PPV7 therefore requires further study.

Parvoviruses are rapidly evolving viruses with high sequence diversity [2,25], and frequent recombination between different parvoviruses has long been observed [26]. Several novel porcine parvoviruses have already spread worldwide and show some geographic variation [2,8]. To further study porcine parvoviruses, several studies have attempted to establish cell culture models for virus propagation in different cell types, including porcine kidney (PK-15 and PK-13) cells, swine testicular cells and African green monkey kidney (Vero) cells [15,27,28]. Unfortunately, PPV7 has not yet been successfully isolated.

PCV2 is the main causative agent of PCVAD [29]. Co-infection with PCV2 and other viruses (for example, PCV3, PPV or PRRSV) [18] may lead to a secondary infection following the PCV2-induced depletion of lymphocytes and aggravate clinical symptoms [30]. Some studies have found that co-infection with PCV2 and PPV4 causes more severe disease and lesions than infection with PCV2 alone [14,18]. Allan et al. suggested that PPV-induced immune dysfunction promotes enhanced replication of PCV2 [14]. In this study, nearly one-third of clinical samples were PPV7-positive. Interestingly, the PCV2-positive rate was significantly higher in the PPV7-positive samples than in the non-PPV7 samples, and the difference was extremely significant (P<0.01).
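The association between PPV7 and PCV2 positivity can be checked from the counts reported above (385 samples; 105 PPV7-positive; 140 PCV2-positive; 67 co-infected). Below is a sketch using scipy's chi-square test of independence; note that the 2×2 table is derived from those three totals, and the paper does not state that this exact test was used:

```python
from scipy.stats import chi2_contingency

total, ppv7_pos, pcv2_pos, both = 385, 105, 140, 67

table = [
    [both, ppv7_pos - both],                               # PPV7+: PCV2+ / PCV2-
    [pcv2_pos - both, total - ppv7_pos - (pcv2_pos - both)],  # PPV7-: PCV2+ / PCV2-
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"PCV2+ rate given PPV7+: {both / ppv7_pos:.1%}")                     # ~63.8%
print(f"PCV2+ rate given PPV7-: {(pcv2_pos - both) / (total - ppv7_pos):.1%}")  # ~26.1%
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")                                     # p << 0.01
```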
These results suggest that PPV7 is likely a significant co-factor in porcine circovirus-associated disease; however, further investigation is still needed to confirm this. PCV2 and PPV together contribute to severe disease, and further research is needed to determine whether there is any clinical significance associated with novel PPV7 infection.

Conclusion

In this study, we investigated the prevalence of PPV7 in Guangxi province and conducted genome sequencing of the PPV7 strains found in this province. The high prevalence of PPV7 and the high co-infection rate with PCV2 suggest that PPV7 might be co-transmitted with PCV2. Analysis of the Cap protein showed significant variability compared with the reference isolate. To date, the number of studies focused on PPV7 is limited; co-infection with PCV2 and the effects of Cap protein mutations on the virus should be considered in subsequent studies.
Worrying cadmium and lead levels in a commonly cultivated vegetable irrigated with river water in Zimbabwe

Abstract

Vegetable cultivation using river water, which may be polluted with heavy metals, can cause health problems for consumers. A study to establish cadmium and lead levels in water from the Msasa, Manyame, Mukuvisi and Nyatsime rivers was conducted in 2019. A questionnaire survey involving 105 randomly selected urban vegetable growers was conducted to examine farmer knowledge of the potential of polluted water to contaminate produce with heavy metals. Water, soil and vegetable samples were also collected and analysed for heavy metal presence using atomic absorption spectrophotometry. Results showed that some farmers (62%) were aware that wastewater could contain heavy metals. The majority of farmers (67%) applied phosphate-based fertilisers, a potential source of cadmium. Tested at P < 0.05, the results showed that sampled water from the four sites failed to meet the Standards Association of Zimbabwe 5560 (1997) standards. Cadmium tissue concentrations in vegetables irrigated with wastewater from the Msasa and Manyame rivers were 1.3 and 1.17 mg g⁻¹ respectively, which were 65 and 59 times higher than the 0.02 mg g⁻¹ of the control. Water from the Manyame and Nyatsime rivers contains levels of heavy metals which exceed the Environmental Management Agency (EMA) safety guidelines. Farmers need to be educated on the health hazards of contaminated wastewater. Enforcing regulations on effluent disposal, licensing vegetable vendors and labelling vegetables with information on the source of water used to irrigate the crop can help reduce the exposure of unsuspecting vegetable consumers.

PUBLIC INTEREST STATEMENT

When heavy metals such as cadmium and lead are emitted into the soil, air and water, they pollute the environment. Use of polluted river water for irrigation, which is common in urban agriculture in Zimbabwe, exposes consumers to heavy metal poisoning. The study reported here investigated farmer knowledge of the potential of river water to contaminate vegetables with heavy metals. It also tested levels of cadmium and lead in river water, soils and a commonly cultivated vegetable crop. Comparison of heavy metals in river water, soils and the vegetable crop showed that contamination with cadmium and lead was higher than the Environmental Management Agency guidelines allow. Key recommendations from the study were that the government of Zimbabwe should enforce regulations on the discharge of mining and industrial effluent into rivers, and impose strict fines to ensure adherence to the protection of river water.

Introduction

Africa's urban population is growing at a more rapid rate than that of any other continent (FAO, 2012). Possible causes of population growth in urban areas include increased rural–urban migration in search of employment opportunities (Sims et al., 2018; Tibugari et al., 2020a). Increases in urban population exert pressure on food supply (Bricas, 2019). Peri-urban agriculture has been embraced in many countries as a way of alleviating poverty and ensuring household food security (United States Department of Agriculture [USDA], 2016). However, limited water resources in urban areas (Cosgrove & Loucks, 2015) force farmers to use wastewater for irrigation. Wastewater has been used in agriculture because of its high nutrient value (Zwolak et al., 2019). However, it can have environmental and human health effects (El-Gamal & Housian, 2016; Helmecke et al., 2020) due to the presence of heavy metals from activities such as mining.
Mined metal and metalloid elements, which are non-biodegradable (Tytła & Kostecki, 2019; Yan et al., 2020), may enter waterbodies and become a source of water pollution. Heavy metals can accumulate and persist in the environment (Abah et al., 2020). Among the heavy metals commonly found in wastewater and soils are cadmium (Cd) and lead (Pb), which can negatively affect the environment and human health (Kinuthia et al., 2020). Cadmium and lead can reach the soil naturally or through anthropogenic activities (FAO, 2019; Kubier et al., 2019; Sun et al., 2020). Emission to soil, water, air and food (Rahimzadeh et al., 2017) can be caused by non-ferrous metal mining and refining (Ahn et al., 2020; Ćwieląg-Drabek et al., 2020). In humans, long-term exposure to Cd leads to cancer and organ-system toxicity (e.g., skeletal, cardiovascular, respiratory, and central and peripheral nervous systems) (Rahimzadeh et al., 2017). It can damage the testis, induce DNA damage and cause male subfertility (Zhu et al., 2020). Pant et al. (2014) established that cadmium and lead levels were higher in infertile than in fertile men, and concluded that Pb (5.29–7.25 µg dl⁻¹) and Cd (4.07–5.92 µg dl⁻¹) could affect the semen profile. Exposure to Pb, Cd and chromium has been found to cause kidney damage (Bot et al., 2020).

Before using wastewater in agriculture, an analysis must be done to determine its suitability for crop production. Most countries have set irrigation water guidelines based on their own conditions, in addition to the general guidelines set by FAO (Environmental Management Agency [EMA], 2007; Holmes, 1996). For long-term use, the FAO guidelines put the limits for cadmium and lead at 0.01 and 5 mg L⁻¹ respectively, as do the long-term EMA limits. For short-term use, the cadmium limit is 0.05 mg L⁻¹ and the lead limit 10 mg L⁻¹ under the FAO guidelines, and 0.05 and 20 mg L⁻¹ respectively under the EMA limits.

Consumption of vegetables grown on heavy-metal-contaminated soil is a possible route for human exposure to heavy metals (Schaefer et al., 2020), and possibly a ticking time bomb for urban dwellers in cities such as Harare who are heavily dependent on leafy vegetables in their diets. Understanding the extent of vegetable production using wastewater, and the extent of consumption of these vegetables, can help inform policy. Knowledge of the concentrations of heavy metals in the vegetables allows scientists to determine the safety of the vegetables against concentrations recommended by the World Health Organisation (WHO) (Sayo et al., 2020). Heavy metals in air (Manisalidis et al., 2020), water (Mahmood et al., 2020), soil (Huang et al., 2020) and food (Nkwunonwo et al., 2020) samples can be detected using atomic absorption spectroscopy. Atomic absorption spectroscopy is used in agriculture, medicine, mining and pharmaceuticals as a cheap and simple technique for obtaining accurate results, with measurements going down to parts per billion.

Little research has been conducted to establish the levels of heavy and poisonous chemicals from wastewater used as irrigation water on commonly cultivated vegetables such as covo (Brassica oleracea var. acephala). Knowledge of the Pb and Cd content of river water and soils, and of subsequent uptake by selected leafy vegetables, is essential to determine the potential consumer exposure rate.
The objective of the study was to determine the presence of toxic heavy metals in covo leaf vegetables produced along the Nyatsime, Manyame and Mukuvisi rivers in Harare and Chitungwiza.

Study site

The study was conducted at four major sites irrigated by untreated wastewater in the suburban areas of Harare and Chitungwiza in Zimbabwe, within the Manyame catchment area, with samples collected along the Msasa (−17.848365, 31.122772; −17.848495, 31.127867; −17.848151, 31.125405), Mukuvisi (17°55′03.5″S 30°59′17.9″E; 17.86S, 30.96E), Nyatsime, and Manyame (17°55′03.5″S 30°59′17.9″E) rivers (Figure 1). Upstream of these rivers are residential and industrial areas. The area lies within agro-ecological Region IIb and has a unimodal rainy season, receiving between 700 and 1000 mm of rainfall. The area has a long dry period (April–October), with low temperatures between April and May; August–October is a warm period, and mean daily temperatures in summer can exceed 32°C. The choice of location was influenced by the number of water treatment sites as well as the industrial base in Harare and Chitungwiza. Water treatment works within the Manyame catchment include Zengeza, Firle and Crowborough. Water, soil and vegetable specimens were collected from the St Mary's, Crowborough and Mukuvisi garden sites. Some of the vegetables were bought from the St Mary's, Crowborough and Mbare vegetable markets.

Research design

Obtaining a representative sample of the urban horticulture farmers growing crops along the Mukuvisi, Manyame, Nyatsime and Msasa rivers was essential. Simple random sampling was used, since the respondents were scattered along the rivers and there was no complete list of the target population. Every individual in the population had an equal probability of being selected, and there was therefore no basis for differentiating using demographics. The sampling frame was the plot holders practising urban agriculture along the rivers. This led to the selection of 105 participants for the study, who were then interviewed using a questionnaire.

Questionnaire design

The tool for data collection was a structured questionnaire designed for personal interviews that covered the key aspects of the study. Questions required participants to rank response options given on a continuum in order of preference. Double-barrelled questions and biased wording were avoided during questionnaire design. The questionnaire was administered verbally to each of the 105 respondents along the Mukuvisi, Msasa, Nyatsime and Manyame rivers in the Manyame catchment area. Prior to the main survey, pretesting was done to check the phrasing of questions, how respondents would interpret them (Hilton, 2017), and how long it took to interview one farmer (Tibugari et al., 2020b). After pretesting, necessary revisions were made. Snowballing was used to some extent to elicit responses from respondents who were absent from their plots.

Data collection and analysis

Glass tubes were washed with tap water, washed again with a 1 mol L⁻¹ HNO₃ solution for about 24 hours, and then rinsed with deionised water. Soil samples were collected from three depths (1, 10 and 20 cm below the surface). At each depth, five grab samples were collected at each sampling point and mixed to make a composite sample. Soil sampling was done using a hand auger, and the soil samples were dried in an oven for 24 h.
The soils were analysed for pH, Cd and Pb. Brassica oleracea var. acephala plots were randomly sampled in the fields, and the vegetables were also randomly sampled at markets. Water samples were collected from three different sections of each river as well as from different depths of wells. For the water tests, the control was tap water, the standard against which samples from the different waterbodies and sources were compared for pH and heavy metals. All plant samples were oven dried, ground, digested in aqua regia and analysed for Pb and Cd using atomic absorption spectrophotometry (AAS). Water samples were filtered and analysed for Pb and Cd using AAS (Standards Association of Zimbabwe Method 586). Sample analyses were conducted at Zimlabs (123 Borgward Road, Msasa, Harare, Zimbabwe). Analysis of variance and correlations were done using the Statistical Package for Social Sciences (IBM SPSS Statistics), version 22.

Household survey

Vegetables dominated the list of crops cultivated (95% of respondents). A small proportion (5%) of respondents did not cultivate any vegetables and grew only cereal crops. Some respondents who cultivated vegetables also grew other crops, such as cereals and fruit crops; a combination of cereals and vegetables was cultivated by 47% of the respondents. The finding that a large majority of urban farmers grew vegetables possibly suggests that vegetable production is more rewarding than growing other crops. It could also be an indication that alternative relish, such as beef and other meats, is very expensive in the cities. Additionally, while crops such as maize are seasonal, irrigated vegetables can be grown all year round. The result could also suggest that the land available for cropping is limited in size, which warrants intensive vegetable production guided by optimisation of resources.

Respondents were asked about the fertilisers they applied to crops, and the amounts applied differed. At least 67% of the farmers used fertilisers in varying amounts, and a third did not use any form of fertiliser. Although urban farming is done for household food security, in Zimbabwe it is considered an illegal activity. In most Zimbabwean towns and cities, it is a municipal tradition to routinely raid urban farmers' fields and slash crops such as maize towards crop maturity. Not surprisingly, urban agriculture is not supported by government services; agricultural extension advice, for example, is largely provided by non-governmental organisations (Pedzisai et al., 2014). This limited agricultural extension advice may cause urban farmers not to follow good and sustainable agricultural practices. Because fertiliser applications may not be guided by soil analysis recommendations, there is a high chance of injudicious use and misuse of fertilisers by urban horticultural farmers. Indiscreet applications of fertilisers can increase the levels of heavy metals available for plant uptake from the soil. Phosphorus fertilisers contain Cd (Roberts, 2014). Mar et al. (2012) found that applying high rates of superphosphate fertiliser to Brassica rapa L. var. perviridis increased the Cd concentration in dry vegetable leaves compared to a control where no phosphate fertiliser was applied. If urban farmers are to reduce Cd accumulation in agricultural lands, they must apply fertilisers in modest quantities based on soil analysis recommendations.
Respondents were asked about the sources of water they used to irrigate crops. Although farmers used a wide range of water sources, almost 70% of them drew irrigation water from nearby rivers: the Manyame, Mukuvisi, Nyatsime and Msasa. About 25% of the farmers utilised wells, whereas 5% used tap water. Other potential sources of water, such as delivered water and boreholes, were not used. The high number of urban farmers who use river water to irrigate crops versus the low number who use tap water was not surprising: farmers possibly preferred river water because it is free, whereas using tap water increases water bills. However, river water can be polluted by industrial and sewage effluent, which naturally contains high amounts of heavy metals (Khatri & Tyagi, 2015; Vetrimurugan et al., 2017). If heavy metal uptake by crops grown using river water is to be lowered, irrigation water from rivers must be treated to lower the concentrations of these pollutants.

Regarding knowledge of the existence of heavy metals in irrigation water, most respondents (62%) agreed that wastewater could potentially be contaminated by heavy metals, and about 5% of the respondents were certain that it is. Some respondents (33%), however, disagreed that wastewater could be contaminated by heavy metals. The finding that some farmers knew about the possible contamination of their irrigation water by heavy metals was encouraging: if alternative sources of water were made available to them, such farmers would readily shift to using safe water for crop irrigation. Respondents who argued that polluted river water does not contain heavy metals will need to be educated, possibly by observing experiments that demonstrate the presence of the pollutants in river water under field conditions.

Laboratory tests

Water, soil and plant laboratory test results are presented in Table 1. The pH and heavy metal concentrations of the industrial wastewater used for irrigation in Harare were compared to those of tap water, which served as the control. The pH of tap water was neutral (7.0). In contrast, the pH of industrial wastewater in the Msasa River was acidic (4.46). The pH of tap water was, however, closer to that of industrial wastewater from the Nyatsime, Mukuvisi and Manyame rivers (Table 1). Cadmium and lead were the two heavy metals of interest in this study. The control (tap water) had 0 mg l⁻¹, but the industrial wastewater from the rivers had varying concentrations of cadmium. Industrial wastewater from the Msasa River had the highest concentration of cadmium, at 0.14 mg l⁻¹, almost twice as high as that in the Nyatsime, Mukuvisi and Manyame rivers, which had 0.07, 0.08 and 0.07 mg l⁻¹ respectively. The pattern was similar for lead (Table 1).
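The water measurements above can be set directly against the FAO/EMA cadmium limits quoted in the Introduction. Below is a minimal sketch using only the Cd values reported in the text (the per-river water Pb values are not reproduced here, so they are omitted):

```python
# Long- and short-term irrigation limits for Cd (mg/L), FAO/EMA (see Introduction)
CD_LIMITS = {"long-term": 0.01, "short-term": 0.05}

# Measured river-water Cd concentrations (mg/L), as reported above
water_cd = {"Msasa": 0.14, "Nyatsime": 0.07, "Mukuvisi": 0.08, "Manyame": 0.07}

for river, cd in water_cd.items():
    for horizon, limit in CD_LIMITS.items():
        ratio = cd / limit
        flag = "EXCEEDS" if ratio > 1 else "within"
        print(f"{river}: Cd {cd} mg/L is {ratio:.1f}x the {horizon} limit ({flag})")
```

Every river exceeds both the long-term (by 7x to 14x) and the short-term (by 1.4x to 2.8x) cadmium guideline.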
Heavy metal concentrations in leafy vegetables watered with wastewater from the Msasa, Nyatsime, Mukuvisi and Manyame rivers were compared to those in vegetables watered with tap water, which served as the control. The control had 0.02 mg g⁻¹ of cadmium in the leafy vegetable tissue, whereas tissue from vegetables watered with river wastewater had at least 59 times more cadmium. Vegetables watered with wastewater from the Nyatsime and Mukuvisi rivers had the highest cadmium tissue concentrations, at 1.47 and 1.43 mg g⁻¹ respectively. The lowest cadmium tissue concentrations were from wastewater from the Msasa and Manyame rivers, at 1.3 and 1.17 mg g⁻¹, but these were still 65 and 59 times higher than the 0.02 mg g⁻¹ of the control (Table 1). Vegetable crops watered with tap water also had the lowest lead tissue concentration, at 2.7 mg g⁻¹; crops watered with river wastewater had at least three times more lead than the control. The highest vegetable lead tissue concentrations were from vegetables watered with wastewater from the Nyatsime (13.4 mg g⁻¹) and Mukuvisi (10.4 mg g⁻¹) rivers. The lowest were from wastewater from the Manyame and Msasa rivers, at 9.3 and 8.7 mg g⁻¹, but these were still far higher than the 2.7 mg g⁻¹ of the control (Table 2).

Comparison of the heavy metal load in the four rivers against the FAO and EMA standards indicated that the river water is polluted and unsuitable for irrigation of edible produce. Irrigation guidelines, especially the long-term ones, are unnecessarily strict and are designed to give guidance; actual use depends on the crop choice and the soil's ability to retain nutrients and reduce or prevent uptake of the element of concern. In the case of the four sites (Nyatsime, Manyame, Msasa and Mukuvisi), the soils were not able to reduce the availability and subsequent uptake of Cd and Pb, leading to significantly high leaf Cd and Pb. This could be attributed to the low pH, particularly of the Msasa river water. The high concentrations of Pb and Cd in the river water, together with the low pH, result in increased uptake and hence high tissue concentrations. Water pH was, however, neutral in the Nyatsime, Mukuvisi and Manyame rivers, yet uptake and subsequent concentration in plant tissue were still high; this could be attributed to the high concentrations and the redox conditions. Farmers used flood irrigation to water their crops, which possibly resulted in increased availability and accumulation of Pb and Cd in plant tissue (Kabata-Pendias & Pendias, 2001). The leaf tissue concentrations of Pb and Cd at all sites were above the recommended maximum levels for vegetables, making them unsafe for human consumption. Consumption of these vegetables may lead to exposure and may result in health complications in the long term; in humans, for example, exposure to Pb has been found to affect the developing foetus (Green & Pain, 2019).
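The enrichment factors quoted above follow directly from the tissue concentrations reported in Tables 1 and 2; a minimal check:

```python
control = {"Cd": 0.02, "Pb": 2.7}  # mg/g in tissue, tap-water control

tissue = {  # mg/g in tissue, from Tables 1 and 2
    "Nyatsime": {"Cd": 1.47, "Pb": 13.4},
    "Mukuvisi": {"Cd": 1.43, "Pb": 10.4},
    "Manyame":  {"Cd": 1.17, "Pb": 9.3},
    "Msasa":    {"Cd": 1.30, "Pb": 8.7},
}

for river, conc in tissue.items():
    factors = {metal: conc[metal] / control[metal] for metal in conc}
    print(river, {metal: f"{f:.0f}x control" for metal, f in factors.items()})
# e.g. Msasa Cd: 1.30 / 0.02 = 65x; Manyame Cd: 1.17 / 0.02 = 58.5, i.e. ~59x
```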
Conclusions and recommendations

A large proportion of Harare's urban farmers who grow crops using river water are aware that the water may contain heavy metals, although many of them apply fertilisers that include phosphates, a potential source of cadmium. River water from all four sites is not suitable for irrigating crops that are consumed directly by human beings, due to its low pH and high Cd and Pb levels. Concentrations of Cd and Pb in B. oleracea var. acephala irrigated with sewage-polluted river water are higher than the WHO maximum acceptable levels for human consumption, and consumption of this vegetable over a long period might result in bioaccumulation, which is a health risk.

The government of Zimbabwe should enforce regulations regarding the discharge of mining and industrial effluent into rivers, with strict fines to ensure adherence to the protection of river water. To protect unsuspecting consumers who buy vegetables from markets and grocery shops, there should be legislation making it mandatory for farmers, vendors and shopkeepers to label vegetables with the source of the water used to irrigate the produce, and with estimates of the heavy metals likely to be in the vegetables. Providing alternative water sources for irrigated cropping, such as boreholes, would to some extent reduce the exposure of crops to heavy metals. Strategies for the remediation of heavy-metal-contaminated agricultural soils and polluted rivers must be planned and implemented by the relevant government and municipal authorities. Future research should examine the impact of other heavy metals, apart from Cd and Pb, on a wider range of consumable vegetables and fruits.
Static and Dynamic Adaptation of Insect Photoreceptor Responses to Naturalistic Stimuli

We describe a new nonlinear dynamic model of insect phototransduction using an NLN (nonlinear, linear, nonlinear) block structure. The first nonlinear stage provides a single exponential decline in gain and mean following the start of light stimulation. The linear stage uses a two-parameter log-normal convolution model previously applied alone to insect photoreceptors. The final stage is a static quadratic function. The model fitted current and voltage responses of isolated single photoreceptors from three different insect species with reasonable fidelity when they were stimulated by naturalistic time series having wide bandwidth and contrast, over a light intensity range of >1:10⁴. Mean squared error values for receptor current and receptor potential varied over ~2–60%, with many values below 10%. Linear log-normal filter parameters did not vary strongly with species or light intensity. Initial gain reduction was only large for the highest light levels, while the time constant of gain and mean reduction decreased with light intensity. The final nonlinearity changed from positively to negatively quadratic with increasing light intensity, indicating a change from threshold or expansion to saturating compression with greater signal strength. Photoreceptor information transmission was estimated by linear information capacity and signal entropy measurements of both experimental data and predicted outputs of the model for identical stimuli at each light level. Comparison of actual and predicted data indicated significant added noise during phototransduction, with information being progressively lost by nonlinear behavior with increasing light intensity.

INTRODUCTION

Dynamic responses of vertebrate and invertebrate photoreceptors are difficult to explain, either by analytical descriptions or by photochemical reaction cascades. A single flash of light produces a delayed, transient change in membrane current that is a nonlinear function of flash intensity and background illumination (Hartline and McDonald, 1947; Fuortes and Hodgkin, 1964). Existing molecular models of insect phototransduction cannot account for these system dynamics, at least partially because the mechanisms that open ion channels to create the receptor current are still unclear (Hardie and Juusola, 2015).

An analytical model comprising a cascade of simple linear filters was used to explain the time course of single flash responses in the Limulus eye, particularly the delay between the flash and the initial rise in current (Fuortes and Hodgkin, 1964). Although such filters could plausibly be explained by simple chemical reactions (Borsellino et al., 1965), the number of filters required was so large (often exceeding 10) that the model seemed unrealistic. One alternative was to incorporate a fixed delay, of unknown mechanism, which allowed a simpler linear filter model with a smaller number of parameters to explain the remaining response to both flashes and randomly fluctuating light signals (French, 1980). Another suggestion was to convolve the light signal with a nonlinear function of time, the log-normal function, which using only two parameters could account for the delayed response in a range of insect photoreceptors (Payne and Howard, 1981; Howard et al., 1984), including single photon responses (Henderson et al., 2000).
Although linear convolution with a filter function provided a close description of single flash responses and of random fluctuations around a mean light intensity, insect photoreceptors clearly demonstrate nonlinear adaptation, even under asymptotically small signal conditions (Marmarelis and McCann, 1977; Laughlin and Hardie, 1978; Pece and French, 1992). Nonlinear analyses of flash responses and frequency responses suggested that the processes between light absorption and membrane conductance change include both an early gain reduction and a late saturation with light intensity (Weckström et al., 1988; Pece et al., 1990; French et al., 1993). Known sources of nonlinearity include electrical shunting by ion channels in the cell membrane (Weckström et al., 1988), dynamic changes in the size, shape, and latency of quantum bumps (Song et al., 2012), and blockage or depletion of Ca²⁺ entry through light-activated channels (Hardie and Mojet, 1995; Chu et al., 2013b). Additionally, a range of interactions between voltage-activated channels and the transduced light current are now well established (Weckström and Laughlin, 1995; Niven et al., 2004).

Photoreceptors are inherently noisy transducers because of the stochastic distribution of photon arrivals, but additional sources of noise include variability in the transduction cascade and stochastic properties of membrane ion channels (Barlow, 1956; Wu and Pak, 1978; Lillywhite and Laughlin, 1979; Laughlin and Lillywhite, 1982; Henderson et al., 2000; Chu et al., 2013a). Both noise and nonlinearity can cause a loss of information as the light signal is transduced, but initial attempts to quantify such losses concentrated on signal-to-noise ratios estimated from linear models of transduction (Bendat and Piersol, 1980; Kouvalainen et al., 1994; Niven et al., 2003). More recent work has considered nonlinear effects on sensory information transmission in several sensory receptors, using naturalistic stimuli that approximate the natural range of amplitude distributions and dynamics (van der Schaaf and van Hateren, 1996; Juusola and de Polavieja, 2003; Niven et al., 2004). Accompanying this development has been a change of emphasis from communication-channel information capacity (Shannon and Weaver, 1949) to nonlinear measurements of signal information based on entropy, as estimated from probability distributions (Juusola and de Polavieja, 2003; Takalo et al., 2011) or by data compression (Pfeiffer and French, 2009).

In the present study, we developed a new nonlinear model of phototransduction based on an extension of the log-normal method (Payne and Howard, 1981) to include early gain adaptation and a final nonlinearity. The model combines log-normal convolution with the nonlinear-linear-nonlinear cascade structure developed previously for several sensory systems, including phototransduction (Marmarelis and Marmarelis, 1978; Weckström et al., 1988; French et al., 1993). Model construction was also guided by evidence of early gain changes in insect photoreceptors (Pece et al., 1990; Friederich et al., 2012). The final nonlinearity employed a polynomial series, for generality, as used previously for insect photoreceptors (French et al., 1993). We fitted the model to photoreceptor membrane potential and membrane current recordings produced by naturalistic light fluctuations in three different types of insects that operate in widely varying visual environments.
We required the model to account for the transient adaptation at the start of light stimulation from a dark background, as well as the static adaptation represented by changes in dynamic response to stimuli of different mean light intensity. The model was able to reproduce responses to naturalistic stimulation of 60 s duration, starting from dark, over a range of more than 1:10,000 in stimulus amplitude, with mean squared error between model and fitted data as low as 2%. Initial gain adaptation was strongest and fastest under the brightest conditions, but the two parameters of the log-normal component did not change strongly with species or light intensity. The final nonlinearity, approximated by a second-order polynomial function, changed from positively to negatively quadratic with light intensity, indicating an appropriate adaptation to the available signal strength. Although linear coherence (signal-to-noise) suggested relatively poor information transfer during transduction under all conditions, we found that most of the input signal entropy was actually recovered by the nonlinear models at the lowest illumination levels.

Animals, Stimulation, and Recording

All experiments were conducted in accordance with EU Directive 2010/63/EU for animal experiments. Cockroaches, Periplaneta americana, and crickets, Gryllus bimaculatus, were obtained from Blades Biological Ltd. (Edenbridge, Kent, UK) and maintained at 25°C under inverse 12-12 h illumination conditions, with experiments performed on dark-adapted insects during daytime. Adult backswimmers (Notonecta glauca) were collected locally in Oulu (Finland) or purchased from Blades Biological Ltd. Photoreceptors were always allowed to adapt to dark conditions for several minutes before recordings. Some recordings from N. glauca and G. bimaculatus were used previously (Frolov and Weckström, 2014; Immonen et al., 2014a).

Ommatidia were dissociated as described previously (Krause et al., 2008; Immonen et al., 2014b). Whole-cell recordings from dissociated ommatidia were performed at room temperature (20–22°C) as described previously (Hardie et al., 1991; Krause et al., 2008). In brief, an Axopatch 1-D patch-clamp amplifier and pClamp 9.2 software (Axon Instruments/Molecular Devices, CA, USA) were used for data acquisition and analysis. Patch electrodes were fabricated from thin-walled borosilicate glass (World Precision Instruments, Sarasota, FL, USA) and had resistances of 5–15 MΩ. The bath solution contained (in mM): 120 NaCl, 5 KCl, 4 MgCl₂, 1.5 CaCl₂, 10 N-Tris-(hydroxymethyl)-methyl-2-amino-ethanesulfonic acid (TES), 25 proline and 5 alanine, pH 7.15. The patch pipette solution contained (in mM): 140 KCl, 10 TES, 2 MgCl₂, 4 Mg-ATP, 0.4 Na-GTP, and 1 NAD, pH 7.15. All chemicals were purchased from Sigma-Aldrich Inc. (St. Louis, USA). The liquid junction potential (LJP) between bath and intracellular solutions was −4 mV. A holding potential of −74 mV (including LJP) was used for voltage-clamp recordings. The series resistance was compensated by at least 80%, with access resistance after compensation typically not exceeding 15 MΩ. Recordings were performed from green-sensitive photoreceptors. A 60 s naturalistic contrast sequence from the van Hateren natural image database was used as the input signal to drive the light stimulus (van der Schaaf and van Hateren, 1996).

Data Analysis

Membrane current and membrane potential were initially sampled at a rate of 1200 Hz (0.833 ms sample interval).
Preliminary measurements found negligible power in the input or output signals above 50 Hz, so all data files were ten-point averaged to give a resolution of 8.33 ms. Coherence functions, $\gamma^2(f)$, where $f$ is frequency, for each input-output set were obtained from the spectra of the input, $S_{xx}(f)$, output, $S_{yy}(f)$, and cross-spectra, $S_{xy}(f)$ (Bendat and Piersol, 1980):

$\gamma^2(f) = \dfrac{\left|\langle S_{xy}(f)\rangle\right|^2}{\langle S_{xx}(f)\rangle\,\langle S_{yy}(f)\rangle}$   (1)

where $\langle\,\rangle$ indicate ensemble averages. Linear information capacity, $R$, was estimated from (Juusola and French, 1997):

$R = \int_0^\infty \log_2\!\left[\dfrac{1}{1-\gamma^2(f)}\right]\,df$   (2)

Signal entropy was estimated as described previously (Pfeiffer et al., 2012). Signals were normalized and digitized so that the maximum amplitude range could be represented by 10-bit numbers, or 1024 different amplitude levels. Entropy was obtained by context-independent data compression of the regularly sampled continuous signals. Each of the 1024 numerical values representing the digitized signal was treated as an independent symbol in a linear sequence, or message. Data compression was performed by repeatedly replacing the pair of symbols that occurred with greatest frequency by a new symbol, until no further compression was achieved. The entropy, $E$, was then given by:

$E = \dfrac{N \log_2 M}{10}$   (3)

where $N$ is the number of symbols in the compressed message, $M$ is the number of different symbols in the message, and the division by 10 compensates for digitization (Cover and Thomas, 1991).

Photoreceptor Model

The same model system (Figure 1) was used to simulate both photoreceptor current and potential. The model was based on the log-normal model of Payne and Howard (1981), shown in the center box of Figure 1, but preceded by a nonlinear component that reduces the overall gain of the system with time from the start of stimulation by including an additional amplitude parameter, α, whose effect declines exponentially with time constant, η. Since the initial gain change was usually accompanied by a small change in mean current or potential, we included an addition to the mean, µ, that decays with the same time constant, η. The final stage of the model consists of a static (memory-less) nonlinear change in amplitude and mean, approximated by a second-order polynomial function with parameters a, b, and c (Figure 1). The overall gain of the model, including the conversion from light intensity to membrane potential or current, is assumed to occur in the final stage, but the polynomial displays were normalized to unit input and output for graphical display.

FIGURE 1 | Light fluctuation as a function of time, x(t), passes through an initial stage that reduces its amplitude and changes its mean level by an exponentially decaying function of time after initial stimulation. Three parameters define this stage: α, the total proportional change in amplitude; µ, the total change in mean level; and η, the time constant for both. The resulting signal, u(t), is convolved with the log-normal photoreceptor filter function of Payne and Howard (1981), with its two parameters, τ and σ. Finally, the output of the log-normal filter, v(t), passes through a static (time-independent) nonlinearity formed by a second-order polynomial function with parameters a, b, and c.

Fitting the model to the data was performed on 9000 input-output data pairs by simulated annealing (Kirkpatrick et al., 1983; Press et al., 1990), brute force and Levenberg-Marquardt (Marquardt, 1963) methods to minimize the mean square error (MSE) between the receptor current or receptor potential output, $y(t)$, and the simulated output, $y_s(t)$:

$\mathrm{MSE} = \dfrac{\left[\left(y(t)-y_s(t)\right)^2\right]}{\left[\left(y(t)-\left[y(t)\right]\right)^2\right]} \times 100\%$   (4)

where $[\,]$ indicate time averages (French and Marmarelis, 1999). All software for model fitting, entropy and information capacity estimation was custom written in multi-threaded C++ and operated on standard desktop personal computers.
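Equations (1)–(3) are straightforward to reproduce numerically. The following is a minimal numpy/scipy sketch, not the authors' C++ implementation; the spectral segment length and the stopping rule of the pair-replacement compressor are assumptions, and the Eq. (3) normalization follows the reconstruction above:

```python
import numpy as np
from collections import Counter
from scipy.signal import coherence

FS = 120.0  # Hz, after ten-point averaging of the 1200 Hz recordings

def linear_info_capacity(x, y, fs=FS, nperseg=256):
    """Eqs. (1)-(2): coherence-based linear information capacity in bits/s."""
    f, g2 = coherence(x, y, fs=fs, nperseg=nperseg)  # Welch-averaged gamma^2(f)
    g2 = np.clip(g2, 0.0, 1.0 - 1e-9)                # guard the logarithm
    df = f[1] - f[0]
    return np.sum(np.log2(1.0 / (1.0 - g2))) * df    # discrete integral over f

def entropy_by_compression(signal, bits=10):
    """Eq. (3): greedy pair-replacement compression entropy."""
    s = np.asarray(signal, dtype=float)
    levels = 2 ** bits
    q = np.floor((s - s.min()) / (np.ptp(s) + 1e-12) * (levels - 1)).astype(int)
    msg, next_sym = list(q), levels
    while True:
        pairs = Counter(zip(msg, msg[1:]))
        if not pairs:
            break
        pair, n = pairs.most_common(1)[0]
        if n < 2:                  # assumed stopping rule: no further compression
            break
        out, i = [], 0
        while i < len(msg):        # replace non-overlapping occurrences of the pair
            if i + 1 < len(msg) and (msg[i], msg[i + 1]) == pair:
                out.append(next_sym)
                i += 2
            else:
                out.append(msg[i])
                i += 1
        msg, next_sym = out, next_sym + 1
    N, M = len(msg), len(set(msg))
    return N * np.log2(M) / bits   # Eq. (3)
```

For a 60 s record at the 120 samples/s resolution used here, the greedy compressor is slow but adequate for illustration.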
RESULTS

Experiments were performed on six dissociated receptor cells from Periplaneta, plus single cells from Gryllus and Notonecta. The naturalistic stimulus sequence was from a collection obtained by an animal (human) moving forward through a natural visual environment under controlled conditions of motion and light detection (van der Schaaf and van Hateren, 1996). Each cell was stimulated with the same naturalistic stimulus sequence a total of 10 times, recording receptor current and receptor potential with five different neutral density (ND) filters in the light path. Each recording started from the dark-adapted state, so the maximum contrast (brightest light to dark) increased by a factor of 10 for each ND change. Actual light levels were estimated by counting single photon arrivals as current bumps (effective photons) under the darkest stimulation conditions during the 60 s stimulation. These values were then scaled by the appropriate number of ND filters in each experiment. Some recordings were lost before the set of experiments was complete, so from a possible total of 80 recordings (10 recordings from each of eight cells) 47 recordings were obtained (25 receptor potential and 22 receptor current). Mean values of fitted parameters were calculated for the Periplaneta data, but standard deviations are only shown when there were at least three measurements.

Each experiment required fitting the eight parameters of the model (Figure 1) to 9000 input-output pairs. We used primarily the simulated annealing approach (Kirkpatrick et al., 1983; Press et al., 1990) for parameter fitting, but each fitting was also tested by brute force and Levenberg-Marquardt methods (Marquardt, 1963) numerous times during the fitting process. We also used different starting parameter values several times to test for convergence in each case. These constraints required periods of hours (sometimes overnight) for each fitting. Note that error (MSE, Equation 4) values were based on the entire data record during each fitting process, because the non-stationary nature of the data and model, combined with the limited data available, prevented validation on separate experimental records.

Initial Gain Reduction

Membrane current and membrane potential changes during the 60 s of naturalistic light stimulation could be fitted by the model (Figure 2), even at the earliest stimulation times when the gain of the photoreceptors was clearly decreasing. This is an important feature of the model. Error (MSE, Equation 4) values at the completion of fitting ranged from 2.1 to 62.9%, with 16 of the 47 MSE values at or below 10%. MSE values were always higher for receptor current than for receptor potential, and error levels were similar for all three species. The highest error values were only observed under the dimmest light conditions. Gain changes after the start of light stimulation (first component of Figure 1) were larger (amplitude parameter α) and more rapid (time constant parameter η) at higher maximum light intensities (Figure 3). These effects were seen in both membrane current and potential recordings, and the fitted gain change parameters agreed for the two types of recordings.
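A forward simulation of the Figure 1 cascade makes the fitted parameters concrete. This is an illustrative sketch, not code from the study: the sign convention of the decaying gain term (gain starting at 1 + α and relaxing to 1) is our reading of the model description, and the example parameter values are hypothetical:

```python
import numpy as np

def simulate_nln(x, dt, alpha, mu, eta, tau, sigma, a, b, c):
    """Forward-simulate the NLN cascade of Figure 1.

    x: light stimulus (arbitrary units), sampled every dt seconds,
    with t = 0 at the start of stimulation.
    """
    t = np.arange(len(x)) * dt
    decay = np.exp(-t / eta)
    # Stage 1: exponentially decaying extra gain (alpha) and mean offset (mu)
    u = (1.0 + alpha * decay) * x + mu * decay
    # Stage 2: log-normal impulse response (Payne and Howard, 1981)
    h = np.zeros_like(t)
    h[1:] = np.exp(-np.log(t[1:] / tau) ** 2 / (2.0 * sigma ** 2))
    h /= h.sum()                         # unit-area filter
    v = np.convolve(u, h)[: len(x)]      # causal convolution, trimmed to input
    # Stage 3: static second-order polynomial nonlinearity
    return a * v ** 2 + b * v + c

# Example: noise 'contrast' trace at the 8.33 ms resolution used above
rng = np.random.default_rng(0)
x = 1.0 + 0.3 * rng.standard_normal(9000)
y = simulate_nln(x, dt=0.00833, alpha=0.5, mu=0.1, eta=5.0,
                 tau=0.02, sigma=0.3, a=-0.2, b=1.0, c=0.0)
```

A negative quadratic coefficient (a < 0), as in the example, produces the saturating compression reported for high light levels; a positive coefficient gives the expansive, threshold-like behavior seen at low light levels.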
Log-Normal Filter
In contrast to the initial gain changes, fitted parameters of the log-normal filter (center component of Figure 1, time constant τ and width parameter σ) did not vary strongly with light level or with species (Figures 4, 5). As a Periplaneta example shows (Figure 4), the peak response shifted by less than a factor of two over the light intensity range of 1:1000. The log-normal filter parameters varied with the different species used, being most rapid for Notonecta and slowest for Periplaneta (Figure 4 insets). Mean parameter values (τ and σ) for receptor potential models were approximately constant at different light levels (Figure 5). Mean parameters for receptor current showed some slowing and broadening of the response at the lowest light levels, but there were not enough data to test for statistical significance. The smaller sets of data for Notonecta and Gryllus agreed well with the mean Periplaneta data, but again showed faster responses, especially for Notonecta, and more clearly at higher light levels.

FIGURE 2 | Membrane current and potential changes in a Periplaneta photoreceptor during 60 s naturalistic light stimulation (van der Schaaf and van Hateren, 1996). Light stimulus in the upper trace, with membrane current and potential in the middle and lower traces, respectively. This light level gave an estimated mean response of 620 ep/s. Experimental current and potential (black) are plotted with superimposed responses from the model of Figure 1 to the same stimulus (red), using the best-fitting parameters for these data. Note that the model data reproduce the experimental data well enough to obscure most of the underlying (black) plot.

Output Nonlinearity
The final static nonlinearity was modeled by a second-order polynomial function of the output from the log-normal filter (last component of Figure 1). Nonlinear functions are shown for the three species over the range of light levels, but with the full ranges of the input and output signals to each function normalized to unity, in order to show the effects of the nonlinearities (Figure 6). There was a clear general transition from positive, expansive functions at low light intensities to saturating, compressive functions as light intensity increased in both current and potential for all species. Negative and positive overshoots of the functions were presumably caused by the limited number of polynomial terms in the estimates, suggesting that the responses tend to exhibit threshold behavior at the lowest light intensities and strong saturation at the highest intensities.

FIGURE 3 | Parameters defining the initial gain change of the phototransduction model (first box of Figure 1). Amplitude, α, and time constant, η, of gain change are shown as functions of light level, estimated from photon counts, in Periplaneta photoreceptors. Numbers of experiments contributing to each data value were: 1, 6, 5, 2, and 1 for increasing light levels. Mean values of multiple experiments are shown, and standard deviations are shown for experiments with five and six estimates. Note that α is dimensionless because the conversion to current or potential was considered to occur in the final nonlinear stage of the model.

Information Capacity, Transfer, and Entropy
The photoreceptor models did not add uncorrelated or correlated noise to the transduced signal, which allowed some separation of the relative contributions of noise and nonlinearity to limiting information transmission by the experimental photoreceptors.
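The pair-substitution compression behind Equation (3) can be sketched directly, as below. This is an illustrative reimplementation, not the published multi-threaded C++ code; in particular, the stopping rule used here (halt when N·log₂M no longer decreases) is an assumption consistent with "until no further compression was achieved".

```python
# Hedged sketch of the Equation (3) entropy estimate by pair substitution.
import numpy as np
from collections import Counter

def pair_substitution_entropy(signal, levels=1024):
    s = np.asarray(signal, dtype=float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)   # normalize to [0, 1]
    msg = list(np.minimum((s * levels).astype(int), levels - 1))  # 10-bit symbols

    next_symbol = levels
    cost = len(msg) * np.log2(len(set(msg)))
    while len(msg) > 1:
        pairs = Counter(zip(msg, msg[1:]))
        (a, b), n = pairs.most_common(1)[0]           # most frequent pair
        if n < 2:
            break
        out, i = [], 0
        while i < len(msg):                           # replace every (a, b)
            if i + 1 < len(msg) and msg[i] == a and msg[i + 1] == b:
                out.append(next_symbol)
                i += 2
            else:
                out.append(msg[i])
                i += 1
        new_cost = len(out) * np.log2(len(set(out)))
        if new_cost >= cost:                          # no further compression
            break
        msg, cost, next_symbol = out, new_cost, next_symbol + 1

    N, M = len(msg), len(set(msg))
    return N * np.log2(M) / 10.0                      # Equation (3)
```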
Information capacities between the input naturalistic light stimulus and the output membrane current and membrane potential were estimated from the coherence function (Equation 2). Similar measurements were then made by feeding the same input sequence into the best-fitting model (Figure 1) for each recording. Mean values of these data are shown for the different light intensities used in the Periplaneta experiments (Figure 7, upper). Total signal entropy of the input time sequence, resulting membrane potential, membrane current, and corresponding model outputs was measured by data compression (Pfeiffer et al., 2012). Mean values of these data are also shown for the different intensities in the Periplaneta experiments (Figure 7, lower). Information capacities were low for the experimental data, both for membrane potential and current, with no definite trend vs. light intensity. The fitted models gave higher values, particularly at low intensities. Information capacity can be reduced by uncorrelated noise or by nonlinearity, but the models were purely parametric and did not add uncorrelated noise. Since a linear, noise-free system has infinite information capacity, it follows that the reduced capacity of the models was entirely due to nonlinearity. Signal entropy also increased at lower light intensities, for both experimental data and modeled responses, and approached the constant value for the input signal entropy in some cases. Input entropy was close to, but below, the maximum theoretical entropy that could be produced by this estimation technique (dashed line, Figure 7), indicating that the naturalistic signal exercised the receptors over their full response ranges.

DISCUSSION
The wide dynamic and intensity ranges of natural light stimulation require nonlinear compression and adaptation processes to avoid saturation and allow adequate signal-to-noise levels in the photoreceptor membrane potential fluctuations (Laughlin and Hardie, 1978; van Hateren, 1997; van Hateren and Snippe, 2001). Relatively simple linear (French, 1980) and nonlinear (French et al., 1993) models give reasonable simulation of controlled inputs such as white Gaussian noise and steps, but the present results show that several interacting nonlinear and linear processes may be necessary to explain complete photoreceptor transduction function. Although the log-normal model has been available for decades (Payne and Howard, 1981), this work describes the first application of the model to naturalistic data. Gain change in the early stages of insect eye transduction models has been described previously (Pece et al., 1990; Friederich et al., 2012), and is clearly justified by the form of the responses (Figure 2). Simple exponential reduction in gain provided good agreement with the experimental data, including the strong amplitude changes at the start of stimulation. Gain change was much stronger and faster at the highest light levels (Figure 3). More complex forms of initial nonlinearity have been suggested for insect phototransduction before, including changing dynamics in Locusta (Pece et al., 1990) and multiple time constants of change in Locusta (Laughlin and Hardie, 1978) and Drosophila (Friederich et al., 2012), but it would be difficult to justify the addition of more fitting parameters for the present Periplaneta data. The model fitted both membrane current and membrane potential.
Receptor current fluctuations cause receptor potential fluctuations via the membrane time constant plus any other ionic currents induced by the potential changes. Typical membrane time constants are much smaller than the time scales of the model (Figures 3, 4), and while parameter differences between current and potential, such as the log-normal fits, may reflect receptor physiology, there are not enough data to make statistically valid arguments. Error values were generally higher for current than potential fitting, which may reflect filtering of inherent noise by the membrane or different experimental noise.

Fitted Parameters
Suggested mechanisms of gain change in insect photoreceptors include optical phenomena, such as changes in the acceptance angle due to rhabdomere or screening pigment migration (Immonen et al., 2014a), changes in the phototransduction cascade itself, and membrane electrochemistry, particularly shunting (Laughlin, 1989). The time course of the gain change that we observed (up to 30 s; Figure 3) suggests a relatively slow process like pigment migration rather than more rapid membrane phenomena. Although the log-normal filter became faster at brighter levels (Figures 4, 5), these changes were not large, and might not even be statistically significant if more data were available. This relative constancy may reflect the wide dynamic and amplitude ranges of the naturalistic stimulus: since the model was required to fit the responses over the whole period of stimulation, it may therefore represent an average description of the photoreceptor dynamics over these wide stimulation ranges. In contrast to the slow initial gain change, the nonlinear function at the end of the model cascade was static. While some responses were approximately linear, we observed both expansive and compressive behavior as the light intensity increased. The apparent expansion may represent some form of threshold behavior at low light levels. Compressive saturation of electrical responses is well known in insect photoreceptors, with at least one mechanism being the shunting of transduction current by voltage-activated ion channels as the cell depolarizes (Weckström et al., 1988; French et al., 1993). Saturating nonlinearities in receptor current at higher light intensities (Figure 6) suggest that some nonlinearities occur before ion channels are opened. However, current data must always be treated with caution because of the difficulties of achieving accurate voltage clamp of cells with complex membrane geometry, such as photoreceptors, especially at higher current amplitudes. The present experiments used only a second-order approximation to the final nonlinearity, which limits its interpretation. Extension to higher order nonlinearities would be possible, but would require much longer experiments to justify the increased number of fitting parameters.

FIGURE 6 | Nonlinear functions representing the final stage of the photoreceptor model (Figure 1) for both receptor current and receptor potential. Axes were normalized to the output range of the filter function, as input, and the final current or potential range as output. Data are shown for single examples of the three species as functions of the input light intensity in effective photons per second, indicated on each curve. Note that these are only second-order polynomials, so output values exceeding the inputs in some cases are only approximations to the final nonlinearity, which would probably be reduced by higher order terms.
While hypotheses of possible links between fitted parameters and physicochemical processes are interesting and may suggest further experiments, it must be emphasized that the present mathematical models were not designed to emulate specific biological mechanisms.

FIGURE 7 | Measures of information transmission by photoreceptors transducing naturalistic stimulation. Upper: linear information capacity calculated from the coherence function between the input and output data for receptor current, receptor potential, and the respective models of current and potential for the Periplaneta receptors as a function of input light intensity. Lower: entropy rates in the photoreceptors measured by data compression for the same signals in the Periplaneta receptors. Numbers of experiments were: 1, 6, 5, 2, and 1 for increasing light levels. Mean values of multiple experiments are shown, and standard deviations are shown for experiments with five and six estimates. All other values represent fitted values to single experiments. Dashed line indicates the entropy rate of the input light signal. Solid upper line shows the maximum entropy rate that could be calculated by this method, corresponding to a uniform distribution of values over the same range.

Information Transmission by Photoreceptors
Linear information capacities of the experimental receptor current and receptor potential were lowest under the dimmest and brightest conditions (Figure 7). These results are not unexpected, because information capacity would be reduced by noise at the lowest light levels and by nonlinearity at the brightest levels. Similar maxima of information capacity at intermediate light intensities were found in the stick insect Carausius morosus (Frolov et al., 2012), the common backswimmer N. glauca (Immonen et al., 2014a), the water strider Gerris lacustris (Frolov and Weckström, 2014), and the lesser water boatman Corixa punctata (Frolov, 2015). This suggestion is also supported by the model values of information capacity. Since noise was absent from the models, information capacity was only limited by nonlinearity, which was maximal under the brightest conditions. Consequently, information was greatest at the lowest light intensity levels (Figure 7). Entropy measurements can include both transduced signal and uncorrelated noise, but they are not dependent on linearity. If the models of receptor current and receptor potential are assumed to represent real photoreceptor behavior, the higher values of entropy seen in the experimental measurements than in the model simulations (Figure 7) must represent contributions from uncorrelated noise. In this case, the additional noise in the real cells added about 20 Bits/s of entropy to the signal. A nonlinear dynamic system does not necessarily lose information as long as the receiving system is designed to receive a distorted version of the input signal. However, a nonlinear system can easily lose information that can never be recovered at the output. A trivial example would be a squaring operation that produces a positive output for both positive and negative inputs, so that information about input sign is irretrievably lost. Interpreting the entropy data on this basis indicates that the model transmitted about 80% of the input signal entropy at low light levels, when it was behaving approximately linearly, but lost at least 50% of the input entropy when it became more nonlinear at high light levels. Inspection of the raw data confirms this interpretation (Figure 2).
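The squaring example is easy to verify numerically: for a zero-mean Gaussian input, the cross-spectrum between x and x² vanishes, so the linear coherence, and with it the Equation (2) capacity, collapses even though the system is completely noise-free. A minimal check, with illustrative parameters only:

```python
# Noise-free squaring nonlinearity: input sign is lost, so linear coherence ~ 0.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(2)
x = rng.normal(size=65536)        # zero-mean Gaussian input
y = x ** 2                        # squaring operation, no added noise

f, gamma2 = coherence(x, y, fs=1.0, nperseg=1024)
print(f"mean coherence of y = x^2: {gamma2.mean():.3f}")   # close to zero
```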
While the average amplitude of the naturalistic stimulus remained constant, the amplitude of the photoreceptor response dropped sharply during the first few seconds. This nonlinear change means that a receiver of the photoreceptor output could not reliably recover the absolute amplitude of the input signal. Information about the amplitude of input signal fluctuation was permanently lost.

CONCLUSIONS
The three-stage nonlinear model of phototransduction was able to predict receptor current and receptor potential output to naturalistic light fluctuations with reasonable fidelity. Importantly, the model could account for the strong change in response that occurs in the first seconds of stimulation of a dark-adapted eye. Gain change probably occurs early in the process, possibly via screening pigment migration and feedback mechanisms such as Ca2+-dependent inhibition (Hardie and Minke, 1994; Song et al., 2012; Immonen et al., 2014a), and can be approximated by a simple exponential function of time. Other nonlinearities in the response are rapid, and probably include the effects of voltage-activated ion channels. The dynamic properties of the main transduction machinery can be well approximated by the log-normal model, but its basis remains unclear. While the nonlinear properties of photoreceptors cause a loss of information about the absolute level of light stimulation, the level of signal entropy transferred to the output suggests that estimates of information capacity are unrealistically pessimistic.

AUTHOR CONTRIBUTIONS
EI and RF performed the animal experiments. ASF constructed the model and performed the data analysis. All authors contributed to designing the project and writing the paper.
Calcium, Magnesium, and Nitrate in Drinking Water and Gastric Cancer Mortality

The possible association between the risk of gastric cancer and the levels of calcium, magnesium, and nitrate in drinking water from municipal supplies was investigated in a matched case-control study in Taiwan. Records of gastric cancer deaths among eligible residents in Taiwan from 1987 through 1991 were obtained from the Bureau of Vital Statistics of the Taiwan Provincial Department of Health. Controls were deaths from other causes and were pair-matched to the cases by sex, year-of-birth, and year-of-death. Each matched control was selected randomly from the set of possible controls for each case. Data on calcium, magnesium, and nitrate levels in drinking water throughout Taiwan were obtained from the Taiwan Water Supply Corporation. The municipality of residence of the cases and controls was assumed to be the source of the subject's calcium, magnesium, and nitrate exposure via drinking water. The subjects were divided into tertiles according to the levels of calcium, magnesium, and nitrate in their drinking water. The results of the present study show that there is a significant positive association between drinking water nitrate exposure and gastric cancer mortality. The present study also suggests that there was a significant protective effect of calcium intake from drinking water on the risk of gastric cancer. Magnesium also exerts a protective effect against gastric cancer, but only for the group with the highest levels.

In Taiwan, gastric cancer is the third leading cause of cancer mortality for males and the sixth for females. 1) The age-adjusted mortality rate for gastric cancer was 13.31 per 100,000 among males and 6.54 among females in 1993. There is substantial geographic variation in gastric cancer mortality within the country. 2) Such a geographic distribution may suggest an environmental risk factor. A hypothesis linking nitrate intake and gastric cancer was presented in 1975 3) and updated in 1988. 4) Many epidemiological studies have indicated an association between nitrate levels of drinking water and mortality from stomach cancer. [5][6][7][8][9][10] Hardness in drinking water has also been suspected to be associated with stomach cancer. [11][12][13] Animal studies indicate that salt-induced damage to the gastric mucosa might be inhibited by increased intake of calcium. [14][15][16] A recent analytical epidemiologic study also found a possible protective effect of calcium against stomach cancer. 17) There are two biologically plausible mechanisms by which magnesium could prevent carcinogenesis. Intracellular magnesium may enhance the fidelity of DNA replication, or magnesium on the cell membrane may prevent changes which trigger the carcinogenic process. 18) The hardness of drinking water is determined largely by its content of calcium and magnesium. It is expressed as the equivalent amount of calcium carbonate that could be formed from the calcium and magnesium in solution. In previous studies, however, calcium and magnesium data were not available. The objective of this study was to evaluate the risk of gastric cancer associated with calcium, magnesium, and nitrate exposure in drinking water from municipal supplies in Taiwan.

MATERIALS AND METHODS
Taiwan is divided into 361 administrative districts, which will be referred to herein as municipalities. They are the units that will be subjected to statistical analysis.
Excluded from the analysis were 30 aboriginal townships and 9 islets which have different life-styles and living environments. This elimination of unsuitable municipalities left 322 municipalities for the analysis. Data on all deaths of Taiwan residents from 1987 through 1991 were obtained from the Bureau of Vital Statistics of the Taiwan Provincial Department of Health, which is in charge of the death registration system in Taiwan. For each death, detailed demographic information, including sex, year of birth, year of death, cause of death, place of death (municipality), and residential district (municipality), was recorded on computer tapes. The case group consisted of all eligible gastric cancer deaths (International Classification of Diseases, ninth revision [ICD-9], code 151). A control group was formed using all other deaths, excluding those deaths which were associated with gastrointestinal problems (i.e., malignant neoplasm of small intestine (ICD-9 codes 152-154), gastric ulcer (ICD-9 code 531), duodenal ulcer (ICD-9 code 532), peptic ulcer, site unspecified (ICD-9 code 533), gastrojejunal ulcer (ICD-9 code 534), and gastrointestinal hemorrhage (ICD-9 code 578)). Subjects who died from prostate, 19) bladder, 19,20) lung, 21) esophageal, 22,23) and head and neck 24,25) cancer were also excluded from the control group because of previously reported associations with nitrate or N-nitroso compound exposures. Subjects who died from cardiovascular and cerebrovascular diseases [26][27][28][29] were also excluded from the control group because of previously reported associations with hardness levels (calcium and magnesium) in drinking water. Control subjects were pair-matched to the cases by sex, year of birth, and year of death. Each matched control was selected randomly from the set of possible controls for each case. Each case and its matched control had residence and place-of-death in the same municipality. Information on the levels of calcium, magnesium, and nitrate-nitrogen (NO3-N) in each municipality's treated drinking water supply was obtained from the Taiwan Water Supply Corporation, 30) to whom each waterworks is required to submit drinking water quality data, including the levels of calcium, magnesium, and nitrate. Four finished water samples, one for each season, were collected from each waterworks. The samples were analyzed by the waterworks laboratory office using standard methods. Since the laboratory office examines calcium, magnesium, and nitrate levels on a routine basis using standard methods, it was thought that the problem of analytical variability was minimal. Among the 322 municipalities, 70 were excluded as they were supplied by more than one waterworks and the exact population served by each waterworks could not be determined. The details were given in earlier publications. 29,31) The final data set consisted of drinking water quality data from 252 municipalities. Hardness (calcium and magnesium) remains reasonably constant for long periods of time and is a quite stable characteristic of a municipality's water supply. 32) Data collected included the annual mean levels of calcium, magnesium, and nitrate for the year 1990. The municipality of residence for all cases and controls was identified from the death certificate and was assumed to be the source of the subject's calcium, magnesium, and nitrate exposure via drinking water.
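The matched-control selection described above can be sketched as follows. This is a reconstruction for illustration only, not the authors' procedure in code; the DataFrame columns (sex, birth_year, death_year) are hypothetical stand-ins for the death-certificate fields.

```python
# Hedged sketch of 1:1 matched-control selection; column names are hypothetical.
import pandas as pd

def select_matched_controls(cases, pool, seed=0):
    """For each case, draw one random control matched on sex, year of
    birth, and year of death, sampling controls without replacement."""
    picks = []
    pool = pool.copy()
    for _, case in cases.iterrows():
        candidates = pool[(pool.sex == case.sex)
                          & (pool.birth_year == case.birth_year)
                          & (pool.death_year == case.death_year)]
        if candidates.empty:
            continue                              # case left unmatched
        pick = candidates.sample(1, random_state=seed)
        picks.append(pick)
        pool = pool.drop(pick.index)              # without replacement
    return pd.concat(picks) if picks else pool.iloc[0:0]
```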
The levels of calcium, magnesium, and nitrate of that municipality were used as the indicator of that individual's exposure to those substances. In the analysis, the subjects were divided into tertiles according to the levels of calcium, magnesium, and nitrate in drinking water. Conditional logistic regression was used to estimate the relative risk in relation to the nitrate levels in drinking water. Odds ratios and their 95% confidence intervals (95% CIs) were calculated using the group with the lowest exposure as the reference group. 33) Coefficients with P values <0.05 were considered statistically significant.

RESULTS
A total of 6766 gastric cancer cases with complete records was collected for the period of 1987-1991. Of the 6766 cases, 4480 were males and 2286 were females. The mean nitrate concentration for the gastric cancer cases (n=6766) was 0.45 mg/liter NO3-N (SD=0.43). Controls (n=6766) had a mean NO3-N exposure of 0.44 mg/liter (SD=0.44). The mean calcium concentration for the gastric cancer cases was 30.4 mg/liter (SD=19.2). Controls had a mean calcium exposure of 34.3 mg/liter (SD=19.0). The mean magnesium concentration for the gastric cancer cases was 10.2 mg/liter (SD=7.2). Controls had a mean magnesium exposure of 11.2 mg/liter (SD=7.5). Both cases and controls had a mean age of 65.2. Cases lived in municipalities in which 89.8% of the population was served by a waterworks. For controls this number was 89.4%. Cases had a slightly higher rate (42.0%) of living in metropolitan municipalities than the controls (36.9%) (Table I). The urbanization level of each municipality was based on the urban-rural classification scheme of Tzeng and Wu. 52) Table II shows the numbers of cases and controls and the odds ratios in relation to nitrate levels in their drinking water. The odds ratios for death from gastric cancer were not significantly lower or higher for the two groups with high levels of nitrate in the drinking water. However, when adjustments were made for possible confounders, the odds ratios were significantly higher. The adjusted odds ratios (95% CI) were 1.10 (1.00-1.20) for the group with water nitrate levels between 0.23 and 0.44 mg/liter and 1.14 (1.04-1.25) for the group with nitrate levels of 0.45 mg/liter or more when compared to the group with the lowest levels. Table III shows the numbers of cases and controls and odds ratios in relation to calcium levels in their drinking water. The odds ratios for death from gastric cancer were significantly lower for the two groups with high levels of calcium in the drinking water. Adjustments for possible confounders only slightly altered the odds ratios. The adjusted odds ratios (95% CI) were 0.77 (0.69-0.88) for the group with water calcium levels between 22.0 and 38.7 mg/liter and 0.70 (0.62-0.80) for the group with calcium levels of 39.5 mg/liter or more. The odds ratios in relation to magnesium levels in drinking water are shown in Table IV. These odds ratios were also significantly lower than 1 for the higher magnesium levels, but when adjustment was made for possible confounders only the group with the highest levels (≥11.8 mg/liter) had a significantly lower odds ratio (0.86, 95% CI 0.76-0.98).
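The published analysis used conditional logistic regression over exposure tertiles. As a simpler worked illustration of the matched design, note that for a 1:1 matched study with a binary exposure (say, highest vs. lowest tertile) the conditional maximum-likelihood odds ratio reduces to the ratio of discordant pairs, b/c, a standard textbook result. The pair counts below are hypothetical, not the Taiwan data.

```python
# Matched-pair odds ratio with a Wald 95% CI; the pair counts are hypothetical.
import math

b = 130   # discordant pairs: case exposed, control unexposed
c = 100   # discordant pairs: case unexposed, control exposed

or_hat = b / c                                  # conditional ML estimate
se = math.sqrt(1.0 / b + 1.0 / c)               # SE of the log odds ratio
lo = math.exp(math.log(or_hat) - 1.96 * se)
hi = math.exp(math.log(or_hat) + 1.96 * se)
print(f"matched-pair OR = {or_hat:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```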
DISCUSSION
We have used a death certificate-based case-control study and a drinking water quality ecology study to examine the relationship between gastric cancer mortality and calcium, magnesium, and nitrate exposure from drinking water in Taiwan. The results of the present study show that there is a significant positive association between drinking water nitrate exposure and gastric cancer mortality, and that there is a significant protective effect of calcium intake from drinking water on the risk of gastric cancer. Magnesium also appears to have a protective effect against gastric cancer when the groups with the highest vs. the lowest tertiles of intake are compared. This study employed methodology similar to that used in our previous study. 31) The results of our previous study showed that calcium, but not magnesium, intake from drinking water has a significant protective effect against colon cancer. Despite their inherent limitations, 34) studies on the ecological correlation between mortality and environmental exposures have been used widely to generate or discredit epidemiological hypotheses. The completeness and accuracy of a death registration system should be evaluated before any conclusion based on mortality analysis is made. In the event of death in Taiwan, the decedent's family is required to obtain a death certificate from the hospital or local community clinic, which then must be submitted to the household registration office in order to cancel the decedent's household registration. The death certificate is required in order to have the decedent's body buried or cremated. Since the death certificates have to be completed by physicians and it is mandatory to register death certificates at the local household registration offices, and since the household registration information is verified annually through a door-to-door survey, the death registration in Taiwan is very complete. Although causes of death may be misdiagnosed and/or misclassified, the problem has been minimized through the improvement in the verification and classification of causes of death in Taiwan since 1972. Furthermore, as in other countries, 35) malignant neoplasms, including gastric cancer, have been reported to be one of the most unequivocally classified causes of death in Taiwan. 36) Because of its fatal outcome, it is believed that all gastric cancer cases from rural or urban areas in Taiwan have had access to medical care, regardless of geographic location, in recent years. Of greater concern is whether the relative levels of calcium, magnesium, and nitrate in the period around 1990 correspond to the relative levels in periods 10-20 years previously. This is important since it is likely that exposure to causal factors would precede cancer mortality by at least 20 years (the latency period for carcinogen exposure). Some information on historical levels of nitrate and hardness was available for the study areas in 1980. The correlations between 1980 and 1990 nitrate and hardness levels for the study areas were reasonably high (r=0.86 and 0.85, respectively). Nitrate and hardness data were supplied by the Water Quality Research Center of the Taiwan Water Supply Corporation, which conducts routine water analyses to assess the suitability of water for drinking, both at the sources and at various points in the distribution system. Also, the waterworks in each municipality received a questionnaire requesting information on whether any changes had occurred in the water supply or the treatment of the water in the past. No municipalities were excluded because of changes in water quality (e.g., the use of water softeners) during the past few decades.
We, therefore, assumed that drinking water nitrate and hardness levels in 1990 were a reasonable indicator of historical levels. Migration from a municipality of high nitrate and hardness exposure to one of low calcium, magnesium, and nitrate exposure, or vice versa, could have introduced misclassification bias and bias in the odds ratio estimate. 37,38) However, migrant studies have indicated that susceptibility to gastric cancer is strongly related to place of birth (early life exposures), and much less to place of later residence. 39) It is unfortunate that place-of-birth information was not available for the data set, and the use of the place-of-death information as the surrogate measure probably introduces bias to some extent. The individuals included in the present study were subjects whose residence and place-of-death were in the same municipality. In the event of a death in Taiwan, there is a social custom that the decedent's family always considers the death to have occurred in the municipality where the person was born. Therefore, the decedent's residence, place-of-birth, and place-of-death are likely to be listed as the same municipality. We believe that this ameliorates the migration problem. Also, gastric cancer is a disease of old age, and it is assumed that the elderly are likely to remain in the same residence during the last 20 years of their life. 40) The principal sources of dietary nitrate are drinking water and foods. 41,42) The hypothesis that high nitrate ingestion may increase the risk of gastric cancer has led to concern over rising levels of nitrate in drinking water, but with little consideration as to whether nitrate in water makes a major contribution to total nitrate intake. A previous study has indicated that when the concentration of waterborne nitrate is high, drinking water contributes substantially to total nitrate intake, 43) and the potential for nitrite and N-nitroso compound formation may be increased. There are no available data for assessing the diet of the individual subjects in the present study; however, based on findings from a study by Chilvers et al., 43) we assumed that water is an important consideration in determining environmental exposure to nitrate. There has recently been public concern over possible nitrate contamination in public water supplies in Taiwan, due principally to the increasing use of inorganic fertilizers in areas of arable farming. This makes it pertinent to examine the available evidence for an association between drinking water nitrate ingestion and gastric cancer. Our study provides evidence to support the hypothesis that there is a positive association between drinking water nitrate levels and gastric cancer. The nitrate concentration in drinking water in Taiwan is below the guideline value of 10 mg/liter recommended by the World Health Organization. 44) However, there is no scientific evidence to justify firm conclusions about the safety of any concentration of nitrate in water with regard to gastric cancer risk. Forman 45) notes that although environmental nitrate exposure probably plays a role in the development of gastric cancer, it may not serve as a rate-limiting factor. Our finding of a significant protective effect of calcium intake from drinking water agrees with three past studies which were ecologic in design and which reported positive associations between gastric cancer mortality and the use of soft water. [11][12][13]
These studies [11][12][13] reported only correlation coefficients and not risk estimates as a function of exposure. The hardness of drinking water is determined largely by its content of calcium and magnesium. It is expressed as the equivalent amount of calcium carbonate that could be formed from the calcium and magnesium in solution. In these studies, however, data on calcium and magnesium levels were not available. Our study used a case-control approach based on death certificate records. Exposure was defined in this study as the calcium and magnesium levels of the drinking water source serving the address listed on the death certificate. Lee et al. 46) reported a mean daily intake of 507 mg calcium through food in Taiwan. This figure is only 81.9% of the recommended daily intake. One may hypothesize that waterborne calcium can make an important contribution to the total daily intake for subjects with insufficient calcium intake. The mean calcium concentration in drinking water of Taiwan is 32.4 mg/liter. This figure would contribute, on average, 12.8% to an individual's total dietary calcium intake, given a daily consumption of 2 liters of water. In the general population, the major portion of magnesium intake is via food, and to a lesser extent via drinking water (in Sweden, generally less than 5 percent is from drinking water). 28) There are no available data for assessing the percentage that drinking water contributes to the total magnesium intake in the present study. Nonetheless, in the modern-day world, intake of dietary magnesium is often lower than the recommended dietary amount of 6 mg/kg/day. 47) For individuals at the borderline of magnesium deficiency, waterborne magnesium can make an important contribution to their total intake. In addition, the loss of magnesium from food is lower when the food is cooked in magnesium-rich water. 48) The contribution of water magnesium among persons who use water with high magnesium levels could thus be crucial in the prevention of magnesium deficiency. The fact that a significant protective effect of magnesium intake via drinking water was found only in the group with the highest levels of intake suggests that only subjects with magnesium intake via drinking water above a certain level receive a beneficial effect on their risk of stomach cancer. Another reason why both calcium and magnesium in water can play a critical role is their higher bioavailability. Magnesium appears as hydrated ions in water and is therefore more easily absorbed from water than from food, and the situation may be similar for calcium. 49,50) There are a number of major risk factors for gastric cancer in Taiwan, including cigarette smoking and consumption of alcohol, green tea, salted or cured meat, smoked or fried food, and fermented beans, 51) which should be taken into account when investigating the possible role of drinking water quality. These risk factors represent possibly important confounders in the present study. There is unfortunately no information available on these variables for individual study subjects, and they could not be adjusted for directly in the analysis. However, there is no reason to believe that there would be any correlation between these confounders and the levels of nitrate, calcium, and magnesium of the water. 27)
Also, if the association between these potential confounders on the one hand and stomach cancer risk on the other is not as strong as the one that has been observed for nitrate, calcium, and magnesium, adjustment for these variables will not qualitatively change the conclusion. In conclusion, this study supports the hypothesis that there is a positive association between levels of nitrate in drinking water and mortality from stomach cancer. The present study also suggests that there is a significant protective effect of calcium and magnesium intake from drinking water on the risk of gastric cancer. Our study appears to be the first investigation to report a possible protective effect of calcium and magnesium intake via drinking water against stomach cancer. Future studies should investigate the individual's intake of calcium, magnesium, and nitrate, both via food and water, and control for confounding factors, especially personal risk factors such as smoking, alcohol use, green tea drinking, and dietary habits.

ACKNOWLEDGMENTS
This study was partly supported by a grant from the National Science Council, Executive Yuan, Taiwan (NSC-86-2314-B-037-089).
Antarctic Ardley Island terrace — An ideal place to study the marine to terrestrial succession of microbial communities

The study of chronosequences is an effective tool to study the effects of environmental changes or disturbances on microbial community structures, diversity, and the functional properties of ecosystems. Here, we conduct a chronosequence study on the Ardley Island coastal terrace of the Fildes Peninsula, Maritime Antarctica. The results revealed that prokaryotic microorganism communities changed in an orderly manner across the six successional stages. Some marine microbial groups could still be found in near-coastal soils of the late stage (lowest stratum). Animal pathogenic bacteria and stress-resistant microorganisms occurred at the greatest level with the longest succession period. The main driving factors for the succession of bacteria, archaea, and fungi along the Ardley Island terrace were identified through Adonis analysis (PERMANOVA). In this analysis, the soil elements Mg, Si, and Na were related to the bacterial and archaeal community structure discrepancies, while Al, Ti, K, and Cl were related to the fungal community structure discrepancies. On the other hand, other environmental factors also play an important role in the succession of microbial communities, and these can differ among microorganisms. The succession of bacterial communities is greatly affected by pH and water content; archaeal communities are greatly affected by NH4+; fungal communities are affected by nutrients such as NO3−. In the analysis of the characteristic microorganisms along the terrace, the succession of microorganisms was found to be influenced by complex and comprehensive factors, for instance environmental instability, relationships with plants and ecological niches, and environmental tolerance. The results showed that bacteria that reproduce by budding and/or bear filamentous appendages were enriched in the late stage, which might be connected to their tolerance of rapidly changing and barren environments. In addition, the decline in the ammonia oxidation capacity of Thaumarchaeota archaea with succession and the evolution of the fungi-plant relationship throughout classes were revealed. Overall, this research improves the understanding of the effect of the marine-to-terrestrial transition of the Ardley Island terrace on microbial communities. These findings will lay the foundation for more in-depth research regarding microbial adaptations and evolutionary mechanisms throughout the marine-terrestrial transition in the future.

Introduction
In the past, microbial diversities from various chronosequences were studied using different techniques, such as traditional culture methods and advanced techniques focusing on genetic, structural, and functional diversity. The study of the marine-terrestrial transition of the microbial community is one of the best ways to study microbial evolution (Dini-Andreote et al., 2016). However, few representative research areas serve as ideal places to study the marine-terrestrial transition of microbial communities. The studies on the marine-terrestrial transition of microbial communities have mainly focused on salt marsh chronosequences. The salt marsh ecosystem at the island of Schiermonnikoog (The Netherlands) has been formed through sand accumulation and progressive sedimentation of silt and clay particles, which resulted from cyclic tidal inundation.
However, the salinity increased in the late stages, owing to an accretion and accumulation effect (Dini-Andreote et al., 2014). This increase in salt content along the salt marsh chronosequence differs from the decrease in salt content during the Antarctic marine-terrestrial transition. Unlike other ecosystems, the Antarctic environment is harsh, has slow biological growth, and has little human interference. The Fildes Region, including the Fildes Peninsula, Ardley Island, and adjacent islands, represents one of the largest ice-free areas in the maritime Antarctic. Ardley Island terrace is located on the Fildes Peninsula, King George Island, Antarctica (Figures 1A,B). It has been identified as a raised beach sequence (Palaeobeaches; Boy et al., 2016). The spatial strata of Ardley Island terrace are distinctly separated (Figure 1C). The island's relief is relatively low, with the highest elevation at 65 m altitude. In geomorphological terms, the area comprises mainly tertiary andesitic-basaltic lavas and tuffs, together with raised beach terraces (Management Plan for Antarctic Specially Protected Area No. 150, 2009). In the middle and late Holocene, Ardley Island was almost completely submerged by seawater due to global warming effects, including the melting ice sheet and rising sea levels. At the same time, isostatic uplift due to the melting and weight reduction of the ice sheet has resulted in the formation of chronosequences of raised beaches along the coastal areas (Ingólfsson et al., 1998; Michel et al., 2014). In our previous research on the soil microbial community in Fildes Peninsula, Antarctica, we found that the microbial community in this area was affected by geological events. The microorganisms in the coastal uplifted chronosequence were significantly different from those in other areas and were still in an unstable change stage (Zhang et al., 2018). In this area, the coastal uplifted chronosequence of Ardley Island terrace presents a very complete sequence of six spatial strata. The activities of marine animals, along with glacial activities and past sea-level changes, played a key role in the formation and development of terrace soil during the formation of Ardley Island terrace (Bölter, 2011). Each spatial stage along the Ardley Island coastal uplifted chronosequence was dated, and it was found that the entire terrace spans an age range of 200 to 7,200 years BP (Boy et al., 2016). Although there is no obvious succession of vegetation in each stage of these terraces, soil development, especially the enrichment of organic matter, is more advanced. The Ardley Island terrace has little interference from plants, animals, and people. There is no interference from the penguin colonies. The breeding colonies of penguins are located on the eastern side, while the coastal uplifted sequence (Palaeobeaches) is located on the western side of Ardley Island. Furthermore, because of the clear geological age of the Ardley Island terrace, we chose it as a research site for investigating the microbial communities in the soil along the terrace. We hope to explore the successional pattern of the microbial community during the marine-to-terrestrial transition, with the increase of altitude corresponding to the temporal gradients of coastal uplift, and to lay the foundation for studying the evolution of microorganisms during the marine-to-terrestrial transition. In this study, soil microbial communities of the Ardley Island coastal uplifted chronosequence were studied based on the geological background.
The succession of microbial communities in the terraces formed at different time gradients was analyzed. In addition, the course of change of microbial communities after the marine-to-terrestrial transition, as well as the geological phenomena and evolutionary history that may be reflected by these changes, were explored.

Soil sampling site
The Ardley Island terrace, a typical marine erosion uplift terrace, is located on the Fildes Peninsula, King George Island, Antarctica (Figure 1). The spatial strata of Ardley Island terrace are distinctly separated into six successional stages (S0-S5; Table 1). The late stratum, stage S0, is covered with approximately 95% vegetation, including two to three layers of the lichen Usnea sp., a black moss, and encrusted lichens (Boy et al., 2016). The distance from S0 to the coast is approximately 15 meters. Most of the area of stage S0 (around 80%) was covered by Usnea fasciata. Besides, other vegetation consisted of the green cushion-moss (Chorisodontium sp.), the yellow-green bryophyte (Sanionia uncinata), the black moss (Andreaea sp., Cladonia borealis, Himantormia lugubris), and various other encrusted lichens. Stage S1 is covered with 95% vegetation. The dominating species, the lichen Usnea fasciata, covers around 70% of the area. Moreover, other vegetation consisting of Chorisodontium sp., Andreaea sp., Himantormia lugubris, Cornicularia aculeata, and other encrusted lichens was also detected in this area. Stage S2 is covered by 100% vegetation, with Usnea fasciata as the dominating species over 90% of the area. Stages S3, S4, and S5 were covered by 100% vegetation and were comparable regarding vegetation composition to the other stages on the Ardley Island terrace. The organic matter enrichment in stages S4 and S5 is more advanced. A total of 30 soil samples were collected, 10 g at each sampling site. Each stage included five sampling sites spanning approximately 0.6-1 km. The distance between adjacent sites was approximately 100-200 m. Each sample had a surface area of 1 × 1 square meter and was collected in triplicate from the A-horizon (10 cm). Soil samples collected for each replicate were taken from five soil cores (5 cm in diameter), mixed thoroughly, then placed in sterile plastic bags. Soil DNA was extracted within 2 h in the laboratory of the Great Wall Station. The remaining soil samples were dried naturally, ground, and passed through 100-mesh sieves for the subsequent determination of soil parameters.

Determination of soil elemental compositions by X-ray fluorescence spectrometry
Soil samples were dried at 105°C for 6 h, then ground to powder. The soil powder was pressed in a 45 mm diameter bore steel die under an approximately 20 t hydraulic press. Each soil sample was formed into a stable pressed pellet (45 mm diameter, 10 mm height), and then analyzed within a few hours. The soil elemental compositions were determined using X-ray fluorescence spectrometry (Bruker AXS, Germany) with a standardless quantitative analysis method (Handley et al., 2010).

Soil physicochemical parameter measurements
To measure soil pH, 5 g of each soil sample was suspended in 10 ml of deionized water. The pH of each soil suspension was measured using a pH meter (Mettler-Toledo, Switzerland). The direct gravimetric method was used to determine the soil moisture content. The weight loss of each soil sample was calculated after drying the soil sample at 105°C until it reached a constant weight.
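As a quick arithmetic sketch of the gravimetric determination (the masses are hypothetical, and expressing the loss per dry mass rather than per wet mass is an assumption, since the basis is not stated above):

```python
# Gravimetric soil moisture from weight loss on drying; values are hypothetical.
wet_mass = 10.00   # g, field-moist sample before drying
dry_mass = 7.40    # g, constant weight after drying at 105 degrees C
moisture_pct = 100.0 * (wet_mass - dry_mass) / dry_mass   # dry-mass basis
print(f"gravimetric water content: {moisture_pct:.1f}%")  # 35.1%
```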
Total organic carbon (TOC) was determined using a TOC analyzer (Vario TOC, Elementar, Germany). Soil NH4+ and NO3− content was measured by extraction of 10 g of soil sample with 50 ml of KCl (2 mol/l) solution at 25°C for 1 h. The soil solution mixture was then centrifuged at 3000 g for 5 min. After centrifugation, the clear supernatant was passed through a 0.45-μm filter (Millipore, type GP). Subsequently, the filtrate was analyzed by a continuous flow analyzer (FIAstar™ 5000 Analyzer, Foss, Denmark).

Soil DNA extraction, PCR, and Illumina MiSeq high-throughput sequencing
Total soil DNA was extracted within 2 h in the laboratory of the Great Wall Station using the DNeasy PowerSoil DNA Isolation Kit (Mo …).

Statistical analyses
Before analysis, the raw sequencing data were demultiplexed and processed using Quantitative Insights Into Microbial Ecology (QIIME) v. 1.8.0 (Boulder, CO, United States) to remove low-quality short reads (<150 bp), long homopolymer runs (>8 bp), and low-quality base sequences. To obtain high-quality, clean reads, chimeric sequences were identified using USEARCH v. 5.2.236 (http://www.drive5.com/usearch/) and then removed. The quality reads were binned into operational taxonomic units (OTUs) at 97% sequence similarity using UCLUST, followed by the selection of a representative sequence for each OTU. The OTUs with a sequence number less than 0.001% of the total sequence number were eliminated (Bokulich et al., 2013). The representative sequence for each OTU was aligned to bacterial and archaeal taxa based on the SILVA and Greengenes ribosomal RNA databases, and to fungal taxa based on the UNITE database (DeSantis et al., 2006; Pruesse et al., 2007; Quast et al., 2013). To evaluate the alpha diversity of bacteria, archaea, and fungi, operational taxonomic unit (OTU) analyses were carried out with the Sobs and Shannon indices. The analyses were accomplished using the Mothur v. 1.30.2 software package (Schloss et al., 2009). To analyze the relationships between soil elemental compositions and environmental attributes in the soil samples, principal component analysis (PCA) was performed using R v. 3.3.1 statistical software. In addition, the one-way analysis of variance (ANOVA) method was used in R v. 3.3.1 to test for significant differences among environmental factors. The ANalysis Of SIMilarities (ANOSIM; Clarke, 1993) as well as the non-parametric Adonis test (Anderson, 2001) with 999 permutations were conducted to compare the differences among the microbial communities in different strata. The Bray-Curtis distance was used to obtain the dissimilarity matrices in the permutational multivariate analysis of variance (PERMANOVA) test for microbial OTU data, which had been normalized by dividing the reads per OTU in a sample by the sum of usable reads in that sample (relative abundances), where an OTU absent from a sample was coded as state 0. In the analysis of the microbial community in the terrace of the Fildes Peninsula, the method of distance-based redundancy analysis (db-RDA) was used to ordinate the microbial communities based on soil parameters, and the Bray-Curtis distance between communities was analyzed in R v. 3.3.1. In addition, Linear Discriminant Analysis (LDA) Effect Size (LEfSe) was used to identify the taxonomic biomarkers in each stratum of the Ardley Island terrace using relative abundances (Segata et al., 2011; Paulson et al., 2013).
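The alpha-diversity indices and the Bray-Curtis dissimilarities that feed the ANOSIM/PERMANOVA tests can be sketched as below. The paper used Mothur and R; this numpy/scipy version and the small OTU count table are illustrative only.

```python
# Hedged sketch of Sobs, Shannon, and Bray-Curtis on a hypothetical OTU table.
import numpy as np
from scipy.spatial.distance import pdist, squareform

counts = np.array([[120, 30, 0, 5],      # rows = samples, columns = OTUs
                   [100, 45, 2, 0],
                   [10, 80, 60, 15]], dtype=float)

sobs = (counts > 0).sum(axis=1)                    # observed richness (Sobs)
p = counts / counts.sum(axis=1, keepdims=True)     # relative abundances
shannon = -np.sum(p * np.log(np.where(p > 0, p, 1.0)), axis=1)  # Shannon index

bray = squareform(pdist(p, metric="braycurtis"))   # pairwise dissimilarity matrix
print(sobs, shannon.round(2), bray.round(3), sep="\n")
```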
In the LEfSe analysis, the alpha parameter significance threshold for the Kruskal-Wallis (KW) test among classes was set to 0.05. The threshold on the logarithmic score of the LDA analysis was set to 3.0. The analysis was processed with the Galaxy platform developed by Harvard University (Afgan et al., 2018). Prediction of high-level bacterial phenotypic traits was carried out through BugBase (Ward et al., 2017) using the Greengenes-annotated biom files. The prediction of ecologically relevant functions of microbial taxa was carried out via a promising tool, FAPROTAX (Louca et al., 2016).

The physicochemical properties of soil samples along the Ardley Island terrace are shown in Table 2. In Table 2, values with different superscripts indicate a significant difference (p < 0.05), values sharing the same superscript letter are not significantly different, and values in parentheses indicate standard deviations. The ANOVA analysis revealed a highly significant difference in soil moisture contents and pH between the two early successional stages (stages S4 and S5) and the late stages (stages S0, S1, S2, and S3). The S0 stage revealed lower moisture contents, total organic carbon (TOC), and soil NH4+ than the other stages. On the other hand, soil pH values were high in stage S0 and lower in the other stages.

Differences in bacterial diversity among successional stages
The high-throughput sequencing results revealed that among bacterial communities across the chronosequence, the dominant bacterial phyla were Proteobacteria, Acidobacteria, Chloroflexi, Actinobacteria, Planctomycetes, and Gemmatimonadetes (average content >5%). The abundance of Actinobacteria, Gemmatimonadetes, Nitrospirae, and Cyanobacteria was found to be significantly greater in the S0 stage of the terrace than in the other strata. The abundance of Nitrospirae gradually decreased as the terrace increased in altitude. In the S5 stage, the abundance of Proteobacteria and Planctomycetes was significantly less than in other strata. However, they were most abundant in the middle S3 stage (Supplementary Figure 3). BugBase was used to predict high-level bacterial phenotypic traits. It was found that the content of Gram-positive bacteria and motility factors in the S0 stage was much higher than in other strata. In the S5 stage, anaerobic and stress-tolerant bacteria were much less abundant than in other strata. Gram-negative bacteria and aerobic bacteria steadily increased as the successional stage rose. The ability to cope with stress and the potential to cause disease increased and subsequently decreased, indicating a hump-shaped pattern (Supplementary Figure 5A). Ecologically relevant functions were predicted with FAPROTAX. As shown in Supplementary Figure 5B, methanotrophy, hydrocarbon_degradation, methylotrophy, aromatic_compound_degradation, nitrate_reduction, aerobic_nitrite_oxidation, nitrification, sulfate_respiration, predatory_or_exoparasitic, and chloroplasts in the S0 stratum were significantly higher than in other strata. The S5 stage had significantly higher animal_parasites_or_symbionts, human pathogens, and cellulolysis than other strata. Across the successional stages, photoheterotrophy and phototrophy first increased and subsequently decreased.
Nitrate_reduction, aerobic_nitrite_oxidation, and nitrification showed a decreasing trend with increasing stage, whereas animal_parasites_or_symbionts, nitrogen_fixation, cellulolysis, and chemoheterotrophy showed an increasing trend as the stage increased. These function predictions are consistent with the unique bacterial characteristics of each stage in the LEfSe analysis.

Differences in archaeal diversity among successional stages
High-throughput sequencing results showed that the terrace soil contained Thaumarchaeota, Euryarchaeota, and Crenarchaeota. Thaumarchaeota was the dominant archaeal phylum in all stages. Euryarchaeota had a higher abundance in the late and early stages. Crenarchaeota had a substantially larger abundance in the middle stage (Supplementary Figure 6). According to the LEfSe multi-level species difference discriminant analysis, Candidatus_Nitrosocosmicus, Thermoplasmata, and a large number of Marine_Group_II archaea were enriched in stage S0 (Supplementary Figure 7). Nitrososphaeraceae, unclassified Crenarchaeota, and unclassified Thaumarchaeota were enriched in the middle S1, S2, and S4 stages. In the highest S5 stage, Group_1_1c and Euryarchaeota were more abundant.

Differences in fungal diversity among successional stages
High-throughput sequencing results show that terrace soils mainly contain Ascomycota (47-81%) and Basidiomycota (5.9-7.7%). However, there was no significant difference in the content of other fungi among stages, except for Glomeromycota, which was significantly less abundant in the S1 stage than in other strata (Supplementary Figure 8). LEfSe analysis revealed that Rhizoctonia and unclassified genera of Verrucariaceae and Hypocreaceae fungi were enriched in the bottom S0 stratum (Supplementary Figure 9). Among the middle stages, only the S3 stratum had characteristic fungal groups, including Nectriaceae, Yarrowia, an unclassified genus of Didymellaceae, Hannaella, Glomeromycota, Entrophospora, Diversisporales, and Glomeromycetes. The highest S5 stage contained the characteristic fungal groups Capnodiales, Chaetothyriales, and Herpotrichiellaceae.

Analysis of alpha diversity indices
In this study, the Sobs index and Shannon index were used to reflect the richness and diversity of microbial communities in different successional stages. As demonstrated in Figure 3, the late stage S0 has much higher bacterial and fungal species richness (Sobs) and diversity (Shannon) than the other stages. However, the uppermost stratum S5 has significantly less bacterial species richness and diversity than the other strata. On the other hand, the richness and diversity of archaea in the S0 and S1 stages are lower than in other strata.

ANOSIM analysis
The ANOSIM/Adonis analysis (Figure 4) showed that the inter-group distances of the bacterial and archaeal communities were significantly greater than the intra-group distances (R-values were 0.60 and 0.67, respectively, and both p-values were 0.001), while the fungal communities were very heterogeneous within groups (R-value was 0.07, p-value was 0.13 > 0.05). The distribution of fungi in each stratum is quite uneven. In addition, the intra-group distances are greater than the inter-group distances.

Correlation analysis between environmental factors and microbial communities
Db-RDA analyses were performed using these selected environmental factors. The communities of bacteria and archaea in the late stratum, as illustrated in Figure 5, are different from other strata.
Communities in the middle strata were mixed, but the early stratum (S5) could be distinguished from the other strata. By contrast, the fungal communities in the S0 and S5 stages were indistinguishable, which might be due to the substantial variance within each group. However, the fungal communities in the S0 and S5 stages could be distinguished from the middle stages using partial least squares-discriminant analysis (PLS-DA), which suppresses random within-group variation and reveals systematic differences between groups (Supplementary Figure 2). This was consistent with the results of PERMANOVA tests on Bray-Curtis distances, which revealed significant differences among the strata for prokaryotic 16S rRNA genes (pseudo-F = 4.9 for the bacterial and pseudo-F = 5.2 for the archaeal community; both p = 0.001), but not for fungal ITS genes (pseudo-F = 1.6, p = 0.097). PERMANOVA analysis also showed that bacterial and archaeal community structure differed significantly among soils with different Mg, Si, and Na contents (R2 = 0.3334, 0.32783, and 0.28595, respectively, for the bacterial community, and 0.42117, 0.37815, and 0.36135, respectively, for the archaeal community). The Mg, Si, and Na contents were therefore clearly related to changes in bacterial and archaeal community structure. NH4+ content, however, was strongly related only to differences in archaeal community structure (R2 = 0.32655), whereas Al, Ti, K, and Cl were significantly related only to differences in fungal community structure (R2 = 0.10736, 0.10672, 0.1041, and 0.09227, respectively) (Supplementary Table S3).

Discussion
The Ardley Island terrace was formed by the gradual uplift of the coast. It is speculated that the earliest nutrients came from marine sediments, followed by nutrients brought by marine animals living on the newly raised beaches, then by humus produced by vegetation growth and by nutrients introduced through birds and human activities. Although previous studies found that the plants on this terrace do not follow obvious succession rules, our results show that the soil elements differ clearly among strata. The vegetation diversity of the Fildes Peninsula, Antarctica, is not rich. The growth of the only vascular plant, Deschampsia antarctica (hair grass), was found to result not from soil succession but from bird-feces enrichment, which produces a pseudo-succession in vegetation along the fertilization gradients around bird colonies. No penguins have settled on the terraces, and no Deschampsia antarctica grows in this area; only lichens and mosses are present, and no higher plants were observed. Thus, there is no obvious plant succession. Nevertheless, the elements, nutrients, organic matter, water-holding capacity, and pH accumulated in each stratum differ significantly, and the prokaryote communities of the successional stages therefore also differ significantly.

Factors determining the microbial community distribution
According to the PERMANOVA analysis, the soil elements Mg, Si, and Na had a significant impact on bacterial and archaeal community structure, while Al, Ti, K, and Cl had a significant impact only on fungal community structure. These factors may be key drivers of bacterial, archaeal, and fungal community assembly over the succession.
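The ANOSIM and PERMANOVA statistics reported above (R, pseudo-F, and permutation p-values on Bray-Curtis distances) can be computed from any OTU table with scikit-bio. The minimal sketch below uses invented counts and stage labels, not the study's data; the sample sizes, OTU numbers, and IDs are placeholders.

    # Minimal sketch of the Bray-Curtis + ANOSIM/PERMANOVA workflow (invented data).
    import numpy as np
    from skbio.diversity import beta_diversity
    from skbio.stats.distance import anosim, permanova

    rng = np.random.default_rng(0)
    sample_ids = [f"S{stage}_{rep}" for stage in range(6) for rep in range(3)]
    grouping = [sid.split("_")[0] for sid in sample_ids]       # successional stage per sample
    counts = rng.integers(0, 200, size=(len(sample_ids), 50))  # 18 samples x 50 mock OTUs

    bc_dm = beta_diversity("braycurtis", counts, ids=sample_ids)  # pairwise dissimilarities

    # PERMANOVA: pseudo-F and permutation p-value for among-stage differences
    print(permanova(bc_dm, grouping, permutations=999))
    # ANOSIM: R near 1 means inter-group distances exceed intra-group distances
    print(anosim(bc_dm, grouping, permutations=999))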
On the other hand, environmental factors also have an important impact on the succession of microbial communities, and this impact can differ among microbial groups. For instance, in addition to soil elements, the succession of bacterial communities is strongly affected by pH and water content; archaeal communities are additionally affected by NH4+; and fungal communities are additionally affected by nutrients such as NO3-. The S0 stage of the Ardley Island terrace is the latest stage to break away from the ocean during the geological uplift. Its soil is still in the early stage of development, and the bedrock and gravel are still weathering, so the soil elemental composition contains relatively high proportions of crust- and rock-derived elements. P, S, and NH4+ are relatively abundant in the S3, S4, and S5 stages, perhaps because these stages are beaches uplifted from the ocean that were affected by marine animals, for example through guano. These stages have experienced long periods of weathering and freeze-thaw cycles, and the long-term accumulation of nutrients and organic matter from marine animal activity and vegetation growth has promoted the maturation of the upper soils of the terraces (Michel et al., 2014; Zhang et al., 2018).
Figure 4. Analysis of similarities (ANOSIM) showing variation in bacterial, archaeal, and fungal community structure of different strata on the Antarctic Ardley Island terrace.
The soil in the upper stages (S3-S5) has higher water content and lower pH values than the late stage (S0). These results are reasonable, since well-developed soil has a higher water-storage capacity; in addition, the vegetation cover and the humic and fulvic acids produced by mosses and lichens in the upper stages can decrease soil pH (Rakusa-Suszczewski, 1993). The high ammonia concentrations produced during the decomposition of organic matter in the ocean could be another reason for the higher pH of the lower stratum. Although many soil parameters are significantly correlated with fungal communities, in the db-RDA analysis (Figure 5C) the fungal communities of the stages did not cluster clearly along the terrace. We therefore speculate that other, unknown distribution patterns or factors influence the fungal communities: winds along the Antarctic coast may affect fungal spore dispersal; the uneven distribution of fungal host plants may produce an uneven distribution of fungal communities in the soil; or some fungal species may be controlled by fungal biocontrol agents (mycoparasites). The bacterial and archaeal communities of the late stages, by contrast, are clearly distinguished from the other strata of the Ardley Island terrace. The correlations between microbial community structure and soil parameters found in our study accord with previous work. On Livingston Island in the South Shetland Islands, soil total carbon, total nitrogen, and water content were found to be the most significant factors affecting the distribution of bacteria (Ganzert et al., 2011), and in the Antarctic Dry Valleys, microbial communities were associated with soil K, C, and water content (Stomeo et al., 2012). Based on these studies and our results, the availability of carbon, nitrogen, and water in the soil can be considered the driving factor promoting microbial growth along an Antarctic chronosequence under extreme, oligotrophic conditions.
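A Mantel test, which was not used in this study but is a standard companion to the db-RDA and PERMANOVA approaches above, asks directly whether samples with more similar soil chemistry also host more similar communities. The sketch below uses invented data; the variable names, matrix sizes, and chosen environmental variables are placeholders.

    # Minimal Mantel-test sketch (invented data): environmental distance vs.
    # Bray-Curtis community dissimilarity across the same set of samples.
    import numpy as np
    from scipy.spatial.distance import pdist, squareform
    from skbio.stats.distance import DistanceMatrix, mantel

    rng = np.random.default_rng(1)
    ids = [f"sample{i}" for i in range(18)]
    env = rng.normal(size=(18, 4))        # mock standardized pH, moisture, TOC, NH4+
    otus = rng.random(size=(18, 50))      # mock relative abundances, 50 taxa

    env_dm = DistanceMatrix(squareform(pdist(env, metric="euclidean")), ids)
    bc_dm = DistanceMatrix(squareform(pdist(otus, metric="braycurtis")), ids)

    # Spearman correlation between the two distance matrices, permutation p-value
    r, p, n = mantel(env_dm, bc_dm, method="spearman", permutations=999)
    print(f"Mantel r = {r:.3f}, p = {p:.3f}, n = {n}")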
Characteristic microorganisms of each stratum and their succession
LEfSe analysis of the characteristic microorganisms of each stratum suggests that the succession of microorganisms is influenced by complex, interacting factors, such as environmental instability, relationships with plants, ecological niches, and environmental tolerance.

Bacterial community
Based on the soil element data, the community characteristics, and the correlations with environmental factors, the S0 stage shows more marine characteristics, including the K, Na, and Mg contents associated with its flora. In Antarctic surface soils, element concentrations are generally influenced by the elemental abundance of the soil-forming basaltic and granitic rocks. Potassium is gradually released from weathering potash feldspars, biotite, and muscovite micas; part of the sodium might derive from marine sources and from moist air moving in from the polar oceans (Claridge and Campbell, 1977); and the high magnesium concentrations in the S0 stage probably derive from marine basaltic rocks (Malandrino et al., 2009). In addition, a study of soil development on marine terraces near Metaponto found that the trend of (Ca+Mg+K+Na)/Al ratios of soils developed in marine sediments supports the hypothesis of increasing terrace age (Sauer et al., 2010): progressive feldspar weathering is associated with element release and leaching, so these ratios are highest in the late stage and decrease with increasing time since uplift. Moreover, the S0 flora contains more marine-derived species. BugBase predicts more motility factors and more Gram-positive bacteria in this flora, along with more groups related to methane, nitrate, nitrite, and sulfur metabolism; such a flora can presumably grow by using substances in marine sediments. Interestingly, a large number of strains with filamentous appendages and budding reproduction occur in the S0 stratum. Unlike the other stages, the S0 stage, the latest to be uplifted to land, is still in an early stage of succession. Soils develop over time through a variety of interrelated processes, such as organic enrichment, leaching of soluble salts, translocation of clay minerals, and changes in pH (Schmid, 2013). The environment at this stage can thus be considered unstable and barren. Appendaged and budding bacteria accumulate readily there, which helps them resist damage and obtain nutrients in a rapidly changing, barren environment. Actinobacteria have a competitive advantage over other microorganisms owing to their resilience and adaptability, which help them survive under harsh circumstances (Shivlata and Satyanarayana, 2015). The budding bacteria Gemmatimonadetes live in various harsh environments and are frequently resistant to the stress conditions of their habitats; they are well adapted to low-moisture environments and show strong tolerance of barren conditions (DeBruyn et al., 2011). They have also been found in extremely oligotrophic environments, for instance on cave walls (Zhou et al., 2007; Pašić et al., 2010) and weathering rocks (Cockell et al., 2009).
Figure 5. Distance-based redundancy analysis (db-RDA) of (A) bacterial, (B) archaeal, and (C) fungal communities in the various strata and environmental factors.
Aggregation or attachment is particularly beneficial in habitats with low nutrient concentrations or a fluctuating nutrient supply (Hirsch, 1974). The highest level, the S5 stratum, which has the longest succession time, is associated with phosphorus, possibly reflecting animal feces. Moreover, its larger numbers of animal_parasites_or_symbionts, human pathogens, and cellulolytic strains may reflect the fact that this stage has been uplifted from the sea for the longest time, so its flora is more strongly affected by terrestrial animals, humans, and plants. During the S0-S5 succession, bacteria that use photosynthesis to obtain organic carbon gradually increase and peak in the middle stages; with the growth of vegetation and the accumulation of soil organic matter, the photosynthetic bacteria then decrease. Chemoheterotrophy, however, remains the main metabolic mode of the flora throughout (consistent with the FAPROTAX trends above) and gradually increases with succession.

Archaeal community
The archaeal characteristics of each stage revealed a very interesting pattern. Candidatus Nitrosocosmicus, enriched in the late stage; Nitrososphaeraceae, enriched in the middle stages; and Group 1.1c, enriched in the early stage, are all members of the Thaumarchaeota, but they differ in their ammonia-oxidizing capabilities. Thaumarchaeota is a widely dispersed archaeal phylum that includes both ammonia-oxidizing archaea (AOA) and archaeal lineages that have not been shown to oxidize ammonia (including Group 1.1c and Group 1.3). In stage S0, Candidatus_Nitrosocosmicus, Thermoplasmata, and a large number of Marine_Group_II archaea were enriched. Candidatus Nitrosocosmicus, an ammonia-oxidizing archaeon, is distinguished by its tolerance of high ammonia concentrations (Lehtovirta-Morley et al., 2016; Liu et al., 2021). Members of the class Thermoplasmata have been identified as methylotrophic methanogens and significant drivers of the carbon cycle in both marine and freshwater sediments (Compte-Port et al., 2020). Marine Group II, the most widespread marine planktonic archaeal group, with a photoheterotrophic lifestyle based on proteorhodopsin, has been found in all the world's oceans, from surface waters to the deep sea. The archaea of the S0 stage are thus evidently influenced by marine microorganisms. By contrast, the ammonia-oxidizing Nitrososphaeraceae are considered to be adapted to low ammonia concentrations and to an autotrophic or possibly mixotrophic lifestyle, and they may play an important role in nitrogen removal (Stieglmeier et al., 2014). Group 1.1c Thaumarchaeota are the most common archaeal group in acidic forest soils and are widely distributed, especially under conditions of higher moisture and organic matter content (Oton et al., 2016). Studies of the foreland of the receding Rotmoosferner glacier in the Austrian Central Alps showed that crenarchaeal communities in soils at different stages of development are distinct from each other, and that Group 1.1c Thaumarchaeota occur only in mature soils (Nicol et al., 2006). Previous work has shown that marine sediments have high ammonia concentrations produced during the decomposition of organic matter in the ocean (Mackin and Aller, 1984). We speculate that high-ammonia-tolerant archaea originated from the ocean and remained the dominant ammonia-oxidizing archaea (AOA) in the S0 stage, decreasing gradually with succession.
AOA able to utilize low ammonia concentrations then became dominant as soil ammonia declined, and finally Group 1.1c, which can utilize organic matter rather than ammonia (Weber et al., 2015), dominated later in succession.

Fungal community
Unlike the prokaryotic microorganisms, the fungi are distributed quite unevenly within each stratum, resulting in large community variance within strata; the intra-group distances are greater than the inter-group distances. We speculate that this is related to the uneven distribution of plants with fungal symbionts or parasites. The late, middle, and early stages of terrace succession nevertheless have their own characteristic fungi, and several plant-symbiotic fungi were found across the successional stages. The uneven distribution of fungal host plants (encrusting lichens and mosses) might produce an uneven distribution of fungal communities in the soil at sampling; some fungal species might be controlled by fungal biocontrol agents (mycoparasites); and the sea breeze is a possible factor influencing fungal spore dispersal. Plant pathogens, fungicolous fungi, and lichenized fungi were enriched in the late stage, suggesting that some unique lichen species occur at this stage; although lichens of this stage were reported in a prior study, some could be identified only to genus rather than to species. Rhizoctonia organisms are usually soil fungi, mostly associated with roots and usually pathogenic, although saprophytic and symbiotic species have been observed, and a few species have been identified as parasites of herbaceous plants or bryophytes (mosses) (Hietala et al., 2001). The Verrucariaceae (Ascomycota) are a lichenized fungal family with a broad range of algal symbionts, including certain algae that are seldom or never associated with other lichens (Thüs et al., 2011). Hypocreaceae members are solely or largely fungicolous and have been widely investigated and commercialized as biocontrol agents (Põldmaa, 2000). Fungicolous fungi are a large group of fungal-associated organisms that is highly varied both ecologically and trophically; symbionts, mycoparasites, saprotrophs, and even neutrals are all terms used to describe them (Sun et al., 2019). The ascomycete family Nectriaceae (Hypocreales) was enriched in the middle (S3) stage. Hypocreales is one of the most successful orders of ascomycetes on mosses and hepatics, and more than 30 species of Bionectriaceae and Nectriaceae are obligately bryophilous (Döbbeler, 2005). The yeast Yarrowia is widely dispersed and has been found in Antarctic marine sediments (Zhang et al., 2012). Hannaella is a genus of basidiomycetous yeasts in the order Tremellales of the phylum Basidiomycota; the genus currently contains around 12 species, all of which are prevalent on the leaf surfaces of numerous plants, such as rice, wheat, and fruit trees (Li et al., 2021). Although no species of this genus has so far been found in polar regions, a former member of the genus, Cryptococcus luteolus (Hannaella luteola), has been discovered in Antarctic soil samples from the Capes Evans-Royds area and the Ross Dependency (di Menna, 1966; Atlas et al., 1978), and in soil samples from non-polar cold habitats in Asia, in the Pamir Mountains (Babjeva and Reshetova, 1971). Despite the lack of direct evidence, an association of this yeast with mosses cannot be ruled out.
The ancestral species of the Didymellaceae are the Gramineae pathogens Ascochyta hordei and Phoma paspali; in Australia and New Zealand, the latter has long been considered an indigenous grass pathogen. Many Phoma species of the Didymellaceae have also been found in the mosses of Antarctica (McRae and Seppelt, 1999; Aveskamp et al., 2010). In addition, Entrophospora, Diversisporales, and Glomeromycota, which were enriched in the S3 stage, are known as arbuscular mycorrhizal fungi (AMF), among the most widely distributed plant-symbiotic fungi in nature (Wu et al., 2013; Camenzind et al., 2014; Qi et al., 2020). Glomeromycota can form arbuscular mycorrhizas with a huge number of plants: liverworts, ferns, gymnosperms, and angiosperms (Bonfante and Venice, 2020). They might form arbuscular mycorrhizas with the thalli of bryophytes in biological soil crusts (BSC), to mutual benefit: part of the host plant's photosynthetic carbohydrate is redirected to the growth of the AM fungus, which in exchange transfers water and nutrients it takes up to the host plant. In most cases, AM symbiosis benefits the host plant by improving plant growth, nitrogen absorption, drought tolerance, and soil structure, and it may have helped plants adapt from marine to terrestrial environments. This is crucial for the restoration and succession of vegetation on the raised coastline. In the early stage, melanized fungi were enriched. Members of the Herpotrichiellaceae are also known as animal pathogens. In certain harsh environments, such as Antarctic rocks, melanized microbes are typically the dominant species, indicating that melanin benefits their life cycle (Horré and De Hoog, 1999; Chowdhary et al., 2015; Abdollahzadeh et al., 2020). One essential component of their extraordinary resistance to external stressors is the structure of their cell wall: the cells are encased in a thick, strongly melanized cell wall coated with black hard plaques, which provides supplementary protection and makes them essentially impenetrable and resistant to commercial enzymes such as chitinases and glucanases (Selbmann et al., 2012). The ability of melanized fungi to withstand cosmic and terrestrial ionizing radiation shows that melanin is also important for radioprotection; melanized fungal species, such as those found in Chernobyl's reactor, even respond to ionizing radiation with enhanced growth (Dadachova and Casadevall, 2008). The melanized fungi include saprobes, plant and human pathogens, mycoparasites, rock-inhabiting fungi (RIF), lichenized fungi, and epi-, ecto-, and endophytes. These characteristic groups may tolerate harsh environments, form biofilms (subaerial biofilm-forming microorganisms; SAB), inhabit rocks, and weather rocks, functions that may enable other microorganisms or plants to colonize and thrive: biofilm formation favors the colonization and production of other microorganisms, and rock weathering favors the growth of lichens and other plants (Abdollahzadeh et al., 2020). The fungi enriched along the Ardley Island terrace accord with the reported soil development and plant succession in this area. The studies of Jens Boy and Robert Godoy concluded: "All temporal gradients showed soil development leading to differentiation of soil horizons, carbon accumulation and increasing pH with age.
Photoautotroph succession occurred rapidly after glacier retreat, but occurrences of mosses and lichens interacting with soils by rhizoids or rhizines were only observed in the later stages. The community of ground-dwelling mosses and lichens is the climax community of soil succession, as the Antarctic hairgrass Deschampsia antarctica was restricted to ornithic soils. Neither D. antarctica nor mosses at the best-developed soils showed any sign of mycorrhization." (Boy et al., 2016). An excess of nutrients is known to inhibit mycorrhization (de Vries et al., 2007). The functional succession of fungi may thus run from degrading and utilizing plants as nutrients (saprophytic fungi), to benefiting from symbiosis with plants (AMF), to inhabiting rock to help vegetation colonize it and resisting stress (melanized fungi), and finally to infecting animals as pathogens.

Conclusion
The Ardley Island terrace is a typical coastal uplift terrace with six complete, well-dated strata. Because organisms grow slowly in Antarctica, our results show that although this area spans 200-7,200 years, succession of the microbial community can still be detected. The different historical and geological background of each successional stage is the fundamental reason for the differences in microbial community composition, and the different strata heights correspond to the time gradient of the coastal uplift. PERMANOVA tests revealed that the soil elements Mg, Si, and Na were related to differences in bacterial and archaeal community structure, and Al, Ti, K, and Cl to differences in fungal community structure; these soil elements may be the main driving factors of bacterial, archaeal, and fungal succession along the terrace. Environmental factors also have an important impact on the succession of microbial communities, and this impact may differ among microorganisms. Some marine microbial groups were found in the near-coastal soils of the S0 stratum, while at the highest level, with the longest succession period, animal-pathogenic bacteria and presumably more stress-resistant microorganisms appeared. The photosynthetic bacteria, the archaea that can oxidize ammonia at low ammonium concentrations, and the arbuscular mycorrhizal fungi (AMF), the group most closely tied to soil nutrition, occur in the middle strata; this matches the PCoA result that the middle strata are most strongly related to ammonium, nitrate, and TOC. Moreover, the analysis of the characteristic microorganisms along the terrace indicates that microbial succession may be influenced by complex, interacting factors, such as environmental instability, relationships with plants, ecological niches, and environmental tolerance. Interestingly, a large number of bacteria with budding reproduction and/or filamentous appendages were enriched in the S0 stratum, which might reflect adaptation to rapidly changing, barren environments. Another pattern is the succession of Thaumarchaeota lineages: from ammonia-oxidizing archaea (AOA) able to utilize high ammonia concentrations, to AOA able to utilize low ammonia concentrations, and finally to Thaumarchaeota that do not utilize ammonia. A third is the change in fungus-plant relationships across stages (saprophytic, then symbiotic, then mutualistic).
In short, its clear chronological sequence, its limited disturbance by animals, plants, and humans, and its obvious microbial succession make the Ardley Island terrace (Antarctica) an ideal place to study the succession of microbial communities.

Data availability statement
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: NCBI - PRJNA784744.
2023-02-06T14:13:44.922Z
2023-02-06T00:00:00.000
{ "year": 2023, "sha1": "e9526898932e8d9e7a8f51ac0a16eec8b685dd5a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "e9526898932e8d9e7a8f51ac0a16eec8b685dd5a", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
250329605
pes2o/s2orc
v3-fos-license
The Changing Landscape of Anticoagulation in Pediatric Extracorporeal Membrane Oxygenation: Use of the Direct Thrombin Inhibitors
Bleeding and thrombosis frequently occur in pediatric patients receiving extracorporeal membrane oxygenation (ECMO) therapy. To date, most patients have been anticoagulated with unfractionated heparin (UFH). However, heparin has many disadvantages: it binds to other plasma proteins and to endothelial cells in addition to antithrombin, causing an unpredictable response and challenging monitoring, and it carries the risks of heparin resistance and heparin-induced thrombocytopenia (HIT). Direct thrombin inhibitors (DTIs), such as bivalirudin and argatroban, might be a good alternative. This review discusses the use of both UFH and DTIs in pediatric patients receiving ECMO therapy.

INTRODUCTION
Extracorporeal membrane oxygenation (ECMO) is increasingly used in pediatric patients with life-threatening cardiac and/or respiratory failure. Very recently, the Extracorporeal Life Support Organization (ELSO) reported 154,106 ECMO runs by 521 participating centers worldwide since 1990 (1); neonatal and pediatric runs accounted for 29.4% and 20.1% of the total, respectively. ECMO is generally indicated in patients with acute, severe heart or lung failure and a high mortality risk despite optimal conventional therapy. Indications for pediatric ECMO include a reversible disease process in which ECMO provides a short-term bridge to recovery; in some cases, ECMO can be used as a bridge to transplantation. In the study of Dalton et al., bleeding complications, such as intracranial hemorrhage, were seen in up to 70.2% of neonatal and pediatric patients on ECMO, and thrombotic complications, such as circuit thrombosis and cerebral infarction, occurred in up to 37.5% (2). Despite increasing clinical expertise and improvements in technology, hemostatic complications, i.e., bleeding and thrombosis, remain an important cause of mortality and morbidity in ECMO-treated children worldwide. These complications are caused by both circuit and systemic patient factors, which influence the unique balance of the hemostatic system (3). They commence upon the exposure of blood to the foreign, non-endothelial materials of the extracorporeal circuit, initiating coagulation activation and acute inflammatory responses and shifting the hemostatic balance to a hypercoagulable state. Antithrombotic therapy is necessary to maintain the patency of the circuit and to reduce thrombotic complications while minimizing bleeding. Until 2018, most centers used unfractionated heparin (UFH); since then, the use of direct thrombin inhibitors (DTIs), especially bivalirudin and argatroban, has increased. In this review, we discuss the use of both UFH and DTIs in pediatric patients receiving ECMO therapy.

Characteristics of Unfractionated Heparin
Until recently, all patients on ECMO were anticoagulated with UFH, mainly because of the long experience with this anticoagulant, the lack of better alternatives, and the ability to reverse it rapidly with protamine sulfate when complications occur. UFH is a sulfated mucopolysaccharide. Heparin molecules range in size, with a mean molecular weight of about 15,000 Da (15 kDa), corresponding to about 45 saccharide units (4). About one-third of heparin molecules possess the unique pentasaccharide sequence responsible for the anticoagulant effect.
Via this pentasaccharide sequence, UFH binds to antithrombin, causing a conformational change and increasing antithrombin efficiency roughly 1,000-fold in inhibiting thrombin (factor IIa) and factors Xa, IXa, XIa, and XIIa. The heparin-antithrombin complex is, however, unable to inactivate thrombin that is bound to fibrin. By inactivating free thrombin, UFH prevents both fibrin formation and thrombin-induced activation of platelets and of factors V, VIII, and XI. To inhibit thrombin, heparin must bind both thrombin and antithrombin; heparin molecules with fewer than 18 saccharides are too short to bridge antithrombin to thrombin and inhibit only factor Xa. Heparin is administered parenterally, by continuous intravenous infusion or subcutaneous injection. Unfortunately, UFH binds to endothelial cells and to endogenous plasma proteins other than antithrombin, which contributes to the variability of the anticoagulant response among patients. The half-life of UFH is dose-dependent and varies between 30 and 150 min: low doses of heparin are rapidly cleared from plasma through binding to endothelial cell receptors and macrophages, whereas high doses are cleared mostly through the slower mechanism of renal clearance (4).

Dosing and Monitoring
International surveys have shown large variation in the management of anticoagulation during ECMO (5, 6). The 2014 ELSO anticoagulation guidelines recommend an initial UFH bolus of 50-100 units per kilogram body weight at the time of cannulation, followed by a continuous infusion during the ECMO course (7). Close monitoring is required because of the variable anticoagulant effect of UFH, hemodilution, and patient coagulopathy due to underlying disease and post-surgical conditions. There is no consensus on heparin dosing and monitoring, and significant inter-institutional variability consequently exists (6). The most commonly used coagulation tests are the activated clotting time (ACT), the activated partial thromboplastin time (aPTT), and the anti-factor Xa assay, and all have limitations. The ACT does not reflect only the effect of heparin: it is also prolonged by thrombocytopenia, hemodilution, hypothermia, low fibrinogen, and other clotting factor deficiencies, and using the ACT alone in pediatric ECMO patients on UFH has been shown to lead to suboptimal anticoagulation (8). The baseline aPTT is higher in neonates and infants than in teenagers (9), and the aPTT response to UFH is age-dependent, with younger children showing a higher aPTT at the same anti-factor Xa activity (10). Prolongation of the aPTT is not caused only by heparin administration; it may also reflect underlying conditions such as disseminated intravascular coagulation. Furthermore, many aPTT reagents are available, so every coagulation laboratory should calibrate its assay to establish the target aPTT range. A meta-analysis of pediatric studies showed only very weak correlations of ACT with heparin dose and of aPTT with heparin dose (11); the anti-factor Xa assay was the only laboratory test that correlated strongly with heparin dosing (r = 0.61; 95% CI 0.25-0.82). A recent literature review investigated the association between coagulation tests and hemostatic complications, such as bleeding and thrombotic events (12). In nine studies, no association was found between aPTT, ACT, or thromboelastography (TEG) and hemostatic complications. In one study, however, higher anti-factor Xa levels were associated with fewer clotting events (13). Furthermore, Northrop et al. showed that after the anti-factor Xa assay, TEG, and antithrombin measurements were incorporated into their revised anticoagulation protocol alongside the standard laboratory tests ACT and aPTT, median blood product usage and the frequency of cannula-site and surgical-site bleeding decreased (14), while median circuit life increased significantly from 3.6 to 4.3 days. Niebler et al. likewise showed a significant decrease in circuit changes and intracranial bleeds after changing from an ACT-based to an anti-factor Xa-based anticoagulation protocol (15). Based on these data, the anti-factor Xa assay appears to be the most useful test for monitoring anticoagulation in patients on ECMO.
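As an illustration of the kind of logic an anti-factor Xa-based protocol encodes, and emphatically not as clinical guidance, the sketch below shows a hypothetical titration helper in Python. The 0.3-0.7 IU/mL target range and the 10% step size are placeholder assumptions; real protocols are institution-specific.

    # Illustrative only, NOT clinical guidance: hypothetical anti-Xa-based UFH titration.
    def adjust_heparin_rate(rate_u_kg_h: float, anti_xa_iu_ml: float,
                            low: float = 0.3, high: float = 0.7) -> float:
        """Return an adjusted UFH infusion rate (units/kg/h) from the latest anti-Xa level."""
        if anti_xa_iu_ml < low:       # below hypothetical target range: increase by 10%
            return rate_u_kg_h * 1.10
        if anti_xa_iu_ml > high:      # above range: decrease by 10%
            return rate_u_kg_h * 0.90
        return rate_u_kg_h            # within range: leave unchanged

    print(round(adjust_heparin_rate(28.0, 0.25), 1))  # 30.8 (sub-therapeutic sample)
    print(round(adjust_heparin_rate(28.0, 0.85), 1))  # 25.2 (supra-therapeutic sample)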
Limitations of Heparin
Although UFH has been used for years in patients on ECMO, it has several important limitations, especially in neonates and young infants (4). As mentioned before, UFH binds not only to antithrombin but also to other plasma proteins and to endothelial cells. Because the plasma concentrations of these proteins depend on the age and underlying condition of the patient, the heparin response is unpredictable and must be closely monitored, and monitoring UFH is itself a challenge, as explained above. Another limitation of UFH is the development of heparin resistance, defined as a progressive increase in the heparin dose needed to reach target anti-factor Xa, aPTT, or ACT levels. Several mechanisms may be responsible for this phenomenon: decreased levels of antithrombin, increased binding to proteins or platelets, or increased factor VIII. Decreased antithrombin levels may be seen in several settings, including neonates, nephrotic syndrome, and consumption or insufficient synthesis in critically ill patients, all of which can be present in the ECMO population. UFH may also cause bone loss by decreasing bone formation; however, as UFH is usually given for a short period, this adverse effect on bone will probably be negligible. Finally, heparin may bind to platelet factor 4 (PF4), leading to the formation of heparin-PF4 antibodies that activate platelets and cause heparin-induced thrombocytopenia (HIT). In pediatric patients on ECMO this is a rare condition, but when HIT is suspected, UFH should be stopped immediately and alternative anticoagulation initiated to maintain the patency of the circuit and to treat the HIT. With the development of DTIs such as bivalirudin and argatroban, an alternative to UFH has become available for ECMO patients with HIT; these anticoagulants may also be promising in patients on ECMO in general. See Table 1 for features of UFH and bivalirudin.

Characteristics of Bivalirudin
Bivalirudin is a synthetic DTI that binds reversibly to thrombin at both the active (catalytic) site and exosite 1 (the fibrinogen-binding site), independently of antithrombin. It is a small peptide, with a molecular weight of approximately 2,180 Da, which is cleaved by proteases, including thrombin itself (16). It has a short half-life of about 25 min; approximately 80% is cleared enzymatically and the rest is eliminated renally, allowing its use in patients with mild-to-moderate renal dysfunction without dose modification, although the half-life can be prolonged to about 60 min in patients with renal failure requiring hemodialysis (16).
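To make the half-life contrast concrete, the minimal sketch below assumes simple first-order elimination after an infusion stops (a simplification that ignores infusion kinetics and organ dysfunction) and compares bivalirudin's roughly 25-min half-life with a mid-range UFH half-life of 90 min taken from the 30-150 min span cited above.

    # Fraction of drug remaining t minutes after stopping, assuming simple
    # first-order elimination: f(t) = 2 ** (-t / half_life).
    def fraction_remaining(t_min: float, half_life_min: float) -> float:
        return 2.0 ** (-t_min / half_life_min)

    for t in (25, 60, 120):
        biv = fraction_remaining(t, 25)   # bivalirudin, ~25 min half-life
        ufh = fraction_remaining(t, 90)   # UFH, assumed mid-range 90 min half-life
        print(f"t = {t:3d} min: bivalirudin {biv:.0%} vs. UFH {ufh:.0%} remaining")
    # About 2 h after stopping bivalirudin, under 4% remains, which is why reversal
    # is rarely an issue, and also why stagnant blood can become locally unprotected.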
Unlike UFH, bivalirudin inhibits both circulating thrombin and clot-bound thrombin, thereby decreasing clot stability and promoting thrombolysis. Also unlike UFH, bivalirudin does not bind to other circulating plasma proteins, so its activity is more predictable. It is not inhibited by PF4, and it potentially also inhibits platelet activation by inhibiting thrombin and, in turn, the activation of factors V, VIII, and X (17). The lack of an antidote/reversal agent is a major disadvantage, and while the short half-life is deemed an advantage, it may prove a disadvantage in situations associated with stasis. Choosing the right anticoagulant for the right patient is therefore crucial.

Monitoring of Bivalirudin
Monitoring anticoagulation can be extremely challenging in extremely sick children in whom the risks of both bleeding and thrombosis are high. The DTIs act like a factor inhibitor in coagulation-based assays and therefore lead to underestimation of factor activities and overestimation of protein C and protein S activities (18). By inhibiting thrombin, bivalirudin prolongs the PT, aPTT, thrombin time (TT), and ACT. The aPTT is often the most readily available assay and is therefore often used to monitor bivalirudin, with a recommended target range of 1.5-2.5 times the baseline aPTT. The aPTT assay, however, has several disadvantages, as noted for heparin. It is well established that at high bivalirudin concentrations the aPTT response is no longer linear but plateaus, which may place the patient at risk of bleeding (18). It is also well established that the aPTT is unreliable in patients with lupus anticoagulants or other factor deficiencies, and increased concentrations of coagulation proteins, especially factor VIII, which are common in critically ill patients such as those on ECMO, cause significant aPTT variability (19). Traditionally, the PT has been considered not to correlate with bivalirudin dose, especially at higher doses, and therefore it is not used to monitor bivalirudin; however, a recent single-center prospective review of bivalirudin use in pediatric ECMO by Ryerson et al. reported a statistically significant correlation between the international normalized ratio (INR) and bivalirudin dose (20). This has not been reported by others and will require further study. The TT, on the other hand, is too sensitive to be a good measure of bivalirudin anticoagulation, but it can be used to screen the patient before invasive procedures, to rule out the presence of even low concentrations of a DTI (21). The anti-factor IIa assay measures the residual thrombin activity in a sample anticoagulated with bivalirudin, which is inversely proportional to the amount of bivalirudin in the sample; the assay is not affected by the presence of lupus anticoagulants or factor deficiencies. It is currently not FDA approved for monitoring DTIs and is therefore not readily available in all laboratories, and its therapeutic range is still to be established. In addition to the routine assays, tests that measure the DTI content of plasma are another option; these include the ecarin chromogenic assay (ECA) and the diluted thrombin time (dTT).
A recent study by Beyer et al. demonstrated a significant discordance between the aPTT and the ECA and dTT, with a higher rate of bleeding complications in patients whose DTI dose was titrated exclusively on the aPTT (22). There is thus a growing body of evidence against using the aPTT alone to monitor DTIs, but supporting evidence of poor outcomes is lacking; hence, the aPTT continues to be used to monitor bivalirudin and the other DTIs. Ecarin is a metalloprotease isolated from viper venom that directly activates prothrombin and is therefore not affected by other factor deficiencies or lupus anticoagulants. The measured clotting time (CT) is theoretically directly proportional to the concentration of the DTI. However, studies showed the assay to be suitable only for bivalirudin, and not for lepirudin or argatroban, owing to the sensitivity of the chromogenic substrate chosen (23). The dTT assay is a modification of the TT: since the routine TT is too sensitive to the presence of a DTI, diluting the test plasma reduces this oversensitivity and yields a linear correlation between the DTI concentration and the dTT. Both the ECA and the dTT have been shown to correlate more linearly with DTI concentration and to be independent of the prothrombin concentration in the plasma (22). Despite these advantages, the exact relationship between drug concentration and the outcomes of bleeding and thrombosis remains to be established, especially since these assays are limited to very specialized laboratories. The global coagulation assays TEG and rotational thromboelastometry (ROTEM) are two whole-blood assays currently being studied for their utility in monitoring the anticoagulation of patients on mechanical circulatory devices; they measure the viscoelastic changes that occur during clot formation. They remain assays used mainly in major centers, so data are limited. Studies have shown a good correlation between the anti-factor IIa assay and an ecarin-modified TEG (24). Similarly, another study found correlations between the ROTEM CT with an intrinsic pathway activator (INTEM) and the aPTT, and between the CT with Hepzyme (HEPTEM) and the Hepzyme aPTT (25). Data are still scarce, and no guidance is available for therapeutic levels; it is unfortunate, however, that the comparisons are still made against the aPTT, an assay that has been shown to be inaccurate in these situations.

Dosing of Bivalirudin
There are no guidelines for dosing bivalirudin in ECMO, and dosing strategies vary significantly among institutions. In adults, most studies report starting doses of 0.025-0.05 mg/kg/h, with the average bivalirudin infusion rate required to maintain therapeutic aPTT or ACT levels varying from 0.028 to 0.5 mg/kg/h (26). There is also no consensus on whether a loading dose should be used: in studies comparing the two strategies, the difference in time to achieving therapeutic levels was only 4 h (27). Further studies are required to determine the safety and bleeding risk of bolus dosing. In pediatric patients, the largest study, by Hamzah et al., reported a starting infusion rate of 0.3 mg/kg/h for patients with a creatinine clearance > 60 ml/min, or 0.15 mg/kg/h for those with renal dysfunction; infusion rates of 0.05-0.3 mg/kg/h maintained a therapeutic aPTT (28). These studies showed both the safety and the feasibility of using bivalirudin in patients on ECMO. It has also been shown that bivalirudin requirements increase over time.
Hamzah et al. suggested several possible reasons for this phenomenon: improved renal function on ECMO; upregulation of proteases that cleave thrombin, resulting in increased thrombin levels; increasing clot burden in the circuit over time; and rising fibrinogen levels over time, resulting in increased competition for thrombin binding. They also reported a dose-dependent increase in PT/INR, which may suggest effects on coagulation factors beyond thrombin.

Label Indication
Bivalirudin is currently approved for patients undergoing percutaneous coronary intervention (PCI), including patients with, or at risk of, HIT or heparin-induced thrombocytopenia with thrombosis syndrome (HITTS). Initial US Food and Drug Administration (FDA) approval was based on results from the Hirulog Angioplasty Study (HAS), in which 4,098 patients were randomized to receive bivalirudin or UFH during angioplasty for unstable or post-infarct angina (bivalirudin n = 2,059, UFH n = 2,039). Bivalirudin showed no benefit over UFH with regard to the primary composite outcome of any of the following hospital and procedural complications: death, myocardial infarction, abrupt closure of the dilated vessel, or rapid clinical deterioration of cardiac origin requiring bypass surgery, intra-aortic balloon counterpulsation, or repeated coronary angioplasty (11.4% vs. 12.2%; p = 0.44) (29). However, patients receiving bivalirudin had a lower incidence of major hemorrhage (3.8% vs. 9.8%; p < 0.001). A follow-up intention-to-treat analysis including the 214 patients omitted from the original analysis showed similar results for ischemic and hemorrhagic complications, with a slight additional benefit of bivalirudin on an adjusted primary end point of death, myocardial infarction, and revascularization (6.2% vs. 7.9%; p = 0.039) (30). Thus, bivalirudin is at least comparable to UFH with regard to ischemic complications but has the potential benefit of providing lower levels of systemic anticoagulation, resulting in lower bleeding rates. Several subsequent studies expanded the use of bivalirudin to PCI in the setting of glycoprotein IIb/IIIa antagonists. The pilot trial, the Comparison of Abciximab Complications with Hirulog for Ischemic Events Trial (CACHET), established the dosing regimen of bivalirudin for PCI in this setting (0.75 mg/kg bolus, then 1.75 mg/kg/h for the duration of the procedure) (31). This dose was applied in the Randomized Evaluation in PCI Linking Angiomax to Reduced Clinical Events (REPLACE-1) trial and the larger REPLACE-2 trial (32, 33). REPLACE-2 (n = 6,010) met the non-inferiority end point compared with heparin for the composite outcome of death, myocardial infarction (MI), urgent revascularization, or in-hospital major bleeding within 30 days [9.2% bivalirudin vs. 10% controls; odds ratio (OR) 0.92; 95% CI 0.77-1.09; p = 0.03]. Bivalirudin was also associated with lower rates of major bleeding (2.4% vs. 4.1%; p < 0.001), and fewer bivalirudin-treated patients experienced a decline in platelet count below 100 × 10^9/L (0.7% vs. 1.7%; p < 0.001). Bivalirudin for PCI in patients with HIT was investigated in the Anticoagulation Therapy with Bivalirudin to Assist in the Performance of PCI in Patients with Heparin-Induced Thrombocytopenia (ATBAT) trial (34), in which fifty-two patients with either a new diagnosis or a past history of HIT were treated with bivalirudin.
Procedural success (TIMI grade 3 flow and < 50% residual stenosis) was achieved in 98% of patients, and clinical success (absence of death, emergency bypass surgery, or Q-wave infarction) in 96%.

Off-Label Use
Bivalirudin has increasingly been used off-label, in part because it has a relatively short half-life and predominantly non-organ-dependent clearance, with less need for dose reduction in mild or moderate renal dysfunction. Additionally, it does not depend on a cofactor and is therefore less likely to show the drug resistance seen with heparin and low antithrombin levels (35). For all these reasons, it has been favored for off-label use in cardiac patient management and in the management of patients with HIT/HITTS. Off-label uses and a few highlighted studies are shown in Table 2; these include medical management of acute coronary syndrome, cardiopulmonary bypass (CPB) on and off pump, and HIT/HITTS with or without the need for cardiac intervention.

Pediatric Use
Bivalirudin is currently not approved for use in pediatric patients, and only a handful of prospective trials have been conducted. In 2007, Young et al. published a pilot dose-finding and safety trial in patients < 6 months of age with thrombosis (36). This study (n = 16) established pediatric dosing of bivalirudin: a bolus dose (0.125 mg/kg) followed by a continuous infusion (starting at 0.125 mg/kg/h) targeting 1.5-2.5 times the patient's baseline aPTT. Two patients suffered a major bleeding event; no patient had thrombus progression at 48-72 h, and 6 patients (37.5%) had complete or partial resolution of the thrombus at 72 h. This was followed by the Utilization of Bivalirudin on Clots in Kids (UNBLOCK) study, an open-label, single-arm, dose-finding, pharmacokinetic, safety, and efficacy study in children aged 6 months to 18 years with deep venous thrombosis (37). Eighteen children received a bivalirudin bolus (0.125 mg/kg) followed by a continuous infusion (starting at 0.125 mg/kg/h) targeting 1.5-2.5 times baseline aPTT. There were no major bleeding events, only one minor bleeding event, and the only non-bleeding adverse event was hypertension. An interesting finding was the complete or partial thrombus resolution rate of 50% at 48-72 h and 89% at 25-35 days. While this finding suggests a possible therapeutic benefit, the small number of children enrolled and the lack of comparable data for UFH make it difficult to draw conclusions about efficacy. In this study, plasma bivalirudin levels correlated more closely with the infusion rate than with the aPTT; aberrant aPTT results should therefore be interpreted in their clinical context (a more detailed discussion of drug monitoring appears above). An additional prospective trial enrolled children undergoing PCI for congenital heart disease (n = 110) (38). Patients received a weight-based dose of a 0.75 mg/kg bolus followed by a 1.75 mg/kg/h continuous infusion. In this setting, the pharmacodynamics and kinetics were similar to those in adults, with a trend toward increased clearance rates in neonates; major bleeding events (1.8%) and thrombotic events (8.3%) were minimal. To our knowledge, there is only one randomized trial of bivalirudin in children: a comparison of bivalirudin with UFH in children aged 1-12 years with acyanotic congenital heart disease undergoing open-heart surgery (n = 50) (39). Bivalirudin dosing in this study was extrapolated from the approved weight-based dosing in adults.
Children receiving UFH achieved higher ACT levels at the first bolus and 30 min after the onset of CPB (673 s vs. 458 s; p < 0.001, and 839 s vs. 590 s; p = 0.03) and had a shorter duration of post-CPB ACT elevation (immediately after CPB vs. 2 h; p < 0.01). Bivalirudin also prolonged the duration of surgery, mostly because of the need for additional bolus doses, each of which prolonged the surgery by 10-13 min. There was, however, no difference in chest tube output or need for transfusions between the two groups.

Use in Mechanical Support Devices
Robust randomized or prospective data on the use of bivalirudin in mechanical support devices, such as ECMO circuits and ventricular assist devices (VADs), are lacking. A systematic review covering 2005 to 2017 on bivalirudin and ECMO found only 8 relevant publications (58 patients, 24 pediatric): 2 retrospective case-control studies, 1 case series, and 5 case reports, highlighting the knowledge gap in this area (40). In the two studies comparing bivalirudin to UFH, there was no difference in complication rates (41, 42); however, one study did show some advantages, with lower blood loss and transfusion rates in the bivalirudin group (42). The variability across ECMO studies likely reflects differences in circuits, bivalirudin dosing, the limitations of retrospective data collection, and the heterogeneous population of patients placed on ECMO. A small number of studies have reported clinical outcomes such as in-circuit thrombosis rates, the need for circuit exchange, and the need for blood product replacement. In a retrospective chart review (n = 295), Rivosecchi et al. showed a decrease in circuit-related thrombotic events (32.7% vs. 17.3%; p = 0.003) with the use of bivalirudin in patients on veno-venous (VV) ECMO (43). These results were similar to those of a prospective cohort study (20 ECMO runs in 18 patients), which showed fewer circuit interventions in patients receiving bivalirudin than in those receiving UFH [median (interquartile range, IQR) circuit intervention rate per run (0-1) vs. (1-2); p = 0.0126] (43). It is important to note, however, that in this study the comparison was within patients who received both UFH and bivalirudin, with 80% of patients placed on bivalirudin only after UFH failure. A second retrospective study (n = 429), however, failed to demonstrate a significant difference in the composite outcome of circuit intervention rate and oxygenator/pump change-out rate (44). One additional retrospective review compared adults who received UFH or bivalirudin per high- or low-intensity protocols (n = 72) (45). The authors found no difference in thrombotic events during the initial 96 h, over the course of the ECMO run, or at any time during the admission. When the high-intensity UFH and bivalirudin dosing protocols were compared directly, patients receiving high-intensity bivalirudin were more likely to spend time in the therapeutic range than those treated with high-intensity heparin, possibly reflecting the more predictable pharmacokinetics of bivalirudin or its independence from antithrombin; this finding did not translate into meaningful differences in clinical outcomes related to hemostasis and thrombosis. One pediatric retrospective study (n = 32) found no difference between UFH and bivalirudin in time within the therapeutic range (46). In this study, UFH resulted in greater iatrogenic blood loss per hour; however, this did not translate into higher blood product utilization.
Lastly, no difference was seen in circuit changes between the two groups. The short half-life of bivalirudin, while desirable, may not be ideal for mechanical support devices, where areas of stasis or non-systemic blood flow can occur; because bivalirudin is enzymatically cleaved in stagnant blood, such areas may develop disproportionately low bivalirudin concentrations and thrombus formation. With ECMO, reduced contractile force on blood flow can allow cardiac blood stagnation and possible formation of intracardiac thrombus, especially in the setting of a very large right or left atrium with insufficient venous drainage or very poor ventricular systolic function (47).

ALTERNATIVE DIRECT THROMBIN INHIBITORS
Bivalirudin is just one of the DTIs, and it has the most expansive label indication. Intravenous DTIs are a class of medications that are either synthetic hirudin fragments (e.g., lepirudin and bivalirudin) or low-molecular-weight inhibitors that interact with the active site of thrombin (e.g., argatroban). To our knowledge, only a few case reports have described the use of lepirudin, with HITT as the primary indication in the majority of cases (48, 49); lepirudin is not available in the US. One potential benefit of argatroban over bivalirudin is its longer half-life (45 min vs. 25 min), which may overcome the limitation of bivalirudin in areas of stagnation addressed above. Argatroban undergoes hepatic metabolism, and its dosing is not renally dependent. Argatroban has been used successfully in the setting of ECMO: in a propensity score-matched observational study, 78 adult patients who received UFH were matched to 39 patients who received argatroban. A composite primary outcome of major thrombosis and/or major bleeding occurred in 83% of the UFH patients and 79% of the argatroban patients, and the authors concluded that argatroban was non-inferior to UFH with regard to bleeding and thrombosis rates. While argatroban drug costs were higher, they were balanced when the blood product support and HIT testing associated with UFH use were taken into account (50). A systematic review (13 studies) of argatroban use in 307 patients on ECMO found considerable variation in dosing practice and in target anticoagulation goals with either the aPTT or the ACT, likely related to differences in disease severity, end-organ function, and institutional aPTT or ACT goals. Across the included studies, bleeding and thromboembolic event rates were similar to those with UFH (51).

CONCLUSION
Preventing bleeding and thrombosis amid the inherent variability of ECMO circuits, cannulation, and patient populations is extremely challenging. The choice of anticoagulants, once limited to heparin, has now expanded with the new parenteral agents. Bivalirudin is being increasingly explored for anticoagulation in patients on ECMO for its obvious advantages of a short half-life and the ability to bind both free and clot-bound thrombin, although the lack of a reversal agent is its primary disadvantage. Although data are limited, there is increasing evidence that it may be at least equally efficacious; it also has the potential to avoid antithrombin replacement and to reduce laboratory monitoring. Current potential benefits are mostly extrapolated from adult data in the setting of PCI, and additional ECMO-specific studies are needed to determine the true impact on clinical outcomes such as transfusion needs, circuit-related thrombosis, and hemolysis.
Randomized controlled trials are extremely difficult to conduct in this diverse population of patients, and continued data collection on the safety and efficacy of bivalirudin in ECMO will be required to determine whether it can be considered a first-line anticoagulant in ECMO.
The Effect of KNO3 on the Growth of Sorghum Plant (Sorghum bicolor var. numbu) The purpose of this research was to determine the effect of KNO3 application on the growth of sorghum (Sorghum bicolor var. numbu). The study used a completely randomized design with four KNO3 treatments (K1, K2, K3 and K4) plus a control, each replicated five times. K0 received no KNO3 (distilled water only), while K1, K2, K3 and K4 received 15%, 30%, 45% and 60% KNO3, respectively. The parameters measured were the number of leaves, fresh leaf weight, dry leaf weight, panicle (malai) weight, and chlorophyll content (chlorophyll a, chlorophyll b, and total chlorophyll). Data were analyzed by ANOVA (analysis of variance) followed by Tukey's honestly significant difference (HSD) test at the 0.05 significance level. The results showed no statistically significant differences among treatments for any parameter. However, the K4 treatment (60%) gave the highest values for leaf number, fresh leaf weight, dry leaf weight and panicle weight, whereas chlorophyll content did not show any notable response. This study did not identify the concentration giving the best and maximum effect of KNO3 on sorghum growth; hence, deeper assessment is needed. Introduction Sorghum is a cereal crop that can grow under various environmental conditions, especially on dry marginal land in Indonesia. Sorghum has the advantages of broad adaptability, drought tolerance, high productivity, and greater resistance to pests and diseases than other food crops. Sorghum is useful as food, feed, and industrial raw material [1]. Other countries use sorghum grain as food, animal feed and industrial raw material. As a world food crop, sorghum ranks fifth after wheat, rice, corn, and barley. In developed countries, sorghum grain is used as poultry feed, while the stems and leaves are fed to ruminant livestock. Sorghum grain is also a raw material for industrial products such as ethanol, beer, wine, syrup, glue, paint and modified starch. In some countries, such as the United States, India and China, sorghum is used as a raw material for bioethanol production. Sorghum production in Indonesia is very low, and sorghum products are largely absent from the markets. Production can be increased in several ways, one of which is fertilizer application. Fertilization aims to supply the nutrients that plants need for growth and thereby increase yield. Suitable fertilizers include those containing macronutrients such as N, P, and K. Potassium is essential for carbohydrate metabolism, including starch formation, breakdown, and translocation. Potassium plays an important role in plant physiology, acting as an activator of essential metabolic enzymes, including enzymes involved in starch synthesis. However, only part of the K in the soil solution is absorbed by plants; the remainder stays in solution or is strongly bound to the surfaces of soil colloids. One fertilizer that can serve as a K source for plants is KNO3. KNO3 supplies both potassium and nitrogen, which plants need for growth, and the nitrogen in KNO3 is required by plants in large quantities. Reference [2] states that K is the second most required macronutrient after N.
Plants take up K as the K+ ion; hydrated K+ is large and monovalent, so it is only weakly adsorbed and is easily lost by leaching from the soil. K is supplied to the soil as salt fertilizers such as KCl, KNaCl, K2SO4 and KNO3. Previous research showed that a KNO3 solution at a 0.4% concentration effectively broke the seed dormancy of tamarind (asam jawa). Reference [4] showed that applying KNO3 fertilizer at up to 150 kg/ha produced taller plants, more leaves, a larger leaf area index, larger dry weights, more seeds per row, higher production and higher potassium uptake compared with the control. This research on the effect of KNO3 on the growth of sorghum (Sorghum bicolor var. numbu) was therefore carried out, with a view to producing good-quality animal feed. The parameters measured were the number of leaves, fresh leaf weight, dry leaf weight, panicle weight, and the chlorophyll content of the sorghum plants. Results The parameters used to assess the growth of the sorghum plants were the number of leaves, fresh leaf weight, dry leaf weight, panicle weight, and chlorophyll content (chlorophyll a, b, and total). The Effect of KNO3 on the Number of Leaves, Fresh Leaf Weight, and Dry Leaf Weight of Sorghum The results in Table 1 show that KNO3 at the concentrations tested had no significant effect on the number of leaves, fresh leaf weight, or dry leaf weight. However, KNO3 at the 60% concentration (K4) gave the highest values of all treatments. The Effect of KNO3 on the Panicle Weight of Sorghum The results (Table 2) show that KNO3 at the 15% concentration (K1) did not differ much from the control (without KNO3). A visible difference appeared only at the 60% concentration (K4), which gave a higher panicle weight. The Effect of KNO3 on the Chlorophyll Content of Sorghum The results (Table 3) show that KNO3 at the concentrations tested had no effect on leaf chlorophyll content. Discussion Based on the results, KNO3 application affected each measured growth parameter differently. In general, KNO3 can influence plant growth parameters such as the number of leaves, fresh leaf weight, dry leaf weight, panicle weight, and chlorophyll content, although the optimum KNO3 concentration for sorghum growth remains unknown. None of the tested concentrations produced statistically significant effects on any variable, but the 60% concentration (K4) gave the highest values for all observed parameters except chlorophyll content. This suggests that the lower KNO3 concentrations did not reach the optimum, since no maximum point followed by a decline in growth was observed. The research of [4] showed that KNO3 affects plant growth by increasing leaf number and prolonging the vegetative period of Amorphophallus muelleri; foliar application gave the best results at a 4% concentration, whereas soil application had no effect at any dose. In this connection, Reference [5] reported that KNO3 produced the best vegetative growth and reproductive characteristics in strawberry cv. 'Merak'.
The research of [6] showed that KNO3 at 6 and 8 mM concentrations applied as a spray also affected the vegetative growth and reproduction of tomato plants. KNO3 at 0.5%, sprayed on all parts of the plant, gave good growth in parameters such as seedling height, leaf length, leaf width, leaf area, number of leaves, and number of roots of orchid plants. The relationship between KNO3 application and the variables in this research can be seen from the correlation coefficients obtained: 73.9% for the number of leaves, 87.9% for fresh leaf weight, 89.5% for dry leaf weight, 79.4% for panicle weight, and, for chlorophyll content, 16.6% for chlorophyll a, 31.3% for chlorophyll b, and 20.8% for total chlorophyll. These results indicate a strong correlation (>70%) between KNO3 application and leaf formation, fresh leaf weight, dry leaf weight, and panicle weight, with the remainder of the variation explained by other factors; the correlations with chlorophyll content are <32% (weak). Leaves are among the most important plant organs, and one of their key functions is photosynthesis. Plants with many leaves (Table 1) produce a greater dry leaf weight. Plants with more leaves capture more solar energy for photosynthesis and produce more photosynthate, because the leaf stomata regulate the uptake of CO2 as a raw material for photosynthesis. Assimilate production can be seen in the number of flowers, so the flowers borne as panicles on sorghum plants are a product of photosynthesis: during generative growth, assimilates are allocated to seed formation in the sorghum panicles. The more panicles that are formed, the more fruits (seeds) are formed (Table 2). For the chlorophyll content parameters, the weak correlation means that the chlorophyll content of the sorghum plants was only slightly affected by KNO3 application (Table 3); no treatment showed significant results. According to Reference [7], nitrogen (N) is one of the elements involved in chlorophyll formation and is needed in large amounts; however, plants cannot use nitrogen directly, as bacteria must first carry out fixation before the plants can use it. The results (Table 3) show that KNO3 at low concentrations did not significantly affect chlorophyll formation. Besides nitrogen, other factors can affect chlorophyll formation, namely environmental factors such as water, light, temperature, and other elements (N, Fe, Mg, Cu, Zn, O, and sulfur), as well as the genetic makeup of the plants. Nevertheless, even with low chlorophyll content the plants can grow well, as seen in the other parameters, which differed at the 60% concentration compared with the other concentrations, although the differences were not yet statistically significant. Photosynthesis in plants is affected not only by chlorophyll content but also by other factors such as CO2, light, and water. However, plants with more leaves automatically have a better photosynthetic product; this is related to the number of stomata that take in CO2 for photosynthesis.
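To make the analysis pipeline concrete, the one-way ANOVA with Tukey's HSD at the 0.05 level and the correlation calculations described above can be sketched in a few lines of Python; the treatment data below are hypothetical placeholders, not the study's measurements.

```python
# Illustrative sketch of the analysis pipeline reported in this paper:
# one-way ANOVA, Tukey's HSD at alpha = 0.05, and Pearson correlation.
# All numbers below are hypothetical placeholders, not the study's data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical leaf counts for the five treatments (five replicates each).
leaf_counts = {
    "K0": [8, 9, 8, 10, 9],
    "K1": [9, 9, 10, 9, 10],
    "K2": [10, 9, 10, 11, 10],
    "K3": [10, 11, 10, 11, 11],
    "K4": [11, 12, 11, 12, 12],
}

# One-way ANOVA across the treatment groups.
f_stat, p_value = stats.f_oneway(*leaf_counts.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD for pairwise comparisons at the 0.05 significance level.
values = np.concatenate(list(leaf_counts.values()))
groups = np.repeat(list(leaf_counts.keys()), 5)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))

# Pearson correlation between KNO3 concentration and mean leaf count,
# analogous to the correlation percentages quoted in the discussion.
conc = np.array([0.0, 15.0, 30.0, 45.0, 60.0])
means = np.array([np.mean(v) for v in leaf_counts.values()])
r, p = stats.pearsonr(conc, means)
print(f"Pearson r = {r:.3f} (r^2 = {r * r:.1%})")
```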
From the results of this research it can be concluded that KNO3 at a high concentration (60%) can increase sorghum growth in terms of the number of leaves, fresh leaf weight, dry leaf weight and panicle weight, but it did not affect chlorophyll content. There is a relationship between the photosynthetic product (panicle weight) and the leaf parameters, on the assumption that leaf formation affects plant yield, since leaves are where photosynthesis takes place. The correlation between leaf number and panicle formation is 55%, meaning that 55% of the variation in panicle formation is related to the number of leaves, with the remainder attributable to other factors. The correlation between panicle weight and fresh leaf weight is 77.4%, a strong correlation; the correlation between panicle weight and dry leaf weight is 72.4%, also strong, meaning that panicle weight is influenced by dry leaf weight. This shows that the optimum KNO3 concentration for sorghum growth remains unknown, so further study of KNO3 at higher concentrations and with more frequent application is needed, with the aim of finding the optimum concentration, frequency, and application technique to obtain maximum output. This is supported by the research of [3], in which increasing the dose of KNO3 fertilizer up to 150 kg/ha, or 112.5 g/plot, produced taller corn plants, more leaves, a larger leaf area index, greater dry leaf weight, more rows per cob, more seeds per row, higher production and higher potassium uptake than the controls. Conclusion From this research it can be concluded that: 1. KNO3 at the 60% concentration (K4) gave higher values than the other concentrations for sorghum parameters such as the number of leaves, fresh leaf weight, dry leaf weight, and panicle weight; however, it had no effect on leaf chlorophyll content. 2. The optimum KNO3 concentration for the best growth of sorghum remains unknown. Suggestion Based on the results of this research, further study of KNO3 at higher concentrations and with more frequent application is needed, with the aim of finding the optimum concentration, frequency, and application technique to obtain maximum output.
Another layer of complexity in Staphylococcus aureus methionine biosynthesis control: unusual RNase III-driven T-box riboswitch cleavage determines met operon mRNA stability and decay Abstract In Staphylococcus aureus, de novo methionine biosynthesis is regulated by a unique hierarchical pathway involving stringent-response controlled CodY repression in combination with a T-box riboswitch and RNA decay. The T-box riboswitch residing in the 5′ untranslated region (met leader RNA) of the S. aureus metICFE-mdh operon controls downstream gene transcription upon interaction with uncharged methionyl-tRNA. met leader and metICFE-mdh (m)RNAs undergo RNase-mediated degradation in a process whose molecular details are poorly understood. Here we determined the secondary structure of the met leader RNA and found the element to harbor, beyond other conserved T-box riboswitch structural features, a terminator helix which is a target for RNase III endoribonucleolytic cleavage. As the terminator is a thermodynamically highly stable structure, it also forms posttranscriptionally in met leader/metICFE-mdh read-through transcripts. Cleavage by RNase III releases the met leader from metICFE-mdh mRNA and initiates RNase J-mediated degradation of the mRNA from the 5′-end. Of note, metICFE-mdh mRNA stability varies over the length of the transcript, with a longer lifespan towards the 3′-end. The obtained data suggest that coordinated RNA decay represents another checkpoint in a complex regulatory network that adjusts costly methionine biosynthesis to current metabolic requirements. Figure S3. Stretch of rare codons present within the last 800 nt of metF. The frequency of each codon at positions 300-450 (= nt 900-1,350) of metF is given in percent ('%') for S. aureus. 'AUG' has a frequency of 100%, as it is the only base triplet coding for methionine. A frequency of 80% means that the respective codon is used by S. aureus in 80% of cases when the particular amino acid is encoded. Codons with frequencies below 20% (grey) and 10% (red) are regarded as rare codons. The region of nt 1,168-1,194 (= codons 389-398) within metF, where 29% of 5′-ends were detected by 5′ RACE (see Figure 7), is highlighted by a blue box. The base triplet and the amino acid in single-letter code are given for each position. The kb fragments were DpnI-digested, cleaned up using the innuPREP PCRpure kit and used for a 'ligation' reaction (see above). Vector pBASE6_Ter_mutated_1 was generated in several steps because cloning as described above was unsuccessful. First, two fragments overlapping in the region containing the point mutations introduced by the primers were amplified by PCR using the pBASE_Ter_destab plasmid as template. The 800 bp fragment contained a BglII restriction site, the 3′-flanking region of the met leader and the 3′ region of the met leader; the 1.4 kb fragment contained a BglII restriction site, the 5′-flanking region of the met leader and most of the met leader sequence (both fragments overlapped in the terminator region). PCR reactions were treated with DpnI and cleaned up using the innuPREP PCRpure kit. 50-100 ng of each fragment were used as template for an overlap PCR using the primers FW099 and FW100. The resulting 2.2 kb fragment was size-separated from unwanted PCR products via gel electrophoresis, cut out from the agarose gel and purified using the NucleoSpin® Gel and PCR Clean-up kit (Macherey-Nagel, #740588.250).
The purified fragment was subjected to A-tailing using DreamTaq polymerase (Thermo Scientific, #EP0703), following the manufacturer's instructions, for cloning into the pGEM-T Easy Vector System I (Promega, #A1360). The resulting plasmid, pGEM-T-easy+Ter_mut_1, was treated with BglII (Thermo Scientific, FastDigest) to excise the Ter_mut_1+1kb_flanking region fragment. The pBASE6 vector was linearized using BglII and dephosphorylated using CIP (alkaline phosphatase, calf intestinal; NEB, #M0290) according to the manufacturer's instructions. The fragment and the linearized vector were size-separated on an agarose gel, cut out from the gel and purified using the NucleoSpin® Gel and PCR Clean-up kit. The fragment was then cloned into pBASE6. pEB01-met leader-metI Vector pEB01-met leader-metI was generated by PCR-amplifying the met leader sequence (from the -35 signal onwards) through nt 215 of metI, using genomic DNA of S. aureus Newman as template. The PCR product was digested with BamHI (Thermo Scientific, FastDigest) and cloned into pEB01. pJC1_tRNAi_deletion Vector pJC1_tRNAi_deletion was generated by amplifying the pJC1-MetTBox-metleader-cl-pR-eYFP vector in two fragments that overlapped in the kanamycin resistance cassette. The 5.2 kb fragment included the 3′-end of the tRNA sequence on the non-overlapping end, which was subsequently removed by Pfl23II (Thermo Scientific, FastDigest) digestion. The 4.5 kb fragment had a flanking Pfl23II restriction site on the non-overlapping end. Both fragments were DpnI- and Pfl23II-digested, cleaned up using the innuPREP PCRpure kit and used for a ligation reaction. The accuracy of all plasmids was verified by Sanger sequencing. Probe signals were exported as *.tiff files. These files were opened in Fiji, and the region of interest was selected using the rectangle tool covering the band of interest. After the last band was measured, the signals were plotted and the background noise was removed by closing the area under each curve. The enclosed areas under the curves were measured by the programme, and the data were exported from the results table. The percentage of transcript remaining was calculated by setting the t0 value (derived from the t0 band signal of the rifampicin assay) to 100%. 5S rRNA signals were quantified first; only if these values were equal for all time points were the met leader transcript signals quantified. The 'percentage of transcript remaining' was plotted against time to obtain the graph shown in Figure 4B. Supplementary Method 3 In-line probing assays. In-line probing was used to determine the secondary structure of the met leader RNA. The experimental setup was adapted from (2). In vitro transcribed RNA of either the full-length (440 nt) or a shortened (237 nt) met leader version was used for in-line probing. Column-purified PCR products containing a T7 promoter sequence immediately upstream of the met leader sequence were used as template for in vitro transcription. To increase transcription efficiency, the full-length met leader template sequence contains two additional 5′-guanosines and the shortened met leader one additional 5′-guanosine. The sequences of the resulting T7 transcripts are listed in Table S3. In vitro transcription was carried out using the MEGAscript T7 Kit (Ambion, #AM1333) according to the manufacturer's instructions for short transcripts.
Following transcription, template DNA was digested by incubation with 1 µl Turbo DNase for 15 min at 37 °C, the reaction volume was increased to 100 µl with RNase-free ddH2O, and the RNA was purified with 100 µl phenol/chloroform/isoamylalcohol solution (P/C/I, 25:24:1) in a Phase Lock Gel (PLG) 'heavy' tube (5 Prime, #2302830). After vigorous shaking, the reaction was centrifuged for 12 min at 13,000 rpm and 15 °C, and the upper aqueous phase was cleaned up via a G-25 column (illustra MicroSpin G-25 column, GE Healthcare, #27-5325-01) according to the manufacturer's instructions. RNA was precipitated with 300 µl of a 30:1 ethanol/sodium acetate mix for 2 to 3 h at -80 °C or overnight at -20 °C. When precipitated at -80 °C, the solution was thawed on ice for 20 to 30 min prior to centrifugation for 30 min at 14,000 rpm and 4 °C. The RNA pellet was washed with 100 µl of 70-75% ethanol, centrifuged for 10 min at 14,000 rpm and 4 °C, and then dried at room temperature. To redissolve the RNA, 33 µl of RNase-free ddH2O were added and the solution was incubated for 5 min at 65 °C and 1,000 rpm in a heating block with shaking function, with vortexing once or twice in between. RNA concentration and quality were determined by measuring a 1:10 diluted and an undiluted aliquot of the sample with a spectrophotometer (NanoDrop). 10 µM stock solutions of the in vitro transcribed RNA were set up with RNase-free ddH2O for downstream applications and stored at -80 °C until use. The RNA was spun down for 30 min at 13,000 rpm and 4 °C; the pellet was air-dried and then redissolved in 25-50 µl RNase-free ddH2O. When the signal detected with the Geiger counter was less than 1,000 counts per second, the RNA was regarded as weakly labelled and double the quantity was used for the in-line probing reaction. The RNA concentration was determined by measuring an undiluted aliquot of the sample with a spectrophotometer (NanoDrop), and a 0.2 pmol/µl stock was set up with RNase-free ddH2O. To prepare the OH ladder, 0.2 pmol of [γ-32P]-ATP-labelled RNA was mixed with 9 µl alkaline hydrolysis buffer (Ambion), incubated for 5 min at 95 °C, and the reaction was stopped by adding 12 µl of 2x RNA gel loading dye. The samples were then loaded, and electrophoresis proceeded at 40 W for 1 to 4 h (depending on the polyacrylamide percentage and the region of the RNA molecule to be resolved) in 1x TBE buffer at room temperature. The resulting gel was transferred onto blotting paper (Whatman), dried for 45 min at 80 °C under vacuum, and exposed to a storage phosphor screen for 1 to 2 days. The screen was read out using the Typhoon™ FLA 7000 laser scanner (GE Healthcare). Tables Table S1. List of additional plasmids used in this work. For shuttle vectors, selection is detailed for Gram-negative and Gram-positive bacteria.
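The 'percentage of transcript remaining' computation described above (each band signal normalized to the t0 band, then plotted against time) reduces to simple arithmetic, optionally followed by an exponential fit to estimate a half-life; a minimal sketch with hypothetical band intensities:

```python
# Sketch of the 'percentage of transcript remaining' calculation described
# above: band intensities are normalized to the t0 band (set to 100 %) and,
# optionally, fitted to a single-exponential decay to estimate a half-life.
# The intensities below are hypothetical, not values from the study.
import numpy as np
from scipy.optimize import curve_fit

time_min = np.array([0, 2, 4, 8, 16, 32])                  # minutes after rifampicin
band_signal = np.array([5400, 4100, 3000, 1700, 600, 90])  # densitometry units

percent_remaining = 100.0 * band_signal / band_signal[0]
for t, p in zip(time_min, percent_remaining):
    print(f"t = {t:>2} min: {p:5.1f} % remaining")

# Single-exponential decay: N(t) = 100 * exp(-k t); half-life = ln(2)/k.
def decay(t, k):
    return 100.0 * np.exp(-k * t)

(k,), _ = curve_fit(decay, time_min, percent_remaining, p0=[0.1])
print(f"k = {k:.3f} /min, half-life = {np.log(2) / k:.1f} min")
```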
Elementary proof of congruences modulo 25 for broken $k$-diamond partitions Let $\Delta_{k}(n)$ denote the number of $k$-broken diamond partitions of $n$. Quite recently, the second author proved an infinite family of congruences modulo 25 for $\Delta_{k}(n)$ with the help of modular forms. In this paper, we aim to provide an elementary proof of this result. Introduction Throughout this paper, we assume that $|q| < 1$ and adopt the customary $q$-series notation $$(a; q)_{\infty} := \prod_{n=0}^{\infty} (1 - aq^{n}).$$ The notion of broken $k$-diamond partitions was introduced by Andrews and Paule [1] in 2007. They showed that the generating function of $\Delta_{k}(n)$, the number of broken $k$-diamond partitions of $n$, is given by $$\sum_{n=0}^{\infty} \Delta_{k}(n) q^{n} = \frac{(q^{2}; q^{2})_{\infty} \, (q^{2k+1}; q^{2k+1})_{\infty}}{(q; q)_{\infty}^{3} \, (q^{4k+2}; q^{4k+2})_{\infty}}.$$ In fact, Chan extended these congruences further. Furthermore, other infinite families of congruences modulo 5 satisfied by $\Delta_{2}(n)$ have been discovered by many authors; the interested reader may refer to Radu [14] and Xia [16]. Quite recently, with the help of modular forms, the second author [15, Theorem 2] proved the following infinite family of congruences modulo 25 for $\Delta_{k}(n)$. Our main purpose in this paper is to provide an elementary proof of Theorem 1.1. We now absorb the ideas of [2] with some refinements. The first ingredient from [2] is the following three relations. Now, for $\alpha \in \mathbb{Z}_{\geq 0}$ and $\beta \in \mathbb{Z}$, we define $P(\alpha, \beta)$. It is not hard to observe that $P(0, 0) = 2$ (2.6). With the help of the following recurrence relations, along with the initial conditions (2.6)-(2.9), one may easily express $P(\alpha, \beta)$ in terms of $K$ and $q$ for arbitrary $\alpha \in \mathbb{Z}_{\geq 0}$ and $\beta \in \mathbb{Z}$. Proof. We first notice the identity that gives (2.10). Next, it follows from (2.5) and (2.8) that an identity equivalent to (2.11) holds. At last, a similar computation yields (2.12). The other ingredient we require from [2] is stated as follows. We shall show (3.1). One readily sees that Theorem 1.1 is a direct consequence of (3.1), since if $k \equiv 62 \pmod{125}$, say $k = 125m + 62$, then $2k + 1 = 250m + 125 = 125(2m + 1)$ is a multiple of 125. In view of (2.5), we may rewrite the above identity accordingly. Using Lemma 2.3 and the initial conditions (2.6)-(2.9) to express each summand $P(\cdot, \cdot)$ in terms of $K$ and $q$, we may further simplify the above identity. Acknowledgements The second author was supported by the National Natural Science Foundation of China (No. 11501061).
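As a quick numerical check of the Andrews-Paule generating function quoted in the introduction, the first coefficients $\Delta_k(n)$ can be computed by truncated power-series arithmetic; the plain-Python sketch below is illustrative only and uses no external libraries.

```python
# Expand the Andrews-Paule generating function for broken k-diamond
# partitions,
#   sum_{n>=0} Delta_k(n) q^n
#     = prod_{n>=1} (1 - q^{2n})(1 - q^{(2k+1)n})
#       / ( (1 - q^n)^3 (1 - q^{(4k+2)n}) ),
# as a truncated power series with exact integer coefficients.

N = 20  # truncation order

def mul(a, b):
    """Multiply two truncated power series (coefficient lists of length N)."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

def inv(a):
    """Invert a power series whose constant term is 1."""
    b = [0] * N
    b[0] = 1
    for n in range(1, N):
        b[n] = -sum(a[k] * b[n - k] for k in range(1, n + 1))
    return b

def euler_product(step):
    """Truncated prod_{n>=1} (1 - q^{step*n})."""
    s = [0] * N
    s[0] = 1
    for n in range(1, N):
        if step * n >= N:
            break
        factor = [0] * N
        factor[0] = 1
        factor[step * n] = -1
        s = mul(s, factor)
    return s

def delta_series(k):
    num = mul(euler_product(2), euler_product(2 * k + 1))
    den = mul(mul(euler_product(1), mul(euler_product(1), euler_product(1))),
              euler_product(4 * k + 2))
    return mul(num, inv(den))

for k in (1, 2):
    print(f"k = {k}: Delta_k(0..9) =", delta_series(k)[:10])
```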
Microbiological profile of different sources of drinking water offered to commercial broiler chicken during monsoon season A study was carried out to investigate the effect of water from five different sources on the microbiological parameters of broiler chicken. Among the different sources, the total bacterial load was highest in pond water. However, after treatment, the microbiological qualities (total viable count, total E. coli count and total coliform count) of all the water sources under study were improved. Rain water was free from E. coli. The average total water consumption (l/bird) of broiler chicken was lowest for untreated pond water (12.055) and highest for bore well water (14.560). After treatment of the water, the total water consumption per bird increased numerically for all groups. The overall water/feed consumption ratio of broiler chicken offered the different sources of water ranged from 3.67 to 4.07; it was lowest for untreated pond water (3.67) and highest for untreated rain water (4.07). Introduction Water quality attributes can have a direct or indirect effect on the performance of broiler chicken. High levels of bacterial contamination, minerals or other pollutants in drinking water can have detrimental effects on normal physiological processes, resulting in inferior performance (www.aces.edu). Drinking water can transmit some bacterial, viral and protozoan infections that are among the most common poultry diseases. Water contaminated with microorganisms, algae, dust and rust is relatively common and can have a profound adverse impact on poultry performance. In some respects, water quality can have a greater negative effect on bird performance than feed quality, because it is a well-known fact that birds consume more water than feed. In the modern era of poultry production, water should be provided as clean as possible in order to avoid possible microbial hazards. Good-quality water is very important for good digestion and for creating a healthy gut flora, which helps the bird absorb all the essential nutrients and keeps gut infections away (Manwar et al., 2012a) [4]. The quality of surface and ground water depends upon naturally occurring inclusions such as cations, anions, heavy metals and microorganisms. The main source of drinking water for humans as well as animals, by and large, is open wells or tube wells (Manwar et al., 2012b) [5]. The use of drinking water of high physical, chemical and microbiological quality is of fundamental importance in animal production, because many animals have access to the same water source and a problem in water quality would affect a great number of animals. This is particularly relevant in poultry production, where a single water source serves thousands of animals. Therefore, control measures must be considered a priority in order to prevent the occurrence of waterborne diseases, which would certainly result in great economic losses. Although water does not provide ideal conditions for pathogenic microorganisms to multiply, they will generally survive long enough to allow waterborne transmission. Water is therefore an excellent transmission route for agents responsible for human and animal diseases (Amaral, 2004) [1]. Most often, poultry farmers become alarmed only when the mortality level on a farm is high.
However, even the existence of disease at a subclinical level may hinder the performance of the birds in terms of body weight or egg number. Such economic losses, though sometimes relatively small and unnoticed, may mean the difference between success and failure in the poultry business. Hence, the adage "prevention is better than cure" applies to the poultry industry more than to any other field (Prabakaran, 2018) [6]. Materials and methods A total of 450 day-old commercial broiler chicks (Cobb 400) of similar body weight and from a single hatch were procured from a local hatchery in Guwahati city. The chicks were weighed and randomly divided into ten experimental groups, namely untreated and treated groups for each of five water sources: ring well water, tube well water, bore well water, pond water and rain water. Each group was further subdivided into three replicates of 15 chicks each. The birds were offered either untreated or treated drinking water from these five sources. Water was treated with a combination of an acidifier and a sanitizer, each at the rate of 0.05 ml per liter of drinking water. Water samples from all the untreated and treated groups were analysed for various physico-chemical parameters. The total viable count of bacteria in water samples was determined as per the method recommended by the Standard Methods for the Examination of Water and Waste Water (1998). The pour plate method was used for the test. Serial ten-fold dilutions (10⁻¹ to 10⁻⁵) of the water samples collected in sterile bottles were made in test tubes using normal saline solution as diluent. The diluted sample, in a 1 ml volume, was transferred into duplicate Petri dishes. About 15-20 ml of sterile molten plate count agar maintained at 45°C was poured and mixed thoroughly with the inoculum. The plates were incubated at 37°C for 24 hours. Plates showing 30-300 colonies were selected and the colonies were counted. The number of bacteria in the sample was determined by multiplying the mean of the colonies on the duplicate plates by the dilution factor, and was expressed as colony-forming units (CFU)/ml of the sample. Total E. coli in untreated and treated water samples was determined as per the method recommended by the Standard Methods for the Examination of Water and Waste Water (1998), using the spread plate method on Eosin Methylene Blue agar (EMB agar). From the selected ten-fold dilution, 0.1 ml of the inoculum was transferred onto each of duplicate EMB agar plates. The inoculum was spread evenly using a sterile L-shaped disposable plastic rod and the plates were incubated at 37°C for 24 hours. At the end of incubation, greenish-black colonies with a metallic sheen were counted as E. coli. The number of E. coli was estimated as the mean CFU on the duplicate plates multiplied by the dilution factor, and was expressed as CFU/ml of the water sample. MacConkey's lactose bile broth was used; after preparation, the medium was distributed into test tubes in 10 ml volumes with a Durham tube placed in an inverted position, and the tubes were autoclaved at 121°C for 15 minutes. A five-tube system, with each set consisting of five tubes, was adopted as per the method recommended by the Standard Methods for the Examination of Water and Waste Water (1998).
In the first set, each tube was inoculated with 10 ml of sample, in the second set with 1 ml, and in the third set with 0.1 ml of sample. The inoculated tubes were incubated at 37°C for 24 hours. Acid and gas production was recorded as a positive reaction. The number of tubes in each set with a positive reaction was recorded, the results were compared with MacConkey's table, and the number of bacteria in 100 ml of water was noted. Results and Discussion The average values of the total viable count of bacteria (cfu/ml) of drinking water from the different sources and treatments are shown in Table 1. The average total viable counts of bacteria (cfu/ml) of untreated ring well, tube well, bore well, pond and rain water were 190×10³, 21×10³, 170×10³, 207.20×10³ and 0.73×10², respectively. The corresponding values for the treated sources were 0.45×10², 0.00, 0.38×10², 0.72×10² and 0.00. Among the untreated sources, the highest total viable count of bacteria was found in pond water, followed by ring well, bore well, tube well and rain water. For the treated sources, the total viable count of bacteria became nil in tube well and rain water. Overall, treatment greatly reduced the total viable count of bacteria for all the water sources under study. The total viable count is a measure of the total number of viable bacteria in a water sample. In the present study, the total bacterial counts of all the untreated sources except rain water were very high compared with the report of Thirunavukkarasu (1997) [8], who found total bacterial counts of 4428 cfu/ml and 164 cfu/ml in open well and bore well water, respectively, in Namakkal taluk of Tamil Nadu. In another study, Abbas et al. (2010) reported that the total bacterial count of Nile water in Egypt was uncountable compared with well water and commercial water. In support of the present findings, Ibitoye et al. (2013) [3] in Nigeria and Saidy et al. (2015) [7] in Egypt also reported very high bacterial counts in well water, farm tap water, farm stored water and underground water. The higher bacterial count in well water might be due to its vulnerability to various pollutants and to contamination by people fetching water from it (Ibitoye et al., 2013) [3]. Treatment of the drinking water with the combination of acidifier and sanitizer greatly reduced the total bacterial count, in some cases to nil; this finding was in agreement with the report of Das (2013) [2]. Total E. coli count The average values of the total E. coli count of drinking water from the different sources and treatments are shown in Table 1. The average total E. coli counts (cfu/ml) of untreated ring well, tube well, bore well, pond and rain water were 0.91×10², 0.78×10², 0.67×10², 2.07×10² and 0.00, respectively. The corresponding values for the treated sources, except pond water, were nil. Among the untreated sources, the highest total E. coli count was found in pond water, followed by ring well, tube well and bore well water. E. coli is an aerobic, Gram-negative, motile rod that ferments lactose with gas production and usually produces smooth, non-mucoid colonies on solid media. Its presence in water is an indication of fecal contamination. In the present study, untreated drinking water from the different sources had a maximum E. coli count of 207 cfu/ml. Contrary to the present findings, Das (2013) [2] and Ibitoye et al. (2013) [3] recorded lower E. coli counts (cfu/ml) of 100 and 160 in ring well and pipe-borne water, respectively.
On the other hand, Das et al. (2011) [2] found a much higher E. coli count of 500 cfu/ml in water samples from West Bengal. Treatment of the water with the combination of acidifier and sanitizer made the water free from E. coli, except for pond water (2 cfu/ml). The present findings corroborate the report of Das (2013) [2], who found zero E. coli in ring well water after treatment with the combination of acidifier and sanitizer. Total coliform count using the MPN technique The average values of the total coliform count, determined using the MPN technique, of drinking water from the different sources and treatments are shown in Table 1. The average total coliform counts (MPN index/100 ml) of untreated ring well, tube well, bore well, pond and rain water were 1642, 200, 974, 1462 and <2, respectively. The corresponding values for all sources of treated drinking water were <2. Among the untreated sources, the highest MPN index per 100 ml was found in ring well water, followed by pond, bore well, tube well and rain water. Coliform bacteria are Gram-negative, aerobic, non-sporing rods that ferment lactose with the formation of acid and gas within 24 hours at 37°C. In the present study, the total coliform count of the different sources of untreated water as per the MPN technique ranged from <2 to 1642 MPN index per 100 ml. Contrary to the present findings, Thirunavukkarasu (1997) [8] found a much higher coliform count of 2164 cfu/ml in open well water in Namakkal taluk of Tamil Nadu. In a similar study, Saidy et al. (2015) [7] reported total coliform counts of different sources of drinking water ranging from 2.8 to 500 cfu/ml. In contrast to the present findings, Abbas et al. (2010) revealed that the coliform count was nil in the different water sources under their study. The total coliform count of untreated ring well water was reported as more than 1680 MPN index per 100 ml (Das, 2013) [2], which was comparable with the present finding for untreated ring well water (1642 MPN index per 100 ml). In the present study, the total coliform count as per the MPN technique was <2 per 100 ml in all the sources of drinking water treated with the combination of acidifier and sanitizer. This was in agreement with the reports of Manwar et al. (2012b) [5] and Das (2013) [2], who also found average total coliform counts after treatment of <2 MPN index per 100 ml.
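The plate-count arithmetic used throughout these results (mean colony count on duplicate plates multiplied by the dilution factor, reported as cfu/ml) is a one-line computation; a minimal sketch with hypothetical plate counts:

```python
# Sketch of the plate-count arithmetic described in Materials and Methods:
# CFU/ml = mean colonies on duplicate plates x dilution factor.
# For the spread-plate E. coli count, the 0.1 ml inoculum adds a factor of 10.
# The counts below are hypothetical, not data from the study.

def cfu_per_ml(duplicate_counts, dilution, inoculum_ml=1.0):
    """Colony-forming units per ml from duplicate plate counts."""
    mean_colonies = sum(duplicate_counts) / len(duplicate_counts)
    return mean_colonies * dilution / inoculum_ml

# Pour plate (1 ml inoculum) at the 10^-3 dilution:
print(cfu_per_ml([195, 205], dilution=10**3))                 # 200,000 cfu/ml

# Spread plate (0.1 ml inoculum) at the 10^-1 dilution:
print(cfu_per_ml([20, 22], dilution=10**1, inoculum_ml=0.1))  # 2,100 cfu/ml
```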
Eliminating microscopic lymph node metastasis by performing pelvic lymph node dissection during radical prostatectomy for prostate cancer The oncological benefit of pelvic lymph node dissection (PLND) for prostate cancer (PCa) remains unclear. The therapeutic effect of PLND on the elimination of microscopic metastases during radical prostatectomy (RP) for PCa was examined in the current study. A total of 348 Japanese patients with high- or intermediate-risk PCa without lymph node metastasis, who underwent antegrade RP at the Kyushu Cancer Center (Fukuoka, Japan) between August 1998 and May 2013, were retrospectively analyzed. The patients were divided into the standard (obturator + internal iliac nodes) group and the expanded (standard + additional nodes) group according to the extent of PLND. Preoperative and postoperative characteristics were also analyzed to determine the factors associated with prostate-specific antigen (PSA) failure. Standard and expanded PLND were performed in 70.9% (247/348) and 29.1% (101/348) of cases, respectively. The results revealed that the preoperative PSA level was the only marked difference between the two groups; no differences were observed in the other preoperative and postoperative characteristics. Furthermore, the rate of PSA recurrence in each group did not differ to a statistically significant extent (P=0.3622). Reducing the area of dissection from expanded PLND to standard PLND significantly reduced the number of dissected lymph nodes (P<0.0001). Additionally, the PSA level, clinical tumor stage, Gleason score of the biopsy specimen, pathological tumor stage and extent of PLND were all associated with PSA recurrence, as determined via multivariate Cox hazards regression analysis (P=0.0177, P=0.0023, P=0.0027, P<0.0001 and P=0.0164, respectively). In high- and intermediate-risk patients without lymph node metastasis, a greater number of lymph nodes were dissected when the extent of dissection was greater. Furthermore, the extent of PLND was significantly associated with PSA failure. The results indicated that PLND exerted a therapeutic effect by eliminating microscopic pelvic lymph node metastases that were not detected by routine pathological examinations. Introduction Pelvic lymph node dissection (PLND) is the only reliable technique for accurately assessing the nodal status in prostate cancer (PCa) (1)(2)(3), as the ability of imaging modalities such as computed tomography and standard magnetic resonance imaging to predict lymph node invasion is limited (4,5). Based on nomograms predicting the risk of preoperative lymph node metastasis, it is generally accepted that extended PLND is desirable in patients deemed suitable for PLND (6,7). Several studies have suggested that more extensive lymphadenectomy is associated with a survival advantage, possibly due to the elimination of microscopic metastases (8)(9)(10)(11); however, there is no definitive proof of an oncologic benefit (12). The elimination of microscopic metastasis means that lymph node metastases that are not detected by routine pathological examinations are surgically removed by PLND. Thus, if the expanded PLND template is a factor influencing prostate-specific antigen (PSA) failure after radical prostatectomy (RP) in patients who are pathologically negative for lymph node metastasis, we might be able to indirectly detect the elimination of microscopic metastasis and confirm an improved therapeutic benefit over a more constrained template.
The aim of this study was to determine whether the extent of PLND is associated with the risk of PSA failure in patients undergoing RP, especially in high- and intermediate-risk PCa patients in whom lymph node metastasis is not detected. Patients and methods Patient characteristics and risk-group classification. The cases of 638 consecutive patients with clinically localized PCa who underwent RP at the Kyushu Cancer Center (Fukuoka, Japan) between August 1998 and May 2013 were reviewed. RP was performed in an open retropubic manner in all cases. The patients were classified into three risk groups according to the D'Amico criteria (13). A total of 290 patients were excluded from this study for the following reasons: a history of hormone therapy (n=151), low-risk classification according to the D'Amico criteria (n=105), the absence of PLND (n=14), the detection of lymph node metastasis by routine pathological examination [n=13 (low-risk, n=0; intermediate-risk, n=5; high-risk, n=8)] and unclear findings in the examination of biopsy or prostatectomy specimens (n=7). Two pathologists evaluated the degree of malignancy in the biopsy and prostatectomy specimens and determined the pathological stage based on the 2009 tumor-node-metastasis classification (14). PLND technique. At a minimum, all patients underwent standard PLND, which was performed along the lower edge of the external iliac vein, with the caudal limit being the deep circumflex iliac vein and femoral canal, preserving the lymphatics overlying the external iliac artery. The proximal border was the bifurcation of the common iliac artery, and all tissue in the angle between the external and internal iliac arteries and the obturator nerve was removed. All of the fatty, connective and lymphatic tissue of the obturator fossa was removed along the obturator muscle, leaving the obturator nerve and vessels bare. Subsequently, the internal iliac artery and the internal iliac vein (to the extent possible) were skeletonized up to the obturator arteriovenous branch. The patients were subdivided into two subgroups according to the lymph node dissection technique: standard PLND and expanded (extended + more extended) PLND. Extended PLND included standard PLND as well as dissection of the lymphatics overlying the external iliac artery and vein, extending laterally to the genitofemoral nerve. More extended PLND included extended PLND as well as dissection of the lymphatics overlying the common iliac artery, extending cranially to the ureteric crossing. Tissue processing and the determination of the PSA level. The RP and PLND specimens were fixed in 15% neutral buffered formalin (Wako Pure Chemical Industries, Ltd.) for 48-96 h at room temperature, and whole-organ prostate specimens were serially sectioned perpendicular to the rectal surface at 5-mm intervals. Sections that were predominantly caudal and cephalic were cut at 5-mm intervals on the sagittal plane in order to assess the bladder neck and apical margins. The specimens were subsequently embedded in paraffin, cut into 5-µm sections and stained with hematoxylin and eosin. Extraprostatic extension was defined as the extension of the tumor from the prostate into the periprostatic soft tissue. A positive resection margin was defined by the presence of tumor cells at the stained resection margin.
The follow-up schedule after RP included a PSA assay every three months for the first two years, followed by every four months for the next three years, and every six months thereafter. The date of disease recurrence or PSA failure was defined as the date on which a serum PSA level of >0.2 ng/ml was detected; if the PSA level did not drop below 0.2 ng/ml after surgery, PSA failure was dated to the time of RP. Additional treatment was generally performed for cases meeting the criteria for PSA failure, and adjuvant therapy was not routinely used in cases with a positive surgical margin or other unfavorable factors. A small number of patients who underwent RP were subsequently treated with radiotherapy and/or hormone therapy before the serum PSA level exceeded 0.2 ng/ml; in these patients, the date on which adjuvant therapy was initiated was defined as the date of disease recurrence. All patients provided their written informed consent to participate in this study, and the study protocol was approved by the Ethics Committee of the Kyushu Cancer Center. Statistical analyses. All statistical analyses were performed using the JMP® Pro, version 13.0.0, software package (SAS Institute, Inc.). The PSA failure-free rate was determined according to the Kaplan-Meier method, and the significance of the clinicopathological parameters associated with PSA failure was assessed using the Cox proportional hazards regression model. Chi-squared and Mann-Whitney U tests were used to assess the differences between standard PLND and expanded PLND. P-values of <0.05 were considered to indicate statistical significance. Results Clinicopathological characteristics. The clinicopathological characteristics according to the PLND technique applied are presented in Table I. All patients were Japanese (median age, 66 years; range, 48-77 years), and the median PSA level was 8.171 ng/ml (range, 0.8 to 39.413 ng/ml; normal range, <4.0 ng/ml). The median follow-up period after surgery was 53.7 months. The standard PLND group included 247 (70.9%) patients, while the expanded PLND group included 101 (29.1%) patients. There were no marked differences in the preoperative characteristics of the two groups, including age, clinical tumor stage, and the Gleason score of the biopsy specimen (Table I). However, there was a significant difference in the preoperative PSA level (P=0.0008). There were no marked differences in the postoperative characteristics of the two groups, including the pathological tumor stage, final Gleason score, extraprostatic extension, resection margin, and seminal vesicle invasion. The rate of PSA recurrence in the two groups did not differ to a statistically significant extent. Association between the number of dissected lymph nodes and the PLND technique. In total, 247 patients (70.9%) underwent standard PLND, and 101 patients (29.1%) underwent expanded PLND (extended, n=78; more extended, n=23). The median number of dissected lymph nodes in the standard PLND group was 13, while that in the expanded PLND group was 19; the difference was statistically significant (P<0.0001; Table II). Associations between the patient characteristics and PSA failure. In the Cox proportional hazards analysis, all characteristics except one preoperative variable (age) were found to be significant predictors in the univariate analysis (Table III).
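The survival workflow described under statistical analyses (Kaplan-Meier PSA failure-free curves plus Cox proportional hazards regression) was run in JMP Pro; for illustration, an equivalent open-source sketch using Python's lifelines package on hypothetical data might look as follows.

```python
# Sketch of the survival analysis described above: Kaplan-Meier estimates of
# the PSA failure-free rate per PLND group, plus a Cox proportional hazards
# model. The study used JMP Pro; the lifelines package is shown here as an
# equivalent stand-in, and the small data frame below is hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "months": [12.0, 53.7, 30.2, 24.3, 60.0, 44.0, 18.6, 71.2, 9.8, 36.5],
    "psa_failure": [1, 0, 1, 1, 0, 0, 1, 0, 0, 1],     # 1 = PSA > 0.2 ng/ml
    "expanded_plnd": [0, 1, 0, 1, 0, 1, 0, 1, 0, 1],   # extent of PLND
    "log_psa": [2.1, 1.6, 2.4, 2.2, 1.3, 1.9, 1.7, 2.5, 2.9, 1.5],
})

# Kaplan-Meier estimate of the PSA failure-free rate for each PLND group.
kmf = KaplanMeierFitter()
for label, grp in df.groupby("expanded_plnd"):
    kmf.fit(grp["months"], event_observed=grp["psa_failure"],
            label=f"expanded_plnd={label}")
    print(kmf.survival_function_.tail(1))

# Cox proportional hazards model: all remaining columns act as covariates.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="psa_failure")
cph.print_summary()
```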
The multivariate analysis revealed significant differences between the patients with and without PSA failure in the preoperative PSA level, clinical tumor stage, and Gleason score of the biopsy specimen (preoperative characteristics) and in the pathological tumor stage and extent of PLND (postoperative characteristics) (Table III). Discussion Urological surgeons have been performing extended pelvic lymph node dissection in high- and intermediate-risk cases because several studies have suggested that more extensive lymphadenectomy is associated with a survival advantage, possibly due to the elimination of microscopic metastasis (8-11). However, a systematic review of studies assessing the relative benefit and harm of PLND in relation to the oncological and non-oncological outcomes of patients undergoing RP for PCa failed to confirm a direct therapeutic effect; the current poor quality of evidence indicates the need for robust and adequately powered clinical trials (12). Thus, the therapeutic role of PLND during radical prostatectomy in the management of PCa remains controversial. Patients in the low-risk group are presumed to have a minimal risk of developing lymph node metastasis (6,7,15); thus, the therapeutic role of PLND in low-risk patients is not clear. The overall results of RP operations performed in our institution showed that no low-risk patients developed lymph node metastasis. Thus, low-risk patients were excluded from the present study, which focused on intermediate- and high-risk patients. In the present study, the high- and intermediate-risk PCa patients without lymph node metastasis were classified into two groups according to the extent of PLND: standard PLND (obturator + internal iliac) and expanded PLND (standard + additional nodes), accounting for 70.9% (247/348) and 29.1% (101/348) of the patients, respectively (Table I). The preoperative PSA level was the only value for which a marked difference was observed between the two groups; no significant differences were observed in the other preoperative and postoperative characteristics. In addition, there was no marked difference in the rate of PSA recurrence between the groups (P=0.3622). The extent of PLND varies widely according to the era, institution, operative procedure and individual urologist, owing to the lack of standardized definitions of its anatomical extent (16,17). The definition of extended PLND differs according to the guidelines used. According to the National Comprehensive Cancer Network (NCCN) guidelines, extended PLND includes the removal of all node-bearing tissue from the area bounded by the external iliac vein (anteriorly), the pelvic side wall (laterally), the bladder wall (medially), the floor of the pelvis (posteriorly), Cooper's ligament (distally) and the internal iliac artery (proximally) (6). In contrast, according to the European Association of Urology (EAU), extended PLND includes the removal of the nodes overlying the external iliac artery and vein, the nodes within the obturator fossa located cranially and caudally to the obturator nerve, and the nodes located medially and laterally to the internal iliac artery (7). The major difference between these guidelines concerns the definition of extended PLND, specifically whether or not the nodes overlying the external iliac artery are resected.
At the Kyushu Cancer Center, the extent of standard PLND is similar, but not identical, to the definition of extended PLND in the NCCN guidelines, as it was performed along the lower edge of the external iliac vein, without resection of the nodes overlying the external iliac vein. When RP was initially performed at our institution, all cases underwent more extended PLND, which includes the common iliac, external iliac, obturator and internal iliac lymph nodes, as there was no consensus among urological surgeons regarding the extent of PLND, and because performing PLND over an increased range may allow the surgical resection of microscopic lymph node metastasis. However, even when patients who received preoperative hormone therapy were included, the rates of lymph node metastasis and PSA recurrence after RP at the Kyushu Cancer Center were lower than in previous studies (18,19). Thus, as more RP procedures were performed, the extent of lymph node dissection was gradually reduced to extended PLND and finally standard PLND, regardless of the D'Amico risk classification. The advantage of this single-institution study over a multicenter study is that all of the operations were performed by, or under the supervision of, urological surgeons who performed standardized surgery. For this reason, any differences in the dissection area and methods were negligible. There are no established guidelines regarding the optimum method of examining PLND specimens, and the approach may vary considerably between individual pathologists and institutions (20). At our institution, PLND specimens were processed using the same methods as the RP specimens. Although the study period was long, this study was performed in a single institution, and the tissue processing and diagnostic methods remained essentially unchanged during the study period. We also examined the number of lymph nodes dissected for each PLND technique (Table II). When the area of dissection was reduced by changing from expanded PLND to standard PLND, there was a significant decrease in the number of dissected lymph nodes (P<0.0001). Thus, as expected, the number of dissected lymph nodes decreased with the reduction in the extent of lymph node dissection. This also suggests that narrowing the extent of lymph node dissection might reduce the likelihood of eliminating microscopic lymph node metastasis by surgery. To confirm the therapeutic effect of PLND, we next examined the correlation between patient characteristics and PSA failure in consecutive RP cases (Table III). With the exception of age, all factors were found to affect PSA recurrence in the univariate analysis. Ultimately, the PSA level, clinical tumor stage, Gleason score of the biopsy specimen, pathological tumor stage, and extent of PLND were found to affect the incidence of PSA recurrence in the multivariate analysis. These results suggest that PLND has a therapeutic effect because microscopic lymph node metastasis can be eliminated by PLND. Some studies have reported a possible effect of lymphadenectomy on the survival of patients with confirmed positive nodes who underwent radical prostatectomy. Bader et al reported a 78% cause-specific survival rate in patients treated with RP and ePLND who did not undergo any adjuvant therapy until progression. Interestingly, among the patients with 1 positive node, 39% remained free of clinical or biochemical progression, in comparison with 12% of patients with 2 or more positive nodes (9).
Seiler et al reported that patients with 1 positive node have a good survival probability and a 20% chance of remaining free of biochemical relapse after a median follow-up period of 15.6 years, even without immediate adjuvant therapy (21). These reports are considered to apply to cases of micrometastasis. Yuen et al reported that sentinel lymph nodes were located in the obturator fossa and the internal and external iliac regions, and rarely in the common iliac and presacral regions (22). There is a possibility that several lymph nodes in the external iliac and common iliac areas determined PSA recurrence. Two prospective studies (NCT01812902 and NCT01555086) are ongoing to determine the therapeutic effectiveness of PLND in terms of oncological outcomes; their results may improve the level of evidence (12). In a previous study, we reviewed all cases, including the low-risk group, and concluded that standard PLND is appropriate at radical prostatectomy (23). In the present study, we reviewed the cases in the intermediate- and high-risk groups and excluded the cases in the low-risk group. Extended PLND is generally recommended for intermediate- and high-risk patients. Thus, while the results of this study differed from those of previous studies, we do not consider this to be a problem. The present study was associated with several limitations, including the small cohort size and the retrospective nature of our database analysis. In conclusion, the extent of PLND in operations performed at the Kyushu Cancer Center has gradually been reduced over time, and standard PLND is now routinely performed. In cases involving high- and intermediate-risk PCa patients without lymph node metastasis, a greater number of lymph nodes can be dissected when the extent of dissection is larger, and the extent of lymph node dissection was found to significantly affect PSA failure. Thus, we demonstrated that PLND exerts a therapeutic effect in intermediate- and high-risk patients by eliminating microscopic pelvic lymph node metastasis that is not detected by routine pathological examinations.
Kidney Procurement System in Colombia: A System Dynamics Approach

Joan Paola Cruz, Escuela Colombiana de Ingeniería Julio Garavito, Colombia. ORCID: 0000-0002-5343-9055
William J. Guerrero, Universidad de la Sabana, Colombia. ORCID: 0000-0002-9807-6593
Edna Rocío Pérez, Escuela Colombiana de Ingeniería Julio Garavito, Colombia. ORCID: 0000-0002-0659-2794
David L. Lizarazo, Escuela Colombiana de Ingeniería Julio Garavito, Colombia. ORCID: 0000-0002-4178-3617
Paula C. Rico, Escuela Colombiana de Ingeniería Julio Garavito, Colombia. ORCID: 0000-0003-1260-1236
Ana María Castillo, Escuela Colombiana de Ingeniería Julio Garavito, Colombia. ORCID: 0000-0002-4166-8459
Laura N. Torres, Escuela Colombiana de Ingeniería Julio Garavito, Colombia. ORCID: 0000-0002-3734-8858

Introduction

Kidneys are vital organs of the human body whose functions include producing hormones, removing waste products from the blood, and regulating water and fluid levels to maintain the body's chemical balance. Kidneys are healthy if they do not present a reduction of their renal function, which is the capacity to filter blood. Factors such as age, genetics, high blood pressure, and diabetes can permanently reduce renal function. This is known as Chronic Kidney Disease (CKD). CKD is defined as the progressive loss of the glomerular filtration rate that often results in uremia and death (1). Although its progression can be delayed, CKD has no known cure. The disease is categorized into five stages. In stages 1, 2 and 3, the objective of medical treatment is to preserve renal function. In stage 4, renal function is heavily reduced, and in stage 5, hemodialysis and kidney transplantation are the only options to keep the patient alive.

During 2015 in Colombia, the number of patients in stages 4 and 5 registered on the kidney transplant waiting list was over 2,000, while the number of transplants performed was 768 (2). About 17.3% of these transplants came from living donors. Donations from patients with asystole are not practiced in Colombia. This shows the imbalance between kidney donors and the waiting list. Meanwhile, CKD deaths rise to 26.9 per 100,000 inhabitants (3). This behavior needs to be understood, considering its complexity. A System Dynamics (SD) model representing this situation is a helpful tool to determine the factors and impacts that changes in the system will cause in the future, and it allows studying the effect of implementing strategies to improve its behavior.

The developed model considers Bill 1805 of 2016, which aims to amend Article 2 of Law 73 of 1988 of the Republic of Colombia by extending the legal presumption of organ donation to people who did not state their will to forbid organ donation in life. Thus, every Colombian citizen who dies will be an organ donor unless they declared against it in life. Examples of SD analyses used to simulate disease trends and projections are presented by Huang et al. (4), who proposed an SD model to study the evolution of Kawasaki disease in Taiwan and the US, and by Shin et al. (5), who studied the Middle East Respiratory Syndrome Coronavirus in the Republic of Korea. In addition, the model considers the implementation of a Kidney Exchange Program (KEP) in Colombia.
This program consists of contacting CKD patients who have a voluntary living kidney donor (a relative, spouse or friend) without histocompatibility and who are willing to find a couple in the same situation (no donor-recipient histocompatibility) and swap donors. The KEP has been implemented in countries such as South Korea (6), Switzerland (7), the United States (8,9,10), Turkey (11), Romania (12), the Netherlands (13,14,15), the United Kingdom (16,17,18), Portugal, Canada, New Zealand, and Australia (19).

To the best of our knowledge, there are no records in Colombia of studies analyzing kidney donation and procurement using SD. Nonetheless, within the studied literature, two approaches use this tool in other countries. The first refers to the kidney transplant system, focused on reducing the waiting list and illegal kidney traffic. Fakkert, Schwarz and Pruyt (20) developed an SD model that simulates the behavior of the waiting list for kidney transplantation in the United States from 2012 to 2030. This model shows that by 2030 the waiting list for kidney transplantation will have doubled compared to 2012, and the main analysis shows how the waiting list changes if different laws or norms are applied. They conclude that the only strategy showing a significant drop in the waiting list was the application of economic compensation in health treatments for living donors. However, this financial compensation is illegal in many countries, and this market is often considered repugnant (21). Paricio and Fidal (22) analyze how the transplant system is impacted by adopting social policies such as allowing an altruistic donor to affiliate three people of their choice who need a transplant into a prioritization system on the waiting list. They conclude that this strategy needs to be complemented by others to balance kidney demand and supply.

Although there are no records in Colombia of studies using SD, the problem of increasing demand for kidney procurement has been studied by many authors from other areas of management science. One of them is a proposal to increase the number of transplants in Colombia from the perspective of operations research. Bruni, Conforti, Sicilia, and Trotta (23) propose an integer programming model to locate and allocate resources for the kidney transplantation system in Italy, aiming to minimize the inequality in waiting times across regions of the country. Fajardo-Vallejo (24) developed a mathematical model using simulated samples of incompatible patient-donor pairs to determine whether an exchange is possible given the pool of pairs; in that case, they find the maximum number of such pairs that could be included in the KEP. They conclude that a KEP can yield a significant increase in the number of organ transplantations in Colombia. Villa and Patrone (25) studied the mechanism design of the KEP from a game-theory point of view, assuming three different information levels based on the observation of the Italian case. They conclude that the players are motivated to manipulate their information to get better kidneys under any information assumption. Also, Ahmadvand and Pishvaee (26) present a study of the kidney allocation system using a Data Envelopment Analysis (DEA) model inspired by the Iranian system. They propose to evaluate the efficiency of possible patient-organ pairs for kidney allocation and perform experiments using a series of data realizations for different credibility levels.
Ünver (27) studied the problem with the objective of minimizing the average waiting cost using a stochastic optimization model. By assuming Poisson arrivals, the paper proves that certain dispatching rules for kidneys constitute an optimal policy. Zenios (28) models the KEP as a birth-and-death process where no patients expire but long waits are penalized by a cost. The objective is to maximize the average quality-adjusted life years, and the optimal policy that limits the number of patients who can take part in pairwise exchange is analytically derived. Thompson et al. (29) proposed a simulation model to evaluate different policies for allocating kidneys and increasing the efficiency of the system, considering only cadaveric donors in the US. From this literature review, it is concluded that, although most of these previous studies were carried out in the context of the United States, they provide a guideline for modeling the donation, procurement and transplantation system in the Colombian context.

The proposed model considers Colombian social dynamics, political constraints, health system capacity, and the biological features of the Colombian population. For this reason, the development of this model is relevant for improving key performance indicators of publicly and privately funded healthcare systems, based on the analysis of the current and future dynamics that affect waiting lists and kidney transplants. Thus, new strategies that generate the maximum benefit for the population may be proposed. For example, our model will allow, in the future, simulating and analyzing the impact of investing in strategies to treat the causes of CKD, such as genetic factors, high blood pressure, and diabetes. The following section explains the methodology together with the established models. Then, the results and findings are discussed in Section 3. Finally, Section 4 presents the discussion of the findings and limitations of the proposed methodology, conclusions and future research.

Materials and Methods

SD is a systemic tool that allows understanding a complex system from a qualitative and quantitative point of view and simulating possible scenarios and intervention strategies to improve its behavior. The identification of the causal relationships between the variables of a system is the first step, followed by the detection of the behaviors that generate balance or reinforcement effects in the system (30). A recent literature review on SD models applied to health systems by Chang et al. (31) concluded that SD can capture the dynamic interactions between different components of a health system, predict the consequences of policy interventions, and provide critical insights into the evolution of the system.

The present study uses SD to conceptualize the kidney donation and procurement system in Colombia, identifying the main actors and their causal relationships. Then, behavioral scenario simulations are performed to explore the impacts of two intervention projects aimed at decreasing the number of patients on the kidney transplantation waiting list. The methodology steps are the following. First, a causal loop model is constructed reflecting the behaviors of the system currently present in Colombia. The proposed models are built based on interviews with experts and operators of the logistic systems in a Colombian private company dedicated to kidney transplantation procedures, with experience of more than 2,000 transplantation procedures performed to date.
We conducted unstructured interviews to allow the experts to expose their points of view and standard practices. Further, the literature presented in this article is a source of information for the model. Two intervention projects are included in the model to determine, in qualitative terms, what the behavior of the system would be and how its loops are affected as a result of the changes in its structure. The two projects are: first, the amendment to Article 2 of Law 73 of 1988 of the Republic of Colombia, stated in Article 3 of Law 1805 of 2016, which extends the legal presumption of organ donation to deceased people who did not state their will to forbid organ donation. This assumption means that every Colombian citizen who dies will be an organ donor if they did not declare against it in life. The second project is the implementation of a KEP in Colombia. It consists of contacting CKD patients who have a voluntary living kidney donor (a relative, spouse, or friend) without histocompatibility and who are willing to voluntarily find a couple in the same situation (no donor-recipient histocompatibility) and swap donors.

In the second step, a stock and flow model corresponding to the previously elaborated causal loop model is constructed; using simulation, it expands the prior analyses and quantitatively evidences the impact over time that the projects can have on the system. Finally, using the developed models, the results are discussed, highlighting the main findings regarding the kidney procurement system in Colombia. The following subsections detail the proposed models.

Causal Loop Diagram

The first approach to modeling the system is to propose a causal loop diagram. Figure 1 presents the model describing the current kidney transplant and donation system in Colombia. It is divided into three subsystems which interact through the variables that connect them. The first subsystem is denoted as CKD Diagnosis. It considers the patients within the healthcare system suffering from CKD or susceptible to becoming ill. The subsystem models how patients are detected and treated for CKD, discriminating patients diagnosed at early stages of the disease (stages 1-3) from those with a late diagnosis (stages 4-5). Also, there is a proportion of patients with an early diagnosis who start treatment to avoid the evolution of the disease, and a proportion of patients who deteriorate rapidly during treatment. Within this subsystem, the reinforcing loop denoted as Early Diagnosis of CKD is identified. The loop begins with the detection of patients in stages 1, 2 or 3. Subsequently, they are treated in order to prevent their progression into stages 4 and 5. Therefore, the number of these patients who reach the waiting list does not increase. Consequently, fewer CKD-associated deaths are expected since, at a global level, the Colombian population increases and therefore the population in the health system increases as well. Thus, when more people are sick, more early detection is expected. This behavior indicates the need to strengthen the methods for early detection of CKD and to invest in strategies to treat the causes of the disease, such as diabetes and high blood pressure. It is concluded that this loop does not present a dominant behavior in the system. However, it is relevant since, if its dominance in the system increased, it would have an effect in favor of reducing the number of people reaching stages 4 and 5, and this would reduce the number of patients on the waiting list.
Once patients are in advanced stages of the disease (4 and 5), they are registered on the waiting list for organ transplantation, if approved by an ethics committee. The dynamics of these patients are modeled in the subsystem denoted as Waiting List, which represents those patients with CKD willing to undergo transplant surgery. In the meantime, these patients are treated with hemodialysis, which affects their quality of life: the procedure is performed about three times a week, has an average duration of 4 hours per session, and must be performed until they receive a transplanted kidney. Some patients on the waiting list will never be transplanted as a result of the donor deficit in the country, as evidenced in the first half of 2015, when 17 patients died waiting for a kidney transplant (2). This subsystem considers the proportion of patients with successful kidney transplantation, which reduces CKD deaths, and those who ultimately remain on the waiting list because they underwent transplantation surgery but rejected the transplant after a negative immunological response.

For this subsystem, three feedback loops were identified, denoted as Transplant, Deaths in the Waiting List, and Transplant Rejection. The first is the most relevant for the system: the more people on the list, the more transplants, and with more transplants, fewer people on the list. It would be expected to have a dominant balancing behavior in the system, since the transplants performed should cover the demand for kidneys and thus the waiting list should decrease. However, the demand for kidneys tends to grow faster than the number of transplanted patients (2). The second loop, denoted as Deaths in the Waiting List, balances the system through the growing number of patients dying while waiting for a kidney transplant. When people are diagnosed in stages 4 and 5, the transplant waiting list increases. It should be clarified that although this is a balancing loop, it is not desired for the system, because the waiting list is expected to decrease thanks to successful kidney transplants and not due to the death of patients while they wait. Similarly, the Transplant Rejection loop reinforces the growth of the waiting list with patients returning to it after being transplanted when their transplantation procedure fails (32).

The third subsystem is denoted as the Kidney Donation subsystem. It contemplates the people who become potential donors of organs and tissues after they die, or before dying in the case of coma or brain death with the consent of their relatives, and the cadaveric donors who stated their will to donate before dying and whose kidneys are viable for donation (33). It does not consider donations from patients with asystole, since this is not practiced in Colombia. Together, all these potential donors increase the number of transplants, decreasing the number of deaths in the system and generating a balancing behavior. When this loop dominates, the system behavior improves in terms of reducing the transplant waiting list. Currently, this loop, although desired, does not dominate the system either. In real life, many other variables affect the number of cadaveric donors. These may include under-detection of donors, failures in the maintenance of hemodynamic stability, issues in diagnosing brain death, or administrative or legal barriers such as the unavailability of resources to retrieve the organs, which often happens in Colombia (about 9.7% of cases) (34).
In our modeling approach, we consider living donors as part of the KEP. Official statistics show an average of 132 transplantations from living donors per year, with a steady behavior (34). Figure 2 represents the model of the system when the first project is implemented. It evaluates the behavior of the system if an amendment is made to Law 73 of 1988 of the Republic of Colombia eliminating any possibility for a patient's relatives to deny organ donation. The model is modified so that the acceptance percentage of the family has no incidence on the number of cadaveric donors. The total number of potential donors increases in this scenario, leading to a higher number of transplants performed and resulting in a reduction of the waiting list. (Figure 2. Source: own work.)

The loop denoted as Cadaveric Donors (Campaigns) is a balancing loop intensified by awareness of the disease among the population. Through effective governmental awareness campaigns, a higher number of donors can be found. These campaigns are motivated when the number of CKD deaths is perceived to be significant. Furthermore, cadaveric donors represent the major source of kidneys for the system: it is estimated that the number of transplants from cadaveric donors is 10.4 per million inhabitants, while the number of transplants from living donors is 1.9 per million inhabitants (28). However, this balancing loop is not currently dominant in the Colombian system because about 37% of the relatives of potential cadaveric donors decline donation, an increase of 2.2 percentage points compared to 2014. Increasing donation is ideal for the system because it balances the supply of kidneys with the transplants required by the waiting list. Nevertheless, it is not growing at the same pace as the waiting list. This demonstrates the need to implement strategies such as the first project, which seeks to amend Law 73 of 1988 to achieve an increase in the number of cadaveric donors.

Figure 3 shows the model of the system when the second project is implemented. It evaluates the behavior of the system if a KEP is implemented in Colombia as it has been used in other countries. In the model, it is possible to demonstrate qualitatively that the number of donors increases in the proposed scenario, since each patient wanting to enter the program is required to present a relative or friend willing to donate one of their kidneys. That way, every patient might get a kidney while helping to increase the pool of donors simultaneously. This program is useful for patients who have a relative or friend willing to donate but who is not compatible with their immune system, so that they can find other couples in the same situation and swap donors. This reinforcing loop improves the behavior of the kidney donation subsystem by increasing the number of transplanted patients, resulting in a decreasing number of deaths of CKD patients, which neutralizes the number of CKD deaths on the waiting list. This encourages more couples (patient-donor) to participate in the program, increasing the pool of donors again. In sum, as the KEP becomes successful, the loop will dominate the system, improving its behavior by reducing the waiting list. In this instance, it is possible to consider the hypothesis that the government decides to increase investment in disease awareness campaigns. As a result, early detection of the disease (stages 1, 2 or 3), where treatment is possible, is expected to increase.
In other words, investing in awareness campaigns will result in early detection of the disease and, in turn, provide adequate prevention, reducing the likelihood of developing CKD and further delaying patients' evolution to stage 4 or 5, as shown in the loop diagram in Figure 4. These campaigns must include not only the means for detecting patients in early stages but also the mechanisms to guarantee that every detected patient will receive, and commit to, a medical treatment program. (Figure 4. Causal Loop of Investment in Awareness Campaigns. Source: own work.)

Stock and Flow Diagram

The first, qualitative approach to modeling the kidney transplantation system in Colombia is complemented by a quantitative approach proposing a stock and flow model, also known as a Forrester diagram. The current situation modeled using this second approach is presented in Figure 5. We made the following assumptions to formulate this model. First, the time it takes for patients in stage 1, 2 or 3 to reach more advanced stages of the disease (stages 4 or 5) is calculated as a weighted average represented as a delay. Second, factors such as birth and death rates, the proportion of patients admitted to the waiting list, and the number of patients who suffer transplant rejection are considered to remain constant throughout the simulation. Official data sources such as the National Ministry of Health were consulted to set the values of the variables. Appendix A presents the list of the fixed variables and their respective data sources. The associated parameters are estimated using the historical ratios of incidence and prevalence data. The incidence ratio is defined as the frequency of appearance of new cases of a disorder in a period, while the prevalence ratio is the proportion of individuals in a population who have the disease at any given time (35).

To implement the first project, the model is modified to simulate the effect of applying the amendment to Law 73 of 1988 of the Republic of Colombia, which eliminates the possibility for the relatives of a potential cadaveric donor to decline organ donation (see Figure 6). The modified model also incorporates the fact that there is a clinical criterion for selecting donors: not all people are eligible as candidate kidney donors because of medical factors, even if there is the will to donate. The number of donors in the system will therefore increase, but not in the same proportion as deaths.

Given that the impact of implementing the KEP in Colombia is unknown, three different scenarios are built based on the same assumption: the fewer people who die on the waiting list, the more reliable the KEP program will be perceived to be. Therefore, the dynamics of the KEP donor rate are assumed to be inverse to the number of deaths on the waiting list. These scenarios have the following variables: the number of deaths of patients registered on the waiting list is the independent variable, for which the Colombian Health Ministry keeps historical data on the mortality of people requesting a kidney transplant; the donor rate, on the other hand, is a dependent variable, since a KEP has never been implemented in Colombia and the impact of its implementation is unknown. The three scenarios are evaluated as follows (see Appendix B): the first scenario simulates a behavior with a favorable and constant KEP donor acceptance rate, but when the number of deaths on the waiting list increases, the rate of KEP donors gradually decreases.
The second scenario assumes that the acceptance rate is associated with an interval of the number of deaths of patients registered on the waiting list. The third scenario shows an accelerated decline, assuming there is less confidence in the program because of the number of deaths of patients registered on the waiting list. The behavior of the system does not vary significantly across the different scenarios. For the sake of brevity, only the first one is presented in Figure 8; it is selected because it shows the most favorable results for the waiting list dynamics. (Figure 8. First scenario: impact of KEP implementation on the number of CKD deaths on the waiting list. Source: own work.)

In Figure 8, the acceptance rate decreases from a value of 0.9 to 0.3 as the number of CKD deaths increases from 0 to 20, based on the historical data on the number of deaths of patients registered on waiting lists in recent years. It is important to clarify that this relationship is an assumption, and therefore the behavior of the KEP donor rate may vary if the program is implemented in Colombia. The next section discusses the results obtained after simulating the stock and flow models developed.

Results

The presented analysis is based on a simulation performed over a time horizon of 29 years (2016-2045). The results in this section analyze the behavior of the system through a stock and flow model (Forrester diagram) from a quantitative point of view and complement those found using the qualitative approach exposed in Section 2 with the causal loop diagrams. Four scenarios are discussed: the first one is the current situation in Colombia, where neither the legal reform nor the KEP has been implemented. The second one involves the amendment to Law 73 of 1988 of the Republic of Colombia (3). The third one includes the implementation of the KEP in the system. Finally, the fourth one is the combination of the second and third scenarios. The last three scenarios are compared with the first one using two performance indicators. These help clarify the panorama of kidney demand found in each of the scenarios. Besides, they allow analyzing the behavior of the proposed projects in terms of their effectiveness in reducing the waiting list. Indicator 1 measures the proportion to which potential donors cover the need for transplantation on the waiting list, which reveals how well the system is balanced against the required transplant demand. Indicator 2 shows the proportion of the Colombian population that enters the waiting list and its evolution over the years. Finally, the proportion of potential donors belonging to the KEP is useful for monitoring the implementation of this program and its acceptance among the Colombian population.

First Scenario: Current Situation

Figures 9 and 10 show the results of the simulation under the current system situation. It is noted that during the first 19 years of the simulation of the kidney donation and transplantation system in Colombia, the proportion of Colombians entering the waiting list (Indicator 2) and of transplanted patients has a stable, increasing behavior (see Figure 9), while the proportion of waiting list patients who are transplanted (Indicator 1) decreases at an accelerated pace (see Figure 10). This can be explained because the waiting list grows in proportion with the Colombian population, while the number of potential donors, in this case mostly composed of cadaveric donors, follows the pattern of the mortality rate and fails to grow at the same rate.
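To make the stock-and-flow logic concrete, the following is a minimal, self-contained Python sketch of the first KEP scenario: a single waiting-list stock integrated with Euler steps, where the KEP donor acceptance rate falls linearly from 0.9 to 0.3 as annual waiting-list deaths rise from 0 to 20 (the relationship stated above). All other rates (arrivals, baseline transplants, mortality, KEP pool size) are hypothetical placeholders, not the paper's calibrated parameters.

```python
# Minimal stock-and-flow sketch of the first KEP scenario.
# The acceptance function (0.9 -> 0.3 over 0..20 deaths/year) follows the
# paper; every other parameter below is an illustrative placeholder.

def kep_acceptance(deaths_per_year: float) -> float:
    """Linear decline from 0.9 at 0 deaths to 0.3 at >= 20 deaths per year."""
    frac = min(max(deaths_per_year / 20.0, 0.0), 1.0)
    return 0.9 - 0.6 * frac

def simulate(years: int = 29, dt: float = 1.0):
    waiting_list = 2000.0     # initial stock (patients), per the 2015 figure
    arrivals = 900.0          # hypothetical: new stage-4/5 patients per year
    base_transplants = 768.0  # transplants/year without KEP (2015 figure)
    death_rate = 0.01         # hypothetical: fraction of the list dying per year
    kep_pool = 500.0          # hypothetical: candidate patient-donor pairs/year

    history, t = [], 0.0
    while t < years:
        deaths = death_rate * waiting_list
        kep_transplants = kep_acceptance(deaths) * kep_pool
        outflow = base_transplants + kep_transplants + deaths
        # Euler integration of the stock; a stock cannot go negative.
        waiting_list = max(waiting_list + dt * (arrivals - outflow), 0.0)
        history.append((2016 + t, waiting_list, deaths))
        t += dt
    return history

for year, wl, deaths in simulate():
    print(f"{year:.0f}: waiting list ~ {wl:7.1f}, deaths ~ {deaths:5.1f}")
```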
This is true under the assumption of maintaining the mortality rate at 5.870 deaths per thousand population and the percentage of family acceptance of donation at 63% (3). (Simulation of performance indicators under current system conditions. Source: own work.)

A sensitivity analysis was performed for this situation regarding the acceptance rate versus the death rate. In a first instance, we proposed the hypothesis that if the acceptance rate remained at 63% but the death rate per thousand inhabitants grew by 340%, the list would shrink and stabilize quickly because enough donors would be available to meet the demand. In another instance, where the death rate is more likely to remain constant and acceptance increases to 80%, it was found that although the waiting list could be reduced, this would not be enough to stabilize the kidney donation and transplantation system in Colombia. Moreover, after the 20th year, the proportion of citizens entering the waiting list grows exponentially due to an increasing number of people who become ill with CKD at stages 4 or 5. On the other hand, the number of patients at stages 1, 2 or 3 and at stages 4 or 5 of the disease presents an oscillatory, growing behavior. Figure 11 shows that although patients are detected in early stages, eventually the number of patients in advanced stages predominates, because not all receive medical treatment to slow the disease evolution to the point where kidney transplantation is required. Thus, under the current conditions, the system will continue to present a deficit in kidney donation, and therefore the waiting list will continue to grow over time, reinforced by the fact that few patients are treated in the early stages of the disease (stages 1-3).

Second Scenario: Amendment to Law 73 of 1988

Under this scenario, Indicator 1 increases dramatically, up to about 98%, in the period from 2016 to 2022. This is due to the decrease in the waiting list size obtained for that time interval. This decrease also impacts Indicator 2, which is reduced over the same period, indicating that the reform has a positive impact on the waiting list and, furthermore, achieves stability in the following years of this interval. In addition, when comparing the current number of cadaveric donors against the results obtained with the approval of the reform for the last year of the simulation, a 60% improvement is noticed. Hence, this is a favorable scenario for covering the demand for kidneys in the country, and therefore it is desirable to approve the amendment to Law 73 of 1988, which enforces about 100% acceptance of organ donation for cadaveric donors.

Third Scenario: KEP

If Colombia decides to adopt a KEP, the system would achieve a favorable behavior through time, because it reduces the waiting list until reaching a point of balance. This represents a significant improvement over the current situation, since the program allows the number of transplanted patients to exceed the annual entry of people onto the waiting list. Regarding the KEP donor rate, there is evidence of an increase during the first years of the simulation because there is an intrinsic relationship between deaths on the waiting list and KEP donors: as the number of deaths decreases, more patients and relatives will be willing to cooperate, and therefore there is a higher chance of finding a KEP donor. It is important to emphasize that the simulation was carried out under the assumption that, once the KEP is implemented, the only condition to achieve a transplant is that the patient waiting for an organ presents a potential donor.
This means that logistic aspects are not considered, such as the probability of or time for performing the pairing, scenarios in which a partner decides to leave the program, or clinical characteristics such as compatibility, age, organ status, etc.; these aspects could influence the effectiveness of the program and remain an open field for future research.

Fourth Scenario: Both Projects

By applying the two projects simultaneously, an improvement in the trend of the waiting list size is perceived, similar to the results obtained by implementing the KEP alone. The difference is that the reduction in the waiting list size is achieved faster. Under this scenario, during the first years of the simulation, the number of transplanted patients will increase significantly due to the increase in the number of potential donors in the system. The number of transplanted patients per year is expected to reach a maximum value by 2017. From this point on, the number of people on the waiting list will gradually decrease, and therefore the transplants carried out in the following years will be much fewer than in 2017. By 2018, the donation and transplant system will stabilize. However, this does not imply that new patients diagnosed with CKD are avoided; on the contrary, this indicator will gradually increase, since without effective treatment they will eventually enter the waiting list (see Figure 12). (Figure 12. Simulation of patients admitted to the waiting list, patients in treatment, and patients progressing to stage 4 or 5, under the fourth scenario. Source: own work.)

Finally, there are findings regarding the overall systemic analysis performed. The waiting list increases because patients are often detected in stages 4 and 5, or because patients suffer transplant rejection on their first attempt. The first cause can be counteracted with a strategy focused on early detection of CKD, which mainly helps to prevent the list from continuing to grow. One of the leading causes of the uncontrolled increase in the waiting list is the number of patients who reach stage 4 or 5 of CKD. At first, this was attributed to the lack of early detection of the disease. However, further analysis based on the historical records shows that even when CKD is diagnosed in early stages (1, 2 or 3), currently about 50% of the diagnosed population does not commit to a renal protection program (3), resulting in an increase of the population susceptible to joining the waiting list. Therefore, a path is open for research to establish effective measures and mechanisms to integrate most of the diagnosed population into preventive medical treatments to avoid disease progression. One alternative to motivate an increase in the number of transplants is to develop awareness campaigns including, for example, the adoption of the amendment to Law 73 of 1988. The second is the implementation of a KEP, which requires a broader study covering the legal, logistical, and social aspects of the program's implementation.

Discussion and Conclusions

The study of the system behavior concerning kidney procurement and donation in Colombia, particularly the problematic situation of the waiting list, provides a global view of the entire system. The organ procurement system in Colombia is a complex social system, as it involves a high volume of variables that require studying the interactions between them and the environment in which they are immersed.
Therefore, the use of a tool such as SD is essential in this context, and this study contributes to its fields of application. The results of this study using SD represent a contribution to the healthcare system, in particular to the development of intervention strategies and the design of the kidney donation and procurement system. The proposed model is based on experts' interviews and a literature review considering scientific articles and current laws in Colombia. Although it is not SD's intention to predict the future but to understand the complex dynamics within the system, according to this study, if Colombia decides to adopt the KEP in the near future, the system would in time achieve a favorable behavior by reducing the waiting list to reach a point of equilibrium between supply and demand. This represents a significant improvement over the current situation, as the program allows the number of transplanted patients to increase in a higher proportion than the annual entry of people onto the waiting list.

This study explores the modeling of the kidney procurement system in the Colombian context using an SD approach. The current kidney procurement system is described and compared against three other scenarios. These are: the implementation of an amendment to Law 73 of 1988, which allows the system to assume the organ donation will of every citizen unless an a priori written consent forbidding it is made (special focus is given to this case); the implementation of a voluntary exchange program, analyzed with the aim of developing a national pooling system of CKD patients, each of them having a relative willing to donate a kidney, to make voluntary kidney exchanges; and, finally, the implementation of the amendment to Law 73 together with the KEP.

The results allow us to conclude the following findings. With the implementation of the amendment to Law 73 of 1988, the cadaveric donor campaigns loop becomes dominant in the system, so that by year 2023 of the simulation, balance in the waiting list size is achieved. Furthermore, on average, the waiting list for a kidney transplant is decreased to about 35 patients, a steady reduction of 98% by the seventh year of the simulation, under stable endogenous factors. On the other hand, with the implementation of the KEP as performed in other countries, it is found that the associated reinforcement loop dominates within the system, increasing the number of kidney transplants so that by year 2019 the average waiting list size is about 23 patients; a steady reduction of 98% in the waiting list is achieved by the third year of the simulation. Finally, if the two projects are implemented together, the system could find a balance by the third year of the simulation, with a sustained reduction in the waiting list of about 99%. If only a single strategy can be adopted, the implementation of a KEP is recommended, since it achieves stability of the system three years earlier than the amendment to Law 73 of 1988, and the reduction in the waiting list size is higher by 37.14%.

Although the most significant variables have been considered, research is still required on this system to examine the impact of other variables, such as strategies to reduce and treat high blood pressure and diabetes, which affect CKD incidence and prevalence. For future research, it is also possible to include economic factors in the analysis, such as the cost of treatments, to gain a more extensive view of the impacts of the strategies (17).
Also, future work includes exploring the design of a model that optimizes the KEP, such that medical resources are used efficiently to perform surgeries, combined with the use of technological tools for gathering information and the development of new software to support the model and other findings. Further, a more in-depth analysis of the reasons why Colombian families refuse to let their deceased relatives participate in kidney donation programs is yet another interesting topic for future research.

Appendix B. Scenarios of the impact of KEP implementation on the number of CKD deaths on waiting lists.
Prediction Breast Molecular Typing of Invasive Ductal Carcinoma Based on Dynamic Contrast Enhancement Magnetic Resonance Imaging Radiomics Characteristics: A Feasibility Study

Objective: To investigate the feasibility of radiomics in predicting the molecular subtype of breast invasive ductal carcinoma (IDC) based on dynamic contrast enhancement magnetic resonance imaging (DCE-MRI).

Methods: A total of 303 cases with pathologically confirmed IDC from January 2018 to March 2021 were enrolled in this study, including 223 cases from Fudan University Shanghai Cancer Center (training/test set) and 80 cases from Shaoxing Central Hospital (validation set). All cases were classified as HR+/Luminal, HER2-enriched, or TNBC according to immunohistochemistry. DCE-MRI original images underwent semi-automated segmentation to extract original and wavelet-transformed radiomic features. Extended logistic regression with the least absolute shrinkage and selection operator (LASSO) penalty was applied to identify the optimal radiomic features, which were then used to establish predictive models combined with significant clinical risk factors. Receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis were adopted to evaluate the effectiveness and clinical benefit of the established models.

Results: Of the 223 cases from Fudan University Shanghai Cancer Center, HR+/Luminal cancers were diagnosed in 116 cases (52.02%), HER2-enriched in 71 cases (31.84%), and TNBC in 36 cases (16.14%). Based on the training set, 788 radiomic features were extracted in total and 8 optimal features were further identified, including 2 first-order features, 1 gray-level run length matrix (GLRLM) feature, 4 gray-level co-occurrence matrix (GLCM) features, and 1 3D shape feature. Three multi-class classification models were constructed by extended logistic regression: a clinical model (age, menopause, tumor location, Ki-67, histological grade, and lymph node metastasis), a radiomic model, and a combined model. The macro-average areas under the ROC curve (macro-AUC) for the three models were 0.71, 0.81, and 0.84 in the training set, 0.73, 0.81, and 0.84 in the test set, and 0.76, 0.82, and 0.83 in the validation set, respectively.

Conclusion: DCE-MRI-based radiomic features are significant biomarkers for noninvasively distinguishing the molecular subtypes of breast cancer. Notably, the classification performance could be improved with the fusion analysis of multi-modal features.

INTRODUCTION

According to data released in 2020, breast cancer was the most common malignancy occurring in women worldwide and the main cause of cancer death among them (1). As one of the most common histological types of breast cancer, IDC accounts for approximately 80% of cases. Patients diagnosed with the same pathological type and clinical stage of disease may have distinct therapeutic outcomes due to tumor heterogeneity at the molecular level (2). Based on the expression of several specific molecular receptors, breast cancers are classified into three distinct molecular subtypes: hormone receptor (HR)+/Luminal, HER2-enriched, and triple-negative breast cancer (TNBC). Given the varied biological characteristics of these molecular subtypes, individuals generally respond differently to the same therapy (3). For example, patients with the HR+/Luminal breast cancer subtype have the highest five-year survival rate and a low recurrence risk, so surgery and endocrine therapy are preferably suggested.
For patients of the HER2-enriched subtype, with human epidermal growth factor receptor 2 (HER2) gene amplification, targeted treatment is strongly recommended to reduce the risk of recurrence. Given the strong invasiveness and the poorest survival of the TNBC subtype, neoadjuvant chemotherapy is recommended owing to its relatively high sensitivity (4). In this context, early identification of the molecular subtype could actively guide targeted personalized therapy and prognostic prediction. Clinically, immunohistochemistry is commonly used to determine the molecular type of breast cancer. However, it is invasive, the molecular characteristics of the obtained tissue samples may fail to represent the overall tumor, and the molecular types of biopsy and postoperative specimens are sometimes inconsistent. Radiomics has proven to be an efficient noninvasive approach for correctly identifying the breast cancer molecular type. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) has been established as an imaging technique presenting the morphologic and hemodynamic characteristics of tumors, and it effectively distinguishes the tumor from the background parenchyma owing to its high soft-tissue resolution (5,6); it is therefore commonly used for feature extraction in radiomics (7).

Prior studies have investigated radiomic signatures in the breast. Fusco et al. (8,9) demonstrated that quantitative analysis of the morphology and texture features of breast lesions is feasible and that a multiple-classifier system can optimize the accuracy of breast lesion classification. Agner et al. (10) showed that good performance could be achieved using a probabilistic boosting tree classifier in conjunction with textural kinetic features for the differential diagnosis between breast cancer and benign breast lesions. Some previous radiomics studies based on DCE-MRI (11-18) have already investigated radiomic features of the breast, but the stability and reliability of the models were affected by differences in imaging schemes and devices. Furthermore, the limited radiomic features or incomplete subgroups of breast cancer in some previous studies meant that the predictive performance of the models provided thus far is not optimal. Therefore, there is still a lack of a comprehensive evaluation of MRI radiomic features for differentiating molecular types in patients with breast cancer.

This study aims to investigate the value of MRI radiomic features in distinguishing the molecular types of breast cancer. To our knowledge, our study is the first attempt to extract radiomic features from original and wavelet-transformed DCE-MRI images through a 3D volumetric imaging technique, developing a nomogram that combines radiomic characteristics and clinicopathological risk factors. Moreover, an external independent validation set was included to evaluate the stability of our models. We believe that our findings could provide valuable discriminative information on breast cancer molecular typing.

MATERIALS AND METHODS

Patient Data

From January 2018 to March 2021, 382 patients diagnosed by clinical examination and confirmed by ultrasound in two hospitals were retrospectively included in this study. Patients were enrolled according to the following inclusion criteria: common breast invasive ductal carcinoma in pathology; complete breast MRI, pathological, and immunohistochemical data; and a long-term follow-up period.
Exclusion criteria were: pregnant or lactating females, or a plan to get pregnant within 6 months; prosthesis implantation; and a history of breast surgery that might affect imaging diagnosis. Of the finally included 303 patients, 223 cases (from Fudan University Shanghai Cancer Center) were randomly split into a training and an internal test set with a ratio of 7:3, and 80 cases (from Shaoxing Central Hospital) were treated as an independent validation cohort. Demographic data from the Electronic Medical Record Systems of both hospitals included age, menopause status, and tumor location. Pathological data included tumor pathological type and histological grade, the status of estrogen receptor (ER) and progesterone receptor (PR), HER2, Ki-67, and lymph node metastasis. The study protocol was approved by the ethics committees of Fudan University Shanghai Cancer Center and Shaoxing Central Hospital. The workflow of the patient selection process is given in Figure 1.

Imaging Examination

Fudan University Shanghai Cancer Center: the Aurora Dedicated Breast MRI System and a dedicated phased-array coil were used. The patients were asked to stay in the prone position so that both mammary glands hung naturally in the concave hole of the phased-array coil. In the plain scan, cross-sectional T1-weighted images (T1WI) (TR 5 ms, TE 13 ms) and T2-weighted images (T2WI) with fat suppression (TR 6680 ms, TE 68 ms) were selected, with a layer thickness of 3 mm and a layer spacing of 1 mm. A phase I mask scan was performed before the contrast-enhanced scan. Gd-DTPA was used as the contrast agent at a dose of 0.2 mmol/kg and a flow rate of 2.0 mL/s. Contrast images at 5 phases were consecutively collected, with a scan time per phase of 120 s. In the contrast scan, cross-sectional T1WI with fat and water suppression (TR 5 ms, TE 29 ms) was selected, with a layer thickness of 1.1 mm and a layer spacing of 0, FOV 360 mm×360 mm, matrix 360×360×128. The number of scanned layers in a single phase was 160.

Shaoxing Central Hospital: a Philips Achieva 1.5T MR scanner (the Netherlands) and a dedicated breast coil were applied. In the plain scan, cross-sectional T1WI (TR 4.8 ms, TE 2.1 ms) and T2WI with fat suppression (TR 3400 ms, TE 90 ms) were selected, with a layer thickness of 3 mm and a layer spacing of 0.5 mm, matrix 512×512. A phase I mask scan was performed before the contrast-enhanced scan. Gd-DTPA was also used as the contrast agent at a dose of 0.2 mmol/kg and a flow rate of 2.0 mL/s. Contrast images at 6 phases were consecutively collected, with a scan time per phase of 90 s. In the contrast scan, cross-sectional T1WI with fat and water suppression (TR 5.0 ms, TE 2.2 ms) was selected, with a layer thickness of 1.0 mm and a layer spacing of 0.5 mm, FOV 320 mm×320 mm, matrix 336×336×128. The number of scanned layers in a single phase was 150.

Image Segmentation and Transfer

The breast DCE-MRI data were imported as DICOM files into the DeepWise scientific research platform v1.6 (http://keyan.deepwise.com/) to semi-automatically outline three-dimensional regions of interest (3D ROIs) at the individual level; these were then revised manually by two radiologists with more than 10 years' experience in breast imaging diagnosis. Disagreements were resolved by consensus-based discussion.
The third sequence of the dynamic enhancement course was selected; the first series was acquired before intravenous injection, and the third corresponds to about 240 s after injection of contrast medium for the Aurora Dedicated Breast MRI System and about 180 s for the Philips Achieva 1.5T MR scanner. At this time point, malignant lesions generally show peak enhancement, presenting clear contrast with the surrounding normal breast parenchyma, which is conducive to more accurate ROI delineation and feature extraction. The chosen ROIs had to conform to the following criteria: (1) inclusion of cystic lesions, necrosis, and the halo sign; (2) invasion of surrounding structures: areas connected to the focus and showing the same enhancement pattern as the focus were included; (3) reduction of volume effects at the upper and lower ends of the focus: ROIs <5 mm² were waived. In cases of uncertainty, the coronal and sagittal planes could be further referenced, or calibration advice could be sought from senior physicians for decision making. Finally, B-spline interpolation was carried out to standardize the image resolution (1 mm × 1 mm × 1 mm), followed by gray-level discretization with a fixed bin width (25 HU), as suggested in previous studies.

Radiomic Feature Extraction and Screening

To emphasize the imaging characteristics, three-dimensional wavelet decomposition was further applied at each level to obtain all possible combinations of high-pass and low-pass filters (LLH, LHL, LHH, HLL, HLH, HHL, HHH, LLL). For the original and wavelet-transformed images, first-order, shape, and texture features were extracted, implemented with the open-source PyRadiomics library (https://github.com/Radiomics/pyradiomics). Subsequently, Z-score transformation was used to normalize the feature distribution in the training set, and the data in the other sets were then standardized with the same calculated parameters to avoid data leakage. The implementation of feature extraction and standardization complied with the Image Biomarker Standardization Initiative (IBSI) (19). Given the extracted high-throughput radiomic features, we initially applied feature selection in the training set to minimize the potential collinearity of variables and obtain a sparse feature matrix for modelling; this included Spearman's rank correlation with a threshold of 0.9 and least absolute shrinkage and selection operator (LASSO) regression analyses, resulting in the most predictive covariates with non-zero coefficients.

Model Establishment and Evaluation

Multi-class classification models were constructed using a transformed logistic regression. We transformed the multi-class problem into binary classification problems; hence there were three models: an HR+/Luminal model (HR+/Luminal vs. rest), an HER2-enriched model (HER2-enriched vs. rest), and a TNBC model (TNBC vs. rest). We used the extended logistic regression method penalized by LASSO with 10-fold cross-validation to train the best-performing classification models on the training set prior to external validation. To investigate the classification power of the finally retained clinical and radiomic features, three multi-class models were built for classifying the three primary molecular subtypes: a clinical model, a radiomic model, and a combined model. Receiver operating characteristic (ROC) curves were used to evaluate the predictive discrimination for the three molecular types with a one-vs-rest (OvR) averaging strategy, which computes the average of the area under the curve (AUC) scores for each class against all other classes.
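As a concrete illustration of the extraction settings described above (B-spline resampling to 1 mm isotropic voxels, a fixed bin width of 25, and original plus wavelet-filtered images), the following is a minimal PyRadiomics sketch. The file names are placeholders, and the authors' exact parameter file is not available, so this should be read as an approximation of the described pipeline rather than the study's actual configuration.

```python
# Approximate sketch of the described extraction pipeline with PyRadiomics.
# File paths are placeholders; the authors' exact configuration is unknown.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor(
    binWidth=25,                      # fixed bin width for gray-level discretization
    resampledPixelSpacing=[1, 1, 1],  # resample to 1 x 1 x 1 mm voxels
    interpolator="sitkBSpline",       # B-spline interpolation, as in the paper
)

# Original images plus all 3D wavelet decompositions (LLH ... LLL).
extractor.enableImageTypeByName("Original")
extractor.enableImageTypeByName("Wavelet")

# Restrict to the first-order, shape and texture classes named in the paper.
extractor.disableAllFeatures()
for cls in ("firstorder", "shape", "glcm", "glrlm", "glszm", "gldm"):
    extractor.enableFeatureClassByName(cls)

# One DCE-MRI volume and its 3D ROI mask (e.g., NRRD or NIfTI files).
features = extractor.execute("case001_dce_phase3.nrrd", "case001_roi.nrrd")
radiomic = {k: v for k, v in features.items() if not k.startswith("diagnostics")}
print(len(radiomic), "radiomic features extracted")
```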
Pathological Analysis

Surgical specimens were obtained for pathological classification, histological grading, and immunohistochemical analysis. Molecular typing of breast cancer was performed according to the standard criteria proposed at the St. Gallen Conference (2,20,21): HR+/Luminal includes Luminal A and Luminal B, with Luminal A defined as ER and/or PR+ (>1% staining) and HER2-, and Luminal B as ER and/or PR+ (>1% staining) and HER2+; HER2-enriched is ER-, PR-, and HER2+, where fluorescence in situ hybridization (FISH) was performed to assess gene amplification and HER2 was considered positive if the ratio was ≥2.0; and TNBC is ER-, PR-, and HER2-.

Statistical Methods

Statistical analysis was conducted with R statistical software v3.6.1 (http://www.R-project.org). Student's t-test and the Chi-square test were used for continuous and categorical data with normal distribution, respectively, and the Mann-Whitney U test was applied for data with non-normal distribution. All tests were two-tailed, and a p-value threshold of 0.05 was considered statistically significant. The R package "glmnet" (R Foundation) was used to perform the modelling process of the multi-class classification models, and the "pROC" R package was mainly used for the ROC curve analysis. After the completion of feature selection for the multi-class classification models, stepwise regression analysis based on the Akaike Information Criterion (AIC) was devised to establish a nomogram for predicting the molecular subtypes (HR+/Luminal and HER2-enriched) of breast cancer in the training set. The performance of the nomogram was evaluated by the concordance index (C-index). Calibration curves of this nomogram were used to validate the agreement between prediction and observation in all data sets. Furthermore, we performed decision curve analysis (DCA) to visualize the net benefit for clinical decisions.

RESULTS

Enhanced Imaging Data, Clinical Data, and Pathological Diagnosis Results

The detailed characteristics of the patients are summarized in Table 1. In our study, all cases were malignant breast tumors; 191 cases (85.7%) showed early enhancement with a wash-in and rapid washout curve or a plateau curve, and only 32 cases (14.3%) showed a slow increase followed by a persistent enhancement curve. The mean age was 50.07 ± 10.48 years, ranging from 16 to 86 years. Of the 223 cases from Fudan University Shanghai Cancer Center, HR+/Luminal (Luminal A, n=45; Luminal B, n=71) was found in 116 cases (52.02%), HER2-enriched in 71 cases (31.84%), and TNBC in 36 cases (16.14%). HR+/Luminal breast cancer was the most prevalent subtype among them. In the histological grade assessment, 17 cases were Stage I (7.62%), 114 cases were Stage II (51.12%), and 92 cases were Stage III (41.26%). There were no significant differences among the three subtypes in age (P=0.06) or histological grade (P=0.14). In contrast, the value of Ki-67 (P=0.01) and the status of lymph node metastasis (P=0.03) differed significantly among the molecular subtypes of breast cancer.

Feature Selection and Optimal Omics Features

Radiomic phenotyping of the ROIs on the enhanced MRI images produced a total of 788 radiomic features from the original and wavelet-transformed images, including first-order features (n=162), shape features (n=14), and texture features from the gray-level co-occurrence matrix (GLCM, n=198), gray-level run length matrix (GLRLM, n=144), gray-level size zone matrix (GLSZM, n=144), and gray-level dependence matrix (GLDM, n=126). Before feature selection, 48 (6%) radiomic features were excluded through stability analysis (ICC ≤ 0.85).
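The screening-and-modelling steps described above (a Spearman correlation filter at threshold 0.9, LASSO-penalized logistic regression with 10-fold cross-validation, and macro-averaged OvR AUC) can be sketched as follows. The authors worked in R with "glmnet" and "pROC"; this Python/scikit-learn translation is only an approximation of that pipeline, and the synthetic arrays below are placeholders standing in for the z-scored feature matrices.

```python
# Approximate Python translation of the R glmnet/pROC pipeline described above;
# the random arrays are placeholders for the z-scored radiomic features.
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Placeholder data: ~7:3 split of the 223 cases, 148 filtered features, 3 classes.
X_train, X_test = rng.normal(size=(156, 148)), rng.normal(size=(67, 148))
y_train, y_test = rng.integers(0, 3, size=156), rng.integers(0, 3, size=67)

def correlation_filter(X, threshold=0.9):
    """Keep one feature from every pair with |Spearman rho| > threshold."""
    rho = np.abs(spearmanr(X).correlation)
    keep = np.ones(X.shape[1], dtype=bool)
    for i in range(X.shape[1]):
        if keep[i]:
            keep[(np.arange(X.shape[1]) > i) & (rho[i] > threshold)] = False
    return keep

keep = correlation_filter(X_train)

# LASSO-penalized multinomial logistic regression with 10-fold CV over the
# penalty strength, mirroring the role of cv.glmnet in the paper's pipeline.
model = LogisticRegressionCV(Cs=20, cv=10, penalty="l1", solver="saga",
                             max_iter=5000).fit(X_train[:, keep], y_train)

# Macro-averaged one-vs-rest AUC, as reported for the three models.
proba = model.predict_proba(X_test[:, keep])
print("macro-AUC:", roc_auc_score(y_test, proba, multi_class="ovr", average="macro"))
```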
There were 148 radiomic features and 6 clinical features selected with |correlation coefficient| ≤ 0.9. Figure 2 shows the selection process, in which the subset size of non-zero features, tuned by the parameter λ, was chosen according to the minimum criteria. The optimal λ (log(λ) = −3.331) resulted in 8 radiomic features with non-zero coefficients (Figure 2C). We further verified that there was no statistically significant difference in these features between the training set and the test set (Table 2).

Model Construction and Validation
The extended logistic regression algorithm penalized by LASSO finally determined 8 optimal radiomic features (Table 2) and 4 clinical features (age, tumor location, histological grade, Ki-67, and lymph node metastasis). Three multi-class classification models (clinical model, radiomic model, and combined model) were constructed, considering not only single-modal features but also the fusion of multimodal features. The confusion matrix of the combined model shown in Figure 3 demonstrates that the proposed multi-class model performs well on most one-vs.-rest (OvR) results. For predicting molecular subtype, the model performance for …

Nomogram Establishment
The nomogram for the classification model of HR+/Luminal and HER2-enriched is shown in Figure 5, in which original_shape_Maximum2DDiameterRow has the most discriminative power; the C-index was 0.84 in the independent external validation set. The calibration curves of the combined nomogram showed good calibration in the training set, test set, and external validation set, with high agreement observed between the ideal and calibration curves. The DCA curve revealed a wide range of threshold probabilities over which the combined nomogram had excellent net benefit and enhanced performance for classifying the two molecular subtypes.

DISCUSSION
Radiomics is a rapidly developing field of medical research that quantitates the microstructure and biological information of tumor tissue, exploring intra-tumoral heterogeneity and tumor characterization in a convenient and non-invasive way (22). To date, studies have investigated the discrimination between benign and malignant breast tumors (23,24), lymph node metastasis (25)(26)(27), prediction of tumor response to neoadjuvant chemotherapy (28,29), and survival analysis (30,31). Our study found that radiomics showed favorable predictive performance for molecular subtype based on DCE-MRI images. In the present study, we identified 8 radiomic features as significant in the radiomic model and 4 clinical features in the clinical model. The combined model fusing clinical and radiomic features proved to have the optimal performance in distinguishing the molecular subtype of breast cancer, with sensitivity, specificity, and macro-AUC of 0.832, 0.781, and 0.830, respectively. Furthermore, based on the optimal radiomic features and clinical risk factors (patient age, pathological grade, Ki-67, and lymph node metastasis), a clinical predictive nomogram for the HR+/Luminal and HER2-enriched molecular subtypes was constructed. DCA, a method for estimating net benefit over a range of threshold probabilities, revealed the superiority of the nomogram in the classification of the molecular subtype of breast cancer.
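For concreteness, the OvR training-and-scoring scheme summarised above can be sketched as follows. The authors worked in R with glmnet and pROC; this Python/scikit-learn version on synthetic data is an illustrative stand-in, not their code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier

# Synthetic stand-in for the standardised feature matrix and subtype labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(223, 8))
y = rng.choice(["HR+/Luminal", "HER2-enriched", "TNBC"], size=223, p=[0.52, 0.32, 0.16])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# One binary L1-penalised (LASSO) logistic regression per subtype vs. rest,
# with the penalty strength chosen by 10-fold cross-validation.
base = LogisticRegressionCV(Cs=10, cv=10, penalty="l1", solver="saga", max_iter=5000)
ovr = OneVsRestClassifier(base).fit(X_tr, y_tr)

# Macro-AUC: the OvR AUCs averaged over the three classes.
proba = ovr.predict_proba(X_te)
macro_auc = roc_auc_score(y_te, proba, multi_class="ovr", average="macro", labels=ovr.classes_)
print(f"macro-AUC = {macro_auc:.3f}")  # ~0.5 here, since the toy data carry no signal
```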
To validate the stability and reliability of all models, further testing was performed in the internal test set and an independent external validation set; the similar macro-AUC values indicate excellent robustness and generalization, and hence good practical value for molecular subtype classification in future breast cancer cases. Previous studies (32)(33)(34) have suggested that MRI-based radiomic features are clearly correlated with the molecular subtypes of breast cancer. One study (35) achieved good results in distinguishing ER+ from ER− tumors based on a radiomic signature (AUC=0.89), although only leave-one-out cross-validation (LOOCV) was used because of the limited number of cases. Rossana et al. (36) investigated three advanced machine learning algorithms, including a support vector machine, random forest, and Naive Bayes classifier, and successfully identified molecular prognostic markers (AUC: 86-93%). The results of these previous studies are not completely consistent, probably owing to differences in the phase/level selected in the contrast scan or the method used for molecular typing. In contrast, a few studies offer different views. Grimm et al. (11) assessed the value of imaging features in predicting the molecular subtypes of breast cancer from three aspects, namely morphology, radiomics, and dynamic enhancement, using a semi-automated segmentation approach (i.e., fuzzy C-means clustering). They found that radiomic features were inferior to the other two types of features. The discrepancy between these views might be caused by differences in the scanner and the pulse sequence applied in the studies. Notably, matrix size has been proven to be crucial in feature calculation because of its relationship with spatial resolution. The model in our study provided high accuracy, consistent with the study of Leithner et al. (33), which could be interpreted as follows. First, in the phase of most obvious enhancement, the heterogeneity and invasiveness of the tumor are clearly reflected (32), and the much clearer boundary of the focus minimizes errors in focal delineation. Second, the semi-automated segmentation approach for the extraction of breast DBT and the 3D ROI on original MRI images is more reliable than the 2D or maximum-level analyses used elsewhere in the same research field. Third, the additional exploration of wavelet-based features revealed more specific image characteristics of the overall lesions. In this study, extended logistic regression with a LASSO penalty was applied to obtain 8 optimal radiomic features from the 788 candidate radiomic features. The 8 features include morphological, first-order, GLCM, and GLRLM characteristics, which are predominantly related to tumor heterogeneity. Shape_Maximum2DDiameterRow depicts tumor size and morphology, which proved significantly correlated with the molecular type of breast cancer, indicating that the molecular subtype may be associated with tumor size. Consistent with a previous report (37), the morphology and size of lesions varied with the expression of different hormone receptors, and hormone receptor-negative plus HER2-positive or TNBC breast cancers tend to have larger lesions than HR+/Luminal cancers. We also found that low kurtosis and skewness appeared in HR+/Luminal cases and are highly important in the radiomic model. Compelling evidence was provided by Fan et al.
(14), who constructed a predictive model for four molecular subtypes of breast cancer based on DCE-MRI radiomic, dynamic, and 2 clinical features, revealed heterogeneity-related low kurtosis and skewness in Luminal A cases, and highlighted the potential of skewness as a predictor for molecular subtype classification of breast cancer. It has been reported that higher kurtosis and skewness values are associated with treatment failure (38), whereas lower values indicate good responses to treatment. This is supported by the fact that HR+/Luminal breast cancers have favorable clinical outcomes. Correlation, Autocorrelation, wavelet-based DifferenceVariance, and glrlm_RunEntropy are second-order or higher-order features based on the original and wavelet-transformed images. They reflect the roughness of the texture and the consistency between tumor texture images, and are conducive to better prediction of intra-tumor heterogeneity and subtle differences in gray-level texture (13). Moreover, they are regarded as being of vital significance for texture analysis in the field of medical imaging.

There are still some limitations to this study. For instance, only the phase with the most obvious dynamic enhancement was selected for analysis. The further inclusion of T1WI, T2WI, and DWI images, which are essential in breast cancer analysis, may provide more comprehensive information about the lesions. In future research, the complete sequence set will be included to further investigate the value of multi-parameter radiomic features in predicting the molecular subtype of breast cancer. Another limitation is that TNBC showed an unbalanced distribution among all breast cancers, although this reflects the general distribution of breast cancer molecular subtypes in the patient population. Hence, we adopted cross-validation to ensure the stability of the results across different training-cohort splits. To sum up, the radiomics signature based on DCE-MRI has good clinical application value in predicting the molecular subtype of breast cancer, and it may help clinicians make beneficial treatment decisions before surgery.

DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT
The study protocol was approved by the ethics committees of the Fudan University Shanghai Cancer Center and Shaoxing Central Hospital. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.

AUTHOR CONTRIBUTIONS
AX and XW conceived and designed this study. XC collected the clinical data. JZ collected the pathological data. AX and XC drafted the manuscript. FL performed the statistical analysis. DS performed image processing. SZ and SL provided many comments on the manuscript. All authors contributed to the article and approved the submitted version.
2022-05-19T13:30:56.846Z
2022-05-19T00:00:00.000
{ "year": 2022, "sha1": "651354a9725c2d1f580937ce243abad2d6c12c92", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "651354a9725c2d1f580937ce243abad2d6c12c92", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
261015977
pes2o/s2orc
v3-fos-license
The Use of 3D Printing Technology in Gynaecological Brachytherapy—A Narrative Review

Simple Summary
Cervical and endometrial cancers are the fourth and sixth most common cancers in women, respectively. Radiation therapy, including brachytherapy, is an important component of their treatment. Commercially available brachytherapy applicators only come in limited sizes and designs and either do not fit in some patients or do not allow adequate dose delivery to the target volume. In recent years, customised 3D-printed applicators have been increasingly used in such cases. This review summarises the role of 3D printing in brachytherapy of gynaecological tumours.

Abstract
Radiation therapy, including image-guided adaptive brachytherapy based on magnetic resonance imaging, is the standard of care in locally advanced cervical and vaginal cancer and part of the treatment in other primary and recurrent gynaecological tumours. Tumour control probability increases with dose, and brachytherapy is the optimal technique to increase the dose to the target volume while maintaining dose constraints to organs at risk. The use of interstitial needles is now one of the quality indicators for cervical cancer brachytherapy, and needles should optimally be used in ≥60% of patients. Commercially available applicators sometimes cannot be used because of anatomical barriers or do not allow adequate target volume coverage due to tumour size or topography. Over the last five to ten years, 3D printing has been increasingly used for the manufacturing of customised applicators in brachytherapy, with gynaecological tumours being the most common indication. We present the rationale, techniques and current clinical evidence for the use of 3D-printed applicators in gynaecological brachytherapy.

Introduction
Gynaecological cancers represent an important health care burden, with cervical and endometrial cancer being the fourth and sixth most common cancers in women worldwide, respectively, and together accounting for more than 10% of all newly diagnosed cancers in women in 2020 [1]. Radiotherapy, including image-guided adaptive brachytherapy (IGABT) based on magnetic resonance (MR) imaging, is the standard of care in locally advanced cervical and vaginal cancer and also an integral part of the curative treatment of (medically) inoperable endometrial cancer and of locally recurrent cervical and endometrial cancer [2][3][4].

Tumour control probability (TCP) is influenced by tumour volume and overall treatment time [5][6][7]. TCP increases with dose and, at the same time, is higher for smaller tumours than for large tumours at the same dose level [8][9][10]. Brachytherapy is the optimal technique to increase the dose to the target volume while maintaining the dose to organs at risk (OARs) within set constraints [11]. In recent years, based on prospective and retrospective data collection in large groups of cervical cancer patients, new dose planning aims for MR-based IGABT with the combined intracavitary (IC) and interstitial (IS) technique have been proposed [9]. Adhering to the proposed dose planning aims depends on several factors, including tumour size and topography, proximity of the OARs, choice of imaging modality, choice of applicator, application technique (IC vs. IC/IS) and quality of the implant [9,[12][13][14][15][16]. Ten years ago, Fokdal et al.
found that 41% of patients with locally advanced cervical cancer needed interstitial needles to ensure adequate target volume coverage. Sixteen percent of all inserted needles were freehand and inserted at an oblique angle [17]. Since then, the use of interstitial needles has increased, and the proportion of interstitial component use now represents one of the quality indicators for the management of cervical cancer, with at least 40% of patients treated with the combined IC/IS approach being the minimum requirement and ≥60% representing the optimal target [18].

Several applicators for combined intracavitary/interstitial brachytherapy have been developed in recent decades by different companies to ensure better coverage of the target volume with the prescribed dose. However, commercial applicators that fit all anatomical variations and tumour topographies are not available. Some types of individual applicators, for example, vaginal moulds, have been used in gynaecological cancer brachytherapy for decades [19]. However, in recent years, 3D-printed applicators have been used in an increasing number of clinical situations, alone or combined with commercially available applicators, to improve target volume coverage and/or overcome anatomical barriers such as a narrow vagina [20][21][22][23][24].

Three-dimensional printing, also named additive manufacturing, has revolutionised different fields of medicine in the last decade. First used in dentistry, it later spread into other medical fields, such as maxillofacial surgery, neurosurgery, urology, orthopaedic surgery, cardiology and also radiotherapy. Three-dimensional printing applications are now being used in medical training, preoperative planning and treatment [25][26][27][28][29][30][31][32].

In this paper, we summarise the rationale, techniques and current clinical evidence for the use of 3D-printed applicators in gynaecological brachytherapy.

The Rationale for the Development of 3D-Printed Applicators
In a retrospective study, Petric et al. created a target density map (TDM) by merging the target volume contours of different cervical cancer patients after aligning the applicators via the centre of the ring and the ring-tandem axis, thus preserving the spatial relation of the tumour to the applicator. Using the TDM, they estimated that the planning aims for the target can be achieved with insertion of the tandem and ring applicator in 60% of patients, while the addition of parallel needles achieves the planning aims in 95% of tumours. For the remaining 5% of tumours, novel applicator prototypes would need to be developed [49]. Additionally, insertion of commercially available applicators can be difficult in patients with a narrow vagina, so new solutions are also needed for some patients with smaller tumours.
The use of interstitial needles permits asymmetric modelling of the isodose according to the topography of the tumour and OARs. In the past, oblique needles were inserted freehand, under transrectal ultrasound (TRUS) guidance, or via the transperineal approach. The point and angle of insertion, as well as the needle depth, are determined in the preplanning process, based on MR images with the intracavitary applicator in place. Reproducing the preplanned needle position in a freehand insertion requires ample expertise in both needle insertion and TRUS guidance, and needle repositioning is often required. With the transperineal approach, the needle path is very long, making it difficult to keep the desired angle and direction. While this approach may be feasible for treatment of tumour extension to the lower third of the vagina, it is unsuitable for treatment of tumour spread to the pelvic side wall or to the sacrouterine ligament.

Commercially available and modified commercial applicators such as the Vienna II, Geneva and Venezia applicators now allow the insertion of parallel and oblique needles at fixed positions and angles [50,51]. Compared to parallel needles alone, insertion of oblique needles offers a better dose distribution with fewer cold and hot spots, a lower dose to the vagina and a higher minimum dose covering 90% of the high-risk clinical target volume (D90 to CTVHR), even for tumours extending to the distal parametria and pelvic sidewall [13,51]. However, as needle placement options with the commercially available applicators are still limited in terms of both point and angle of insertion, this can substantially impact the dose volume histogram (DVH) parameters for both the target volume and the OARs, especially in large tumours, significant parametrial and/or vaginal involvement and unfavourable pelvic topography [52]. Because these cases represent a minority among all gynaecological patients treated with IGABT, and at the same time the tumour topography in these individual patients varies from case to case, it is neither realistic nor feasible for commercial applicators to be made available for these scenarios.

The dose to the target volume is one of the most important parameters for increasing TCP, with higher doses required to achieve local control in non-squamous histological types, larger tumours and certain molecular subtypes [9,10,53]. It is therefore important to achieve the planning target aims also in patients with unfavourable topography, especially in patients with large tumours and a poor response to EBRT and chemotherapy. In a large, prospective, multicentre cohort study of patients with locally advanced cervical cancer treated with curative radiotherapy including IGABT, D90 to CTVHR and a CTVHR size > 45 cm³ were among the risk factors that had an impact on local control in multivariable analysis [10]. In a large retrospective cohort of patients with cervical cancer, the use of IC/IS applicators increased D90 to CTVHR from 83 to 92 Gy, and local control in patients with a CTVHR larger than 30 cm³ was 10% higher at 3 years, with no increase in treatment-related morbidity, compared with tumours of the same size treated with IC applicators alone [54].
Achieving good implant geometry is crucial in all BT applications, as no optimisation process can correct for a poor implant. Poor implant geometry or an inadequate applicator choice can compromise the dose to the target volume and OARs and negatively impact local control and acute and late toxicity [2,12]. In two prospective trials, patients whose implant was classified as inadequate had a higher risk of local failure compared with those treated with an adequate implant (HR = 2.5, p = 0.04). Disease-free survival (DFS) was also better in patients with an adequate implant (HR = 1.88, p = 0.055) [55]. The same was reported by Corn et al., who found better local control at five years in patients treated with an adequately placed applicator compared with an inadequately placed one (68% vs. 35%, p = 0.02) [56].

The use of a customised applicator for insertion of oblique needles allows better positioning of the needles, and better compliance with the preplan is usually achieved compared with freehand needle insertion [57,58]. The application is generally shorter, there is less need for needle repositioning and, if general anaesthesia is used, the time under anaesthesia is shorter.

Central recurrences of gynaecological tumours after surgery present another challenge, for which there are no commercially available applicators. With no uterus, the tandem cannot be used to better fix the geometry of the implant. If only the ring is used for an IC/IS application, there are too many degrees of freedom in the position of the ring within the vaginal vault, making reproducible interstitial implants very hard to achieve. At the same time, recurrences in the vagina or primary vaginal cancers that extend into the middle and/or lower third of the vagina cannot be adequately covered by such an implant. In such cases, a customised 3D-printed applicator allows a more fixed geometry, better reproducibility and better DVH parameters for the target volume and OARs [14,23,59].

In postoperative radiotherapy for endometrial cancer, vaginal cylinders are used for vaginal cuff radiotherapy. The commercially available cylinders have different diameters but a uniform shape, and the dose is typically prescribed to a certain distance from the applicator surface. However, the postoperative size and shape of the vagina are far from uniform, and there are clinical situations where the commercially available cylinders do not fit because of a narrow vagina or introitus, and air pockets form when the vaginal stump is asymmetrical, conically shaped or has a shape described as "dog ears" [24,60,61]. A 3D-printed customised mould applicator can extend the walls of the vagina, minimising the possibility of air pocket formation and ensuring a better dose distribution [60].

Three-Dimensional Printing Technology
The benefit of 3D printing is the fast and relatively inexpensive production of various prototypes, which is suitable for small series and individual production. Compared with traditional manufacturing, 3D printing enables the production of more customised and complex forms. The main advantages of 3D printing technology in various areas of medicine are the reduction in production cost and time, the reduction in manual work, the ability to make complex geometric forms, on-demand manufacturing, personalisation and improved medical outcomes [62,63]. Manufacturing could be an integral part of a BT unit or any other clinical department using 3D printing.
Medical applications produced by 3D printing include tools and medical devices, implants, medical aids and prostheses, medical models used for educational purposes or preoperative planning, and biomanufacturing, which is a merger of 3D printing and tissue engineering [28,40,[64][65][66][67][68]. In radiotherapy, 3D printing is used for the production of individual boluses in EBRT, the manufacturing of (anthropomorphic) phantoms used in medical dosimetry, equipment for the quality assurance process, training devices and the production of individual applicators and templates for IC, IS and contact BT [20,23,33,36,[69][70][71][72].

There are six main methods of additive manufacturing used in medicine [62,63,73]:
• Stereolithography (SLA): the material used is a liquid resin with photoactive mono- and polymers, which gains its final form through photopolymerisation under UV light and high temperature. Its resolution is high, in the range of 10 µm, and the surface is smooth; however, the printing is slow and expensive, and the final product is fragile.
• Selective laser sintering (SLS) or powder bed fusion (PBF): the materials used are powders, which can be plastic, ceramic, metal or glass and are fused into solid form using a laser beam. Similar to SLA, its resolution is high, in the range of 80-250 µm, but the process is slow and costly.
• Fused deposition modelling (FDM): the materials used are continuous fibre-reinforced polymers and filaments of thermoplastic polymers, which are heated to a semi-liquid form and ejected through the nozzle layer by layer. The method is simple, fast and cheap, with a resolution of 50-200 µm; its major limitations are the limited range of thermoplastic materials to choose from, the rough surface and the mechanical fragility of the final product.
• Laminated object manufacturing (LOM): it is used with different materials including metal, paper and polymer composites. Its advantages are low cost and a variety of materials to choose from, while its major drawbacks are poor surface quality and unsuitability for finely detailed shapes due to low dimensional accuracy.
• Inkjet printing (IP): the material mostly used is ceramic in the form of a particle dispersion, which is ejected from the printer nozzle and deposited on the surface. This method is fast, but the resolution is coarse and adhesion between layers is poor.
• Direct energy deposition (DED): mostly metal materials in the form of a powder or a wire are fused together using focused thermal energy. DED produces devices of excellent mechanical properties, and the time and costs are low; however, surface quality is poor and the resolution is low, at 250 µm, which makes printing of fine details hard.

The choice of method depends on different factors such as the choice of material, the complexity of the product, the desired resolution and cost considerations. SLS, SLA and FDM technologies are most commonly used for applicator printing in BT. The materials have to be biocompatible and certified for medical use, they have to allow some form of recurring sterilisation, and they must have dose attenuation properties close to those of water [74]. The density of most resins used for 3D printing is 1.0-1.3 g/cm³, so they should cause no or only minor dose changes in both pulse dose rate (PDR) and high dose rate (HDR) BT [75]. Materials with a high density, such as WPLA (wood polylactic acid), can be used as parts of 3D-printed individual shielded applicators [76].
Biocompatibility is a greater concern for 3D-printed applicators used in gynaecological BT than for applicators used for superficial BT of the skin or for 3D-printed templates for seed insertion guidance in LDR brachytherapy, as they come into contact not only with the skin but also with the mucosa and blood vessels. Materials of Class VI of the U.S. Pharmacopeial Convention, Class III of the European Medicines Agency Council Directive or ISO standard 10993-certified materials should be used [50,74].

After completion, the applicator must undergo a quality control (QC) procedure, which should include mechanical QC, consisting of assessing the firmness of the applicator, testing the patency of all active channels and the adequacy of fixation of the different parts of the applicator, and dosimetric QC. The sterilisation method should be chosen according to the type of material used for 3D printing. After sterilisation, an additional check of fixation and exclusion of possible obstruction should be performed under sterile conditions just before the insertion. QC is recommended after each sterilisation procedure. Commercially available needles and tubes should be passed through the channels and later connected to the afterloader so that the source capsule never comes into direct contact with the 3D-printed applicator.

The typical workflow for the construction and use of a 3D-printed applicator is presented in Figure 1.
Various types of applicators for gynaecological BT have been manufactured with 3D printing, including custom-made vaginal cylinders that better fit the anatomy, multichannel vaginal cylinders with parallel and oblique needle channels and different add-ons for needle insertion for available commercial applicators [14,20,61,77]. Some examples of 3D-printed applicators are depicted in Figure 2.

Clinical Evidence
Most of the evidence supporting the use of 3D-printed applicators is of a low level, in the form of individual case reports or retrospective series. The first report of the use of a 3D-printed applicator for cervical cancer IGABT is from Lindegaard et al., who presented the clinical workflow for the design and use of a 3D-printed vaginal template in their department. An in-house 3D printer was used and there was no delay in the treatment [21]. Wiebe et al. reported a single case of a patient with endometrial cancer, treated with BT after surgery. Due to the characteristic "dog ears" shape of the vaginal stump and a narrow introitus, a 3D-printed multichannel vaginal cylinder (MVC) in two parts was used. The two parts were assembled after insertion into the vagina. Compared with the standard single-channel cylinder, a higher target volume covered by the 100% isodose (V100), a higher D90 and a higher minimum dose covering 98% of the target volume (D98) for the CTVHR were achieved, resulting in 13.2% better target volume coverage and a reduction in the target volume covered by the 200% isodose (V200) from 10.5 to 3.7% [24]. Sekii et al.
reported two cases of patients with vaginal tumours treated with 3D-printed templates, based on CT and MR images with a vaginal cylinder in place, presenting the workflow and reporting DVH parameters. The 3D printing was outsourced, and STL technology was used [23]. Another report on two patients with recurrent gynaecological cancer is by Laan et al., who also reported on the workflow and modelling of the applicator but did not provide dosimetric data. Sethi et al. reported on three patients with different gynaecological tumours treated with 3D-printed vaginal cylinders because of unfavourable anatomy of the vagina, with the applicator design based on the gynaecological examination alone. They reported favourable DVH parameters for the target volume and OARs [61].

Kang et al. published a retrospective analysis of 28 patients with gynaecological tumours treated with low dose rate (LDR) BT. They compared 12 patients treated with 3D-printed templates for seed implantation with a group of 16 patients treated with their traditional freehand technique under CT guidance. They showed that the reproducibility of the preplanned seed geometry and the DVH parameters achieved with 3D-printed template guidance were better than with freehand seed insertion [78].

Marar et al. reported on two retrospective cohorts of patients with cervical cancer treated with 3D-printed add-ons for parallel and oblique needle guidance (TARGIT and TARGIT-FX) compared with a commercial applicator [52,79]. In the first cohort, they compared 302 applications in 70 patients, of which 23% were performed with the TARGIT and 77% without it, using no needles or freehand needles. V100, D90 and D98 for the high-risk CTV (CTVHR) were higher in the TARGIT group, with V100 being higher regardless of tumour size. There was no significant increase in doses to the OARs. The application time in the TARGIT group was longer, which could mean that the assembly of the add-on and the applicator was complicated [79]. In the second cohort, they compared the next-generation add-on TARGIT-FX with the original TARGIT in 148 applications performed in 41 patients. With the TARGIT-FX, higher mean V100, D90 and D98 for the CTVHR were achieved compared with the TARGIT. The insertion time was 30% shorter in the TARGIT-FX group. It is noteworthy that these add-ons were not individually designed for a single patient; instead, three sets of add-ons with different channel positions were designed to allow precise needle insertion across a wide range of tumour topographies [52].

Kudla et al. compared the treatment plans of ten patients with primary or recurrent tumours of the vagina, treated with a vaginal cylinder and interstitial needles inserted via a perineal template, with theoretical treatment plans for the same patients using a 3D-printed custom-made vaginal cylinder template. The planned needle path in the tissue was shorter with the vaginal cylinder template, while the DVH parameters for the target volume and OARs were comparable or better. An interesting point is the design of the needle fixation, which allows each needle to be locked individually, providing more possibilities for the needle entry point into the cylinder compared with a mechanism that locks all the needles simultaneously [80].
In a small prospective series of nine patients with gynaecological cancer by Logar et al., all DVH parameters for both the GTV and the CTVHR (V100, D98, D90 and D100) were significantly increased with the use of 3D-printed applicators, while the dose constraints for the OARs were not exceeded. Different applicators were used depending on the location of the tumour and the patient's anatomy: a 3D-printed add-on for the ring for parallel and oblique needle insertion, a multichannel vaginal cylinder with channels for parallel and oblique needles, an intrauterine tandem with channels for oblique needle insertion through the stopper of the tandem, and a 3D-printed tandem and ring with channels for parallel and oblique needles for a patient with a narrow vagina. SLS technology was used. The advantage of using a 3D-printed applicator was shown to be independent of the size of the tumour, with CTVHR sizes ranging from 5.2 cm³ in a patient with a local recurrence of cervical cancer after hysterectomy to 96.7 cm³ in a patient with primary cervical cancer [20].

Serban et al. published a prospective series of 20 patients with cervical cancer treated with MR-based IGABT, including oblique needles inserted through an in-house 3D-printed vaginal template used as an add-on for the standard tandem applicator. Additional freehand needles were inserted as needed. With a mean of 11 oblique needles per patient, excellent target volume coverage was possible, with a median D90 for the CTVHR of 93 Gy even in large tumours and unfavourable topography. They also analysed the loading patterns in different parts of the applicator and reported that almost half (44%) of the dwell time was shifted to the interstitial needles, with the tandem and ring dwell times accounting for 31% and 25% of the total dwell time, respectively. In this way, the dose was moved into the tumour while the dose to the unaffected vagina was reduced, and the total TRAK (total reference air kerma) remained roughly unchanged [81].

In a larger prospective study by Jiang et al., 32 patients with central recurrences of gynaecological tumours were treated with HDR BT using 3D-printed individual templates for needle insertion. Two types of applicators were printed: a transvaginal applicator with oblique channels for needle insertion for patients with vaginal stump recurrences, and a combined transvaginal/transperineal applicator for patients with more extensive recurrences. There was good reproducibility of the preplanned needle positions and depths, and the technique was found to be reliable and feasible [82].

Yan et al. reported an analysis of 48 patients with endometrial cancer treated with postoperative BT. They compared the dosimetry of a commercial multichannel cylinder (MCC) application with that of a 3D-printed individual MCC, modelled on CT images with contrast-soaked vaginal packing in place. Five typical shapes of the post-hysterectomy vaginal stump were identified, and there were fewer air gaps with the 3D-printed MCC insertion. In addition, the 3D-printed MCC enabled coverage of larger CTVs with a more homogeneous dose distribution and a higher D98 for the CTV [60].
In the only randomised trial, by Yuan et al., 21 patients with recurrent cervical cancer after surgery were randomised at the time of BT to a freehand implantation group (10 patients) or a 3D-printed guidance template group (11 patients). The D90 in the template group was significantly higher than in the freehand group (6.3 vs. 6.07, p < 0.05), while the dose to the maximally exposed 2 cm³ (D2cc) of the bladder, rectum, sigmoid and bowel was significantly lower. With the freehand implant, more needles were used (5.71 vs. 7.78, p < 0.05) and the procedure time was longer [14].

Discussion
Three-dimensional printing has gained importance over the past decade, with gynaecological tumours being the most common indication for its use in BT. The use of 3D-printed applicators represents a significant improvement in gynaecological BT, allowing implantation in patients in whom commercially available applicators either could not be inserted because of anatomical barriers or were not adequate for target coverage.

In cervical cancer, several studies have shown the impact of dose on TCP. A 12% increase in dose (from 75 to 85 Gy) improves local control by 3% for tumours of 20-30 cm³ and by up to 7% for larger tumours of 70 cm³. An additional dose escalation from 85 Gy to 90-95 Gy can further improve local control by 1-4%, depending on tumour size [8]. In the study by Logar et al. [20], all reported dose parameters for the CTVHR and GTV were 30-40% better when using a 3D-printed applicator compared with the standard applicator, which could lead to 15% better local control for stages II-III/IV, taking into account the TCP curves for cervical cancer [8,20].

However, nearly 70% of studies on the use of 3D printing in radiation oncology report at least one impediment or concern regarding the wider use of 3D printing, the most common concerns being time and workflow, 3D printer accuracy, biocompatibility and sterilisation of the applicator [71]. There are also some specific limitations to the use of customised 3D-printed applicators in gynaecological BT.

Due to the anatomy of the vagina, insertion of needles at a large angle through a ring-like applicator may prove difficult or even impossible because of a lack of space. Insertion into tumour infiltrates far from the ring surface (vaginal template), e.g., infiltration of the distal part of the sacrouterine ligament or the fallopian tubes, is also demanding, as the trajectory of the needle can change far away from the channel exit point. The trajectories of the needle channels in an MVC are also not without limitations: sharply angled trajectories can cause obstruction of the source wire. The attachment of 3D-printed add-ons to parts of commercial applicators can also be a challenge.
The materials used for 3D printing have mostly not been tested for repeated sterilisation, so there is limited knowledge about possible changes in the structure of the material, its firmness and the potential impact on its dosimetric properties. The possibilities for additional QA after sterilisation are limited, because it has to be conducted under sterile conditions immediately before implantation. For FDM, for example, several biocompatible materials are available, but only a few are reported as being able to endure sterilisation in an autoclave, with ethylene oxide or with gas plasma, as recommended by the Centers for Disease Control [88]. In most reports, the 3D-printed applicators were only used for a single patient [14,20,21,52,82], reducing the number of sterilisation procedures but increasing the cost of the application itself. There are no guidelines for the commissioning of 3D-printed applicators or recommendations for the QA/QC process, and widely varying levels of QC have been described in the literature [20,38,74,[89][90][91].

Additionally, the modelling of most 3D-printed applicators is based on a preplanning procedure, with MR or CT performed with a standard applicator in place. Based on the contours, virtual needles are placed and the applicator is modelled to accommodate the required needle trajectories. For the patient, this means an additional procedure under local, regional or general anaesthesia and an additional day of hospital stay. The cost of the additional imaging must also be taken into account.

Insertion of a large number of needles and deep insertion into the distal parametria increase the possibility of bleeding on removal of the applicator. Mahantshetty et al. reported a 27.5% incidence of bleeding at Vienna II applicator removal, with almost one-third of the bleeds being arterial [51]. This could be partly avoided by using blunt needles, provided by the vendors, and TRUS guidance for needle insertion with colour Doppler for better visualisation of blood vessels. The proximity of blood vessels to the planned needle path should also be assessed on the preplanning MR.

There is concern that the additional steps required to use a 3D-printed applicator could prolong the overall treatment time (OTT), which could compromise local control [92]. For cervical cancer, the optimal treatment time is ≤50 days, and a dose escalation of 5 Gy is necessary to compensate for a treatment extension of 1 week [8]. Maintaining the OTT at or below 50 days could be compromised by the additional application and imaging required prior to 3D printing. With a 3D printer available in the department, the printing of an applicator can be completed in several hours or overnight [21,52]. Even in departments where printing was outsourced, the treatment time did not increase, but good logistics are required [20,23].

The only alternative to 3D-printed applicators, when commercially available applicators do not allow an adequate dose to the target volume, is freehand needle insertion. Studies in BT of gynaecological, head and neck and skin cancers have shown that a template-guided implant achieves better DVH parameters, better reproducibility of the preplanned needle positions and better adaptation to the patient than freehand insertion [14,33,35,36,57,93]. Huang et al.
reported on the accuracy of template-guided needle insertion in 25 patients with head and neck cancer [36]. All 619 interstitial needles were inserted to the planned position on the first try, the mean deviation from the preplanned entry point was 1.18 mm and the mean angular displacement was 2.08°. The only randomised trial comparing 3D-printed template-guided with freehand insertion of oblique needles in gynaecological tumours also confirmed better needle positioning, with significantly better DVH parameters for both the target and the OARs [14]. In addition, with template-guided insertion, there is less need for potential needle repositioning during the procedure and after the post-implant MR [78]. If the MR shows the need for additional needles, needle repositioning or depth correction, additional operating room time and post-correction imaging are required, all of which increase the costs and the use of departmental resources. The use of TRUS guidance can reduce the need for needle repositioning after the post-implant MR; however, it is sometimes difficult even for a skilled radiation oncologist to assess the adequacy of the needle position within the target with TRUS or abdominal US.

The number of oblique and parallel needles that can be used in the manufacturing of individual applicators is limited and geometrically determined by the trajectories of the needle channels, which must not intersect or merge. To our knowledge, there is currently no software on the market that can be used as a tool in the preplanning process to determine the optimal positions of the needles within the target volume, achieving the best coverage of the target volume with the minimum number of needles. Such a tool would provide a virtual optimal distribution of needles within the individual applicator, which would speed up not only the preplanning process but also the application procedure itself. An experienced multidisciplinary team should be involved in the process. If modelling and 3D printing are not outsourced, additional staff training might be needed. The strengths and limitations of 3D-printed applicator use are summarised in Table 2.

Conclusions and Future Directions
Three-dimensional printing is a promising and still developing technique in gynaecological BT. The use of customised applicators is necessary in a minority of patients with gynaecological cancers and should be performed in large-volume BT centres with experienced radiation oncologists and physicists. While customised applicators are economically unattractive for large manufacturers of radiotherapy equipment because of the small number of cases, smaller companies specialising in the 3D printing of various medical equipment could emerge. As 3D printer prices decrease and 3D printing materials become cheaper, wider use of this technology in clinical departments can be expected. The radiation oncology community should form focus groups to develop guidelines for the manufacture and commissioning of 3D-printed applicators, with emphasis on the QA/QC process, which currently varies widely among centres already using 3D printers.
In the future, using both low and high Z materials for 3D printing, shielded applicators enabling intensity-modulated BT and protection of OARs could be produced. Some dosimetry reports and phantom studies have already been published, but clinical data are lacking [39,91,94,95]. Larger prospective studies on the efficacy and safety of 3D-printed applicators are also needed before 3D printers become part of daily clinical practice in a BT department.

Another recent revolution in medicine is the introduction of the Internet of Things (IoT) concept in various healthcare settings [96][97][98][99]. It is most widely used in neurology and cardiology [98], but there are some reports of its applicability in oncology and radiation oncology [100][101][102][103][104]. Virtual reality, artificial intelligence (AI) and robotics are components of the IoT whose use in BT has already been described [103,104]. For the emerging field of 3D printing in brachytherapy, the IoT presents an interesting opportunity to remotely connect machines and experts in the field to enable access to cutting-edge treatment even in BT centres where the technology or knowledge is not available. In surgery, the possibility of tele-surgery and tele-mentoring is already being explored [99]. Similarly, an experienced radiation oncologist and physicist could remotely perform or assist with preplanning, 3D modelling and applicator insertion. AI could help with the preplanning process, suggesting the optimal needle trajectories to improve coverage of the target volume, and could perform 3D modelling of the applicator based on the large data sets available in the IoT, which would speed up the process. However, there are still many challenges to overcome, especially in the areas of data monitoring, governance and ownership, but also in reimbursement, and studies are needed to test the clinical relevance of the IoT in gynaecological BT.

Figure 1. Typical workflow for the design and use of a 3D-printed applicator when outsourcing the printing. Workflow can be shortened when an in-house printer is used.

Figure 2. Examples of 3D-printed applicators used in our department. (A) The 3D-printed intrauterine tandem (orange arrow) and add-on for parallel and oblique needle insertion for the ring (yellow arrow). (B) The 3D-printed tandem and ring with channels for parallel and oblique needles. (C) The 3D-printed intrauterine tandem with oblique needle channels in the stopper (white arrow). (D) The 3D-printed vaginal cylinder with parallel and oblique needles. The needle fixation screw is marked with a black arrow. Needles provided by the vendor were used in all cases.
Table 1. Summary of the studies on the use of 3D-printed applicators in gynaecological brachytherapy. Only the studies with clinical cases are included.

Table 2. The strengths and limitations of 3D-printed applicators in gynaecological brachytherapy.
2023-08-20T15:06:14.822Z
2023-08-01T00:00:00.000
{ "year": 2023, "sha1": "f58e600687983bb80dd8a7187694b3e490941644", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6694/15/16/4165/pdf?version=1692347099", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1753de9ba387ff78d98d2ce5c797ec0e35a9359a", "s2fieldsofstudy": [ "Medicine", "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
252874992
pes2o/s2orc
v3-fos-license
Liver, visceral and subcutaneous fat in men and women of South Asian and white European descent: a systematic review and meta-analysis of new and published data

Aims/hypothesis
South Asians have a two- to fivefold higher risk of developing type 2 diabetes than those of white European descent. Greater central adiposity and storage of fat in deeper or ectopic depots are potential contributing mechanisms. We collated existing and new data on the amount of subcutaneous (SAT), visceral (VAT) and liver fat in adults of South Asian and white European descent to provide a robust assessment of potential ethnic differences in these factors.

Methods
We performed a systematic review of the Embase and PubMed databases from inception to August 2021. Unpublished imaging data were also included. The weighted standardised mean difference (SMD) for each adiposity measure was estimated using random-effects models. The quality of the studies was assessed using the ROBINS-E tool for risk of bias, and the overall certainty of the evidence was assessed using the GRADE approach. The study was pre-registered with the OSF Registries (https://osf.io/w5bf9).

Results
We summarised imaging data on SAT, VAT and liver fat from eight published and three previously unpublished datasets, including a total of 1156 South Asian and 2891 white European men, and 697 South Asian and 2271 white European women. Despite South Asian men having a mean BMI approximately 0.5-0.7 kg/m² lower than white European men (depending on the comparison), nine studies showed 0.34 SMD (95% CI 0.12, 0.55; I²=83%) more SAT and seven studies showed 0.56 SMD (95% CI 0.14, 0.98; I²=93%) more liver fat, but nine studies showed similar VAT (−0.03 SMD; 95% CI −0.24, 0.19; I²=85%) compared with their white European counterparts. South Asian women had an approximately 0.9 kg/m² lower BMI but 0.31 SMD (95% CI 0.14, 0.48; I²=53%) more liver fat than their white European counterparts in five studies. Subcutaneous fat levels (0.03 SMD; 95% CI −0.17, 0.23; I²=72%) and VAT levels (0.04 SMD; 95% CI −0.16, 0.24; I²=71%) did not differ significantly between the ethnic groups in eight studies of women.

Conclusions/interpretation
South Asian men and women appear to store more ectopic fat in the liver compared with their white European counterparts with similar BMI levels. Given the emerging understanding of the importance of liver fat in diabetes pathogenesis, these findings help explain the greater diabetes risks in South Asians.

Funding
There was no primary direct funding for undertaking the systematic review and meta-analysis.

Graphical abstract

Supplementary Information
The online version contains peer-reviewed but unedited supplementary material available at 10.1007/s00125-022-05803-5.

Introduction
South Asians living in Europe and North America have a two- to fivefold higher risk of developing type 2 diabetes than their counterparts of white European descent living in the same countries, and they develop the disease at a younger age and lower BMI [1][2][3]. Furthermore, South Asians exhibit a 30-100% higher mortality risk for coronary heart disease and cardiovascular disease than their white European counterparts [4][5][6]. In addition, South Asians without diabetes have higher fasting glycaemic indices than white Europeans, and greater levels of insulin resistance [7,8]. Conventional cardiometabolic factors do not account for the magnitude of the inter-ethnic differences in the burden of type 2 diabetes and cardiovascular disease.
Smoking is less prevalent among South Asians [8], but overall caloric intake appears not to differ meaningfully between the two ethnic groups, with South Asians consuming larger quantities of polyunsaturated fats [9]. Diabetes rates are also increasing rapidly in all South Asian countries.

It has been suggested that increased central adiposity and storage of fat in deeper abdominal compartments, such as around the viscera or liver [1,10], may be a key pathway leading to greater insulin resistance and subsequent type 2 diabetes and cardiovascular disease in South Asians. Some authors have hypothesised that South Asians have a lower capacity to store fat subcutaneously, leading to earlier 'spillover' into harmful secondary visceral and ectopic depots, the so-called 'adipose tissue overflow' hypothesis [11,12]. However, the evidence from studies comparing fat distribution in the two ethnic groups is conflicting: one study suggests that South Asians store more fat subcutaneously [13], another suggests that they accumulate excess fat both subcutaneously and intra-abdominally [14], and another shows no substantial difference in fat depots between the two groups [12]. The fact that many of those studies were relatively small, and thus lacked power, together with differences in study characteristics, may have contributed to the discrepancy in the findings.

The aim of our study was to systematically collate all existing published data comparing the amounts of subcutaneous (SAT) and visceral (VAT) adipose tissue and liver fat between South Asian and white European adults, and to supplement this with unpublished data from our group and the UK Biobank study, to provide the most robust assessment to date of potential ethnic differences in the levels of fat in key metabolic fat compartments.

Methods
The study, which was pre-registered with the OSF Registries (https://osf.io/w5bf9), was conducted according to the PRISMA guidelines [15], and followed a structured protocol that was agreed among the authors in advance of the literature search. Data eligible for meta-analysis included both original research and existing publications identified by systematic review.

Original research
Unpublished data from two studies undertaken by the authors were included in the meta-analysis. Both studies were cross-sectional and assessed the lifestyle and cardiometabolic risk factors of South Asian and white European men and women, without diabetes, aged 40-70 years, who lived in Scotland (UK). Both studies have been described in detail elsewhere [8,16], and involved radiological assessment of fat distribution in men and women. The methodology for fat measurement and the demographic characteristics of the participants with radiological assessment are shown in the electronic supplementary material (ESM Methods and ESM Table 1).

In addition, we included new data from the UK Biobank. UK Biobank is a large prospective study that recruited 502,643 participants (response rate 5.5%) between 2006 and 2010, age range 37-73 years, who consented for their records to be linked with routine data (hospital admissions and death registries). Participants attended one of 22 assessment centres across the UK, where they completed a touch screen questionnaire, had physical measurements taken, and provided biological samples, as described in detail elsewhere [17,18].
The UK Biobank imaging study began in 2014, and intends to collect imaging data of the vital organs, including MRI measures of abdominal body fat, by recalling 100,000 participants. At the time of performing the analyses for this study, abdominal MRI data were available for approximately 30,000 participants. We used abdominal imaging data from South Asians without diabetes who were matched for age, sex and BMI with white Europeans without diabetes in a 1:5 ratio to maximise statistical power. The protocol for abdominal fat measurement in the UK Biobank imaging study has been published elsewhere [19, 20].

Systematic review of published data and selection criteria

To identify existing publications, we searched the Embase and PubMed databases from inception to August 2021, combining the MeSH terms 'obesity', 'adipocyte', 'liver', 'south asia', 'asian continental ancestry group', 'caucasian' and 'european', and using the keywords 'obes*', 'fat*', 'adipos*', 'liver?fat*', 'fatty?liver*', 'south?asia*', 'india*', 'bangladesh*', 'sri?lanka*', 'pakistan*', 'caucasian*', 'white*' and 'european*' with Boolean rules. A search filter for studies related to humans, with a restriction to the English language, was included. Two researchers (JM and SI) screened all the titles and abstracts, and studies were read in full when they fulfilled the selection criteria. The reference lists of eligible studies were hand-searched to find further relevant studies. Grey literature was also searched via the OpenGrey website (https://opengrey.eu/).

We included studies that met the following criteria: (1) participants were men or women aged over 18 years; (2) participants had measurements of abdominal SAT and VAT, and/or liver fat, by computed tomography (CT) or MRI; (3) the study included a South Asian group and a comparison group of white European descent; and (4) any study design apart from case reports. South Asian ethnic background was either reported as such in the studies or participants were of Indian, Pakistani, Bangladeshi or Sri Lankan background. In the meta-analysis, we included studies for which we could extract mean values and standard deviations from published or requested data. We only included data stratified by sex. Two researchers (JM and SI) independently assessed the papers for final selection. Any discrepancies were resolved by discussion. A third reviewer (JMRG) was consulted if any unresolved issues persisted.

Data extraction and quality assessment

We developed a data extraction spreadsheet that included the following information: study characteristics (first author, year of publication, number of people of South Asian descent and number of people of white European descent, study design), study sample characteristics (sex, mean age and BMI, mean fasting glucose and insulin, diagnosis of diabetes [yes or no]) and test characteristics (method of measuring abdominal and/or liver fat, mean value for fat quantity and standardised mean difference [SMD] for each group). If the numerical data were not extractable from the published data, the authors were contacted via email. We were unable to obtain data for insulin and glucose concentrations for four studies [19, 21–25]; references [22–24] are multiple papers referring to one study dataset.
We used a preliminary version of the ROBINS-E tool (risk of bias in non-randomised studies of exposures) to assess the risk of bias in the individual studies selected, across seven domains; the results for the individual studies were then summarised to provide an overall study-level assessment regarding the risk of bias (low, moderate, serious or critical) [26]. We also used the GRADE (Grading of Recommendations, Assessment, Development and Evaluations) approach to assess the overall certainty of evidence of the meta-analysis findings, providing an evidence certainty score (very low, low, moderate or high) [27].

Data analysis

We used Stata software version 14.1 (Stata, USA) for statistical analysis. The weighted SMD (with 95% CI) was calculated by combining the mean differences in fat between the two groups in each study using a random-effects model. One study reported hepatic attenuation to assess liver fat, rather than the liver fat percentage [28]. As lower hepatic attenuation implies higher liver fat, the sign of the standardised mean ethnic difference in hepatic attenuation was reversed to make the findings comparable with the other studies. Analyses were stratified by sex. We performed two sensitivity analyses: (1) separating the studies that included any participants with diabetes from those without diabetes, to assess whether the presence of diabetes modified the results; and (2) only including the studies with matched BMI between the ethnic groups. We also performed an analysis stratified by assessment tool (CT vs MRI). Heterogeneity resulting from the mean difference in each study not being identical with the pooled estimate was quantified using the I² measure [29]. We assessed the risk of publication bias and potential small-study effects by constructing funnel plots, which plot the mean difference from each study against the SEM as a measure of study size [30].

Ethics

Previously unpublished data from studies by Iliodromiti et al and Ghouri et al were included in these analyses [8, 16]. Both studies were approved by the West of Scotland Research Ethics Committee, and performed according to the Declaration of Helsinki. All participants gave written informed consent to participate. The UK Biobank study was approved by the North West Multi-Centre Research Ethics Committee, and all participants provided written informed consent to participate. Ethical approval was not required for the analysis of data from previously published studies.

Results

Original research

Two of the studies included were performed by our group, and their radiologically assessed adiposity measures had not previously been published. The methodology of fat measurement for these two studies is described in ESM Methods. ESM Table 1 summarises the demographic and cardiometabolic profile of the participants with radiological data from the unpublished studies by Ghouri et al and Iliodromiti et al. Other data from these studies have been reported previously [8, 16].

Systematic search results

Figure 1 shows the search and selection flowchart. The systematic search of the biomedical databases resulted in 3228 hits, including 2248 from the Embase search and 975 from PubMed. Five additional studies were identified by bibliographic search. Of these, 99 papers were selected and read in full, of which 89 were excluded for a variety of reasons, as detailed in Fig. 1.
Therefore, 11 studies (with one study contributing two different but non-overlapping datasets [21]), including data from the UK Biobank, were finally selected for the meta-analyses (n=4047 men and 2968 women for the SAT and VAT comparisons, and n=3071 men and 2651 women for the liver fat comparison) [12, 13, 21–25, 31, 32]. The papers by Kohli and Lear and Dick et al [23, 24] refer to the same study, data for which were initially published by Lear et al [22]. The study by Shah et al [28] did not present data stratified by sex, but the authors kindly shared stratified results after we contacted them by email.

Description of studies

Table 1 summarises the characteristics of the studies included in the systematic review. ESM Tables 2 and 3 summarise the mean age, BMI and fasting glucose and insulin levels (when available) for all the included studies, stratified by sex and ethnicity. The mean age did not differ between ethnic groups of either sex. South Asian men had a mean BMI that was approximately 0.7 kg/m² lower for the SAT and VAT comparisons, and approximately 0.5 kg/m² lower for the liver fat comparison, compared with their white European counterparts. South Asian women had a mean BMI that was approximately 0.9 kg/m² lower for the SAT, VAT and liver fat comparisons compared with their white European counterparts.

Quality assessment

ESM Tables 4 and 5 present the study-level judgements of bias using the ROBINS-E tool for the SAT and VAT, and the liver fat outcomes, respectively. Four studies for SAT and VAT, and two studies for liver fat outcomes, were rated at moderate risk of confounding due to differences in BMI between ethnic groups for one or both sexes. In all instances where this occurred, the BMI values were lower in the South Asian group, which would have acted to bias the differences between the ethnic groups in the outcome towards the null. One study was rated as being at serious risk of confounding due to the inclusion of participants with diabetes in the sample and BMI differences between groups. All studies, except UK Biobank, in which outcome measures of SAT, VAT and liver fat were obtained using an automated algorithm, were rated as having a moderate risk of bias for the measurement of outcomes, as these measures were not reported to have been undertaken in a blinded manner, which may have biased findings against the null hypothesis as assessors may have expected more ectopic fat in South Asian participants. Thus, the overall study-level bias was rated as moderate for all studies, except that by Eastwood et al [21], which was rated as having a serious risk of bias, and the UK Biobank study, which was rated as having a low risk of bias.

ESM Table 6 summarises the certainty of evidence for the studies included in the meta-analysis, as assessed using the GRADE approach. The overall certainty of evidence from the summary findings of the meta-analysis was assessed as moderate due to heterogeneity, study limitations/bias, and possible publication bias for the SAT/VAT outcomes (see below). However, in the sensitivity analyses described below, exclusion of studies that included participants with diabetes, and only including studies in which BMI was matched between ethnic groups, did not materially affect the findings. Factors that increased the summary certainty of evidence from low to moderate included large numbers of participants, the size of effect, precision and directness.
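Before turning to the pooled results, the sketch below illustrates the random-effects computation described in the Data analysis subsection. It is a minimal re-expression of the DerSimonian–Laird estimator in R with made-up study values; the published analysis was run in Stata 14.1, and none of the numbers below are data from the included studies.

```r
# Illustrative DerSimonian-Laird random-effects pooling of standardised
# mean differences (SMDs), as described in the Data analysis subsection.
# Study-level values are hypothetical placeholders, not study data.

smd <- c(0.45, 0.20, 0.60, 0.10)   # per-study SMDs (made up)
se  <- c(0.12, 0.15, 0.20, 0.10)   # their standard errors (made up)
vi  <- se^2

# Fixed-effect weights and Cochran's Q
w_fe   <- 1 / vi
q_stat <- sum(w_fe * (smd - sum(w_fe * smd) / sum(w_fe))^2)
df     <- length(smd) - 1

# Between-study variance (tau^2, DerSimonian-Laird estimator)
c_val <- sum(w_fe) - sum(w_fe^2) / sum(w_fe)
tau2  <- max(0, (q_stat - df) / c_val)

# Random-effects weights, pooled SMD and 95% CI
w_re    <- 1 / (vi + tau2)
pooled  <- sum(w_re * smd) / sum(w_re)
se_pool <- sqrt(1 / sum(w_re))
ci      <- pooled + c(-1.96, 1.96) * se_pool

# I^2: percentage of total variability attributable to heterogeneity
i2 <- max(0, (q_stat - df) / q_stat) * 100

cat(sprintf("Pooled SMD %.2f (95%% CI %.2f, %.2f); I2 = %.0f%%\n",
            pooled, ci[1], ci[2], i2))
```

The random-effects weights shrink towards equality as tau² grows, which is why pooled estimates from heterogeneous study sets carry wider confidence intervals than a fixed-effect analysis would.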
Meta-analysis

We summarised imaging data on SAT and VAT from 1156 South Asian men and 2891 white European men (of comparable age, but the mean BMI in South Asians was approximately 0.7 kg/m² lower). We also compared data on liver fat from 677 South Asian men vs 2394 white European men (of comparable age, but the mean BMI in South Asians was approximately 0.5 kg/m² lower). For women, we compared data on SAT and VAT from 697 South Asian participants vs 2271 white European participants (of comparable age, but the mean BMI in South Asians was approximately 0.9 kg/m² lower), and data on liver fat from 575 South Asian participants vs 2076 white European participants (of comparable age, but the mean BMI in South Asians was approximately 0.9 kg/m² lower).

Figure 2 shows the SMD in fat in men. In nine studies, South Asian men had 0.34 SMD (95% CI 0.12, 0.55; I²=83%; p<0.001) more SAT than their white European counterparts. In seven studies, South Asian men had 0.56 SMD (95% CI 0.14, 0.98; I²=93%; p<0.001) more liver fat than their white European counterparts. There was no substantial difference in VAT between South Asian and white European participants in nine studies (SMD −0.03; 95% CI −0.24, 0.19; I²=85%; p<0.001). All meta-analyses in men showed high heterogeneity.

Figure 3 shows the SMD in fat in women. There was no substantial difference between South Asian and white European participants in eight studies of SAT or VAT (SMD 0.03; 95% CI −0.17, 0.23; I²=72%; p=0.001, and SMD 0.04; 95% CI −0.16, 0.24; I²=71%, respectively). In five studies, South Asian women had 0.31 SMD (95% CI 0.14, 0.48; I²=53%; p=0.07) more liver fat than their white European counterparts. For women, all meta-analyses showed high heterogeneity, except for the liver fat data, which showed moderate heterogeneity.

Sensitivity analysis

No studies investigating liver fat included any participants with diabetes. When we compared data for VAT and SAT in South Asian vs white European men and women after excluding data from the one study that included participants with diabetes [21], the results did not materially change for either sex (ESM Figs 1 and 2). For the studies with matched BMI between the two ethnic groups, point estimates for the standardised differences in SAT and liver fat between South Asian and white European men were similar to those observed in analyses including all studies (ESM Figs 3 and 4), although the 95% CIs were wider. Findings were similar in studies using MRI vs CT as the assessment tool (ESM Figs 5–8).

Publication bias

ESM Fig. 9 presents funnel plots for each main analysis, suggesting symmetry and therefore a small likelihood of publication bias or a small-study effect for VAT and liver fat in men and liver fat in women. We cannot exclude the possibility of publication bias or a small-study effect for SAT and VAT in women and SAT in men, with the asymmetry in the funnel plots suggesting that small studies showing greater abdominal fat in white European participants may be lacking.

Discussion

To our knowledge, this evidence synthesis, including data from 1853 participants of South Asian descent and 5162 participants of white European descent, is the largest analysis comparing robust imaging data (CT or MRI) of various abdominal fat compartments between South Asian and white European adults.
These data suggest that both South Asian men and women store greater ectopic fat in the liver at a lower BMI compared with their counterparts of white European descent, and that there may be a sex-specific difference in the ethnic distribution of SAT. South Asian men had greater amounts of SAT and ectopic fat accumulated in the liver than their white European counterparts despite having a slightly lower BMI, although this was not clearly accompanied by higher levels of VAT. In women, there was no substantial difference in SAT or VAT distribution between South Asian and white European participants; however, like men, South Asian women had more ectopic fat in the liver compared with their white European counterparts, despite having a BMI that was approximately 0.9 kg/m² lower.

The slightly lower BMI in the South Asian participants compared with white European participants in these studies may have contributed to the absence of a difference in VAT between the two ethnic groups. In the subset of studies where the BMI did not differ between the ethnic groups [16, 19, 22–25], South Asian men and women showed a numerically higher level of VAT, as well as higher levels of SAT and ectopic liver fat, compared with men and women of white European descent, but the statistical power in these subgroup analyses was limited. Thus, taking all the data together, we can be most confident about the finding of higher liver fat levels in South Asian participants, as there were similar findings in both South Asian men and women relative to their white European counterparts, and broadly concordant findings in the subgroups of those without diabetes or matched for BMI. In addition, the liver analyses showed a low likelihood of publication bias or a small-study effect. However, given the available data, our conclusions about ethnic differences in VAT are more cautious.

The central role of the liver in diabetes pathogenesis has become increasingly apparent in recent years, with the organ being a site of excess fat storage in those with hyperinsulinaemia due to either genetic or familial factors, with consequent excessive hepatic gluconeogenesis [33]. It has been shown that surrogate markers of liver fat and their change over time predict diabetes [33, 34], whereas substantial weight loss from the use of low-energy diets can lead to rapid fat loss from the liver and improved insulin sensitivity in people with diabetes [34]. These studies were performed predominantly in participants of white European origin, and align with the importance of liver fat in the pathogenesis of diabetes in this ethnic group, as well as with molecular mechanisms whereby fat-derived metabolites impair insulin signalling [35]. Export of excessive triacylglycerol from the liver may also be a key feature of the beta cell dysfunction in those who develop diabetes [33], and South Asians are known to have elevated circulating triacylglycerol levels at similar levels of BMI compared with white Europeans [36]. More recently, genetic studies have further suggested a causal role for liver fat in the pathogenesis of type 2 diabetes [37]. Greater SAT at a lower BMI in South Asian men implies that there must be lower lean muscle mass in this group, which is an additional independent risk factor for type 2 diabetes [38], and other data have shown that lower lean mass contributes to the higher levels of insulin resistance observed in South Asians compared with other ethnic groups [39].
Clearly, in view of the present findings, more work on understanding ethnic differences in ectopic fat is urgently needed, including examining why South Asians appear to accumulate liver fat more rapidly at lower BMIs, and whether excess liver fat can be reversed by lifestyle measures, in particular intentional weight loss, in this group.

According to the 'adipose tissue overflow' hypothesis [11, 12], fat deposition starts predominantly in the subcutaneous region until inflammatory mediators halt the recruitment of new adipocytes. At this point, the capacity of subcutaneous tissue for further fat storage is reduced, and positive energy balance leads to an overflow of fatty acids to deeper adipose compartments (i.e. visceral) or ectopic tissues (i.e. hepatic). The 'tipping point' at which subcutaneous tissue reaches its maximum storage capacity is thought to vary for each individual, and depends on genetic and environmental factors [40]; it has been hypothesised that this occurs at a lower BMI in South Asians [11, 12]. The present findings are partially in agreement with this. South Asian participants of both sexes accumulated more ectopic fat in the liver at a similar or lower BMI than white European participants. However, South Asian men also had higher levels of SAT, so the relative importance of a lower capacity for SAT storage vs greater overall adipose tissue accumulation at a given BMI in terms of higher liver fat levels is unclear. Nevertheless, data suggest that South Asian men have larger adipocytes in their subcutaneous compartment compared with their white European counterparts, even when they are matched for total and abdominal body fat [13]. Thus, it is plausible and consistent with our findings that the subcutaneous adipocytes in South Asian men have the capacity to become more hypertrophic and therefore allow accumulation of more fat in superficial depots. In addition, hypertrophic adipocytes are associated with greater insulin resistance, which may be the mediating pathway in the development of type 2 diabetes [13].

Strengths and weaknesses

To our knowledge, this is the first study pooling imaging data on abdominal fat compartments in a large group of South Asian participants and comparing this with data from individuals of white European origin. We only included data obtained using CT and MRI, which are considered the gold standards for measuring abdominal fat, to minimise heterogeneity and measurement bias. We used an extensive search to ensure that all the available relevant published and unpublished studies were included. However, we used a filter to restrict searches to 'humans' and 'English language'. While it is unlikely that studies including both South Asians and a white European comparator group would not be published in English, the use of filters may have excluded very recently completed studies that had not yet completed the MEDLINE indexing process. Although the process of systematic review and meta-analysis is a robust way of estimating the true difference with less random error because of the increased sample size, the mean differences estimated by the pooled data are subject to the limitations of the primary studies. Between-study heterogeneity may be a limitation when pooling studies together to estimate a summary measure; however, we calculated the pooled estimate using a random-effects model that accounts for unexplained heterogeneity within studies.
We used established methodology to assess the impact of small-study bias on our pooled estimates, and acknowledge that some potential biases may have occurred, although the liver estimates, the most interesting and novel finding in our study, appear not to be meaningfully influenced. In addition, the results were similar in men and women, lending confidence that the findings are real. The sensitivity analysis on the subset of studies that included participants matched for BMI had limited power, but showed the biologically plausible result that South Asians of both sexes store more fat in all fat depots for any given BMI compared with their white European counterparts. The same was true when we examined data from those without diabetes.

Conclusion

We conclude that both South Asian men and women store more fat in ectopic depots (liver) at a lower or comparable BMI than their counterparts of white European origin. South Asian men, but not women, appear to accumulate more fat superficially compared with their white European counterparts, but evidence for ethnic differences in VAT accumulation was less clear-cut, with no statistically significant differences between ethnic groups observed for this outcome. Given our knowledge of the importance of liver fat in diabetes, the excess liver fat at a lower BMI in South Asians compared with their counterparts of white European descent may be a key factor contributing to the development of insulin resistance and type 2 diabetes at lower levels of overall adiposity in South Asians. Further work is now needed to understand why South Asians accumulate liver fat more readily and at lower BMIs than their counterparts of white European descent, and to what extent weight loss interventions can normalise liver fat and blood glucose levels, as they have been shown to do in white Europeans.

Acknowledgements

The imaging data from the UK Biobank Resource were provided under application 6569. We thank the UK Biobank participants and coordinators for this unique dataset. The authors thank L. Coyle, University of Glasgow, for her assistance with manuscript preparation.

Data availability

The datasets generated and analysed during the current study are available from the corresponding author on reasonable request.

Funding

There was no primary direct funding for undertaking the systematic review and meta-analysis, and the authors of the study were supported by their affiliated organisations/institutions during the conduct of the research. The study was partially supported by funding from the European Federation of Pharmaceutical Industries and Associations (EFPIA)–Innovative Medicines Initiative (IMI) Joint Undertaking–European Medical Information Framework (EMIF) (grant no. 115372). SI is funded by a Medical Research Council postdoctoral fellowship (MR/N015177/1). The funders had no role in the study design, data collection, data analysis, data interpretation, or writing of the report.
Low-Dose Computed Tomography for the Optimization of Radiation Dose Exposure in Patients with Crohn's Disease

Magnetic resonance imaging (MRI) is the mainstay method for the radiological imaging of the small bowel in patients with inflammatory bowel disease without the use of ionizing radiation. There are circumstances where imaging using ionizing radiation is required, particularly in the acute setting. This usually takes the form of computed tomography (CT). There has been a significant increase in the utilization of CT for patients with Crohn's disease, as patients are frequently diagnosed at a relatively young age and require repeated imaging. Between seven and eleven percent of patients with IBD are exposed to high cumulative effective radiation doses (CEDs) (>35–75 mSv), mostly patients with Crohn's disease (Newnham et al., 2007; Levi et al., 2009; Hou et al., 2014; Estay et al., 2015). This is primarily due to the more widespread and repeated use of CT, which accounts for 77% of radiation dose exposure amongst patients with Crohn's disease (Desmond et al., 2008). Reports of the projected cancer risks from increasing CT use (Berrington et al., 2007) have led to increased patient awareness regarding the potential health risks of ionizing radiation (Coakley et al., 2011). Our responsibilities as physicians caring for these patients include education regarding radiation risk and, when an investigation that utilizes ionizing radiation is required, keeping radiation doses as low as reasonably achievable: the "ALARA" principle. Recent advances in CT technology have facilitated substantial radiation dose reductions in many clinical settings, and several studies have demonstrated significantly decreased radiation doses in Crohn's disease patients while maintaining diagnostic image quality. However, there is a balance to be struck between reducing radiation exposure and maintaining satisfactory image quality; if the radiation dose is reduced excessively, the resulting CT images can be of poor quality and may be nondiagnostic. In this paper, we summarize the available evidence related to imaging of Crohn's disease, radiation exposure, and risk, and we report recent advances in low-dose CT technology that have particular relevance.

Introduction

Crohn's disease is characterized by transmural inflammation that may affect any part of the gastrointestinal tract, and it is a lifelong condition that relapses and remits throughout its course [1]. Improved understanding of the pathogenesis of Crohn's disease, combined with the recent availability of immunomodulatory treatments, has expanded the range of medical therapies available to physicians who treat patients with Crohn's disease [2]. Tailored imaging investigations are a key component of the decision-making process for a number of reasons. First, determining the extent and activity of Crohn's disease informs treatment-related decisions. This requires radiological evaluation of the small intestine and extraintestinal manifestations, as well as endoscopic (gastroscopy and colonoscopy) and laboratory investigations [3]. Second, monitoring disease progression and response to treatment, including surgery, using imaging allows therapeutic optimization. Appropriate investigations allow early detection of complications that potentially require surgical treatment, including fibrostenotic disease, which can cause bowel obstruction, and fistulating disease, which can lead to abscess formation [4].
Third, treatment side effects range from nausea, which may limit compliance, to an increased risk of lymphoproliferative disorders, lymphoma, melanoma, and nonmelanoma skin cancers associated with immunosuppressants [5–7]. Similarly, chronic inflammation of the gastrointestinal tract increases the risk of colorectal cancer [5–7]. These associations sometimes necessitate surveillance strategies for patients with Crohn's disease [4].

Nonionising Radiation Modalities

The use of imaging modalities that do not require ionizing radiation is the preferred method of reducing radiation exposure among patients with Crohn's disease. These include magnetic resonance imaging (MRI), ultrasound, and capsule endoscopy.

MRI. MRI is used to great effect to identify both acute and chronic features of Crohn's disease [8]. Magnetic resonance enterography (MRE) is the preferred method of small bowel cross-sectional imaging. This is partly due to concern regarding cumulative ionizing radiation exposure from CT and fluoroscopy, especially in children and young adults, who will undergo many examinations throughout their lives, otherwise amassing a potentially significant cumulative radiation exposure [9]. MRI is well suited to evaluating small-bowel inflammatory disease, with a reported sensitivity of 93% and specificity of 93% [10]. MRI offers superior soft tissue contrast resolution, multiplanar capability, and the potential to obtain functional information. The main indications for MRE include small bowel imaging in patients with suspected Crohn's disease or for surveillance of known Crohn's disease. The examination may also be combined with the assessment of perianal disease, which is also optimally performed using MRI. The absence of ionizing radiation is a strong advantage of MR imaging. These advantages often outweigh the disadvantages of the relatively long time it takes to perform MR enterography and its increased cost relative to CT [11, 12]. In the clinical setting of a critically ill patient, the MRI suite presents many additional challenges over CT in terms of patient safety, especially regarding monitoring lines and support equipment, which need to be nonferromagnetic. MRI in the setting of Crohn's disease also often requires the administration of a gadolinium-based contrast agent (GBCA), which has recently come under increasing scrutiny due to gadolinium deposition in the dentate nuclei, pons, globus pallidus, and thalamus of patients undergoing multiple MRIs requiring GBCA administration [13, 14]. The clinical significance of this deposition is as yet unknown, but it has led to the European Medicines Agency's Pharmacovigilance Risk Assessment Committee recently recommending the suspension of marketing authorization for four linear GBCAs [15].

Diffusion-Weighted Imaging. Diffusion-weighted imaging (DWI) can be used as a valuable sequence for the depiction of lesions, and can change the MRI protocol and obviate the need for gadolinium administration. While long used in other parts of the body, such as the brain, the use of DWI to assess the bowel is relatively new. Increased T2 signal intensity and restricted diffusion on DWI of the bowel wall have been shown to relate to acute inflammation [16]. The ability of DWI to differentiate between actively inflamed small bowel segments and normal small bowel in CD has been demonstrated, showing superior sensitivity versus dynamic contrast-enhanced MR [17]. A prospective study involving 31 patients with CD compared DWI with conventional MRE in estimating small bowel inflammation.
DWI hyperintensity was highly correlated with disease activity evaluated using conventional MRE [18]. DWI has also been shown to complement T2-weighted imaging of internal fistulae and sinus tracts [19]. However, improved spatial resolution to facilitate thinner image slices is required before DWI can replace gadolinium-enhanced sequences or be used as a reliable quantitative biomarker for monitoring disease activity [20]. Recent developments in MRI technology, such as faster gradient sequences and refined receiver coils, will boost its convenience and allow for more efficient MR imaging. Not only should these advances increase patient throughput, but they will also reduce motion artifact and improve image spatial resolution, which are current limitations of MRI compared with CT. Being radiation-free, these advances are most significant for younger cohorts of patients with CD and those undergoing serial and repeated imaging studies for known CD [21].

2.3. Ultrasound. The pathognomonic finding of Crohn's disease is discontinuous and inhomogeneous transmural inflammation extending through all layers of the intestinal wall. The presence of these features forms the basis of the ultrasound (US) diagnosis of Crohn's disease. Standard B-mode ultrasound is of limited utility in this setting, but some recent papers have suggested that contrast-enhanced US may be of use in determining disease activity [22–24], although this has been slow to be implemented into widespread clinical practice. US assessment in patients with CD typically reveals stiff and thickened bowel walls, variably associated with an alteration of normal peristaltic activity in the small bowel as well as the absence of colonic haustral folds. Contrast-enhanced, power, and color Doppler ultrasound allow for increased accuracy in the assessment of small bowel CD [25]. Recent studies have shown that US reliably locates and characterizes inflammatory infiltration of the bowel wall and assesses local abnormalities such as abscess formation [26]. A significant problem with the use of ultrasound in this setting, however, is that it is heavily operator-dependent and extremely time-consuming.

Capsule Endoscopy. Capsule endoscopy (CE), introduced in 2000 [27], is an increasingly available method for assessing small intestinal pathology. Current indications for CE include the identification of obscure gastrointestinal tract bleeding (OGIB) and the investigation of Crohn's disease, small intestinal tumours, and malabsorptive states [28]. In the evaluation of Crohn's disease, stricturing or penetrating disease increases the risk of capsule retention (defined as the capsule remaining in the gastrointestinal tract for longer than 2 weeks [28]) and capsule perforation [29, 30]. CT and MRI techniques to determine luminal patency are useful prior to CE. Importantly, MRI use is contraindicated in cases of capsule retention [28], which may occur due to gastroparesis and motility disorders, as well as for mechanical reasons secondary to complications of Crohn's disease [28, 30]. Furthermore, nonspecific mucosal abnormalities are often detected with CE and, without biopsy capability, this can lead to high false positive rates for Crohn's disease [29], reducing the benefits of CE over radiological modalities. Capsule aspiration is a rare complication, most often seen in patients with neurological or swallowing disorders and reduced or absent cough [29].
In carefully selected cases, particularly stable OGIB and nonstricturing Crohn's disease, CE is a safe, noninvasive investigative tool that reduces radiation exposure for patients in the evaluation of the small intestinal mucosa.

Computed Tomography

3.1. Background. CT uses ionizing radiation, in the form of X-rays, to form an image of a patient. The traditional method of image reconstruction (i.e., the reconstruction algorithm) used by the computer to form the images is called filtered back projection (FBP). This method relies on the patient being exposed to a relatively large dose of radiation in order to create diagnostic quality images. Image noise becomes an issue at low radiation doses with FBP, and images with large amounts of noise can significantly impair the ability of the interpreting radiologist to form an accurate opinion of the images. Recent advances in computational power and efficiency have facilitated the use of iterative reconstruction of CT images. These new iterative reconstruction (IR) techniques have some major advantages over FBP: they reduce image noise, reduce the occurrence of artefacts (e.g., streak from metallic implants), and facilitate the acquisition of CT images at much lower radiation doses while maintaining diagnostic image quality [31].

Radiation Dose. Three main metrics are used to estimate patient radiation exposure in CT. The radiation dose output from the scanner is represented by the volume CT dose index (CTDIvol), measured in milligrays (mGy); the dose over the total length of the scan is represented by the dose-length product (DLP), measured in mGy·cm; and the effective dose (ED), measured in millisieverts (mSv), which represents the equivalent whole-body dose that would have the same risk of biologic effect, can be derived by multiplying the DLP by a conversion factor based on the CT scan parameters and the body part imaged [32]. A standard CT of the abdomen and pelvis (CT-AP) exposes the patient to an ED of approximately 8 mSv, although values reported in the literature range from 3.5 mSv to 25 mSv [33]. The radiation exposure associated with CT-AP is significantly more than the annual natural background radiation of 3–4 mSv received by the average person from the environment [34].

Risks of Radiation Exposure. High-dose radiation exposure leads to predictable deterministic effects that only occur above a certain threshold dose, and the severity of the injury once this threshold is reached is dose-dependent; examples include skin burns (threshold 2 Gy) [35, 36] and cataract formation (threshold 0.5 Gy) [37]. However, even below these thresholds, exposure to low-level ionizing radiation is associated with stochastic effects; these are probabilistic effects whose severity is unrelated to dose, and they are responsible for cancer induction in human cells. The current widely accepted model of cancer risk from low-level radiation exposure is called the linear no-threshold model, whereby any exposure to ionizing radiation, however small, has the potential to cause harm. Several clinical studies have attempted to quantify cancer risk from CT radiation exposure [38, 39], but this is very difficult to perform accurately, as the increased risk due to diagnostic imaging is small and it is extremely difficult to control for confounding factors in the required large study population over a long time period. Another difficulty in risk quantification is that there is usually a latent period of many years from radiation exposure to cancer development.
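To make the dose arithmetic in the Radiation Dose paragraph above concrete, the following minimal R sketch applies the DLP-to-ED conversion. The DLP value is hypothetical, and the conversion factor is an assumed, commonly quoted adult abdomen–pelvis value, not a parameter taken from this paper.

```r
# Minimal sketch of the DLP-to-effective-dose conversion described above.
# The conversion factor k is an assumed, commonly quoted adult
# abdomen-pelvis value (~0.015 mSv per mGy.cm); real values depend on
# the scanner, protocol and body region. The DLP itself is hypothetical.

dlp_mgy_cm <- 530      # hypothetical dose-length product (mGy.cm)
k          <- 0.015    # assumed conversion factor (mSv / (mGy.cm))

effective_dose_msv <- dlp_mgy_cm * k
cat(sprintf("Effective dose: %.1f mSv\n", effective_dose_msv))   # ~8 mSv

# Context: annual natural background radiation is roughly 3-4 mSv, so
# this hypothetical scan equates to about two years of background dose.
background_msv <- 3.5
cat(sprintf("~%.1f years of background exposure\n",
            effective_dose_msv / background_msv))
```

With these assumed inputs the result (~8 mSv) matches the typical CT-AP effective dose quoted above, which is the point of the exercise rather than a new estimate.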
Age at the time of radiation exposure is known to be an independent risk factor for subsequent cancer mortality [40], and this is particularly relevant in the setting of Crohn's disease, where most patients are diagnosed between the ages of 15 and 40; one large US epidemiological study reported a median age at diagnosis of 29.5 years [41]. Patient knowledge of the risk associated with radiation exposure is generally low [42], and the information available to these patients, primarily from the internet, can be of questionable accuracy [43], so it is the responsibility of referring physicians and radiologists to communicate these risks to patients in an easily understandable and effective way. Guidance on how to source accurate information on the internet may be very helpful to patients, similar to the way in which physicians appraise the medical literature. This includes factors such as the presence of Health on the Net Foundation Code of Conduct certification (HONcode), an identifiable author, and references to the peer-reviewed literature [43].

Some published estimates of cancer risk include a 3-fold increased risk of leukaemia with 50 mGy exposure as a child [38], a 3-fold increased risk of brain cancer with 60 mGy exposure as a child [38], the induction of 125 breast cancers per 100,000 women screened between the ages of 40 and 74 [44], and a 1.8% increase in lung cancers if 50% of the population aged between 50 and 75 were screened for lung cancer with CT annually [45]. However, the estimation of risk associated with exposure to ionizing radiation in the diagnostic range remains extremely controversial. With increased attention to this subject in the media, and more alarmingly on the internet and in social media, physicians must ensure that misinformation does not lead to situations where clinically indicated CT scans are refused by patients because of exaggerated fears of developing malignancy.

Medical imaging now accounts for approximately 50% of the total population radiation dose [46], and CT accounts for approximately 60% of the dose received from medical imaging [47]. The importance of patient radiation exposure from serial CT examinations has been highlighted by the International Commission on Radiological Protection (ICRP) in its Publication 102 [48]. It is believed that any dose of radiation, however small, has the potential to cause harm, and so the increased radiation dose from CT is of concern to many [40, 49]. For now, and until there is irrefutable contrary evidence, the "as low as reasonably achievable" (ALARA) principle guides radiation protection practices [50, 51].

CT in Crohn's Disease. CT imaging, as an alternative to MRI and ultrasound, is often used for imaging in Crohn's disease due to its availability, accessibility, familiarity, rapid acquisition time, and ability to evaluate mural, extramural, and extraintestinal manifestations in a single examination [3]. There are clinical circumstances where CT is the preferred method of imaging assessment, for example, in the acute setting, postoperatively, in patients with contraindications to MRI, or in claustrophobic patients. The development of low-dose CT scanning can reduce patient radiation exposure, which is particularly important for young patients with CD who require repeated imaging. CT is of particular use in acutely unwell patients for the assessment of abscess formation or perforation [52]. CT enterography (CTE) is a variation of routine CT that specifically assesses the extent and severity of CD in the small bowel.
It is performed with the combination of 1 litre of a neutral or low-density oral beverage with intravenous iodinated contrast media. This combination optimizes luminal distention and contrast resolution in the small bowel, and improves visualization of mural abnormalities such as strictures or fistulae. Diagnostic criteria for Crohn's disease using CTE include bowel wall thickening, bowel hyperemia, submucosal fat deposition, and lymphadenopathy. This cross-sectional imaging technique can also detect complications of CD, including bowel obstruction, fistula, perforation, or abscess [53]. CTE is indicated in symptomatic patients, older patients (over 35 years old), and when there are contraindications to MR imaging [54]. Conventional CT is preferred for acutely unwell patients, especially where there are signs of abscess formation or hollow viscus perforation. Low-dose CTE using iterative reconstruction techniques (e.g., model-based iterative reconstruction, adaptive statistical iterative reconstruction, and sinogram-affirmed iterative reconstruction) has been found to be sensitive and specific for the detection of active inflammatory changes of CD while utilizing radiation doses significantly lower than those associated with conventional techniques [55]. There are known risk factors in Crohn's disease patients that tend to result in higher lifetime cumulative effective doses, including a history of surgery, biologic therapy, pain-predominant symptoms, isolated ileal disease, and stricturing or penetrating Crohn's disease [56–58]. Between 7 and 11% of patients with IBD are exposed to a high CED (>35–75 mSv), mostly patients with CD [59–62].

Low-Dose CT. While there is no standard definition of what constitutes a low-dose abdominal CT protocol, we consider scans where the effective dose delivered approaches that of a standard abdominal plain film or KUB radiograph to be low-dose CT examinations. There have been huge strides made recently in attempts to reduce the radiation exposure to patients from CT. Dose reduction techniques include automatic tube current modulation [63], truncated protocols with fewer images [64], increasing the acceptable image noise [65], reduced mA and kV scanning, and the clinical use of new iterative reconstruction techniques [66, 67]. There is a fine balance to be struck between reducing individual patient radiation exposure and maintaining sufficient image quality to allow an accurate diagnosis to be made, and this is an area of intensive research. An example of parameters for a low-dose abdominal CT protocol at our institution is shown in Table 1. The low-dose protocol is designed to impart a radiation exposure of 10–20% of that of a routine abdominal CT. The data are reconstructed using a pure iterative reconstruction algorithm (model-based iterative reconstruction, MBIR; Veo, GE Healthcare, GE Medical Systems, Milwaukee, WI). The mean effective radiation dose imparted by such a protocol is 0.83 mSv for normal-weight patients, increasing to 2.0 mSv for overweight patients [67]. The typical conventional protocol effective radiation dose is 6.1 mSv. Both low-dose CT of the abdomen and pelvis and CTE can be performed using these parameters and appropriate patient preparation. It is important to highlight that imaging parameters need to be tailored to the technology being used, the patient size, and the familiarity of the reporting radiologist with the altered appearance of a low-dose CT examination.
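Using the per-scan doses quoted above (6.1 mSv for a conventional protocol vs 0.83 mSv for the low-dose protocol in normal-weight patients), the following sketch illustrates how quickly cumulative effective dose accrues over repeated examinations; the scan counts are hypothetical, not cohort data.

```r
# Cumulative effective dose (CED) over repeated CT examinations, using
# the per-scan doses quoted above (6.1 mSv conventional vs 0.83 mSv
# low-dose, normal-weight patients). The scan counts are hypothetical.

per_scan <- c(conventional = 6.1, low_dose = 0.83)   # mSv per examination
n_scans  <- 1:12                                     # hypothetical lifetime scan counts

ced <- outer(n_scans, per_scan)                      # CED matrix (scans x protocol)
colnames(ced) <- names(per_scan)

# How many scans before crossing the 35 mSv high-CED threshold cited above?
threshold <- 35
crossing  <- apply(ced, 2, function(d) which(d > threshold)[1])
print(crossing)   # conventional: 6 scans; low-dose: NA (not within 12 scans)
```

Under these assumptions, the high-CED band (>35 mSv) cited above is reached after about six conventional examinations, but is not approached within a dozen low-dose examinations, which is the practical motivation for the protocols described here.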
Low-dose CT lends itself well to thoracic imaging, partly due to the high inherent tissue contrast in the lungs. Low-dose CT in the abdomen and pelvis is challenging due to the similar densities of adjacent structures and the small differences between the attenuations of normal and pathological processes, which can be easily obscured by increased image noise in the low-dose setting. For example, the identification of subtle stranding of the fat or prominence of the vasa recta associated with inflamed loops of small bowel is sometimes vital in the detection of active disease. Image noise in low-dose CT images may potentially impact the detection of these subtleties.

New reconstruction algorithms, termed iterative reconstruction, use a more complex process of image formation from raw projectional data by taking into account the scanner geometry and noise statistics and, in some cases, mathematical models that incorporate the shape and nonlinear polychromatic nature of the X-ray beam, the focal spot geometry, and the three-dimensional shape of the voxels. This more computationally intense method of image reconstruction results in lower levels of image noise, and therefore CT scans may be acquired using a reduced amount of radiation while maintaining equivalent image quality. Hybrid IR methods blend FBP with a percentage of iterative reconstruction, whereas pure model-based IR is a fully IR-based image reconstruction algorithm. Many IR algorithms have been shown to be reliable for image reconstruction in a number of clinical settings, including but not limited to cystic fibrosis [68], urolithiasis [66], CT enterography [67], follow-up of testicular cancer [69], and carotid angiography [70], with most reporting dramatic dose reductions while maintaining diagnostic image quality. In the setting of Crohn's disease, there have been several studies detailing markedly reduced radiation exposure from CT due to the utilization of iterative reconstruction algorithms [55, 71–74], with dose reductions of 34–74% reported compared with standard-dose CT-AP. This represents an effective dose reduction from 3.5 mSv to 0.98 mSv, with no significant differences in diagnostic ability reported (Figures 1–3).

Iterative reconstruction methods for low-dose CT rely on the modelling of statistical characteristics in the image domain, and current methods for this direct processing of reconstructed images can leave significant amounts of image noise. Innovative application of deep learning technology has demonstrated great potential for noise suppression, structural preservation, and lesion detection at a high computational speed for low-dose CT imaging [75]. The development of specialized neural networks may lend itself to future applications such as 3D and dynamic reconstruction, as well as adaptation to other imaging modalities. A recent retrospective study compared a novel IR algorithm, improved sinogram-affirmed iterative reconstruction (SAFIRE*, Siemens Healthcare, Erlangen, Germany), with standard filtered back projection. It was found that half-dose CT datasets reconstructed with SAFIRE* maintained acceptable image quality compared with full-dose CT datasets reconstructed with FBP [76]. The IR algorithm in this case, though currently a research-only prototype, potentially allows dose reductions in the order of 50% over conventional CT scans.
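The iterative principle that separates IR from single-pass FBP can be illustrated on a toy linear system. The sketch below uses the Kaczmarz method (the classical algebraic reconstruction technique) on a made-up 3×3 system; clinical IR algorithms are vastly more elaborate, modelling noise statistics, beam shape and scanner geometry as described above.

```r
# Toy illustration of iterative reconstruction: the Kaczmarz method
# (algebraic reconstruction technique) solves A x = b by cycling through
# the projection equations and repeatedly correcting the estimate.
# The 3x3 system is invented; clinical IR additionally models noise
# statistics, beam polychromaticity and scanner geometry.

A <- matrix(c(1, 0, 1,
              0, 1, 1,
              1, 1, 0), nrow = 3, byrow = TRUE)  # toy "projection" geometry
x_true <- c(2, 3, 1)                             # toy attenuation values
b      <- as.vector(A %*% x_true)                # simulated projection data

x <- rep(0, 3)                                   # start from a blank image
for (sweep in 1:50) {                            # repeated correction sweeps
  for (i in 1:nrow(A)) {
    a_i <- A[i, ]
    # Project the current estimate onto the hyperplane of equation i
    x <- x + ((b[i] - sum(a_i * x)) / sum(a_i^2)) * a_i
  }
}
print(round(x, 3))   # approaches x_true = (2, 3, 1)
```

The contrast with FBP is the loop: each pass refines the image against the measured projections, which is what allows a consistent image to be pulled out of noisier, lower-dose data.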
While preliminary low-dose protocol optimization can be performed with anthropomorphic and quality assurance phantoms, individual patient size characteristics, such as body weight, body mass index, and effective diameter, can readily change the performance of some dose-reduction tools. As iterative reconstruction algorithms are nonlinear, results from phantom measurements will not predict performance in humans. Also, the alteration of low-dose protocol parameters is not usually feasible during routine clinical practice, because adequate diagnostic image quality must be ensured for all CT studies. This problem has led to the development of simulation tools that generate low-dose data from clinically acquired high-dose scans, allowing clinicians to improve low-dose protocols using patient data acquired during clinical practice [77]. This tool has been used to determine the lowest achievable radiation dose using iterative reconstruction for CT imaging of the appendix in young adults [78]. A 1.0 mSv appendiceal CT was found to be noninferior to a 2.0 mSv CT in terms of diagnostic performance. These results point to the possibilities that simulation tools offer to clinical departments. The projection data from previously acquired patient scans enable researchers to optimize the performance of iterative reconstruction algorithms. With standardized data sets, researchers from any discipline could evaluate IR algorithms against the results of competing methods, helping them to more rapidly determine which methods optimally reduce radiation dose [79]. With the number of CT study acquisitions increasing annually, it is paramount to further develop innovative dose reduction tools while expanding the utility of abdominal CT diagnostics.

Another important aspect of low-dose CT acquisition is patient positioning. A recent study investigated the effect of patient off-centering in human cadavers [80]. Failure to ensure correct patient positioning resulted in a dose overestimation of up to 92%. Techniques such as laser-guided automatic patient-centering software have the potential to offer dose savings of up to 30% for chest CT and up to 56% for abdominal CT [81]. This emphasizes the valuable function such techniques have in CT organ dose conservation.

Conclusion

Patients with Crohn's disease are susceptible to high cumulative radiation exposures, particularly patients with recurrent disease and those who require steroid administration or surgery. In order to minimise radiation exposure to patients with Crohn's disease, imaging methods that do not entail ionizing radiation need to be used where possible. Nevertheless, there is a continuous global trend towards increased use of CT in medical imaging, which is particularly relevant for patients with Crohn's disease, who are frequently young at diagnosis and require lifelong imaging [56]. Recent developments in CT technology have the potential to considerably reduce the ionizing radiation exposure of patients with CD. Concerns remain regarding the risk of patient exposure to ionizing radiation, and with CT contributing most to medical radiation dose, it is imperative that we continue to strive for improvements in patient radiation protection in order to keep radiation exposure as low as reasonably achievable.
Many recent research studies have focused on the utility of new iterative image reconstruction algorithms in this regard, and have highlighted the ability of these new software developments to facilitate CT scanning at low doses while maintaining diagnostic image quality. Future research will focus on optimizing these algorithms even further in order to achieve the minimum CT radiation dose without compromising diagnostic ability. There is little doubt that CT will retain a central role in the imaging of Crohn's disease patients, but optimization of radiation exposure must remain central to future developments.
The construction of a nomogram to predict the prognosis and recurrence risks of UPJO

Objective: This study was conducted to explore the risk factors for the prognosis and recurrence of ureteropelvic junction obstruction (UPJO).

Methods: The correlations of candidate clinical variables with prognosis and recurrence risk were analyzed by binary and multivariate logistic regression. A nomogram was then constructed based on the multivariate logistic regression results. After the model was verified by the C-statistic, the ROC curve was plotted to evaluate the sensitivity of the model. Finally, decision curve analysis (DCA) was conducted to estimate the clinical benefits and losses of intervention measures under a series of risk thresholds.

Results: Preoperative anteroposterior diameter (APD) of the renal pelvis, preoperative urinary tract infection (UTI), preoperative renal parenchymal thickness (RPT), Mayo adhesive probability (MAP) score, and surgeon proficiency were high-risk factors for the prognosis and recurrence of UPJO. A nomogram was constructed based on these 5 variables. The area under the curve (AUC) was 0.8831 after internal cross-validation, indicating that the specificity of the model was favorable.

Conclusion: The nomogram constructed from these five factors has good predictive ability for the prognosis and recurrence of UPJO, and may provide more reasonable guidance for the clinical diagnosis and treatment of this disease.

Introduction

Ureteropelvic junction obstruction (UPJO) is the most common cause of congenital hydronephrosis. The prevalence of UPJO ranges from 1:1,500 to 1:500 among newborns, mainly affecting males (with a male-to-female ratio of 2:1) (1–3). Further, left-sided involvement accounts for 60% of cases, and bilateral involvement accounts for 10%–40%. The management of UPJO has posed a challenge for both pediatric and adult urologists. Dismembered pyeloplasty for UPJO is considered one of the most common urological reconstruction interventions (4, 5).

However, postoperative recurrence of UPJO has always been a thorny problem for clinicians. Braga et al. reported a recurrence rate of 5.2% after various open surgical procedures for UPJO in 2008 (6). According to the calculation of Ceyhan et al. in 2019, the recurrence probability of UPJO was 6.7% (7). In recent years, surgical techniques and instruments have been continuously improved (5), and the diagnosis and treatment of UPJO have become more reasonable due to the continuous improvement of prenatal diagnosis with the aid of B-ultrasound (8) and the development of MR urography (MRU) (9). However, postoperative recurrence of UPJO still occurs and has not been significantly reduced. Furthermore, few studies have explored the risk factors for postoperative recurrence of UPJO, and this question has not been fully investigated in the literature.

Although Ceyhan (7) and Braga (6) included sufficient sample sizes of patients with UPJO, the risk factors associated with postoperative recurrence of UPJO were not fully clarified. Both of them only conducted simple controlled studies based on clinical case cohorts. Moreover, due to the less rigorous statistical methods in their studies, it remained difficult to predict the risk of recurrence of UPJO in order to guide clinical diagnosis and treatment.
As a statistical method that has been used in recent years to predict the prognosis of diseases, the nomogram can be used to evaluate prognosis accurately. In addition, this tool helps prevent low-risk patients from undergoing unnecessary examinations in the decision-making process, and helps avoid delayed treatment for patients with a high probability of recurrence, thereby yielding favorable net benefits (10–12). The nomogram has been employed to predict the prognosis of patients with colorectal cancer (13), prostate cancer (14), and multiple myeloma (15). In addition, some investigators have adopted deep learning (DL) algorithms (16) to predict the recurrence risks of UPJO after surgery. Moreover, some investigators have constructed a clinical prediction model for the reoperation of UPJO after surgery (17). In that study, the prognosis of patients with a surgical history for UPJO was evaluated based on such variables as the anteroposterior diameter (APD) of the renal pelvis, preoperative renal parenchymal thickness (RPT), and surgical method, thus predicting the recurrence risk of UPJO after surgery.

This study aimed to incorporate more risk factors that may be associated with the recurrence of UPJO after surgery and to conduct relevant explorations. Meanwhile, a clinical prediction model for the recurrence probability of UPJO after surgery was established based on the APD of the renal pelvis, preoperative RPT, surgical method, and other variables, with the aid of various mature and reliable statistical methods. This prediction model can be applied to patients with a surgical history for UPJO to predict the recurrence risk after surgery. Furthermore, these efforts are expected to establish a systematic diagnosis and treatment framework for the prognosis and recurrence of UPJO, thus reducing the recurrence risk of UPJO after surgery in clinical practice.
Materials and methods

This study was approved by the Academic Research Ethics Committee of Shandong University, and the clinical privacy of patients was fully protected from disclosure. Pediatric patients with UPJO who received surgical treatment (open pyeloplasty, laparoscopic pyeloplasty, or robot-assisted pyeloplasty) in the Pediatric Surgery Department of Qilu Hospital of Shandong University from January 2005 to December 2022 were retrieved from the Lianzhong Medical Database of Qilu Hospital of Shandong University according to the names of the attending physicians (SUN Fengyin, LI Aiwu, CUI Xinhai, and DONG Zhixing). A total of 890 patients with UPJO were identified over this period. During follow-up, worsening separation of the collecting system revealed by CT urography (CTU) or MR urography (MRU), or worsening nephron damage relative to the preoperative condition revealed by emission computed tomography (ECT), was found in 57 patients. On this basis, a retrospective analysis was conducted (Figure 1). Antibiotics and analgesics were not routinely administered before or after surgery, and ureteral stents were routinely removed under general anesthesia 6-8 weeks after surgery. This study was designed and implemented in strict accordance with the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) Statement (18). According to the events-per-variable (EPV) criteria and sample size guidelines for logistic regression in observational studies, a minimum sample size of 800 patients was required (19). The exclusion criteria were: (1) patients with other congenital malformations of the urinary system, such as horseshoe kidney, duplex kidney, and duplicated ureter; (2) patients with other chronic diseases unrelated to this disease (excluding hypertension, renal injury, and preoperative UTI); (3) patients with incomplete clinical data or loss to follow-up. According to these criteria, 8 patients with secondary conditions, 8 patients with horseshoe kidney, and 22 patients lost to follow-up were excluded. Outcomes were classified into two patterns, namely "recurrence" and "no recurrence". Recurrence indicated that the patient received a second surgical procedure other than ureteral stent removal (salvage pyeloplasty performed with one of the above three approaches). Non-recurrence indicated that the patient did not undergo any additional urinary tract surgery (such as balloon dilatation, ureteral stent implantation, laser intrapelvic pyeloplasty, or repeat pyeloplasty) within 30 months after the initial operation.
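Of note, with 57 recurrence events observed and five predictors ultimately retained in the final model, the events-per-variable ratio was approximately 11 (57/5 ≈ 11.4), above the commonly cited minimum of 10 events per variable for logistic regression.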
Statistical analysis

The process flow of this study is shown in Figure 1. In the first step, the relevant data were collected; during collection, records not meeting the above criteria were removed. In the second step, the collected data were organized according to the variables hypothesized to be related to UPJO recurrence so that they could be used for the subsequent statistical calculations. In the third step, the rms (6.4.0) and ResourceSelection (0.3-5) packages in R (4.2.1) were used for the univariate and multivariate logistic regression analyses. Data cleaning was carried out first; the glm function was then used to screen variables by univariate binary logistic regression, after which multivariate binary logistic regression was conducted and the model was tested. In the variable screening strategy, a variable was included in the multivariate model if it met the p-value threshold (<0.05) in the univariate analysis. In this way, the risk factors most significantly correlated with UPJO recurrence were identified. After data cleaning, the binary logistic model was constructed with the glm function, and the rms package was employed to construct and visualize the nomogram-related models. As a result, a nomogram based on 5 risk factors related to UPJO recurrence was constructed. In the fourth step, bootstrap resampling was adopted. First, the data (S) of the 852 patients in the overall sample were obtained. These 852 original records were then sampled with replacement to obtain a sample of size 100, and this was repeated 1,000 times. The sample drawn in each round is called a bootstrap sample, so 1,000 bootstrap samples were obtained in total. The statistic of interest was estimated on each bootstrap sample, yielding 1,000 estimates, and the sampling distribution was constructed from these 1,000 bootstrap statistics. The ROC analysis of these data was performed using the pROC (1.18.0) package, and the results were visualized using ggplot2 (3.3.6); the pROC package corrects the ordering of the data by default (ensuring that the resulting curve is convex upwards). A 95% confidence interval (CI) was set, and the 2.5% quantiles were taken at both ends of the sorted sampling distribution, completing the confidence interval estimation of the overall median. In the fifth step, the binary classification model and the survival model were fitted with a logistic model and a logistic-LASSO (least absolute shrinkage and selection operator) model, respectively. The leave-one-out (LOO) risk score was calculated over a range of model complexity parameters (lambda, λ), and the lambda values with the highest AUROC and consistency, respectively, were selected for the construction of the final model. Bootstrap resampling on the empirical percentile (1,000 resamples) was used for point estimation, and parametric inference on the model coefficients was performed through selective inference (SI) designed for the LASSO model. The binary logistic model was constructed with the glm function; the rms package was used to perform the calibration analyses and visualization, and the rmda package was used to calculate the corresponding net benefit and to visualize the decision curves.
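A minimal sketch of this workflow in R is given below. The data frame and variable names are hypothetical (one row per patient, recurrence coded 0/1); the five covariates mirror those retained in the final model.

# Illustrative R sketch (hypothetical variable names) of the screening,
# nomogram, validation, and decision-curve steps described above.
library(rms); library(pROC); library(rmda)

d <- read.csv("upjo_cohort.csv")  # hypothetical file: 852 patients

# Univariate screening with glm(): keep candidates with p < 0.05
candidates <- setdiff(names(d), "recurrence")
keep <- candidates[sapply(candidates, function(v) {
  fit <- glm(reformulate(v, "recurrence"), data = d, family = binomial)
  coef(summary(fit))[2, "Pr(>|z|)"] < 0.05
})]  # 'keep' holds the variables passing univariate screening

# Multivariate logistic model and nomogram with the rms package
dd <- datadist(d); options(datadist = "dd")
fit <- lrm(recurrence ~ preAPD + preUTI + RPT + MAP + proficiency,
           data = d, x = TRUE, y = TRUE)
plot(nomogram(fit, fun = plogis, funlabel = "Risk of recurrence"))

# Discrimination (ROC/AUC with bootstrap CI) and calibration
r <- roc(d$recurrence, predict(fit, type = "fitted"))
auc(r); ci.auc(r, method = "bootstrap", boot.n = 1000)
plot(calibrate(fit, method = "boot", B = 1000))

# Decision curve analysis
dc <- decision_curve(recurrence ~ preAPD + preUTI + RPT + MAP + proficiency,
                     data = d, bootstraps = 500)
plot_decision_curve(dc)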
Results

A total of 852 patients with UPJO who underwent dismembered pyeloplasty over the study period were analyzed. Among them, 57 patients underwent a second pyeloplasty, and the median time from the initial operation to recurrence was 17 months. The median age at the initial operation was 40 months, 144 patients (17%) were female, and the average body weight was 16.4 kg. There were 465 patients (54%) with left-sided involvement and 537 patients (63%) with unilateral involvement. The average APD before surgery was 3.16 cm, the median preoperative GRI grade was 3, and the average RPT measured by preoperative imaging was 1.15 cm. The average operative time was 73.27 min, and the average blood loss was 84.58 ml. There were 555, 64, and 233 patients who underwent laparoscopic pyeloplasty, robot-assisted pyeloplasty, and open pyeloplasty, respectively. All data were evenly distributed.

The data of these 852 patients were included, and most of the included variables were evenly distributed. A binary logistic model was constructed to select variables. As a result, 36 variables were initially screened, of which 5 were retained for construction of the prediction model (Table 1; Figure 2). Based on these 5 variables, the prediction model was established with the logistic regression equation. The parameters of the ROC curve at the best cut-off value in the different models were recorded. The results demonstrated that the AUC of the model was 0.883, indicating high sensitivity and specificity (Figure 3). In addition, the calibration curve revealed that the model fitted the data well (Figure 4).

Finally, 5 predictive factors were selected as the prognostic characteristics of the nomogram (Figure 2): preoperative APD, preoperative UTI, preoperative RPT, MAP score, and surgeon proficiency. Based on the nomogram, the risk of secondary surgery can be roughly estimated within a treatment evaluation program, and the nomogram can be used to predict the individualized risk of UPJO recurrence after surgery.

The DCA results confirmed that the net benefit of the prediction model was improved compared with the default strategies, in which it was assumed that all or none of these 852 patients would receive the intervention (Figure 5). The DCA results were also verified by converting net benefits into the number of interventions avoided per 100 patients. As shown in the DCA diagram, a clinical strategy based on the nomogram would reduce the number of unnecessary interventions over a wide range of threshold probabilities in both the training set and the test set.

Discussion

The postoperative recurrence of ureteropelvic junction obstruction (UPJO) has always been a thorny problem for clinicians. Braga et al. reported a recurrence rate of 5.2% for UPJO and proposed that not performing retrograde pyelography, or selecting the lumbar dorsal incision in open pyeloplasty, was independently associated with a high risk of UPJO recurrence (6). Ceyhan et al.
confirmed that the recurrence rate of UPJO and the incidence of complications were 6.7% and 11.4%, respectively; urinary tract infection (UTI) (7.8%), complications associated with urinary diversion (1.8%), and urethral polyps (1.4%) were the most common complications, and preoperative shunting (P = 0.020) and early complications after pyeloplasty (P < 0.001) were significantly associated with the recurrence of UPJO (7). As revealed in previous studies, the overall postoperative recurrence rate of UPJO is about 5%-10% (20-22). Nevertheless, postoperative recurrence of UPJO persists and has not been significantly reduced.

In addition, the reported risk factors for postoperative recurrence of UPJO differ substantially between studies. It has been reported that vascular compression and tortuous stenosis of the proximal ureter are also causes of postoperative UPJO recurrence (23). However, in reviews of the literature on salvage surgery for UPJO, dense fibrous tissue and scarring around the anastomosis were recognized as the main causes of recurrence (24-30). The indicators considered in previous studies were also incomplete; the main risk factors reported included urinary fistula, unsuitable conditions at the anastomotic stoma, scar hyperplasia, iatrogenic valves, anastomotic adhesion, non-absorption of silk suture, and a high ureteral anastomosis. Meanwhile, in research on other aspects of UPJO, some investigators selected the age at the initial operation, BMI, sex, unilateral or bilateral involvement, and left- or right-sided involvement as the basic evaluation indicators for patients who did not achieve favorable outcomes after the initial operation (31,32). Lim et al. reported that the age at the initial operation was a factor affecting the surgical outcome (33), whereas both Braga and Ceyhan reported that age was not associated with the surgical outcome. The results of this study demonstrated that the age at the initial operation was not significantly related to the prognosis and recurrence of UPJO. Wenbin Fu also suggested that calculus, a complication of UPJO, may be a risk factor for the prognosis and recurrence of this disease. Silay et al. maintained that UVJO exerted a certain impact on postoperative remission of UPJO (34). In this study, the risk factors related to the prognosis and recurrence of UPJO were selected based on the above reports and on research into salvage surgery for recurrent UPJO.

TABLE 1 Relevant data on the recurrence of UPJO. The table lists the indicators included in this study; under the threshold of P < 0.05, indicators closely related to postoperative recurrence of UPJO were selected through univariate and multivariate logistic regression analyses.
FIGURE 2 Nomogram predicting postoperative recurrence of UPJO based on preAPD, preUTI, RPT, MAP score, and surgeon proficiency. "Odds" represents the probability of UPJO recurrence corresponding to the obtained score. The patient's value is located on the axis of each variable; a line is then drawn upwards at a 90° angle to determine the number of points for that variable. The sum of these points is located on the total-score axis, and a line is drawn downwards at a 90° angle to the UPJO recurrence-risk axis to determine the likelihood of recurrence.

Additionally, these authors only adopted the statistical methods of cohort studies to explore the risk factors for the prognosis and recurrence of UPJO; they did not apply systematic statistical methods, nor did they carry out validation. Therefore, more rigorous, systematic, and convincing statistical methods, such as clinical prediction models, were employed here to analyze all statistical indicators. Under these circumstances, the indicators related to UPJO recurrence can be explored more comprehensively, and a more practical clinical prediction model can be established.

In recent years, clinical prediction models have been used in clinical research in the form of nomograms (35). In addition to the above applications, Ruo-Yang Chen, Jie Wu, Yu-Xiang Song, and other investigators have applied nomograms in clinical research (36-38) and constructed reliable clinical prediction models. In this study, the data of 852 patients with UPJO were collected to construct a clinical prediction model for postoperative recurrence of UPJO. Five risk factors for the prognosis and recurrence of UPJO were screened by univariate and multivariate logistic regression analyses: the APD of the renal pelvis, RPT, MAP score, preoperative UTI, and surgeon proficiency. A nomogram was then constructed based on the multivariate logistic regression results. Moreover, the ROC curve was plotted to verify the discrimination of the model, with an AUC of 0.883; all combinations of sensitivity and specificity over the whole probability range are included in the AUC calculation, and the results indicated that the model had favorable discrimination. Furthermore, a calibration curve was plotted to evaluate the fit of the model, which was found to be high, suggesting that there was no significant systematic difference between the internally resampled data and the clinical prediction model; common calibration errors were not observed. Taken together, these results demonstrated that the model has high clinical application value.
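For illustration, the graphical point-summing described in the Figure 2 legend above is equivalent to evaluating the fitted logistic model: the total score maps linearly to the linear predictor, which the inverse-logit converts to a probability. A hypothetical example in R (the intercept and coefficients below are invented for illustration and are not the fitted values, which are reported in Table 1):

# Hypothetical coefficients; real values come from the fitted model.
lp <- -2.8 + 0.45 * 3.2 + 0.90 * 1 + (-1.2) * 1.1 + 0.35 * 3 + (-0.8) * 1
# preAPD = 3.2 cm, preUTI = yes, RPT = 1.1 cm, MAP = 3, proficient surgeon
plogis(lp)  # inverse-logit: predicted probability of recurrence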
Jiayi Li et al. (17) performed univariate and multivariate logistic analyses and concluded that patient weight, preoperative APD of the renal pelvis, and difficulty of ureteral D-J stent implantation were independent risk factors for surgical failure; they constructed a clinical prediction model with high diagnostic specificity and a high degree of fit. The APD of the renal pelvis was also selected as a risk factor for the prognosis and recurrence of UPJO in their study. The difference is that, in our study, the body weight of children did not appear to be an independent factor directly affecting the effectiveness of surgery; rather, weight gain with growth and development and the associated thickening of the perirenal fascia were the factors that may directly affect postoperative recurrence of UPJO. Additionally, the MAP score commonly used in adult urology was adopted for quantitative analysis, which more intuitively reflects how weight gain increases the difficulty of the surgical treatment of UPJO. Erik Drysdale et al. (16) adopted AI (deep learning) to identify the risk of UPJO recurrence after dismembered pyeloplasty. They found that APD and renal parenchymal function before and after surgery were positively correlated with postoperative recurrence and were independent risk factors for recurrence after dismembered pyeloplasty; their findings are consistent with our results.

The RPT can directly reflect the degree of renal compression and the severity of renal injury. Josefin Nordenstrom et al. confirmed that the severity of RPT damage was an independent risk factor for a fetus requiring surgical treatment after birth (39). This may support the view of this study that the severity of renal injury in children is closely related to a second operation. Our results also revealed that preoperative UTI was a risk factor for the recurrence of UPJO. Other researchers have likewise proposed that infection is an important reason for failure of the initial UPJO operation (40,41), possibly through anastomotic adhesion caused by infection (42). However, it has also been shown that the administration of antibiotics (43,44) is not effective in preventing UTI after surgical treatment of UPJO; further exploration is therefore required to determine whether antibiotics should be routinely used to control infection before surgery. Some researchers have even reported that infection is related only to ureteral width (7 mm) (45).

As revealed in several studies (46-53), open pyeloplasty, laparoscopic pyeloplasty, and robot-assisted laparoscopic pyeloplasty may produce different curative effects, and surgeon proficiency is one of the factors affecting these effects. Compared with conventional open surgery, the other two surgical methods place certain demands on surgeon proficiency. The consensus published by the European Association of Urology (EAU) indicated that surgeons can proficiently perform laparoscopic pyeloplasty after about 50 cases, and Niklas Pakkasjärvi (46) found that surgeons became adept at robot-assisted laparoscopic pyeloplasty after about 31 cases. This suggests that surgeon proficiency in both procedures should also be regarded as a risk factor for the prognosis and recurrence of UPJO (54,55). In this study, these two thresholds were used to classify surgeon proficiency.
The recurrence risk of patients with UPJO can be estimated by evaluating the APD of the renal pelvis, RPT, MAP score, preoperative UTI, and surgeon proficiency. As illustrated by the model's decision curve, the predicted recurrence risk is related to the overall clinical benefits and losses of intervention, which further indicates that the model can effectively predict the risks and benefits of readmission for patients with UPJO. The model can also improve patient benefit and reduce patient losses after actual clinical intervention. Applying this model in practice may therefore yield gains in clinical diagnosis and treatment, surgical procedures, surgical timing, and the improvement of surgical approaches. With the assistance of this model, patients can be provided with a more individualized diagnosis and treatment regimen, which may influence decision-making. This may also reduce unnecessary examinations and treatment procedures and thereby treatment costs, with potentially far-reaching social impact. In clinical practice, the average cost of readmission may be reduced based on this model, and patients' losses and expected benefits may be calculated more accurately; on this basis, a more practicable prediction model may be constructed for clinical use. Nevertheless, only internal resampling validation has been performed in this study; external validation has not been conducted, so the clinical practicability of the model has not been further verified. External validation in subsequent clinical studies is needed to confirm the clinical practicability of this model.

FIGURE 4 The calibration curves for the nomogram. The x-axis represents the nomogram-predicted probability, and the y-axis represents the actual probability of UPJO recurrence. Perfect prediction would correspond to the 45° dashed line (Ideal). The Apparent line represents the entire cohort (n = 852), and the blue solid line is bias-corrected by bootstrapping (B = 1,000 repetitions), indicating the observed nomogram performance.

FIGURE 5 Decision curve analysis for predicting recurrence of UPJO based on the nomogram. The figure represents the decision benefits.

FIGURE 3 The ROC curve obtained through internal validation after establishing the model. The AUC value in the figure indicates that the model has good discriminative ability.
Exergy-Based Multi-Objective Optimization of an Organic Rankine Cycle with a Zeotropic Mixture

In this paper, the performance of an organic Rankine cycle with a zeotropic mixture as a working fluid was evaluated using exergy-based methods: exergy, exergoeconomic, and exergoenvironmental analyses. The effect of system operating parameters and mixture composition on the organic Rankine cycle's performance was evaluated as well. The considered performance criteria were the exergy efficiency, the specific cost, and the specific environmental effect of the net power generation. A multi-objective optimization approach was applied for parametric optimization. The approach was based on the particle swarm algorithm to find a set of Pareto-optimal solutions, from which one final optimal solution was selected using a decision-making method. The optimization results indicated that the zeotropic mixture of cyclohexane/toluene had higher thermodynamic and economic performance, while the benzene/toluene zeotropic mixture had the highest environmental performance. Finally, a comparative analysis of zeotropic mixtures and pure fluids was conducted. The organic Rankine cycle with the mixtures as working fluids showed significant improvement in energetic, economic, and environmental performance.

Introduction

The organic Rankine cycle (ORC) has a large potential for electricity generation from heat sources with relatively low temperatures, such as geothermal, solar, biomass, and industrial waste heat. Different aspects of ORCs have been studied intensively. In an ORC, the selection of the working fluid is an essential factor that affects the cycle's performance [1], including its economic and environmental aspects.

For the bibliometric analysis of the state-of-the-art developments in the field of multi-objective optimization applied to ORCs, the Scopus database (April 2021) was used with the following algorithm. The initial keyword "ORC" was used with the following equivalents: "Organic Rankine cycle" = "Organic Rankine cycle (ORC)" = "Organic Rankine cycles" = "ORCs". Publications were considered only if they met the following criteria: (a) in English; (b) in an international journal; or (c) in the proceedings of an international conference. As a result, 3,058 publications were selected. Through the application of the filter "optimization", the number of publications was reduced to 2,228. To describe the state of the art in the field of the authors' research, a second step of filtering was applied. Finally, 456 papers were selected, each containing at least one of the following keywords: "working fluids", "economic analysis", "genetic algorithm", "binary mixture", and "multiobjective optimization". To identify the links among the keywords, the software VOSviewer [2] was employed. Figure 1 shows the co-occurrence of and links among the keywords. The evaluation of the obtained results demonstrates that within "multiobjective optimization", only thermodynamic and economic variables were considered. None of the evaluated papers addressed the evaluation of ORCs using thermodynamic, economic, and environmental aspects simultaneously (particularly based on the concept of exergy), nor included all three in the optimization. The genetic algorithm approach was applied in a larger number of papers than "multiobjective optimization". The detailed literature review of the most representative papers is as follows.
Note that the studies mentioned immediately below report results for ORCs with one-component working fluids and were not included in Figure 1. For example, the thermodynamic analysis and optimization of ORC performance with one-component working fluids are discussed in [3-7]. Several studies have evaluated the ORC using different performance criteria, such as energetic, economic, and environmental, using exergy tools [8,9]. Exergy can be combined with economic analysis and with environmental assessment; these combinations are called exergoeconomic and exergoenvironmental analysis, or exergy-based methods. In [4], a parametric optimization of an ORC using R123, R245fa, and isobutane as working fluids was performed from the thermodynamic and economic perspectives. The exergetic performance of an ORC with high-critical-temperature working fluids using genetic algorithm optimization was investigated in [10]. The thermodynamic and exergoeconomic performances of an ORC with several one-component working fluids were investigated and compared with those of the Kalina cycle and the trilateral power cycle.
The results reveal that, among the cycles studied, the ORC system is the most recommended for power generation from the economic perspective [11]. In [12], multi-objective optimization of an ORC with cyclohexane, benzene, and toluene as the working fluids using the exergy, exergoeconomic, and exergoenvironmental approaches was reported.

The following papers were included within Figure 1. The mismatch between the isothermal phase-change line of evaporators and condensers and the heat source and sink lines leads to large irreversibility in the two main heat exchangers [13]. Similar to refrigeration applications, different mixtures have been discussed for use as ORC working fluids. Zeotropic mixtures exhibit a temperature glide in the two-phase zone; therefore, they can be selected to bring the temperature profiles in the heat exchangers closer together [14]. The performance of the ORC using different zeotropic mixtures from the thermodynamic and thermoeconomic points of view is discussed in [1]; the results reveal that the ORC using mixtures generally demonstrates lower economic performance. A thermodynamic and thermoeconomic comparison of an ORC system with one-component working fluids and with mixtures is reported in [13]; the considered one-component working fluids had high and low critical temperatures, and the results demonstrate that the thermoeconomic performance of working fluids with high critical temperatures is better than that of fluids with low critical temperatures. A comparative study of one-component working fluids and mixtures for the ORC, from the energy and exergy viewpoints, was reported in [15]; the evaluated mixtures had lower efficiency than the one-component working fluids. In [16], performance analysis and parametric optimization of several zeotropic mixtures for an ORC using an exergy approach were performed; the mixture R245fa/R600a (0.9/0.1) was reported as the most advantageous. Thermodynamic analysis and multi-objective optimization of various ORC configurations using zeotropic mixtures were performed in [17]; the results indicated that zeotropic mixtures showed a higher performance than one-component working fluids. A comparison of the thermodynamic and exergoeconomic performances of a supercritical CO2 recompression cycle combined with a regenerative organic Rankine cycle using a zeotropic mixture as the working fluid was reported in [18]. In [19], a combined thermo-economic-environmental optimization and advanced exergy analysis were applied to a dual-loop organic Rankine cycle (DORC) using zeotropic mixtures; the payback period was selected as the economic criterion and the annual CO2 emission reduction as the environmental criterion, and higher performance was observed for the mixtures as ORC working fluids. Neither criterion, however, can be linked to exergy variables (therefore, [19] was not included in Figure 1).

As can be seen from the literature review, there are valuable research works that address the use of mixtures as working fluids for ORCs. However, to the best of the authors' knowledge, there are no research results regarding the exergoeconomic and exergoenvironmental evaluation of an ORC with mixtures as the working fluids. The main purpose of this study was to evaluate an ORC system with a zeotropic mixture as the working fluid for power generation using waste heat from a cement plant.
The zeotropic mixtures under study were toluene/cyclohexane and toluene/benzene. For the evaluation, exergy-based methods were applied, and for the optimization, a multi-objective optimization approach was used.

System Description

A flow diagram of the proposed ORC system is given in Figure 2a. The system consists of four components: a generator (a combination of a preheater and an evaporator), a turbine, a condenser, and a pump. The processes within the ORC are illustrated in the temperature-entropy (T-s) diagram in Figure 2b. The pump pressurizes the working fluid (state 2) to the evaporator pressure. The working fluid is heated and evaporated by absorbing heat from the heat source. The working fluid vapor (state 4) enters the turbine and generates shaft work, and the low-pressure vapor (state 5) leaves the turbine for the condenser. In the evaluated ORC system, the heat source is exhaust gas at a temperature of 350 °C from a technological process [20]. The utilization of waste technological heat requires the use of an intermediate working fluid (thermal oil). The considered working fluids, zeotropic mixtures with high critical and boiling temperatures, are also characterized by "dry" behavior and high thermodynamic performance in ORC applications [21]. This choice was based on the slope of the saturated-vapor line of the working fluid on a T-s diagram and on the temperature level of the heat source [6,22]. Mixtures of toluene with cyclohexane and of toluene with benzene at different concentrations are discussed in the present study.

System Modeling and Analysis

The ORC system model was developed using MATLAB software. Refprop software and the equations of [23] were used for calculating the properties of the working fluids. Dowtherm was chosen as the intermediate heat transfer fluid, and all of its properties were calculated using the equations in [24]. The simulation model was developed with the following assumptions: (a) steady-state operating conditions; (b) pressure drops and exergy losses within the heat exchangers are neglected; and (c) the mass-fraction shift of the zeotropic mixture is neglected for each composition.

Thermodynamic Modeling

According to Figure 2, the thermodynamic model of the ORC is described below. All components were simulated under the assumption of adiabatic operation. With the state numbering of Figure 2, the pump and turbine powers follow from the enthalpy differences across them:

$\dot{W}_{pump} = \dot{m}_{wf}\,(h_2 - h_1)$, $\quad \dot{W}_{tur} = \dot{m}_{wf}\,(h_4 - h_5)$

The heat balance equations in the intermediate heat exchanger (IHE), evaporator (evp), preheater (pre), desuperheater (desp), and condenser (con) can each be expressed as an energy balance of the form

$\dot{Q}_k = \dot{m}\,(h_{in} - h_{out})$

written for the hot and cold streams of the respective exchanger. All heat exchangers are of the shell-and-tube type, and the size of these components (i.e., the heat transfer surface $A_k$) is calculated with the help of the heat transfer coefficient correlations $U_k$ [25,26] and the logarithmic mean temperature difference method ($\mathrm{LMTD}_k$): $\dot{Q}_k = U_k A_k\,\mathrm{LMTD}_k$.
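As an illustrative numerical sketch of this sizing step (shown in R for brevity; the study's model itself was implemented in MATLAB, and the numbers below are placeholders rather than values from the paper):

# Heat-exchanger sizing by the LMTD method: A = Q / (U * LMTD).
# Placeholder inputs; the paper's correlations [25,26] supply U_k.
lmtd <- function(dT1, dT2) (dT1 - dT2) / log(dT1 / dT2)
hx_area <- function(Q_kW, U_kW_m2K, dT1, dT2) Q_kW / (U_kW_m2K * lmtd(dT1, dT2))
hx_area(Q_kW = 500, U_kW_m2K = 0.8, dT1 = 40, dT2 = 15)  # surface area in m^2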
Exergy Analysis

The exergy analysis was performed using the "exergy of fuel, $\dot{E}_{F,k}$" and "exergy of product, $\dot{E}_{P,k}$" approach. The irreversibilities are expressed through the exergy destruction, $\dot{E}_{D,k}$. The exergy balance for each system component is written as [27]:

$\dot{E}_{F,k} = \dot{E}_{P,k} + \dot{E}_{D,k}$

Exergoeconomic Analysis

To proceed with the exergoeconomic analysis, a cost balance is written for the k-th component of the ORC; where necessary, auxiliary equations are added to the corresponding cost balance [27] using the P-rule and/or the F-rule:

$\sum_e \dot{C}_{e,k} + \dot{C}_{w,k} = \sum_i \dot{C}_{i,k} + \dot{C}_{q,k} + \dot{Z}_k \qquad (11)$

with the exergy costing principle $\dot{C}_j = c_j \dot{E}_j$. The term $\dot{Z}_k$ represents the total capital investment cost rate; it was determined according to the cost equations reported in [12]. The cost balances of all components must be solved simultaneously; a linear system of equations was developed by combining Equation (11) with the auxiliary equations. The matrix form of the cost equations is given in Figure 3.

Exergoenvironmental Analysis

The methodology of the exergoenvironmental analysis is similar to that of the exergoeconomic analysis [27]: it combines exergy analysis and life cycle assessment (LCA). The environmental balances can be written as:

$\sum_e \dot{B}_{e,k} + \dot{B}_{w,k} = \sum_i \dot{B}_{i,k} + \dot{B}_{q,k} + \dot{Y}_k \qquad (13)$

Correlations were developed for calculating the environmental impact of the components ($\dot{Y}_k$) in the construction phase. The LCA was conducted according to Eco-indicator 99 [28]. A linear system of equations was developed by combining Equation (13) with the auxiliary equations; Figure 4 shows the matrix formulation of the environmental impact equations.
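Since all component balances are coupled, the unknown unit costs are obtained by solving a single linear system. A minimal sketch in R (the 3x3 matrix below is an invented placeholder purely to show the mechanics; the actual matrix follows Figure 3):

# Solve the assembled cost-balance system A %*% c = b for the unknown
# unit costs c. All entries here are invented placeholders.
A <- matrix(c( 3, -1,  0,
               0,  3, -1,
              -1,  0,  3), nrow = 3, byrow = TRUE)
b <- c(5, 2, 8)        # right-hand side: known Z_k terms and boundary costs
c_units <- solve(A, b) # one unit cost per stream
c_units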
Table 1 presents the exergy, cost, and environmental balance equations for the components of the evaluated ORC system.

System Optimization

The ORC system was optimized using a multi-objective approach based on the particle swarm algorithm [29], with the Pareto frontier obtained for the total system. The following three objective functions were considered in this study: the exergy efficiency of the system; the cost per exergy unit of the power generated; and the environmental impact per exergy unit of the power generated.

Results and Discussion

The ORC model used was validated against the data reported in [30] for a basic ORC system with the R245fa/R600a zeotropic mixture as the working fluid. The temperature and mass flow rate of the heat source were set to 120 °C and 1 kg/s. The pinch temperature differences in the evaporator and the condenser were taken as 10 °C and 5 °C, respectively, and the turbine and pump efficiencies were assumed to be 85% and 65%, respectively. As shown in Table 2, the present results and the data from [30] are in good agreement.

Parametric Study

In order to investigate the effect of certain parameters on the performance of the ORC, a parametric study was carried out. The key input parameters and underlying assumptions used to simulate the ORC are provided in Table 3.

Table 3. A summary of the major parameters for the simulation of the ORC [12].

The effect of the working fluid mass fraction on the ORC performance is shown in Figure 5. For the cyclohexane/toluene and benzene/toluene mixtures, the exergy efficiency decreased with increasing mass fraction of toluene, while the cost per unit of exergy for both mixtures increased. According to the parametric study reported in [12], an ORC using cyclohexane or benzene as a pure fluid was more effective than one using toluene in thermodynamic and economic terms; increasing the mass fraction of toluene therefore degrades the exergetic and exergoeconomic performance of the ORC system. In contrast, the environmental impact decreased as the mass fraction of toluene increased for both mixtures (Figure 5), because the exergoenvironmental performance of the ORC with toluene as the working fluid is better than that with cyclohexane or benzene as pure fluids [12].

Figure 6 shows the variation of the objective functions with turbine inlet pressure for the working fluids. The exergy efficiency (Figure 6a) was maximized, and the cost per exergy unit (Figure 6b) minimized, at a particular value of the turbine inlet pressure, while the environmental impact decreased with increasing turbine inlet pressure. These results exhibit the same characteristics as those shown in previous work [12]. Figure 6 also shows that the best exergetic and exergoeconomic performances were observed for the cyclohexane/toluene mixture, while the best exergoenvironmental performance was obtained for the benzene/toluene mixture.
The variation of the ORC performance with the heat transfer fluid temperature is shown in Figure 7 for both mixtures. As the heat transfer fluid temperature increased, the exergy efficiency and the environmental impact increased, while the cost decreased. Figure 7 also indicates that when the temperature was below 270 °C, both mixtures offered the same performance.

Optimization Results

A parametric optimization was conducted using the MOPSO (multi-objective particle swarm optimizer) algorithm. Particle swarm optimization is one of the most efficient evolutionary optimization algorithms and is widely used to solve multi-objective optimization problems. The technique is based on the evolution of a population of solutions, called particles, that move within the search space. The basic parameters of the algorithm were specified according to the values presented in [12]. Figures 8 and 9 show the Pareto frontiers of the multi-objective optimization using cyclohexane/toluene and benzene/toluene at different mass fractions. Every point on the Pareto frontier is potentially an optimal solution; therefore, one final solution must be selected.
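To make the notion of the Pareto frontier concrete, the sketch below (in R, with invented candidate points) filters a set of designs down to the non-dominated ones for the three objectives used here: maximize exergy efficiency, minimize specific cost, and minimize environmental impact.

# A design i is dominated if some design j is at least as good in all three
# objectives and strictly better in at least one; the frontier keeps the rest.
pareto_front <- function(eff, cost, env) {
  n <- length(eff)
  keep <- rep(TRUE, n)
  for (i in seq_len(n)) {
    for (j in seq_len(n)) {
      if (i != j &&
          eff[j] >= eff[i] && cost[j] <= cost[i] && env[j] <= env[i] &&
          (eff[j] > eff[i] || cost[j] < cost[i] || env[j] < env[i])) {
        keep[i] <- FALSE
        break
      }
    }
  }
  which(keep)
}
# Invented example with three candidate operating points; point 3 is dominated
pareto_front(eff = c(0.30, 0.33, 0.29), cost = c(18, 20, 19), env = c(9, 8, 10))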
In the present study, the final optimal design point and the optimal zeotropic mixture were selected through a fuzzy-based decision-making mechanism [31]. The thermodynamic properties and optimization results for the zeotropic mixtures are given in Tables 4 and 5. It should be noted that the best results were found for both mixtures at a concentration of 0.9/0.1. Referring to Table 5, the exergy efficiency of the ORC using cyclohexane/toluene was higher than that using benzene/toluene, because the cyclohexane/toluene mixture exhibited the highest turbine inlet pressure; as noted in previous work, a working fluid with a higher turbine inlet pressure provides the highest power and exergy efficiency [12]. The cyclohexane/toluene mixture also provides the best result from the exergoeconomic viewpoint, while the best exergoenvironmental performance was obtained for the benzene/toluene mixture.

When comparing the performance of pure fluids and mixtures, the zeotropic mixtures exhibited lower turbine inlet pressures, which may be desirable because high pressures lead to mechanical constraints and may therefore require expensive equipment [32]. The exergetic performances of the zeotropic mixtures were slightly higher than those of pure cyclohexane and pure benzene. Compared with pure toluene, a significant increase in exergy efficiency was observed: the exergetic performance improved by 53.0% and 43.5% when toluene was mixed with cyclohexane and benzene, respectively. From Table 5, the zeotropic mixtures of cyclohexane/toluene and benzene/toluene also had the best exergoeconomic performance in comparison with the pure fluids: the improvement in exergoeconomic performance for cyclohexane and benzene when mixed with toluene was 4.9%, while the improvement was 14.6% and 13.0% for toluene when mixed with cyclohexane and benzene, respectively. The zeotropic mixtures likewise showed a significant improvement in exergoenvironmental performance: 8.2% and 10.2% for cyclohexane and benzene, respectively, and 14.8% and 18.8% for toluene when mixed with cyclohexane and benzene, respectively.

In Table 6, the results obtained from the exergy, exergoeconomic, and exergoenvironmental analyses are reported. The exergy analysis indicates that, for both working fluids, the highest exergy destruction occurred in the heat exchangers. Based on the exergoeconomic analysis, the heat exchangers also had the highest cost rate of exergy destruction ($\dot{C}_D$), so more attention to these components is warranted.

Conclusions

In this research, exergy, exergoeconomic, and exergoenvironmental analyses were applied to evaluate the performance of an ORC system using zeotropic mixtures as working fluids. Parametric studies were carried out to evaluate the influence of operating parameters on the exergetic, economic, and environmental performance of the evaluated system. Multi-objective optimization was applied to establish the optimal performance of the ORC system with two zeotropic mixtures (cyclohexane/toluene and benzene/toluene).
A comparison between the performance of pure and mixture working fluids was discussed, and the following conclusions were obtained:

- The application of zeotropic mixtures as ORC working fluids led to an increase in exergetic, exergoeconomic, and exergoenvironmental performance compared with using their pure constituents;
- The heat exchangers were the most important ORC system components from the exergy, exergoeconomic, and exergoenvironmental points of view;
- The mass fractions of the working fluids within a zeotropic mixture, the turbine inlet pressure, and the heat transfer fluid temperature had a significant effect on the exergetic, exergoeconomic, and exergoenvironmental performance of the ORC system;
- Cyclohexane/toluene (mass fraction 90/10) and benzene/toluene (mass fraction 90/10) are recommended as the optimal mixtures for the selected operating conditions;
- The mixture of cyclohexane and toluene is the better choice if only the energetic and economic criteria are considered, whereas the benzene/toluene mixture is the beneficial choice for fulfilling the environmental criterion.

Conflicts of Interest: The authors declare no conflict of interest.
Shark liver oil supplementation enriches endogenous plasmalogens and reduces markers of dyslipidemia and inflammation

Abstract

Plasmalogens are membrane glycerophospholipids with diverse biological functions. Reduced plasmalogen levels have been observed in metabolic diseases; hence, increasing their levels might be beneficial in ameliorating these conditions. Shark liver oil (SLO) is a rich source of alkylglycerols that can be metabolized into plasmalogens. This study was designed to evaluate the impact of SLO supplementation on endogenous plasmalogen levels in individuals with features of metabolic disease. In this randomized, double-blind, placebo-controlled crossover study, the participants (10 overweight or obese males) received 4 g Alkyrol® (purified SLO) or placebo (methylcellulose) per day for 3 weeks, followed by a 3-week washout phase, and were then crossed over to 3 weeks of the alternate placebo/Alkyrol® treatment. SLO supplementation led to significant changes in the plasma and circulating white blood cell lipidomes, notably increased levels of plasmalogens and other ether lipids. In addition, SLO supplementation significantly decreased the plasma levels of total free cholesterol, triglycerides, and C-reactive protein. These findings suggest that SLO supplementation can enrich plasma and cellular plasmalogens and that this enrichment may provide protection against obesity-related dyslipidemia and inflammation.

Supplementary key words: diet and dietary lipids • plasmalogens • lipidomics • lipid metabolism • inflammation • metabolic disease • immunometabolism

Metabolic disease refers to a group of complex chronic conditions including obesity, type 2 diabetes, cardiovascular disease, and certain forms of cancer (1). These disorders share some common pathogenic features, including altered lipid metabolism or dyslipidemia, which often leads to lipid accumulation at diverse cellular and tissue locations. Such aberrant lipid accumulation alters cell and/or tissue function, inducing events such as oxidative stress and inflammation that contribute to disease pathogenesis.
Lipidomic profiling provides the opportunity to identify novel lipid signatures in metabolic diseases and to explore their relationship with disease pathogenesis (2). Using this approach, a deficit of circulating plasmalogens has been identified as a feature of metabolic disease that is independent of age, sex, and BMI in multiple population and clinical cohorts (3-7).

Plasmalogens have diverse biological functions. They are important constituents of the plasma membrane and can modulate its biophysical properties (10). They are also considered endogenous antioxidants because their vinyl ether linkage is highly susceptible to attack by reactive oxygen species and could therefore help protect other biomolecules from oxidative damage (11,12). In addition, plasmalogens may regulate cholesterol (COH) metabolism (13,14) and immune responses (15-18). Plasmalogens are present in all mammalian tissues; however, their abundance varies across tissues and cell types, with relatively high levels in the brain, heart, kidney, skeletal muscle, and certain immune cell types, but lower levels in the liver and small intestine (9,19).

Plasmalogen biosynthesis involves a complex metabolic pathway through the peroxisome and endoplasmic reticulum (20) (Fig. 2). The rate-limiting steps occur in the peroxisome but can be bypassed through oral administration of alkylglycerols (Figs. 1A and 2). These alkylglycerols can be incorporated directly into the biosynthetic pathway (21) (Fig. 2) and lead to an increase in circulating and tissue plasmalogens (21,22). Although alkylglycerols are present in our diet, the levels in typical Western diets are insufficient to significantly boost our plasmalogen levels.

*For correspondence: Peter J. Meikle, peter.meikle@baker.edu.au.

Shark liver oil (SLO), a dietary supplement rich in alkylglycerols in the form of monoalkyl-diacylglycerols (TG(O)) (Fig. 1B), could be used to increase endogenous plasmalogen levels (Fig. 2). SLO has been used to treat a number of conditions, including lung inflammation (23), alimentary tract diseases (24), lymphadenopathy (25), cancer (26), and dermatitis (27), and to help with wound healing (23). SLO supplementation also improved immune function in surgical patients (28). However, the mechanistic basis of the beneficial effects observed with SLO supplementation is not well defined, possibly because of the lack of a proper understanding of the impact of SLO alkylglycerols on endogenous lipid metabolism. Here, we report on the characterization of the alkylglycerols contained within SLO and the effects of SLO supplementation on the plasma and cellular lipidome in overweight or obese individuals.

Fig. 2. Plasmalogen biosynthesis and modulation by alkylglycerol precursors. Dietary alkylglycerols can bypass the rate-limiting peroxisomal biosynthetic steps (red pathway). Metabolites are shown in red and black, and enzymes are shown in blue.
AADHAP-R, alkyl/acyl-DHAP-reductase; AAG3P-AT, alkyl/acyl-glycero-3-phosphate acyltransferase; ADHAP-S, alkyl DHAP synthase; AG kinase, alkylglycerol kinase; CoA, coenzyme A; CoA-IT, coenzyme A-independent transacylase; C-PT, choline phosphotransferase; iphospholipase A2, calcium-independent phospholipase A2; Δ1-desaturase, plasmanylethanolamine desaturase; DHAP, dihydroxyacetone phosphate; DHAP-AT, DHAP acyltransferase; E-PT, ethanolamine phosphotransferase; Far1/2, fatty acyl-CoA reductase 1 or 2; GPC, glycerophosphocholine; GPE, glycerophosphoethanolamine; PC, phosphatidylcholine; PE, phosphatidylethanolamine; PEMT, phosphatidylethanolamine N-methyltransferase; PH, phosphohydrolase; PLC, phospholipase C. metabolic syndrome according to the strict International Diabetes Federation criteria (29); however, all the participants had at least two features of metabolic syndrome. Written informed consent was obtained from all study participants before the commencement of the study. This study was performed in accordance with the ethical principles set forth in the Declaration of Helsinki and received approval from the Alfred Hospital Ethics Committee (approval number: 436/15). Participants were randomized into placebo or treatment arms and received 4-g Alkyrol® (purified SLO; Eurohealth, Ireland) per day or placebo (methylcellulose) for 3 weeks followed by a 3-week washout phase and were then crossed over to 3 weeks of the alternate placebo/Alkyrol® treatment. Methylcellulose was chosen as a placebo to avoid possible confounding effects of an oil-based placebo. Both Alkyrol® and methylcellulose capsules were prepared to have similar visual appearance. Participants were instructed to keep their dietary composition and food intake constant during the two treatment phases. Fasting blood samples were collected at the start and end of each intervention (Fig. 3). Isolation of plasma and white blood cells from whole blood Participants' blood samples were collected in K3-EDTA tubes and centrifuged at 1,711 g for 15 min at room temperature. The top plasma layer was aspirated, 1 μl of 100 mM butylhydroxytoluene per milliliter of plasma was added, and the plasma was stored at −80 • C. The buffy layer was mixed with 8 ml of PBS and layered on top of 5 ml of Ficoll-Paque and centrifuged at 400 g for 30 min at room temperature with the lowest brake. The resulting upper layer (containing plasma and platelets) was discarded, and the thin cloudy layer of white blood cells was collected and transferred to a fresh tube. PBS (8 ml) was added, and the sample was centrifuged at 250 g for 10 min at room temperature with the highest brake. The cells were then resuspended in 1.5 ml of PBS and centrifuged at 100 g for 10 min at room temperature. After centrifugation, the supernatant was discarded and the white blood cell pellet was suspended in 400 μl PBS and stored at −80 • C. Clinical measurements The fasting plasma levels of glucose, COH, triglycerides, HDL-C, LDL-C, insulin, and high-sensitivity C-reactive protein (hsCRP) were measured using commercially available kits on a COBAS Integra 400 Plus blood chemistry analyzer (Roche Diagnostics, Australia) following standard procedures. Remnant COH was estimated as the total COH minus LDL-C minus HDL-C, non-HLD-C was calculated as total COH minus HDL-C, and homeostatic model assessment for insulin resistance was calculated as fasting insulin (mIU/l) multiplied by fasting glucose (mmol/l) and then divided by 22.5. 
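These derived measures are simple arithmetic on the measured values. As a brief illustration in R (the language used for the study's analyses; the function name and example inputs below are our own and purely hypothetical):

```r
# Derived clinical measures as defined above.
# Units: cholesterol fractions in mmol/l; insulin in mIU/l; glucose in mmol/l.
derive_clinical <- function(total_chol, ldl_c, hdl_c, insulin, glucose) {
  data.frame(
    remnant_chol = total_chol - ldl_c - hdl_c, # remnant cholesterol
    non_hdl_c    = total_chol - hdl_c,         # non-HDL cholesterol
    homa_ir      = insulin * glucose / 22.5    # HOMA-IR
  )
}

# Illustrative values only (not taken from the study data):
derive_clinical(total_chol = 5.2, ldl_c = 3.1, hdl_c = 1.1,
                insulin = 10, glucose = 4.9)
```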
Fig. 3. Study design for shark liver oil supplementation in humans. Participants were recruited into the study and asked to attend an initial screening. At the screening visit, participants underwent a medical examination to assess their eligibility. Eligible participants were recalled, within three weeks, and randomized to take either Alkyrol® (shark liver oil gel caps) or placebo for three weeks. At the three-week visit, the participants discontinued the treatment/placebo for a three-week washout period. At visit 4, the participants commenced the alternative treatment for 3 weeks. At visit 5, the participants underwent the same medical examination as at visit 1 to assess any change throughout the study period. Fasting blood samples from each participant were collected at the initial screening and at the start and end of each intervention.

The measurement of tumor necrosis factor alpha, monocyte chemoattractant protein-1, and vascular cell adhesion protein 1 levels in plasma was performed by Cardinal Bioresearch, Queensland, Australia. Briefly, 100 μl of whole blood was added to 5 ml of the lysis buffer (BD Pharm Lyse) and then incubated in the dark for 5 min. The sample was then added to 10 ml of the wash buffer (9:1 ratio of PBS and fetal bovine serum) and centrifuged at 300 g for 5 min at 4 °C. The resulting pellet was then resuspended in the wash buffer, placed in an Eppendorf tube, and centrifuged (300 g, 5 min, room temperature). Antibodies were then added to the samples and incubated for 30 min in the dark. The samples were washed with PBS and centrifuged (300 g, 5 min, room temperature) before transferring the cells to a fluorescence-activated cell sorting tube. Finally, the cells were analyzed using the BD FACSCanto II flow cytometer. The following gating strategy was used to define the various monocyte populations: white blood cells were initially gated based on size and granularity (forward scatter and side scatter). To identify monocytes, cells were gated on the basis of being HLA-DR+ and cell-lineage marker (CD56, CD2, CD19, NKp46, and CD15) negative. HLA-DR+ cells were subsequently assessed for CD14 and CD16 expression, with classical monocytes defined as CD14++CD16−, intermediate monocytes defined as CD14+CD16+, and nonclassical monocytes defined as CD14dimCD16++, as described previously (30). The flow cytometry data were analyzed using the BD FACSDiva software.
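The gating strategy above amounts to a classification rule on CD14/CD16 expression once HLA-DR+, lineage-negative cells have been selected. A hypothetical sketch of that rule follows; the categorical expression levels stand in for manually drawn gates, since in practice gates are set on the cytometry plots rather than from fixed labels:

```r
# Classify a monocyte by its CD14/CD16 expression level.
# Levels ("high", "pos", "dim", "neg") are stand-ins for manually drawn gates.
classify_monocyte <- function(cd14, cd16) {
  if (cd14 == "high" && cd16 == "neg") {
    "classical"       # CD14++ CD16-
  } else if (cd14 == "pos" && cd16 == "pos") {
    "intermediate"    # CD14+ CD16+
  } else if (cd14 == "dim" && cd16 == "high") {
    "nonclassical"    # CD14dim CD16++
  } else {
    "unclassified"
  }
}

classify_monocyte("high", "neg") # "classical"
classify_monocyte("dim", "high") # "nonclassical"
```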
Lipidomic analysis
Characterization of TG(O) species in SLO. Alkyrol® was diluted 1:50,000 in chloroform:methanol (1:1) and infused into a QTRAP 4000 triple quadrupole mass spectrometer (AB Sciex) using a Harvard syringe pump at a flow rate of 20 μl/min, and a Q1 scan in positive-ion mode (mass range: 300-1,000 Da) was performed. The most abundant molecular species in SLO were then identified based on the peak intensity from the Q1 spectrum. For relative quantification of these species, Alkyrol® was diluted 10,000 times in chloroform:methanol (1:1), and 10 μl of diluted Alkyrol® was then mixed with 10 μl of internal standard mix (supplemental Table S1), 40 μl of water-saturated butanol, and 40 μl of methanol with 10 mM ammonium formate. The resultant mixture was then analyzed using the method described in the LC/MS/MS section.

Characterization of alkylglycerol composition in SLO. Alkylglycerols are present in Alkyrol® as TG(O), that is, consisting of one alkyl chain at the sn-1 position and two acyl chains at the sn-2 and sn-3 positions. The 1-O-alkylglycerol composition of Alkyrol® was determined after alkaline hydrolysis of the acyl chains. In brief, Alkyrol® was diluted 10,000 times with chloroform:methanol (1:1), and 10 μl of the diluted sample was mixed with 100 μl of 0.8 M sodium hydroxide in methanol and then incubated at 37 °C for 2 h. Then, 10 μl of 8 M formic acid was added to stop the hydrolysis reaction. Next, 10 μl of internal standard mix (supplemental Table S1) was added, and lipids were extracted following the Folch extraction procedure (31) and finally reconstituted with 50 μl of water-saturated butanol and 50 μl of methanol with 10 mM ammonium formate. The extract was then analyzed using the method described in the LC/MS/MS section.

Extraction of lipids from plasma and white blood cells. Lipids were extracted using a single-phase chloroform:methanol (2:1) extraction protocol as described previously (5). Briefly, 10 μl of plasma or 20 μl of white blood cell pellet (suspended in PBS) was combined with 20 volumes (200 or 400 μl) of chloroform:methanol (2:1) and 10 μl of the internal standard mix (supplemental Table S1) and then vortexed. Samples were mixed in a rotary mixer for 10 min, sonicated for 30 min, and then allowed to stand for 20 min at room temperature. Samples were then centrifuged (16,000 g, 10 min, 20 °C), and the supernatant was dried under a stream of nitrogen at 40 °C. The extracted lipids were finally resuspended in 50 μl of water-saturated butanol and 50 μl of methanol containing 10 mM ammonium formate.

LC/MS/MS. Analyses of lipids were performed on an Agilent 1200 HPLC system coupled to an AB Sciex QTRAP 4000 triple quadrupole mass spectrometer using scheduled multiple reaction monitoring experiments described previously (32). LC separation was performed on a 2.1 × 100 mm C18 Poroshell column (Agilent) at 400 μl/min. The following gradient conditions were used: 10% B to 55% B over 3 min, then to 70% B over 8 min, to 89% B over 0.1 min, and finally to 100% B over 3.3 min. The solvent was then held at 100% B for 1 min. Equilibration was as follows: the solvent was decreased from 100% B to 10% B over 0.1 min and held for an additional 4.5 min. The solvent system consisted of solvent A, 50% water/30% acetonitrile/20% isopropanol (v/v/v) containing 10 mM ammonium formate, and solvent B, 1% water/9% acetonitrile/90% isopropanol (v/v/v) containing 10 mM ammonium formate. The conditions for the MS/MS of each lipid class are provided in supplemental Table S1. The concentrations of individual lipid species were calculated by taking the ratio of the area under the curve of the lipid of interest to that of the internal standard of the corresponding lipid class (supplemental Table S1) and then multiplying this ratio by the amount of internal standard added to the sample. Response factors were also applied for some lipid species (supplemental Table S2) to better estimate true lipid concentrations, as described previously (6). Lipid class concentrations were calculated as the sum of the individual species within that class. TG(O) species were measured both by single ion monitoring and by neutral loss of specific fatty acyl/alkyl chains. As the single ion monitoring measurements captured more diverse species, they were used for the calculation of the total TG(O) concentration.
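The quantification described above is single-point internal-standard calibration: a ratio of peak areas scaled by the spiked standard amount, optionally adjusted by a response factor. A hedged sketch follows; the multiplicative handling of the response factor is an assumption, since the text does not state in which direction the published factors were applied:

```r
# Quantify a lipid species against its class internal standard (ISTD).
# area, istd_area: integrated peak areas (area under the curve);
# istd_amount: amount of ISTD spiked into the sample (e.g., in pmol);
# response_factor: optional species-specific correction (1 when none applies).
quantify_lipid <- function(area, istd_area, istd_amount, response_factor = 1) {
  (area / istd_area) * istd_amount * response_factor
}

quantify_lipid(area = 2.4e6, istd_area = 1.2e6, istd_amount = 200) # 400 (pmol)

# A lipid class concentration is then the sum of its species concentrations:
sum(c(400, 120, 35)) # illustrative species values for one class
```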
Statistical analysis
Lipidomics data were either used as concentrations or as concentrations normalized to the total PC concentration. Zero values (i.e., values below the detection limit) and values more than 4.5 standard deviations below the mean of the considered lipid (i.e., extreme low outliers due to measurement errors around the detection limit) were set to missing. Values were log10-transformed before analyses. All missing values were then single-imputed using sample-wise k-nearest neighbor imputation (using k = 5, given that only 10 participants were available at each time point). Modeling results for the lipids with imputed values may thus be considered overconfident, although these results aligned well with the results for other species in these classes.

For each lipid species or class, as well as for clinical measures, blood cell counts, monocyte subpopulations, and inflammatory markers, we posited linear mixed models explaining (log10) levels by an overall intercept, a treatment effect (either none or placebo/SLO at visits V3 and V5), and a carryover effect (either none or placebo/SLO at visit V4 only), with a random intercept for each participant. Contrasts for treatment and carryover effects were designed with across-group averaging vectors for use in the type-III ANOVAs below. The treatment contrast then compared placebo with baseline and SLO with placebo, whereas the carryover contrast compared placebo with none and SLO with none. For each species or total, the inclusion or exclusion of treatment or carryover effects was decided by an ad hoc forward stepwise feature selection process: first, treatment was considered and was included only if the type-III ANOVA P value for that term was below 0.10; then, carryover was considered in a similar way. Our modeling thus allowed carryover effects to be estimated independently of treatment effects. We allowed this because we reasoned that not all long-term effects (i.e., at visit V4) of supplementation would necessarily reflect the direct effect of supplementation seen at visit V3: longer response times (i.e., the time taken for changes to become visible in the lipidome exceeding the 3 weeks between visits V2 and V3), compensatory mechanisms, slow metabolic modifications, behavioral changes, and more might impact any carryover effect in a way unrelated to the treatment effect seen at V3. The similar pre-SLO and pre-placebo plasma ether lipid levels in the participants (supplemental Fig. S1), irrespective of their SLO treatment order (first or second), indicate that the 3-week washout period was sufficient.

We then extracted the beta coefficients, 95% confidence intervals, and corresponding post hoc P values from each model. We applied Benjamini-Hochberg (BH) multiple testing correction to these P values across lipid species and classes separately (because of the selection process above, the impact of the multiple testing correction was thus reduced for model terms that were less frequently included). As the outcome was on a log scale, beta coefficients (and their confidence intervals) were transformed into fold changes by a power transformation (fc = 10^beta). The mean percentage changes in the alkenyl chain composition of plasma alkenyl phosphatidylethanolamine (PE plasmalogen or PE(P)) after Alkyrol® and placebo treatments were compared with repeated-measures ANOVA using the combined data from the two intervention arms (visit 2 to 3 and visit 4 to 5), taking into account treatment (as a between-subject variable) and treatment order. Corrected P values less than 0.05 were considered statistically significant.
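In outline, the per-lipid model described above can be sketched as follows. This is a minimal, self-contained example with synthetic data and default treatment contrasts, whereas the published analysis used custom across-group averaging contrasts and the stepwise term-selection loop, both omitted here:

```r
library(lme4)     # linear mixed models
library(lmerTest) # Satterthwaite type-III ANOVA P values for lmer fits

# Synthetic data standing in for one lipid species: 10 participants, 4 visits,
# with treatment coded at V3/V5 and a carryover effect coded at V4 only.
set.seed(1)
df <- data.frame(
  participant = factor(rep(1:10, each = 4)),
  treatment   = factor(rep(c("none", "SLO", "none", "placebo"), times = 10),
                       levels = c("none", "placebo", "SLO")),
  carryover   = factor(rep(c("none", "none", "SLO", "none"), times = 10),
                       levels = c("none", "SLO")),
  conc        = rlnorm(40, meanlog = 1, sdlog = 0.3)
)
df$log_conc <- log10(df$conc) # log10-transform before modeling

# Overall intercept, treatment and carryover fixed effects, and a random
# intercept per participant, as described in the text.
fit <- lmer(log_conc ~ treatment + carryover + (1 | participant), data = df)
anova(fit, type = "III") # type-III ANOVA, used for the P < 0.10 inclusion rule

# Back-transform a fixed-effect estimate to a fold change (fc = 10^beta).
# Coefficient names depend on the contrast coding; "treatmentSLO" applies to
# the default contrasts used in this sketch.
fold_change <- 10^fixef(fit)[["treatmentSLO"]]

# Across all species, post hoc P values would then be corrected per
# Benjamini-Hochberg, e.g., p.adjust(p_values, method = "BH").
```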
All analyses were performed in R (v3.5), in particular using the packages lme4 (v1.1.21) and lmerTest (v3.1.0) for the linear mixed modeling.

RESULTS

Composition of alkylglycerols in SLO
The relative proportions of the major TG(O) species in Alkyrol® are presented in Fig. 4A and supplemental Table S3. The results also show that the 1-O-alkyl portion of these TG(O) species is dominated by the O-18:1 alkyl chain, whereas the fatty acyl portion is more diverse, mostly consisting of 16:0, 18:1, 20:1, 22:1, and 24:1 acyl chains (supplemental Tables S3 and S4).

Baseline characteristics of the participant cohort
This study consisted of 10 male participants and was conducted between December 2015 and August 2016. Table 1 presents the baseline characteristics of the participants. The average age and BMI of the participants were 50 years and 32 kg/m2, respectively. The blood pressure, heart rate, and biochemical parameters of the participants were within the normal range. No side effects were reported for any participant treated with SLO.

Table 1. Baseline characteristics of the participants (n = 10 male participants). Triglycerides (mmol/l): 2.14 ± 1.08; fasting plasma glucose (mmol/l): 4.90 ± 0.48.

The effect of SLO supplementation on the plasma lipidome
We observed significant changes in 16 plasma lipid classes/subclasses (BH-corrected P < 0.05, Fig. 5 and supplemental Table S5) after SLO supplementation relative to placebo treatment. Among these, the levels of 5 lipid classes/subclasses were increased and 11 were decreased with SLO supplementation. At the species level, the changes in 293 individual plasma lipid species after SLO treatment were significant (BH-corrected P < 0.05; Fig. 5 and supplemental Table S6). Of these, the levels of 139 lipid species were significantly decreased and the levels of 154 species were significantly increased. Of the 293 lipid species, 159 were ether lipids and 134 were non-ether lipids.

Shark liver oil supplementation increases plasmalogens
As expected, there was a significant increase in the total TG(O) level in the SLO treatment group compared with the placebo group (Fig. 5), which was because of increases in multiple TG(O) species (Fig. 6). There were also significant increases in the ether phospholipid subclasses PC(O), LPC(O), PE(O), and PE(P) (Fig. 5 and supplemental Table S5). When we looked into individual species, we observed that most of the PC(O) species were significantly elevated after SLO supplementation, although the greatest increase was observed for a species with an O-18:1 alkyl chain, PC(O-18:1/18:1) (fold change: 3.20, BH-corrected P = 9.98e-16; Fig. 6 and supplemental Table S6). Furthermore, we noted that the increase in the LPC(O-18:1) level was the highest among the increases in LPC(O) species (fold change: 1.75, BH-corrected P = 6.69e-12; Fig. 6 and supplemental Table S6). Similarly, in the case of PE(O) species, the increases were much greater for species with an O-18:1 alkyl chain, such as PE(O-34:1) and PE(O-18:1/22:6), than for other species (fold changes: 6.82 and 5.13, BH-corrected P = 3.24e-19 and 3.62e-18, respectively; Fig. 6 and supplemental Table S6). In contrast to these increases, there were significant decreases in ceramide, sphingomyelin, PC, PE, phosphatidylinositol, lysophosphatidylinositol, and phosphatidylglycerol levels after SLO treatment (Fig. 5 and supplemental Table S5). Moreover, there were significant reductions in the levels of COH, cholesterol ester, diacylglycerol, and triacylglycerol with SLO supplementation (Fig. 5 and supplemental Table S5). Interestingly, it was also noted that the level of PC decreased significantly (Fig. 5 and supplemental Table S5) in the SLO treatment group.
As PC is the major phospholipid making up the surface layer of all lipoprotein particles, this suggests either a decrease in the number of lipoprotein particles or a change in the surface lipid composition of the lipoprotein particles. To assess the relative change in lipid classes, we normalized the lipid data to total PC and performed the analysis on the normalized data. After normalization, 17 lipid classes/subclasses showed a significant difference in the response to SLO relative to placebo (supplemental Fig. S3). Of interest, the increase in plasmalogens and other ether phospholipids is even more notable after accounting for the decreasing total PC level, indicating a strong enrichment of these lipid species in the surface layer of the lipoprotein particles.

The effect of SLO supplementation on molecular PE(P) species and PE(P) alkenyl chain composition in plasma
We observed that SLO supplementation predominantly increased the levels of PE(P) species with an 18:1 alkenyl chain (Fig. 6 and supplemental Table S6). Looking more closely into the relative abundance of the PE plasmalogens with the 5 different alkenyl chains available in this study (Fig. 7), we found that the increase in PE(P-18:1) (+72%, from ~20% to ~35%) came at the expense of PE(P-16:0) (−11%), PE(P-18:0) (−28%), and PE(P-20:0) (−26%). Interestingly, the levels of PE(P-20:1), although low, also seemed to increase after SLO supplementation (+87%, from ~0.75% to ~1.5%).

The effect of SLO supplementation on the white blood cell lipidome
Any potential beneficial effects of SLO supplementation on metabolic disease would likely be mediated by the uptake and metabolism of alkylglycerols in various cells and tissues. Accordingly, after demonstrating that SLO supplementation enriched the plasma lipidome with plasmalogens and other ether lipids, we wanted to determine whether SLO supplementation could alter plasmalogens and ether lipids within cells. To do this, we isolated white blood cells from the circulation and analyzed their lipidome. Indeed, SLO supplementation resulted in significant posttreatment changes in 14 lipid classes/subclasses (after BH correction) in circulatory white blood cells (Fig. 8 and supplemental Table S7). There were significant increases in the concentrations of ether phospholipids such as PC(O), alkenyl phosphatidylcholine (PC(P)), LPC(O), PE(O), and PE(P) with SLO supplementation. We observed significant changes (after BH correction) in 131 lipid species after SLO supplementation (increases in 111 species and decreases in 20 species). For ether phospholipid species, the increases were most noticeable for species with an O-18:1 alkyl/alkenyl chain (supplemental Table S8). In addition to the increases in ether phospholipids, we also observed significant increases in COH, sphingolipid, and other phospholipid and glycerolipid classes/subclasses after SLO supplementation (Fig. 8 and supplemental Table S7).

The effect of SLO supplementation on clinical measures, inflammatory markers, blood cell counts, and monocyte populations
Although the present study was not explicitly designed to test the impact of SLO supplementation on clinical indices of metabolic dysfunction, we nonetheless thought it would be of interest to use the current experiment to conduct a pilot analysis examining the potential of SLO supplementation to impact several clinical measures.
We observed that there were significant decreases in the levels of total COH (BH-corrected P = 2.84e-03), non-HDL COH (BH-corrected P = 7.13e-03), and triglycerides (BH-corrected P = 4.84e-02) after SLO supplementation (Table 2). However, there were no significant changes in the plasma levels of fasting glucose, glycated hemoglobin, insulin, HOMA-IR, HDL-C, LDL-C, and remnant COH with SLO treatment relative to placebo treatment (Table 2). SLO supplementation did reduce the level of hsCRP relative to placebo treatment (BH-corrected P = 6.82e-02, Table 2). However, we note that the hsCRP level was higher in the pre-SLO group than in the pre-placebo group, with a higher standard error, suggesting that acute effects in some individuals may have contributed to this result. There was no significant effect of SLO supplementation on the other inflammatory cytokines (tumor necrosis factor alpha, monocyte chemoattractant protein-1, and vascular cell adhesion protein 1) (Table 2). There was a borderline significant decrease in the total number of white blood cells after SLO supplementation (BH-corrected P = 6.52e-02, Table 2). This was mostly due to a trend toward a reduction in the number of neutrophils after SLO treatment (BH-corrected P = 4.85e-02, Table 2). In addition, there were decreases in hemoglobin and red blood cell levels with SLO supplementation (BH-corrected P = 6.52e-02 and 5.27e-02, respectively, Table 2). SLO supplementation did not show any significant effect on the other measures of the whole blood count (Table 2). We also evaluated the impact of SLO supplementation on the total circulatory monocyte count and monocyte subsets but did not observe any significant effect of SLO supplementation on monocyte populations (Table 2).

DISCUSSION
SLO has long been used as a traditional dietary supplement for therapeutic health benefits in many countries (33,34). Among the most active ingredients in SLO are the alkylglycerols (35); however, the impact of alkylglycerols on endogenous lipids in humans has not previously been reported. In this study, 4 g per day of SLO supplementation in overweight or obese individuals over a three-week period resulted in a significant increase in the circulating levels of multiple ether lipid classes, including PC(O), LPC(O), PE(O), PE(P), and TG(O). Importantly, SLO supplementation also led to a significant increase in the levels of plasmalogens and other ether phospholipids within white blood cells. Although further large trials with SLO supplementation will be required, our results provide evidence that SLO supplementation may have clinical utility in patients with metabolic disease. More specifically, we observed reductions in the levels of free COH (BH-corrected P = 2.84e-03), triglycerides (BH-corrected P = 4.84e-02), and hsCRP (BH-corrected P = 6.52e-02) with SLO supplementation.

SLO supplementation impacts the plasma lipidome
We observed a substantial increase in the plasma TG(O) level (665%) after SLO supplementation. Considering the low TG(O) concentration in human plasma and the high concentrations of TG(O) in SLO, this is not surprising. However, the differential changes in different TG(O) species after SLO supplementation, and the distinctive composition of TG(O) in SLO and in the plasma of SLO-supplemented individuals (supplemental Fig. S1), are suggestive of substantial remodeling of the acyl chain portion of SLO TG(O) species after ingestion.
SLO supplementation also led to increases in the plasma ether phospholipid subclasses PE(O) (162%), PC(O) (39%), LPC(O) (24%), and PE(P) (29%) (Fig. 5). It should be noted that a significant increase in PE(P), but not in other ether lipid classes, within a particular circulatory lipoprotein class (HDL3) was also observed with statin treatment (4 mg/day) for 180 days (36); however, this increase is likely to be a nonspecific downstream effect of other metabolic events (reduction in oxidative stress or improvement in dyslipidemia). The differential changes in the different ether phospholipid classes observed with SLO supplementation can be explained by looking at the plasmalogen alkenyl chain composition (Fig. 7). This is due to the predominance of the O-18:1 alkyl chain in SLO alkylglycerols (Fig. 4B) and the inability of these chains to be remodeled in the way that acyl chains are via phospholipase and acyltransferase activities. These observations suggest that the alkylglycerol composition of dietary supplements can alter the composition of endogenous ether phospholipids and so should be considered in the formulation of future alkylglycerol supplements.

Fig. 8. Effect of shark liver oil supplementation on white blood cell lipids. Estimated effects of shark liver oil (SLO) supplementation on (log-transformed) circulatory white blood cell lipid concentrations (normalized to the phosphatidylcholine concentration) relative to placebo treatments. Open gray circles: lipid species, nonsignificant, no confidence intervals (CIs); violet circles: lipid species, nominally significant (P < 0.05), with CIs; blue circles: lipid species, significant after multiple testing correction (P < 0.05) using Benjamini-Hochberg's approach, with CIs; red diamonds: lipid class/subclass totals, significant after multiple testing correction (P < 0.05) using Benjamini-Hochberg's approach, with CIs. Cer, ceramide; COH, cholesterol; dhCer, dihydroceramide; DG, diacylglycerol; CE, cholesteryl ester; HexCer, monohexosylceramide; Hex2Cer, dihexosylceramide; Hex3Cer, trihexosylceramide; GM3, GM3 ganglioside.

Although we observed a substantial enrichment of ether lipid classes within the plasma lipidome after SLO supplementation, our data do not provide insight into the effect of SLO supplementation on individual lipoprotein classes. Moreover, SLO supplementation for a short duration (3 weeks) may be insufficient to ensure a steady state in the lipidome of plasma lipoproteins. Hence, further in-depth and time course studies on the impact of SLO supplementation on individual lipoprotein classes are warranted.

SLO supplementation impacts the white blood cell lipidome
In addition to the changes in plasma lipids, we also observed increases in the levels of plasmalogens (PC(P) (31%) and PE(P) (15%)) as well as intermediates of the plasmalogen biosynthetic pathway (PC(O) (61%) and PE(O) (28%)) in circulating white blood cells upon SLO supplementation. This is important, as it demonstrates that the supplemented SLO alkylglycerols are being incorporated into cells and tissues, where they may influence cell/tissue function. However, further studies to assess the impact of the enrichment of immune cell plasmalogens after SLO supplementation on immune cell function are warranted. Indeed, plasmalogens have been reported to provide functional benefits to immune cells. In vitro studies demonstrate that enrichment of RAW macrophages with plasmalogens increases cellular resistance to chemical hypoxia and reactive oxygen species (11) and enhances their phagocytic activity (43).
Moreover, synthetic analogues of lyso PE(P) have been found to be highly potent in activating thymic and peripheral invariant natural killer T cells (cells with important immunoregulatory functions) (44), suggesting potential immunomodulatory functions of plasmalogens. Furthermore, during the in vitro differentiation of human monocytes to macrophages, the plasmalogen profiles have been found to change, suggesting a dynamic role of plasmalogens in preparing these cells for their phagocytic and inflammatory roles (15).

SLO supplementation has beneficial effects on clinical lipids and inflammatory markers
In addition to the changes in plasma and white blood cell molecular lipid species, SLO supplementation was also found to reduce the plasma levels of free COH and triglycerides, and of the inflammatory marker hsCRP. Our observation of decreased COH is also supported by a previous study showing the COH-lowering effect of SLO alkylglycerols in obese individuals (45). The plasma triglyceride-lowering effect of SLO has not been reported previously. It is beyond the scope of this study to elucidate the mechanism behind this effect. Moreover, we cannot exclude the possibility that SLO supplementation drives triglyceride accumulation in the liver and that this is the major contributor to the lower plasma triglyceride levels. Here, we also observed a significant decrease in hsCRP after SLO supplementation. CRP is regarded as prothrombotic and proatherogenic in nature and is commonly used as a marker of systemic inflammation (46). Similar reductions in hsCRP levels with SLO supplementation have also been reported in elderly surgical patients (28). Another study observed reduced serum complement (C3 and C4) and plasma vascular endothelial growth factor levels with SLO supplementation in obese individuals (45). Altogether, supplementation of SLO seems to have a role in modifying systemic inflammation in humans and could be effective in reducing the risk of progression of metabolic diseases to more advanced forms.

In this study, we did not observe any detrimental effect of SLO supplementation in humans. The SLO that we used for this study (Alkyrol®) has been purified to enrich for alkylglycerols and to remove contaminants and unwanted substances such as polychlorinated biphenyls, pesticides, heavy metals, fatty substances such as COH and squalene, and excessive amounts of vitamins A and D. However, nonpurified or semipurified SLO could have deleterious effects on the liver and other major organs because of the presence of substantial amounts of squalene and other undesirable components. Therefore, SLO should be consumed with caution. There are some alternatives, such as monoalkylglycerols (chimyl alcohol, batyl alcohol, or selachyl alcohol) (47) or marine food supplements (48), which can be used to enrich endogenous plasmalogens. However, the currently available monoalkylglycerols are not approved for human consumption, and the potential of marine food supplements in modulating endogenous plasmalogens is still in question.

CONCLUSION
SLO supplementation modulated plasmalogens and other ether lipids in human plasma and white blood cells. These changes, together with the observation of small but significant improvements in clinically important markers of dyslipidemia and inflammation, provide a strong rationale for larger trials examining the impact of SLO supplementation on metabolic diseases.

Data availability
All data are contained within this article and are available upon request from the corresponding author.
Supplemental data
This article contains supplemental data.
Consequences and Remedies of Indigenous Language Loss in Canada

Abstract
Many Indigenous languages in Canada are facing the threat of extinction. While some languages remain in good health, others have already been lost completely. Immediate action must be taken to prevent further language loss. Throughout Canada's unacceptable history of expunging First Nations' ways of life, systemic methods such as residential schools attempted to eradicate Indigenous cultures and languages. These efforts were not entirely successful, but Indigenous language and culture suffered greatly. For Indigenous communities, language loss impaired intergenerational knowledge transfer and compromised their personal identity. Additionally, the cumulative effects of assimilation have contributed to poor mental and physical health outcomes amongst Indigenous people. However, language reclamation has been found to improve well-being and sense of community. To this end, this paper explores the historical context of this dilemma, the lasting effects of assimilation, and how this damage can be remediated. Additionally, we examine existing Indigenous language programs in Canada and the barriers that inhibit the programs' widespread success. Through careful analysis, such barriers may be overcome to improve the efficacy of the programs. Institutions must quickly implement positive changes to preserve Indigenous languages, as fluent populations are rapidly disappearing.

Introduction
Language is more than just a mode of communication, especially for Indigenous communities that have long endured the attempted erasure of their culture and heritage. Instead, language should be viewed as a natural resource [1]. As such, stewardship is necessary to preserve this resource for the benefit of future generations. The importance of preservation cannot be overstated, as ancestral language is essential and foundational to the collective Indigenous identity [2,3]. Bamgbose [4] reveals that the net effect of colonial hegemony, in many regions of the world, has been the dominance of the colonizer's language at the expense of native languages. This is indeed the case in Canada, where Indigenous languages and communities continue to fade. Most of the remaining Indigenous languages spoken throughout Canada are at risk of extinction. Statistics Canada [5] has reported that of the 70 actively used Indigenous languages, 40 had fewer than 500 fluent speakers remaining. Thus, an urgency to revitalize these languages has been sparked nationwide. The situation is particularly dire in British Columbia [6], as the province is home to nearly half of Canada's remaining Indigenous languages [7]. A recent survey in Indigenous communities across British Columbia revealed that only 5% of Indigenous people are fluent in an ancestral tongue, and the majority of that small percentage is now over the age of 65 [7]. This revelation that language fluency is primarily maintained by elders is extremely concerning. If language is not passed down to future generations imminently, it will be lost forever. To prevent this, action must be taken now. While these statistics are specific to British Columbia, this is certainly not the only province facing language preservation deficits.
Indigenous languages throughout Canada are experiencing a watershed moment, with revitalization dependent on concerted efforts over the next several years. Dr. Lorna Williams, a member of the Lil'wat First Nation and holder of the Canada Research Chair in Indigenous Knowledge and Learning, emphasizes the need for immediate action. She asserts that "we don't have much time left to document the knowledge of these languages [and] to hear their beauty" [1]. This sentiment is echoed by many Indigenous groups, who also recognize that loss of language leads to a loss of culture; subsequently, this results in substantial impacts on a person's sense of self-identity [8]. Given that language, cultural identity, and society are undeniably interwoven, the depletion of any one of these elements often causes the deterioration of all three. Those affected often find themselves torn between two cultures, feeling lost when they do not easily fit into either [9]. Furthermore, a strong cultural identity is a primary and important psychosocial determinant of health and well-being for Indigenous populations [10-12]. Therefore, the triad of language, cultural identity, and self-identity must be strengthened to improve the lives of Indigenous people (see Figure 1). The literature supports this relationship; it has been found that providing support to enhance Indigenous culture will result in positive mental health and coping outcomes [13,14].

Figure 1. The interwoven triad of language, culture, and self-identity: assimilation tactics have damaged language; language and culture build up self-identity; and self-identity has suffered due to the loss of language and culture.

Given the relationship between language and health, support is needed at various institutional levels to repair the damage caused by past injustices. Recently, the entirety of Canada has been made aware of these transgressions. Non-Indigenous people in Canada, such as the author, are giving their support to Indigenous communities seeking to revitalize their culture. Prompted by the 94 Calls to Action issued by the Truth and Reconciliation Commission (TRC) [15], select post-secondary institutions have also given their support by creating Indigenous language programs. These are steps towards revitalization and reparations, but more can be done by examining the structural elements of such programs to maximize benefits for Indigenous communities. It is hoped that the barriers to Indigenous language education will be dismantled, especially at the institutional level. By doing so, Indigenous students may access language education through thoughtfully designed programs, strengthening collective cultural identity as a result.
This paper aims to understand the ramifications of language loss by further exploring the relationship between language, identity, and health through a sociological lens. Furthermore, recommendations to improve adult Indigenous language programs are proposed so that institutions offer support to Indigenous communities in their area.

Historical Context of Language Loss
Settler colonialism in Canada began more than 200 years ago, but it persists even today. Colonialism is an ongoing system of oppression in which Indigenous people are alienated from their lands and subjected to government-sponsored programs of assimilation [16,17]. English and French have historically been portrayed as superior languages in Canada, whereas Indigenous languages have been characterized as "primitive" [18]. Indigenous languages were seen as barriers to civilization and modernity, to the extent that Indigenous men were considered "disabled" until they could demonstrate proficiency in English or French [19]. Such hegemonic ideologies were responsible for the highly destructive policies that have impacted Indigenous people throughout recent history [12]. Indigenous languages were victims of these policies, by way of the residential school system and state-imposed domination of the French and English languages [20]. Although there have been efforts to undo this damage, discriminatory language discourse persists in Canada, largely based on systems of power that maintain a Eurocentric narrative [21]. Figure 2 highlights key events in Canada's Indigenous language history.

The residential school system, which was operational in Canada until 1996, inarguably dealt the heaviest blow to Indigenous languages.
Early on, the Gradual Civilization Act of 1857 aimed to assimilate Indigenous men, as deemed fit by legislators, into Canadian society [22]. In 1876, further legislation required Indigenous children to leave their families and instead live and be educated at residential schools, making criminals out of any parent who defied the order [23]. In many instances, the residential schools were located as far as hundreds of kilometers away from the parent community [24]. This pattern of aggressive assimilation focused on targeting children, as they were easier to mold than adults. In most cases, children were kept away from their parents for ten months of the year and even segregated from their own siblings at school [25]. The schools were meant to forcibly assimilate Indigenous children and prohibit the use of Indigenous languages as a method of integrating them into the European customs of the colonizers. Corporal punishment was often used if children were caught speaking their native language. In short, the residential school system played a significant role in the decline of Indigenous languages in Canada [26].

Outside of the school system, edicts forbidding core cultural ceremonies and traditions such as the Potlatch festival and Tamanawas spirit dancing were enacted. Defying the edict resulted in fines and even incarceration [27]. The ban remained in place for more than sixty years, from 1886 to 1951. During this time, the status of Indigenous culture declined immeasurably [28]. Even after the era of residential schools and the cultural ban, Indigenous language interests continued to be marginalized in policy priorities. Despite increased Indigenous activism and calls for "Indian control of Indian education," the structures of settler colonialism continued to undermine Indigenous ways of life. It was only in 1982 that the patriated Constitution Act recognized "existing" Indigenous treaty rights. It did not, however, refer to Indigenous language rights [29]. While Indigenous communities have made great efforts since then to revitalize languages, progress has been limited by a lack of resources [18].
Even as early as 2002, an Indian and Northern Affairs Canada report observed that, with the decreasing numbers of Indigenous language speakers, more than a dozen Indigenous languages in Canada were either extinct or on the verge of extinction. Without sustained revitalization efforts, the remaining Indigenous languages will soon follow [30]. In 2021, Canadian Indigenous issues gained global attention following the discovery of the remains of 215 children at the site of the former Kamloops Indian Residential School in British Columbia. These children were buried in an unmarked mass grave, and their deaths were previously undocumented [31]. This event, while unspeakably tragic, has provided further momentum for Indigenous culture and language revitalization movements, which is critical as part of reparations for current Indigenous communities still experiencing the effects of Canada's genocidal past. The mistreatment of Indigenous people throughout Canada's history is distressing, to say the least, but it cannot be taken back. Now, action must be taken to acknowledge the collective Indigenous cultural identity and its inherent importance to Canada's national history and future. By listening to Indigenous voices, we will gradually reach a more balanced society, inclusive of all its members [32].

Language Loss and Well-Being of Indigenous People
Language is closely tied to cultural identity, a fundamental right of every human being [33,34]. Article 24 of the United Nations Declaration on the Rights of Indigenous Peoples [33] includes the right to the highest attainable standard of mental health. The Canadian government has failed Indigenous communities in this regard. Many Indigenous people continue to suffer from trauma associated with past assimilation attempts endorsed by the Canadian government. Several studies have associated the negative impact of residential schools, including the loss of language and culture, with adverse mental health effects, substance abuse, and suicide [35-37]. Oster et al. [3] found that the cultural continuity of those living in Alberta's Indigenous communities was inextricably linked with language and strongly correlated with overall health. Furthermore, Hallett et al. [38] found that suicide rates were six times higher in Indigenous communities in which fewer than half of the members could converse in their ancestral language. If language loss continues unchecked, a great injustice will have been committed towards future generations of Indigenous communities.

Isolation and Cultural Discontinuity in Young People
Younger generations within Indigenous communities tend to lack the extensive knowledge of their familial history that would previously have been passed down, partly due to the inability to speak and understand their ancestral language. Historically, Canada's Indigenous people passed their history down through generations using oral tradition as opposed to written documentation [39]. Due to the longstanding suppression of Indigenous culture in Canada, this history was never recorded in writing. However, some efforts have recently been made by communities to digitize records [40]. Indigenous history is crucial to understanding cultural identity, especially for youth. Sivak et al. [41] attribute mental health issues to a lack of ancestral language knowledge. This finding is significant for developing children who are living amongst multiple generations of family.
Some Indigenous grandparents in Canada do not speak English or French fluently and are therefore unable to communicate easily with their grandchildren [42]. Even in cases where they can speak colonial languages, the inability to communicate in their ancestral language with their grandchildren leads to the dilution of stories meant to be conveyed through intergenerational oral traditions. Thus, families are unable to effectively share their complex history. Without a connection to the comprehensive spiritual teachings of their familial elders in their ancestral language, children may develop feelings of seclusion. Isolation, when experienced in childhood, has been identified as a precursor to mental health issues and suicidal thoughts later in life [43]. Thus, preventing potential feelings of loneliness brought about by language loss is imperative.

Canada's failure to acknowledge Indigenous contributions may also affect youths' perception of self. In the United States, it was found that Indigenous youth associated involvement with their culture with increased discrimination [44]. The vast majority of Canada has been built on unceded Indigenous lands, meaning the land was never legally yielded to colonizers. Respect for Indigenous culture and language on these unceded lands has been severely lacking. As such, it is possible for Indigenous children to feel as though their culture, heritage, and language are unvalued by those around them, further decreasing their motivation to learn about their ancestry and history. This attitude towards one's identity is detrimental to mental health and personality [45]. Children may therefore choose not to connect with their heritage at all, and thus cultural identity will be further weakened through successive generations.

Sivak et al. [41] describe seven themes about the connection between language and culture: connection to body; connection to mind and emotions; connection to family; connection to community; connection to culture; connection to country; and connection to spirit, spirituality, and ancestors. Regarding connection to mind and emotions, the authors state that language reclamation improves motivation, mood, and general happiness [41]. Language reclamation also holds the potential to improve one's sense of belonging and cultural identity, as well as strengthen community connectedness [41]. Given these findings, it is crucial to devise strategies to undo the historical erasure and teach the native languages. Such solutions could also help improve mental health outcomes for younger generations.

Abuse Aftermath among the Elderly
Older generations still suffer from past policies aimed at eliminating language and culture. The impact of residential schools continues to affect older Indigenous adults in various manifestations, such as post-traumatic stress disorder (PTSD), despite the schools having been shuttered for more than twenty years [46]. Those who did not experience the aggressive assimilation firsthand could still suffer, as the trauma was passed down from parent to child [47]. Fontaine [26] found that elders are the primary speakers of Indigenous languages, and they tend to encourage younger generations to learn about their language. Unfortunately, residential schools robbed families of the opportunity to converse on deeper levels between generations using their ancestral languages.
As students were not allowed to speak their languages, they started to lose hereditary connections with family members, increasing language barriers and fracturing communities. This resulted in inevitable isolation for residential school students, even after they left the system. The atrocities of residential schools are still coming to light today, as we saw with the Kamloops Indian Residential School discovery in 2021. In recent decades, many residential school survivors have shared experiences of sexual abuse in the schools. It has been estimated that one in five children in residential schools experienced sexual abuse in the system [48]. However, crimes of this nature are grossly under-reported due to the associated shame that survivors often experience [49]. Thus, the actual prevalence is likely even higher. Research has shown that victims of childhood sexual abuse are three times more likely to attempt suicide as adults [50]. Childhood sexual abuse can also result in a litany of developmental issues and can incite a cycle of abuse [51]. Issues such as these drive generations further apart by creating shame and trauma, negatively impacting the transfer of language and other cultural concepts.

Harm Caused by Racism
Racism is another factor influencing mental health and well-being for Indigenous people of all ages. Experiencing racism is known to negatively impact one's mental health, causing depression and anxiety, decreasing self-worth, inciting PTSD, and threatening one's sense of personal safety [52]. Racialized populations also demonstrate a higher occurrence of substance abuse disorders [53]. Priest et al. [54] found that experiencing prolonged, ongoing racism can result in physical health consequences as well, such as restlessness, sleep deprivation, increased or decreased appetite, and energy loss. Furthermore, in their research, it was determined that individuals experiencing frequent and repeated racism between the ages of twelve and seventeen are at heightened risk for suicide, substance abuse, and behavioral delays [54]. For centuries, Indigenous communities have experienced racism at various levels. The Canadian government, justice system, and even the healthcare system have demonstrated racist behavior towards Indigenous people [55,56]. This systemic racism has resulted in widespread racial profiling of Indigenous individuals. Out of fear, younger generations may refrain from practicing their ancestral language in order to feel more accepted by others and experience less racism. However, abandoning one's culture, history, and language has resulted in severely negative effects among Indigenous communities [45]. To combat the effects of racism, Canadians must be more respectful of Indigenous communities, with the hope that current and future generations can reclaim ancestral languages, increase cultural pride, and improve mental and physical health.

Barriers to Generational Knowledge Transfer
Cultural knowledge and identity among Indigenous communities have been declining with each passing generation [57]. Language loss and cultural minimization have contributed to this trend. Since language knowledge is largely missing, Indigenous communities are forced to rely primarily on art, clothing, and cultural traditions to connect with their heritage. By practicing traditions, there is still potential for younger generations to develop a sense of wholeness, spirituality, and self-identity by connecting with culture.
However, residential schools stripped Indigenous children of their cultural rights and portrayed them as outcasts, while bans on cultural displays communicated that Indigenous history was inherently unimportant. Due to these actions and their lasting effects, older Indigenous generations may refrain from sharing their experiences and knowledge.

Loss of Culinary Knowledge
A lack of generational knowledge transfer has also resulted in gaps related to traditional food and nutrition, which has caused widespread negative health implications. Indigenous linguicide could impair younger generations' opportunity to learn about traditional cuisine, which is an important aspect of culture. The effects of colonization have also resulted in many Indigenous communities consuming overly processed, high-calorie food lacking nutritional value [58]. Generational, nourishment-focused culinary knowledge is not being effectively passed down, resulting in a less nutritious diet for many individuals [59]. Unsurprisingly, then, malnutrition is a recurring issue within Indigenous communities, especially among women and children [58]. Children who are malnourished are more likely to develop mental illness or other health disorders as they grow older, as well as suffer from deficiencies in motor skills and physical abilities [60,61].

For Indigenous individuals living in a Westernized society, foods that were traditionally eaten pre-contact may be difficult to obtain today. For example, the historical diet of British Columbia's Syilx First Nation is based on Four Food Chiefs: Siya (Saskatoon Berry), Spitlem (Bitter Root), Skemxist (Black Bear), and Ntyxtix (Salmon). These food chiefs represent healthy food that is available from the land in Southeast British Columbia, where the Syilx First Nation reside [62]. However, Indigenous people have lost the rights to these ancestral lands, making such food more difficult to access. Additionally, a lack of generational knowledge surrounding the gathering and preparation of these foods may make it difficult for future generations to connect through cuisine.

Current Efforts and Recommendations
All of society bears some responsibility to support Indigenous communities as they heal from the injustices of the past. Specifically, influential institutions need to participate as leaders in restoration efforts. As such, universities and governments should spearhead language revitalization in Canada. Universities represent continual learning and innovation in society and are often the first institutions to adapt operations to changing societal expectations. This makes the university setting ideal for fostering innovation and community learning. Although individuals likely will not acquire fluency, post-secondary classes provide an opportunity to get in touch with one's cultural roots and help conceptualize self-identity in a positive way. The government also has a major role to play, as it has long been the primary perpetrator of deplorable acts against Indigenous people.

Obstacles exist at several levels, which must be identified and overcome swiftly to save endangered Indigenous languages. Canadian universities, with their multiple stakeholders and bureaucratic structure, can inadvertently create impediments for Indigenous language programs [63,64]. These include the reinforcement of hierarchy, which prioritizes Western education above Indigenous knowledge [65].
Additionally, public universities typically allocate funding according to which programs generate the most revenue. The author can speak to this, having observed and overseen changes in academic institutions. Since Indigenous language programs are generally small compared to others, such programs often become marginalized and are vulnerable to the exclusionary practices of academia. Distrust of a Western knowledge ideology that discounts Indigenous learning methods is leading to a lower participation rate amongst young Indigenous learners [66]. General academic outcomes for Indigenous students are also a cause for considerable concern; low enrollment, high dropout rates, and low academic success rates are prevalent among Indigenous learners [67]. Previously suggested solutions have included lowering admission requirements for Indigenous candidates and establishing alternative programs that improve attendance and remedy learning problems, but these policies have not offered an enduring solution [67]. Hence, emphasis on other programs based on Western knowledge continues to persist. Indigenous language education can only flourish if these impediments are understood and remedied [68]. A few concerted efforts have begun in earnest, despite challenges. Indigenous languages are now part of several course offerings at educational institutions across Canada. The University of Alberta and the University of the Fraser Valley, BC, have developed Indigenous language programs for students in hopes of aiding the revitalization effort. Another example is the Canadian Indigenous Languages and Literacy Development Institute (CILLDI); this three-week summer school program educates students who aspire to learn or improve their fluency in Indigenous languages and offers a Community Linguistic Certificate (CLC) upon successful completion. Short programs like these help students get in touch with their culture. However, educators require more training in various dialects and pronunciations to make the program effective [69]. The University of British Columbia (UBC) has also increased offerings related to endangered Indigenous languages. Through the First Nations and Endangered Languages Program (FNEL), First Nations language courses are offered, as well as methodology classes on language documentation, conservation, and revitalization [70]. Early immersion programs are another solution at the educational level. While universities largely offer Indigenous language programs for adult learners, research has shown that early immersion programs in Indigenous languages result in positive academic outcomes for young learners [71]. Given children's ability to build language skills rapidly, there is an opportunity for further research in this area. Such programs could help children form a sense of identity on a solid foundation, improving outcomes for Indigenous communities. However, even if future generations of children are educated early, a gap currently remains for adults. Therefore, solutions specific to adult learners must be further researched, developed, and implemented. Young and middle-aged adults must gain language proficiency now, even before children. If more adults are educated in the next few years, the urgency of language revitalization can be eased, as the life of the fluent population will be extended. To motivate adult learners, universities might consider incentives to gain Indigenous language fluency.
Such incentives could include financial rewards for passing language classes or even paid positions within the university for the duration of the program, as is sometimes done for Ph.D. students. Increased government funding may be required for institutions to manage this. However, the potential benefits to Indigenous physical and mental health, as well as the likely enhancement to Indigenous students' academic careers, are well worth these efforts. Language revitalization solutions must also be sought at the governmental level. Unfortunately, Indigenous issues remain somewhat contentious in both provincial and federal governments in Canada, making policy development and enactment exceedingly slow. This lethargy has failed Indigenous people many times before. For instance, the federal government did not offer an official apology for residential schools until 2008 [72], more than one hundred years after the schools opened and long after many survivors had already died. Thus, we cannot rely solely on the government for swift action concerning Indigenous language revitalization. However, activism from society and other institutions can force a response from the government. In mid-2021, the Minister of Canadian Heritage announced the new Office of the Commissioner of Indigenous Languages, along with the appointment of a commissioner and three directors [73]. This is the first Canadian governmental office aimed at Indigenous language preservation and revitalization, representing a long-awaited shift in government priorities. Though it is only a start, this is meaningful for Indigenous peoples in "ensuring that languages grow and prosper so they can be shared and spoken for years to come" [73]. Conclusions Through acts of systemic racism and oppression, Indigenous language and culture were pushed to the brink of extinction. Within Indigenous communities, the unfavorable reverberations of these actions are still felt to this day, as they will be for generations to come. The right to one's language is irrevocable, and the Canadian government committed an injustice toward these communities by attempting to eradicate their languages. Although language acquisition will not resolve every issue faced by Indigenous communities, the links between one's language, cultural identity, self-identity, and overall health are clear and proven. Thus, all levels of society and leadership should fully participate in reparations. Given the benefits attributed to Indigenous language acquisition, future research should identify specific barriers present within post-secondary education systems and develop strategies to overcome them, both inside and outside the classroom. Students must not face impediments to accessing these programs based on socioeconomic status or otherwise. Additionally, when participating in the programs, structural issues stemming from Western knowledge ideologies must not prevent learning from taking place. Indigenous learning methodologies, such as taking a multidisciplinary approach, should be further researched. Rigid structures that make it difficult for older generations, who may have been victims of the residential school system, to participate should be dismantled, allowing this group a chance to learn more about their culture in a positive environment. In addition, studies should examine any barriers that the Canadian government faces when enacting meaningful reparation efforts and language revitalization programs.
The aim of all involved should be to help in such revitalization efforts. Traditional state paternalism is responsible for language loss, and the mistakes of the past must not be continued or repeated. Recommendations may be provided to Indigenous community leaders, but not forced upon them. The recommendations based on this review include improving the funding of language programs where needed and further communicating the benefits to prospective learners. Additionally, providing official acknowledgment of and respect for these languages will communicate to Indigenous communities that their culture is valuable and appreciated. This is the opposite of the narrative that has long dominated the Western world. To continue this trajectory of Indigenous acknowledgment, future Indigenous programs and educational opportunities must be explored and fully developed for successful language revitalization. This will perhaps also deepen the appreciation of cultural identity for the stakeholders of these language programs, strengthening ideological buy-in from the ground up. This author can confirm the value of such efforts from their experience in Winnipeg, Manitoba, working with Indigenous communities on health and cultural research. Lastly, future studies must explore the efficacy of current programs and propose further avenues of improvement. Events of 2021, such as the Kamloops Indian Residential School discovery and the founding of the Office of the Commissioner of Indigenous Languages, have created unprecedented momentum for Indigenous language revitalization efforts. To stop now would be yet another instance of Canada's monumental failure of Indigenous people.
2021-09-27T20:56:21.894Z
2021-08-02T00:00:00.000
{ "year": 2021, "sha1": "3316453aa5e14b517de9ebe05310c9010e6d52e0", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2075-4698/11/3/89/pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "c3d488198eb97858f5f80dcbfd544eb79f9e1ed1", "s2fieldsofstudy": [ "Linguistics" ], "extfieldsofstudy": [ "Political Science" ] }
58883874
pes2o/s2orc
v3-fos-license
Estimation of Critical Components of Internet Infrastructure Electronic communications and the Internet play a significant role in current public life. Besides energy, transport, water supply and other sectors, the Internet is considered to be an especially important infrastructure. Currently, more and more users, service providers and public institutions rely on the security of the Internet network. Network accessibility can indeed determine the parameters of quality service supply. A failure in network supply due to, e.g., cyber attacks results in service unavailability. As a result, studies on the reliability and safety of Internet network infrastructure operation, and its continuity, remain topical. The article [1] analyses a regional Internet network as an integrated system formed of stochastically connected subnets, and suggests methods for analyzing the topology of such a system. The article further analyses one of the fundamental characteristics of a network - Internet network connectivity - on the basis of network topology analysis. The methods suggested in the article are aimed at identifying the critical elements of network infrastructure. Eventually, constant monitoring of such elements would allow real-time assessment of network status. Introduction Electronic communications and the Internet play a significant role in current public life. Besides energy, transport, water supply and other sectors, the Internet is considered to be an especially important infrastructure. Currently, more and more users, service providers and public institutions rely on the security of the Internet network. Network accessibility can indeed determine the parameters of quality service supply. A failure in network supply due to, e.g., cyber attacks results in service unavailability. As a result, studies on the reliability and safety of Internet network infrastructure operation, and its continuity, remain topical. The article [1] analyses a regional Internet network as an integrated system formed of stochastically connected subnets, and suggests methods for analyzing the topology of such a system. The article further analyses one of the fundamental characteristics of a network - Internet network connectivity - on the basis of network topology analysis. The methods suggested in the article are aimed at identifying the critical elements of network infrastructure. Eventually, constant monitoring of such elements would allow real-time assessment of network status. Problem identification Cyber attacks have been classified by different impact aspects, and some of them have a direct effect on the stability and reliability of the Internet network. The number of such attacks on the Internet is increasing, which results in an increased effect on normal network operation. The network has to process the flows generated by the attacks, and very often such attacks are targeted at the elements of network infrastructure [2]. Normally, as a response to such attacks, an incident management model (a.k.a. detect-clean-recover) - the Computer Emergency Response Team (CERT) - is used [3]. The nature of such a model's operation is exceptionally reactive, i.e., an action is generated upon the fact of an attack. CERT has a short-term effect, i.e.,
dealing with a specific attack and responding to its outcomes [4,11]. Due to anonymity on the Internet, identification of the source of an attack is not always possible using CERT; therefore, attacks from the same source may recur. We therefore presume a need for new proactive (preventive) measures, directed towards protection rather than towards defense as in the case of CERT. Another very important aspect is telecommunication. Internet Service Providers (ISPs) form their network infrastructures individually according to their business objectives, network expansion possibilities and user needs. Each ISP has its own routers and inter-network formation policy. Every ISP monitors its network perimeter and controls network security as well as operational reliability. Connections to other networks are also arranged on the initiative of the ISP itself, using the Border Gateway Protocol (BGP) for compiling Autonomous System (AS) routing tables. Such inter-network connections form the hierarchical structure of the Internet [5]. The general reliability of a stochastically formed Internet network segment depends on various factors, including the reliability and topology of separate AS elements. This article is aimed at shaping a methodology for analyzing the Internet network infrastructure and identifying the critical elements of the infrastructure whose disturbance influences the functionality of the entire network. Methodology and Criteria When analyzing the Internet network, graph theory is usually applied [6]. Works [7,8] demonstrate the adoption of graph theory for network traffic analysis and traffic engineering, while practice for assessing Internet interconnections is still lacking. A segment of the Internet network is represented by a graph G_net, at the vertices of which are Autonomous Systems (AS). A stationary network status is represented by a connected graph. Such a graph contains at least one route between the i-th AS and any other AS belonging to G_net. The article [1] presents the topology and the respective graph of the Lithuanian National Internet Network infrastructure. The following elements of the graph are of especially high importance: the critical node V_c and the critical link E_c. The descriptions of these critical elements vary among authors. By the strict rule, a node is critical if its removal disconnects the graph into two components. An extended characterisation of a critical node is presented in paper [9] as a node V_c whose failure or malicious behaviour disconnects or significantly degrades the performance of the network. The vague dual definition of node criticality aggravates the identification of critical nodes. In reality, the variations defined as "disconnecting or significantly degrading the performance" are identified using different methods. Therefore, the following definitions are used in this article: critical node and η-critical node. A node shall be considered critical when its elimination or disturbance dissolves the original graph into two or more disconnected graphs. An η-node shall be considered critical when its elimination significantly degrades the network performance for the majority of users (ηA). Nodes matching the first description are identified by the formal method of removing graph vertices. In case the elimination of the i-th AS creates separate subgraphs having no interconnection, such an AS is considered to be V_c.
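The vertex-removal test just described maps directly onto a standard graph computation. The sketch below is illustrative only and assumes the AS-level topology has already been loaded into an undirected networkx graph; it is not the tooling used in the study.

```python
# Minimal sketch of the vertex-removal test for critical nodes (V_c):
# a node is critical if deleting it disconnects the remaining graph.
import networkx as nx

def critical_nodes(g_net: nx.Graph) -> set:
    critical = set()
    for node in list(g_net.nodes):
        reduced = g_net.copy()
        reduced.remove_node(node)
        # Guard against the degenerate single-node graph.
        if reduced.number_of_nodes() > 0 and not nx.is_connected(reduced):
            critical.add(node)
    return critical

# For a connected graph, the articulation points found by one depth-first
# search give the same set in linear time:
#     set(nx.articulation_points(g_net)) == critical_nodes(g_net)
```

The naive removal loop costs O(V·(V+E)); the articulation-point formulation reduces this to O(V+E), which matters once the graph covers hundreds of AS and links.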
For the purposes of this article, and to specify the definition of an η-critical node, the criticality of a node shall be assessed in relation to the number of users A_i connected to the i-th AS. The criticality index of a node η is a relative value, η_i = A_i / ΣA_j, where A_i is the number of users of the i-th AS and ΣA_j is the total number of Internet users in the network. For convenience, η-critical nodes shall be divided into two categories: η_i ≥ 0.1 and η_i < 0.1. Respectively, the criticality η_i ≥ 0.1 shall be considered the highest in the general network infrastructure. The definitions of a critical link E_c also vary. One of the definitions is as follows: "a link AB is critical if both endpoints A and B are critical nodes". A broader description of E_c is the link connecting two critical nodes such that, when this link is eliminated from the graph, the graph becomes disconnected [9]. When identifying E_c, G_net is considered to be formed of all the ISPs operating on the Internet network, corresponding to the node vertices. It is important to note the links whose elimination would disconnect a small ISP (having no AS) from the National Internet network. By analogy with the concepts of a critical node used in this article, the following definitions are used: critical link and N-critical link. A link shall be considered critical when its elimination or disturbance forms several subgraphs having no interconnection (edges). An N-critical link shall be considered critical when its elimination or disturbance significantly degrades network connectivity. Identification of E_c according to the first definition is performed by a principle analogous to that for V_c: the method of removing graph edges. In case the elimination of the n-th link creates separate subgraphs having no interconnection, such a link is considered to be E_c. The graph in question corresponds to the regional Internet network with N_int connections [1]. N_int are the links connecting the AS of the regional network with the AS of the international Internet network provider. In such a case, applying the method of removal, N_int shall correspond to E_c. Specifying the concept of an N-critical link, we suggest linking it with the interconnection bandwidth Δ. The maximum installed bandwidth Δ_max of a link belonging to the i-th AS shall be assessed in relation to the total bandwidth ΣBw of connections managed by the i-th AS. This relation is expressed by the capacity coefficient η_AS = Δ_max / ΣBw, where Δ_max is the installed connection capacity of the i-th AS, in Gb/s, and ΣBw is the overall bandwidth of the i-th AS over all connections of this particular AS, in Gb/s. The estimation of η_AS shows the criticality of the link for the i-th AS connectivity compared to other links of the i-th AS. N-critical links shall be divided into two categories: η_AS ≥ 0.9 and η_AS < 0.9. Respectively, the criticality η_AS ≥ 0.9 of a link shall be considered the highest for the total connectivity of the i-th AS. Essentially, the presence of the above-mentioned condition shows a disproportionate distribution of the i-th AS resources. When analyzing N-critical links (E_cN), their traffic (bandwidth) intensity is also important to consider. The relation of the data flow Δ_traffic (Gb/s) of the n-th link (n = 1, 2, ..., E_cN) to Δ_max gives the line traffic expressed by the traffic coefficient λ_n, λ_n = Δ_traffic / Δ_max. It is a dynamic parameter, different from the above-mentioned parameters, which are more or less static. Δ_traffic is one of the most significant network parameters, often monitored by ISPs. A sketch of these index computations follows.
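The two static indices reduce to simple ratios, as the sketch announced above shows. The user counts and bandwidths below are invented for illustration and are not figures from the study.

```python
# Sketch of the static criticality indices defined in the text.
def node_criticality(users_per_as: dict) -> dict:
    """eta_i = A_i / sum(A_j): the share of all network users attached
    to the i-th AS."""
    total = sum(users_per_as.values())
    return {asn: a_i / total for asn, a_i in users_per_as.items()}

def link_capacity_coefficient(delta_max_gbps: float, total_bw_gbps: float) -> float:
    """eta_AS = Delta_max / sum(Bw): the share of an AS's total bandwidth
    carried by its largest installed link."""
    return delta_max_gbps / total_bw_gbps

eta = node_criticality({"AS1": 400_000, "AS2": 50_000, "AS3": 50_000})
eta_critical = {asn for asn, e in eta.items() if e >= 0.1}   # highest criticality
assert link_capacity_coefficient(9.0, 10.0) >= 0.9           # N-critical condition
```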
In a real network, given normal status, connection links are not overloaded and usually have some reserve. However, subject to data flows generated by user activity or cyber attacks, traffic intensity may exceed the installed bandwidth. When λ_n ≥ 0.8, it signals a critical level of link resource usage; the critical bandwidth limit being reached by more than one link may signal a cyber attack, which in turn may result in significant degradation of the whole network's connectivity. Application The above-described metrics were applied to identify the critical nodes and links of the Lithuanian national Internet network [1]. Having completed the experiment using the method of removing vertices, 4 critical nodes (V_c) were identified, whereas the number of η-critical nodes satisfying the condition η_i ≥ 0.1 was 3. Lowering the η_i threshold (see Table 1) will correspondingly increase the number of η-critical nodes identified. It should be noted that one of those 3 nodes coincides with a respective critical node. The identification of critical links (E_c) in the graph representing the Lithuanian Internet network was slightly more complicated, since the E_c search must take place among several hundreds of connection links. Using the method of link removal, 26 critical links were identified. The search for N-critical links (E_cN) was performed for every ISP separately. Only 2 ISPs (independent from E_c) with E_cN were identified as satisfying the condition η_AS ≥ 0.9. Decreasing the level of η_AS will increase the number of identified E_cN. Monitoring We suggest monitoring the above-mentioned V_c and E_c in order to identify failures of the critical elements of the network or critical levels of link traffic resources. Monitoring is very important for the timely identification of failures of the critical elements, since the loss of such elements affects the performance of the whole network. For troubleshooting, we shall use detectors in the subgraph G_c consisting of the vertices V_c and edges E_c. These detectors perform network monitoring through constant intercommunication. The simple way to perform monitoring would be routine checks carried out on network switching nodes (V_c). Those could be simple ping, tracepath, pathping or traceroute commands, which would continuously (for instance, at 1-5 minute intervals) check the response from all the critical nodes; the process itself would be automated and displayed on the network topology map. The positive characteristic of such a method is its independence, since there would be no need for agreements with router administrators regarding the placement of sensors. However, the method itself lacks flexibility. In addition, some ISPs prohibit reception of the said commands in their networks. Our approach is to use the Simple Network Management Protocol (SNMP) for monitoring purposes. SNMP is an application layer protocol that facilitates the exchange of management information between network devices. It is part of the Transmission Control Protocol/Internet Protocol (TCP/IP) protocol suite. SNMP enables network administrators to manage network performance and to find and solve network problems. As most ISPs use SNMP as the de facto standard for network supervision, the idea is to monitor the parts of the national network identified as critical nodes of the network infrastructure. To get information about critical node functionality, a dedicated cyclical algorithm was devised and is presented in Fig. 1. Generally, monitoring needs to follow several major steps: 1.
Send a request using the SNMP protocol to V_c (SNMP Agent). 2. Get the response to the monitoring system (SNMP Manager) using the SNMP protocol from V_c (SNMP Agent). 3. Calculate and store the data using scripts or tools on a central monitoring server with a database. We suggest selecting the Ethernet Statistics Group MIB necessary for λ_n evaluation at the SNMP Agent [10]: λ_n = 8·max(Δin, Δout) / (Δ_max·Δt), where Δin is the difference between two poll cycles of the SNMP ifInOctets object, which represents the count of inbound octets of traffic in bytes [10]; Δout is the difference between two poll cycles of the SNMP ifOutOctets object, which represents the count of outbound octets of traffic in bytes [10]; Δ_max is the speed of the interface, as reported in the SNMP ifSpeed object, in bits/s [10]; and Δt is the time period, Δt = 60 s. The structural algorithm presented in Fig. 1 was implemented in code; an illustrative sketch of the λ_n computation is given below, after the conclusions. SNMP agents can be software-configured so that alarm messages are sent to the monitoring system not only in the case of total failure of the link (Fig. 1) but also when the critical limit of link traffic is reached, i.e., when λ_n ≥ 0.8. Thus the monitoring is performed even more expeditiously. Conclusions The assessment of the infrastructure of a network consisting of a large number of stochastically connected subnets (e.g., the Internet) from the aspect of reliability is a difficult task due to network complexity. The metrics compiled during the study allow identification of the critical elements of such a network: critical and η-critical nodes, and critical as well as N-critical links. The analysis of these elements simplifies the above-mentioned task. Having applied the above-described metrics to the Lithuanian Internet Network infrastructure, 4 critical nodes (V_c) were identified, whereas the number of η-critical nodes satisfying the condition η_i ≥ 0.1 was 3. Also, 26 critical links and 2 ISPs with N-critical links satisfying the condition η_AS ≥ 0.9 were identified. Thus we can conclude that the majority of subnets in the infrastructure of the national Internet network distribute their resources proportionally. In this way, the risk of depending on the reliability of N-critical links' operation is reduced. We have shown that monitoring of critical network elements is possible on the basis of the SNMP protocol, using detectors in the critical network nodes and a monitoring system. Since SNMP is commonly used among ISPs, there is no need to install a new system; an additional software installation is enough. The algorithm of network monitoring and its realization code were composed. All this allows for real-time centralized monitoring of network status, analysis of network operation failures, etc. We suggest implementing such a model, e.g., at the institutions managing electronic communications. Table 1. Critical elements calculation results. Note: to calculate λ_n for full-duplex connections, we propose a formula taking the largest of the in and out traffic values (see the λ_n formula above).
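As flagged above, the following sketch shows the λ_n computation from two SNMP poll cycles, using the quantities defined in the text (Δin and Δout from the ifInOctets/ifOutOctets counters in bytes, Δ_max from ifSpeed in bits/s, Δt = 60 s). The SNMP fetch itself is abstracted away, and the sample counter values are invented for illustration.

```python
# Sketch of the traffic-coefficient computation for a full-duplex link:
# lambda_n = 8 * max(delta_in, delta_out) / (Delta_max * delta_t).
def traffic_coefficient(in_octets, out_octets, if_speed_bps, dt_s=60.0):
    """in_octets/out_octets are (previous, current) byte-counter readings;
    if_speed_bps is the interface speed from ifSpeed, in bits/s."""
    def delta(prev, cur):
        d = cur - prev
        return d + 2**32 if d < 0 else d  # handle 32-bit counter wraparound

    busiest = max(delta(*in_octets), delta(*out_octets))
    return (busiest * 8) / (if_speed_bps * dt_s)  # octets -> bits

lam = traffic_coefficient((1_000_000_000, 7_600_000_000),
                          (1_000_000_000, 2_000_000_000),
                          if_speed_bps=1_000_000_000)
if lam >= 0.8:
    print(f"critical traffic level reached: lambda_n = {lam:.2f}")
```

Over Δt = 60 s, the example link moved 6.6 GB inbound on a 1 Gb/s interface, giving λ_n = 0.88, above the 0.8 alarm threshold.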
2019-01-24T00:36:50.213Z
2011-06-08T00:00:00.000
{ "year": 2011, "sha1": "54dd7228b86a58c2e4e67ba1cb3864ad81c23489", "oa_license": "CCBY", "oa_url": "https://eejournal.ktu.lt/index.php/elt/article/download/282/237", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "932cf3619d2b2bd4e6ed6640c4bea64fc4bd1c50", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
98340255
pes2o/s2orc
v3-fos-license
Cycloaddition of phenyl azide to unsymmetrical azabicyclic alkenes Addition of phenyl azide to selected derivatives of the 2-azabicyclo[2.2.1]hept-5-ene, 2-azabicyclo[2.2.1]hept-5-en-3-one and 2-azabicyclo[2.2.2]oct-5-en-3-one ring systems is described. Modest levels of regioselectivity are observed; 100% exo-facial selectivity is found in the [2.2.1] systems but exo- and endo-adducts are formed from the [2.2.2] substrate, allowing isolation of all four possible stereoisomers. Photolytic removal of dinitrogen from the triazolines gives aziridines which are potential precursors to stereospecifically functionalised aziridino-cyclopentanes and aziridino-cyclohexanes. Introduction The cycloaddition of azides to norbornenes and benzonorbornadienes 1 has been well studied, as has the effect of a bridging heteroatom in 7-oxa 2 and 7-aza 3 derivatives of the latter ring systems. Typical benzo-annelated examples demonstrate the characteristic addition from the exo-face giving triazolines 1a; subsequent photolysis of the triazolines yields aziridines 2a (Figure 1). Analogous addition of diazomethane gives pyrazolines 1b from which cyclopropanes 2b are accessible via photolytic deazetisation; 4 epoxides are accessible via direct epoxidation of norbornene, 5 as are the corresponding benzo-derivatives 2c from 1,4-iminonaphthalenes (benzonorbornadienes) and 1,4-imino-anthracenes. We are not aware of corresponding studies with bicyclic alkenes containing unsymmetrically placed amino-nitrogen and we have therefore examined azide addition to the strained bicyclic amine 5. Turning to higher homologues, potential substrates such as the heterobicyclo[2.2.2]octenes 3 unfortunately decompose rapidly by retro-Diels-Alder cycloaddition and we were therefore unable, for example, to achieve [4+2] cycloaddition of cyclic dienes to 3. 7 However, the alternative cycloaddition of dienes to the lactam 4 proceeds readily and subsequent removal of the carbonyl group using hydride reduction is straightforward. 7 We have therefore examined the reaction of 4 with phenyl azide in order to probe the potential facial selectivity and regioselectivity offered by this unsymmetrical substrate. We have also included the lactam 6. This readily available substrate 8 has formed the basis for recent syntheses of epoxy 9 and cyclopropano 10 derivatives of 7 and their conversion into stereospecifically substituted bicyclo[3.1.0]hexanes 8, which are intermediates in the synthesis of novel nucleoside variants (Scheme 1). We are prompted to report our results by the developing activity in this area and also by a recent report of the formation of the aziridines 7c by cycloaddition of azides to 6 (in its N-Boc-protected form) using high pressure, followed by deazetisation. 11 Scheme 1 Whilst we expected attack to occur exclusively from the exo-face of 5 and 6, we were mindful of a report that endo-addition has been observed in epoxidation of 6. 8 We expected both faces of the double bond in 4 to be accessible to cycloaddition on the basis of our earlier work on the addition of cyclic dienes. 7
There has been disagreement concerning the influence of the homoallylic nitrogen in analogues of 5 on the regioselectivity of addition to the double bond. 12 Results and Discussion The amine 5 was treated with phenyl azide in dichloromethane solution at room temperature; the reaction was followed by IR spectroscopy and was complete within 5 days (Scheme 2). A quantitative yield of triazolines 9 and 10 was obtained in a 40:60 ratio as measured by NMR integration. The exo-stereochemistry was assigned on the basis of the small coupling constants J 1,6 and J 4,5 (< 1 Hz). Early work with norbornene/phenyl azide adducts established that the proton adjacent to N=N was further downfield than that next to the N-phenyl substituent; 1 this distinction was evident in all of the adducts obtained in the present study and formed a consistent basis for assignment of 1 H NMR signals. Clearly, the bridgehead proton H 1 always appears downfield of H 4, but the amino-nitrogen at the 2-position exerts an additional influence in a variety of 2-alkyl-2-azabicyclo-[2.2.1]heptane and -[2.2.2]octane derivatives, causing H 6-endo to appear downfield of H 5-endo (Scheme 2) by between 0.2 and 0.5 ppm. 13 Despite the complexity of the spectrum of the mixture of triazolines 9 and 10, two downfield doublets were resolved at δ 4.91 (major) and 4.65 (minor), and these signals were therefore assigned to H 6-endo in 10 and H 5-endo in 9 respectively, consistent with the assignment of 10 as the major component. The cycloadducts could not be separated, and the mixture of 9 and 10 was photolysed in acetone solution in a quartz vessel using a medium pressure mercury lamp. Conversion into the single aziridine 11 was complete within 4 hours and gave a yield of 90% after chromatography on silica (Scheme 2). The corresponding reaction of phenyl azide with the lactam 6 (Scheme 3) occurred more slowly but was complete on heating overnight in dichloromethane in a sealed tube at 90 °C. Attempts to perform the reaction at higher temperatures in toluene led to substantial decomposition. Similar cycloadditions of azides to the N-Boc-protected lactam 6a were reported to require high pressure; 11 the use of a secondary lactam in our study may be significant in making the reaction easier, but we did not investigate this question further. The exo-selectivity in attack on 6 was maintained, as was the 40:60 ratio of cycloadducts 12:13 [the ratio of regioisomers 15:16 from 6a is based on isolated yields 11 and is included in Scheme 3 for comparison]. The assignments for 12 and 13 were confirmed by considering the major and minor signals at δ 3.13 and 3.22 due to the bridgehead protons H 4 (adjacent to the amide carbonyl group). Examination of bridgehead proton signals in the compounds produced in this work shows that the bridgehead proton (H 1 or H 4) on the same side as the N=N bond of the triazoline is consistently at lower field than the corresponding bridgehead proton adjacent to the triazoline N-phenyl, allowing assignment of the minor signal at δ 3.22 to H 4 in isomer 12. A NOESY experiment confirmed this, showing an interaction between H 4 and the aryl ring in the case of 13 but not 12. The isomeric triazolines 12 and 13 were not separated, and photolysis of the mixture gave 14 as a single stereoisomer. Clearly, 14 can be converted into a 6-azabicyclo[3.1.0]hexane derivative corresponding to 8 using established hydrolysis or reduction procedures.
We wanted to explore the feasibility of addition to the 2-azabicyclo[2.2.2]oct-5-en-3-one ring system as a potential source of the corresponding 7-azabicyclo[4.1.0]heptane homologues, and we chose the readily available benzo-derivative 4. Equimolar amounts of phenyl azide and the lactam 4 were heated in toluene solution at 85 °C for 17 hours. The product was shown by NMR and TLC analysis to consist of a mixture of four cycloadducts (Scheme 4), and the triazoline products were investigated in some detail. A small quantity of each of the triazolines 18-21 was separated by chromatography on silica (60% recovery, together with ca. 10% of unchanged 4). Additional mixed fractions were eluted containing [18 & 19] and [20 & 21]. Analysis of the 1 H NMR spectra of all of the isolated fractions gave the percentages indicated in Scheme 4. The mixed fractions were photolysed separately; each pair of compounds gave a single aziridine, showing that in one pair the aziridine was exo- to the benzo-group and in the other pair it was endo-. 14 The structural assignments shown in Scheme 4 were made on the basis of this information and a detailed analysis of the 1 H NMR data (Table 1). The dramatic upfield shift of the N-Me signal for 18 relative to the other three isomers (ca. 0.3 ppm) is consistent with the unique placement of the methyl group within the shielding zone of the triazoline N-phenyl group in this stereoisomer and provides a crucial point of reference. The relative J values measured for the aziridines 22 and 23 (Table 1) reflect similar differences in dihedral angle; homonuclear spin-decoupling experiments confirmed the assignments. These aziridines were produced efficiently (ca. 80% yield) as single stereoisomers by photolysis of mixed samples of [18 and 19] and [20 and 21] respectively, in acetone solvent. Summary We have shown that cycloaddition of phenyl azide to selected bicyclic amines and secondary and tertiary lactams based on the 2-azabicyclo[2.2.1]hept-5-ene and 2-azabicyclo[2.2.2]oct-5-ene ring systems occurs at modest temperatures without the need for high pressure. Modest regioselectivity is observed in attack on the double bond, with a very slight preference for the adducts having the N-phenyl group further from the amino- or amido-nitrogen. 12b Only exo-products are formed in attack on the bicyclo[2.2.1]hept-5-ene examples, but there is no significant facial discrimination as far as the bicyclo[2.2.2]oct-6-ene system is concerned, allowing isolation and characterisation of all four possible stereoisomers. The yields of aziridines from photolysis of the triazolines in acetone solvent in the present work were significantly higher than those reported for photolyses carried out in acetonitrile. 11 The established hydrolysis and reductive cleavage of the amide bond in bicyclic lactams 9,10,11 opens the way to a wider range of nucleoside variants and, with this in mind, we are currently looking at simpler 2-azabicyclo[2.2.2]oct-6-ene examples which should allow formation of both aziridine stereoisomers in the higher homologues of 8 based on the 7-azabicyclo[4.1.0]heptane ring system.
Experimental Section General Procedures. NMR spectra were recorded on Varian EM 390 (90 MHz), Bruker ARX 250, AM 300, or DPX 300 spectrometers. Spectra were measured in CDCl 3 with tetramethylsilane (TMS) as internal reference unless indicated otherwise. Signal characteristics are described using standard abbreviations: s (singlet), d (doublet), dd (doublet of doublets), m (multiplet), br (broad). Selective spin-decoupling experiments were performed on the series of compounds 18-23 in order to allow measurement of J values and to confirm the assignment of the methine protons. Selected NOESY experiments were performed as described in the discussion section. In the 13 C spectra, (s), (d), (t), (q) are used to indicate quaternary, methine, methylene and methyl carbons respectively, as shown by DEPT experiments. IR spectra were recorded on a Perkin-Elmer 298 spectrometer as solutions in CH 2 Cl 2 unless indicated otherwise. Mass spectra were measured routinely on VG Micromass 14 (EI) [an asterisk is used to indicate the base peak in EI spectra] or Micromass Quattro LC (ES) spectrometers. Accurate mass measurements were obtained using a Kratos Concept mass spectrometer (FAB); they were measured to 5 decimal places but are quoted to 4. Melting point measurements were made using a Kofler hot stage apparatus and are uncorrected. Petroleum ether refers to the fraction b.p. 40-60 °C. Addition of phenyl azide to lactam 6. A solution of lactam 6 8 (0.2 g; 1.83 mmol) and phenyl azide (0.22 g; 1.85 mmol) in dichloromethane (2 mL) was heated at 90 °C in a sealed tube for 16 h with magnetic stirring. After removal of the solvent under vacuum, the crude product was washed with cold diethyl ether to give a mixture of the two triazolines 12 and 13 (0.33 g; 79%) which could not be separated. Photolysis of triazolines 12 and 13 to give exo-aziridine 14. A sample of 12 and 13 (0.15 g; 0.657 mmol) was irradiated in acetone (50 mL) in a quartz tube for 4.5 h using a medium pressure mercury lamp. The solvent was evaporated and the product chromatographed on silica using 2:1 diethyl ether:petroleum ether to give the aziridine 14 as a crystalline solid (0.125 g; 95%) which was recrystallised from ethyl acetate/diethyl ether to give colourless crystals, m.p. 140-142 °C. Addition of phenyl azide to lactam 4. A solution of lactam 4 18 (0.31 g; 1.68 mmol) and phenyl azide (0.2 g; 1.68 mmol) in toluene (2 mL) was heated at 85 °C for 17 h. After cooling, the toluene was removed with a pipette and the yellow solid which remained was then washed with petroleum ether (yield 0.375 g; 75%). TLC showed the presence of four compounds. The product was chromatographed on silica using 1:1 diethyl ether:ethyl acetate as eluant to give samples of the four triazolines as pure fractions, together with mixed fractions (total 60%) and a small quantity of unchanged 17. Analysis of the 1 H NMR spectra gave the following overall yields: 18 (19%); 19 (33%); 20 (13%); 21 (32%). 1 H NMR data for all four compounds are shown in Table 1. Photolysis of triazolines 18 and 19; formation of exo-aziridine 22. A mixture of 18 and 19 (96 mg) in acetone (52 mL) was irradiated in a quartz tube for 2.5 h using a Hanovia medium pressure lamp. The solvent was removed under vacuum and the product chromatographed on silica using 70:30 diethyl ether:petroleum ether to give 22 as white crystals (71 mg; 81%), m.p. 184-186 °C.
Photolysis of triazolines 20 and 21; formation of endo-aziridine 23. A mixture of 20 and 21 (58 mg) in acetone (30 mL) was irradiated for 2.5 h and was chromatographed as described above to give 23 as a white waxy solid (42 mg; 80%). Table 1. 1 H NMR data for compounds 18-23.
2018-12-15T06:31:12.456Z
2002-09-12T00:00:00.000
{ "year": 2002, "sha1": "d71db90ebfa85e2bfe58bd638f07b6bb520c28cf", "oa_license": "CCBY", "oa_url": "https://www.arkat-usa.org/get-file/20535/", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "57edc62dbdac3643ad5210a35d23231c4e070f2f", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
257683940
pes2o/s2orc
v3-fos-license
Phthalides Isolated from the Endolichenic Arthrinium sp. EL000127 Exhibits Antiangiogenic Activity Endolichenic fungi (ELF) produce specialized metabolites that have various medicinal properties. Inhibition of tumor angiogenesis efficaciously suppresses many types of cancer. This study aimed to discover novel antiangiogenic agents from specialized metabolite extracts of ELF strains isolated from Korean lichens. The EtOAc extracts of 51 ELF strains were subjected to a screening pipeline consisting of cell viability, scratch wound healing, and Transwell migration assays. The EtOAc extract of Arthrinium sp. EL000127 showed the most potent inhibitory activity against the chemotactic migration of human umbilical vein endothelial cells (HUVEC). Targeted isolation on the major LC-MS peaks exhibited a previously known phthalide, 3-O-methylcyclopolic acid (1), and two unknown analogues of 1, 3-O-phenylethylcyclopolic acid (2) and 3-O-p-hydroxyphenylethylcyclopolic acid (3). The structures were characterized by MS and NMR analyses. All the isolates were acquired and applied to bioassays as racemates due to spontaneous racemization. Among the isolates, compound 3 effectively inhibits HUVEC motility by suppressing the mRNA expression of genes regulating epithelial cell survival and motility, which suggests that compound 3 is a potent antiangiogenic agent suitable for further exploration as a potential novel therapeutic against cancers. Lichens are a symbiosis between a fungus (the mycobiont) and a cyanobacterium or a green alga (the photobiont); lichens rarely contain both a cyanobacterium and a green alga. 1 Lichen thalli are composed of an unknown number of organisms, and endolichenic microbes reside inside lichen thalli without producing any visible disease symptoms. 2 There are two main types of endolichenic microbes: endolichenic fungi (ELF) and endolichenic bacteria. Oligotrophic endolichenic fungi do not damage or support the production of fructifications at the thallus surface. 3 Some ELF species produce bioactive specialized metabolites that have medicinal and economic potential. Examples of the chemically diverse array of bioactive specialized metabolites include alkaloids, steroids, xanthones, benzopyranoids, peptides, and alicyclic compounds. These metabolites have cytotoxic, antioxidant, antifungal, and antibacterial bioactivities, which are very important qualities in drug development in the pharmaceutical industry. 4,5 The ability of ELF to produce unique specialized metabolites with anticancer properties provides a novel approach for identifying effective cancer therapeutics. Two cellular processes, vasculogenesis and angiogenesis, are involved in the development of the vasculature during embryogenesis. The development of new endothelial cells and their assembly into tubes is called vasculogenesis, and the growth of blood vessels from the existing vasculature is called angiogenesis. 6 After this morphogenesis, the normal vasculature becomes quiescent in the adult body, except during wound healing and female reproductive cycling. 7 By contrast, angiogenesis is constantly activated during cancer tumorigenesis to support tumor progression by forming new blood vessels to maintain a continuous supply of oxygen and nutrients. 8 Therefore, suppressing angiogenesis is one of the strategies of cancer therapeutics. The development of new chemical agents that inhibit angiogenesis is required to suppress tumor invasion and metastasis and eventually to inhibit cancer development.
There is a long history of investigating lichen-derived substances for pharmaceutical properties, especially for their use as anticancer agents. 9−11 The investigation of the pharmacological properties of the specialized metabolites of ELF for medicinal purposes is a fast-growing area of research. But there are no studies investigating the ability of ELF-derived compounds to inhibit cancer by inhibiting tumor angiogenesis. Therefore, this study aimed to screen out potential antiangiogenic agents from specialized metabolites biosynthesized by ELF isolated from our collection of Korean lichens. Our findings revealed that 3-O-p-hydroxyphenylethylcyclopolic acid (3), derived from Arthrinium sp. EL000127, inhibited angiogenesis by suppressing HUVEC survival and motility. ■ RESULTS AND DISCUSSION The cytotoxic effect of the 51 ELF extracts (Table S1) on HUVEC was assessed by MTT assay. As shown in Figure 1, HUVEC had varying cell viabilities when treated with the ELF extracts at a concentration of 10 μg/mL. Among the 51 ELF extracts tested, 39 exhibited low or no cytotoxicity on HUVEC (cell viability >60%) and were subjected to the further step for evaluating the ability to inhibit angiogenesis. Cell migration plays a very important role in angiogenesis, as it is the pivotal step for the formation of blood vessels by endothelial cells. 12 To identify the inhibitory ability of the 39 ELF extracts against HUVEC migration, the wound healing assay was performed. Thirty-three ELF extracts showed a measurable effect on HUVEC migration (Figures 2a and S1). The results revealed that ten ELF extracts (RWD < 100%), from strains EL002004, EL000175, EL000257, EL000099, EL000127, EL000181, EL001922, EL000027, EL001876, and EL001998, caused a lower relative wound density (RWD) in HUVEC than the control at 10 μg/mL (Figure 2b). The ten extracts were subjected to a Transwell migration assay to evaluate the inhibitory effect of the ELF extracts on the chemotactic motility of HUVEC. EL000127 showed the highest inhibitory effect (40%) against the chemotactic motility of HUVEC (Figure 3a,b). As EL000127 was the most promising candidate among the tested strains, it was subjected to further chemical and biological characterization. Angiogenesis is initiated by vessel sprouting, which is mainly driven by VEGF signaling. 13 Therefore, suppressing endothelial cell migration in response to a signal stimulus is a key step in inhibiting tumor angiogenesis. In our screening, the extract of Arthrinium sp. EL000127 showed more potent suppression of HUVEC migration in the presence of VEGF than any other candidate; it also suppressed mechanotaxic migration of HUVEC in the wound healing assay. EL000127 was an ELF strain isolated from a lichen thallus of Cladonia squamosa collected from Mt. Halla, Jeju Island, in 2009. According to ITS sequence analysis based on BLAST searches of the GenBank database (GenBank Accession No. MW629845), EL000127 showed 98.65% similarity to the fungus Arthrinium pseudosinense, which suggested EL000127 is a member of the genus Arthrinium. Figure 1. Cytotoxic effects of 51 ELF extracts isolated from Korean lichens on HUVEC. HUVEC were treated with 51 ELF extracts at 10 μg/mL for 48 h, and cell viability was measured by MTT assay. Data are represented as mean ± SD (standard deviation), n = 3. *p < 0.05; **p < 0.01; ***p < 0.001; NS, no significant difference when compared with the DMSO-treated group in each cell line.
Arthrinium pseudosinense belongs to the genus Arthrinium Kunze in the family Apiosporaceae. Endophytes, pathogens, and saprobes isolated from various substrates such as lichens, plants, soil debris, and marine algae belong to the genus Arthrinium Kunze. 14,15 To the best of our knowledge, this is the first study to reveal the promising bioactivity of specialized metabolites of ELF in the lichen Cladonia squamosa. LC-MS/MS analysis of the Arthrinium sp. EL000127 extract exhibited several chromatographic peaks. Putative identification was attempted using reference spectral library matching, which is part of a molecular networking workflow in GNPS; 16 however, none of the spectra were annotated. Three chromatographic peaks showing high ion intensities in the LC-MS base peak ion (BPI) chromatogram were prioritized and isolated to afford compounds 1−3 (Figure 4a). Compared with 1, the NMR data of 2 suggested the additional presence of a phenylethyl moiety. The 1 H−1 H COSY correlations between H-7′ and H-8′ and the HMBC correlations from H-2′/5′ (δ H 7.32) to C-7′ confirmed the spin system of the phenylethyl group, and the HMBC from H-3 (δ H 6.02) to C-8′ confirmed its attachment at C-3 via an ether bond (Figure 4c). The molecular formula of 2 was suggested as C 19 … Figure 2. Ten ELF extracts inhibited HUVEC migration in the wound healing assay. (a) Quantitative analysis of the migratory ability of HUVEC expressed as the density of the wound region relative to the density of the cell region (RWD) after treatment with the extracts (10 μg/mL) of 33 ELF. Three images per well were acquired, and scanning was performed every 2 h for 24 h. (b) EL002004, EL000175, EL000257, EL000099, EL000127, EL000181, EL001922, EL000027, EL001876, and EL001998 (10 μg/mL) inhibited HUVEC migration in the wound healing assay. Data represent mean ± SD (standard deviation), n = 3. *p < 0.05; **p < 0.01; NS, no significant difference compared with the DMSO-treated group. Compounds 1−3 showed no optical activity in polarimetry and ECD analysis, which indicated that all the isolates are racemic mixtures. Compound 1 was previously reported to show spontaneous racemization of the phthalide scaffold, 17 and similar phenomena were reported for other phthalides. 18−20 Our attempt at chiral separation of compound 3 confirmed equal amounts of the enantiomers (Figure 4d); however, they could not be kept in enantiomerically pure form due to the fast rate of racemization. Thus, all the isolates were subjected to bioassays as racemic mixtures. Based on the previously suggested mechanisms of phthalide racemization via aldehyde−carboxylic acid tautomers, 19,20 compounds 1−3 were proposed to racemize via their enol ether tautomers (Figure 4e). The cell viability of HUVEC was measured by MTT assay after treatment with various concentrations of compounds 1−3 for 48 h. Cell viability was dose-dependently decreased by the treatments (Figure 5). Compounds 1−3 exhibited very weak cytotoxicity against HUVEC, with IC 50 values of 215.6 μM, 43.8 μM, and 1.83 mM, respectively. Nontoxic concentrations of 1−3 were used for further investigations of angiogenesis. To determine the effect of the isolates on the chemotactic motility of HUVEC induced by VEGF, a Transwell migration assay was performed (Figure 6a). Compounds 1 and 3 inhibited the chemotactic motility of HUVEC by approximately 40% and 30% at 5 μM and 45% and 33% at 10 μM after 24 h, respectively, while 2 inhibited it by approximately 22% and 32% at 2.2 and 4.4 μM, respectively (Figure 6b).
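IC50 values such as those quoted above are typically obtained by fitting a sigmoidal dose-response curve to the viability data. The sketch below shows one common approach, a four-parameter logistic fit; the concentration-viability pairs are invented for illustration and this is not the authors' analysis code.

```python
# Sketch of estimating an IC50 from MTT viability data with a
# four-parameter logistic (4PL) dose-response fit.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic curve: viability as a function of dose."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc_um = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)   # made up
viability = np.array([98, 95, 90, 78, 62, 41, 22], dtype=float)   # % of control

params, _ = curve_fit(four_pl, conc_um, viability,
                      p0=[5.0, 100.0, 100.0, 1.0],
                      bounds=([0, 50, 1e-3, 0.1], [50, 120, 1e4, 5]))
print(f"estimated IC50 = {params[2]:.1f} uM")
```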
The mRNA expression of genes regulating endothelial cell survival and motility was tested after treatment with 1 and 3 at a concentration of 10 μM and 2 at a concentration of 4.4 μM for 24 h. Compound 3 significantly decreased the expression of VEGF and of genes related to epithelial cell survival, Akt and mTOR. Furthermore, 3 significantly downregulated the expression of Src, cdc42, and MAPK, genes that regulate the migration of epithelial cells. Compound 2 significantly decreased the expression of mTOR, Src, cdc42, and MAPK, and 1 significantly decreased the mRNA levels of mTOR and Src (Figure 6c). In addition, 3 significantly suppressed the phosphorylation of Akt and mTOR as detected by Western blotting (Figure S2). VEGF is the pivotal factor of angiogenesis. Upon the binding of VEGF to VEGFR, phosphorylated VEGFR activates downstream signaling and initiates angiogenesis by recruiting endothelial progenitors from the bone marrow and promoting HUVEC proliferation. 21 Phosphorylation of Akt via VEGF signaling induces the phosphorylation of mTOR and eventually promotes HUVEC proliferation. 22 Activation of the PI3K/Akt/mTOR signaling pathway plays a key role in regulating angiogenic functions in both epithelial and tumor cells. While regulating many cellular functions in endothelial cells such as survival, migration, proliferation, and blood vessel formation, PI3K/Akt/mTOR signaling promotes angiogenesis by stimulating the secretion of VEGF and modulating the expression of nitric oxide and angiopoietin in tumor cells. Furthermore, activation of mTOR in tumor cells induces HIF-1α-mediated VEGF production under hypoxia. 23,24 Cdc42 is a small GTP-binding protein which belongs to the Rho family of GTPases. Cdc42 regulates endothelial cell motility by controlling the movements of the actin cytoskeleton, the Rac-dependent formation of lamellipodia, and the maintenance of cell polarity. Activation of Src by VEGFR2 leads to the activation of RhoA, which plays a significant role in endothelial cell migration by causing stress fiber formation 25 (Figure 6d). Taken together, compound 3 effectively suppresses angiogenesis at the 24 h time point by significantly decreasing HUVEC survival and migration. As both compounds 1 and 2 significantly suppressed the chemotactic motility of HUVEC, further investigations at different time points are required to confirm their antiangiogenic effects. Our study demonstrates that specialized metabolites of endolichenic fungi are a promising source for discovering anticancer agents and highlights the urgent need for and importance of thorough investigations into the bioactive metabolites of ELF. ■ MATERIALS AND METHODS Fifty-one endolichenic fungi associated with Korean lichens, including EL000127, were isolated using a surface sterilization method. 26 The isolated strains were maintained on potato dextrose agar (PDA) medium at 25 °C. For screening and preparative-scale cultures, ELF mycelia grown on agar were cut and inoculated into 200 mL of potato dextrose broth (PDB) in 500 mL Erlenmeyer flasks and incubated at 25 °C in a shaking incubator at 150 rpm for 3−4 weeks. The specialized metabolites were extracted by adding 200 mL of EtOAc to each flask, filtering, and separating the EtOAc-soluble layer. Crude extracts were evaporated to dryness under vacuum using a rotary evaporator. The crude extracts were dissolved in 100% DMSO and subjected to the screening. Cell Viability Assay. The MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide; Sigma, St. Louis, MO, U.S.A.)
assay was performed to measure the proliferation and viability of HUVEC. Cells were seeded at a density of 3 × 10³ cells/well in 96-well plates, treated with the 51 different ELF extracts or compounds 1−3 for 48 h, and then incubated in the MTT reagent for 4 h. The medium was aspirated, and 150 μL of DMSO was added to each well. The absorbance was measured at 540 nm using a microplate reader and analyzed with Gen 5 (2.03.1) software (BioTek, Winooski, VT, U.S.A.). Wound Healing Assay. HUVEC were plated at a density of 2 × 10⁴ cells/well on 96-well ImageLock tissue culture plates (Essen BioScience, Ann Arbor, MI, U.S.A.) and grown overnight to confluence. Monolayer cells were scratched with a WoundMaker (Essen BioScience) to create precise and reproducible wounds in all wells. The cells were then washed twice with serum-free ECM to remove floating cells and incubated in ECM culture medium supplemented with 1% FBS. Cells were treated with 10 μg/mL of each of the 39 ELF extracts. Plates were imaged using an IncuCyte Zoom instrument with a 10× objective and analyzed using the standard scan type. Three images per well were acquired, and scanning was performed every 2 h for 24 h. The migration ability of HUVEC was expressed as the density of the wound region relative to the density of the cell region (relative wound density, RWD) using IncuCyte software. Three independent experiments were performed. Transwell Migration Assay. The chemotactic motility of HUVEC was determined using a Transwell migration assay with an 8 μm pore size polycarbonate membrane Transwell (Corning, NY, U.S.A.) coated with 0.1% gelatin. Fresh ECM supplemented with 4 ng/mL vascular endothelial growth factor (VEGF; R&D Systems, Minneapolis, MN, U.S.A.) was placed in the lower chamber, and HUVEC (4 × 10⁴ cells/well) were seeded in the top chamber. Then, cells were treated with the 10 selected ELF extracts or compounds 1−3 for 24 h at 37 °C with 5% CO₂. After incubation, nonmigrated cells on the top surface of the membrane were gently scraped away with a cotton swab. The upper chambers were fixed and stained with a Diff-Quik kit (Sysmex, Kobe, Japan). The migrated cells were analyzed under a light microscope in five randomly selected fields. Each experiment was performed in triplicate. Quantitative Real-Time PCR. Total RNA of HUVEC treated with compounds 1−3 for 24 h was extracted using RNAiso Plus (TaKaRa) according to the manufacturer's instructions. A total of 3 μg of RNA from each treated group was reverse transcribed to cDNA using an M-MLV reverse transcriptase kit (Invitrogen, Carlsbad, CA, U.S.A.). mRNA expression was measured using SYBR green reagent (Enzynomics, Seoul, South Korea), and analyses were performed on a CFX instrument (Bio-Rad, Hercules, CA, U.S.A.). The primers used are listed in Table S2. Western Blotting. HUVEC were treated with 10 μM of 1 and 3 and 4.4 μM of 2 for 24 h, harvested, and lysed in lysis buffer. A total of 25 μg of protein from each treatment group was separated by SDS-PAGE, transferred to a blotting membrane, and blocked with 5% skim milk for 1 h. Membranes were incubated with primary antibodies against Akt, p-Akt, mTOR, p-mTOR, and actin (Cell Signaling Technology, MA, U.S.A.) for 2 h at room temperature (RT), followed by incubation with horseradish peroxidase-conjugated secondary antibodies (Thermo Fisher Scientific) for 1 h at RT.
Wound Healing Assay.
HUVEC were plated at a density of 2 × 10⁴ cells/well on 96-well ImageLock tissue culture plates (Essen BioScience, Ann Arbor, MI, U.S.A.) and grown overnight to confluence. Monolayer cells were scratched with a WoundMaker (Essen BioScience) to create precise and reproducible wounds in all wells. The cells were then washed twice with serum-free ECM to remove floating cells and incubated in ECM culture medium supplemented with 1% FBS. Cells were treated with 10 μg/mL of each of 39 ELF extracts. Plates were imaged using an IncuCyte Zoom instrument with a 10× objective and analyzed using the standard scan type. Three images per well were acquired, and scanning was performed every 2 h for 24 h. The migration ability of HUVEC was expressed as the density of the wound region relative to the density of the cell region (relative wound density, RWD) using IncuCyte software. Three independent experiments were performed.

Transwell Migration Assay.
The chemotactic motility of HUVEC was determined using a Transwell migration assay with an 8 μm pore size polycarbonate membrane Transwell (Corning, NY, U.S.A.) coated with 0.1% gelatin. Fresh ECM supplemented with 4 ng/mL vascular endothelial growth factor (VEGF; R&D Systems, Minneapolis, MN, U.S.A.) was placed in the lower chamber, and HUVEC (4 × 10⁴ cells/well) were seeded in the top chamber. Cells were then treated with 10 selected ELF extracts or compounds 1−3 for 24 h at 37°C with 5% CO2. After incubation, nonmigrated cells on the top surface of the membrane were gently scraped away with a cotton swab. The upper chambers were fixed and stained with a Diff-Quik kit (Sysmex, Kobe, Japan). The migrated cells were analyzed under a light microscope in five randomly selected fields. Each experiment was performed in triplicate.

Quantitative Real-Time PCR.
Total RNA of HUVEC treated with compounds 1−3 for 24 h was extracted using RNAiso Plus (TaKaRa) according to the manufacturer's instructions. A total of 3 μg of RNA from each treated group was reverse transcribed to cDNA using an M-MLV reverse transcriptase kit (Invitrogen, Carlsbad, CA, U.S.A.). mRNA expression was measured using SYBR green reagent (Enzynomics, Seoul, South Korea), and analyses were performed on a CFX instrument (Bio-Rad, Hercules, CA, U.S.A.). The primers used are listed in Table S2.

Western Blotting.
HUVEC were treated with 10 μM of 1 and 3 or 4.4 μM of 2 for 24 h, harvested, and lysed in lysis buffer. A total of 25 μg of protein from each treatment group was separated by SDS-PAGE, transferred to a blotting membrane, and blocked with 5% skim milk for 1 h. Membranes were incubated with primary antibodies against Akt, p-Akt, mTOR, p-mTOR, and actin (Cell Signaling Technology, MA, U.S.A.) for 2 h at room temperature (RT), followed by incubation with horseradish peroxidase-conjugated secondary antibodies (Thermo Fisher Scientific) for 1 h at RT. Protein bands were detected by chemiluminescence imaging (Amersham ImageQuant 800 Western blot imaging system) and quantified with Multi Gauge 3.0 software. Relative density was calculated against the density of the actin bands.

Statistical Analysis.
All experiments were performed in triplicate. Data are expressed as means ± standard deviation (SD). All statistical analyses were performed using IBM Statistical Package for the Social Sciences (SPSS) version 22. Statistical significance between two groups was assessed using Student's t test. Unless indicated otherwise, a p-value < 0.05 was considered significant.

ASSOCIATED CONTENT
Supporting Information
The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsomega.3c00876. Figure S1, effect of 33 ELF extracts on HUVEC migration in the IncuCyte wound healing assay; Figure S2, compound 3 significantly suppressed the phosphorylation of proangiogenic proteins in HUVEC; Table S1, fifty-one ELF isolated from Korean lichens screened for their effects on cell viability; and Table S2, list of primers used.
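One note on the qPCR quantification above: the paper reports relative mRNA levels but does not spell out the formula. A common choice for SYBR-green data is the 2^-ΔΔCt method, and the sketch below assumes that method; the Ct values and the use of a housekeeping reference gene are hypothetical illustrations (the actual primers are listed in Table S2).

```python
# Minimal sketch of the 2^-ddCt relative-quantification method (an assumed,
# commonly used approach; not confirmed by the paper). Ct values are hypothetical.

def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Fold change of a target gene in treated vs. control samples."""
    d_ct_treated = ct_target - ct_ref             # normalise to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_treated - d_ct_control           # normalise to control group
    return 2.0 ** (-dd_ct)

# Hypothetical mTOR Ct values in compound-treated vs. vehicle-control HUVEC
print(fold_change(ct_target=26.8, ct_ref=18.2,
                  ct_target_ctrl=25.1, ct_ref_ctrl=18.0))  # ~0.35, i.e. downregulated
```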
Influence of scar age, laser type and laser treatment intervals on adult burn scars: A systematic review and meta-analysis

Aim
The study aims to identify whether factors such as time to initiation of laser therapy following scar formation, type of laser used, laser treatment interval and presence of complications influence burn scar outcomes in adults, by meta-analysis of previous studies.

Methods
A literature search was conducted in May 2022 in seven databases to select studies on the effects of laser therapy in adult hypertrophic burn scars. The study protocol was registered with PROSPERO (CRD42022347836).

Results
Eleven studies were included in the meta-analysis, with a total of 491 patients. Laser therapy significantly improved overall VSS/POSAS, vascularity, pliability, pigmentation and scar height of burn scars. Vascularity improvement was greater when laser therapy was performed >12 months (-1.50 [95%CI = -2.58; -0.42], p = 0.01) than <12 months after injury (-0.39 [95%CI = -0.68; -0.10], p = 0.01); the same was true for scar height (-1.36 [95%CI = -2.07; -0.66], p<0.001 vs -0.56 [95%CI = -0.70; -0.42], p<0.001). Pulse dye laser (-4.35 [95%CI = -6.83; -1.86], p<0.001) gave a greater reduction in VSS/POSAS scores than non-ablative (-1.52 [95%CI = -2.24; -0.83], p<0.001) and ablative lasers (-0.95 [95%CI = -1.31; -0.59], p<0.001).

Conclusion
Efficacy of laser therapy is influenced by the time elapsed since injury, the type of laser used and the interval between laser treatments. Significant heterogeneity was observed among studies, suggesting the need to explore other factors that may affect scar outcomes.

Introduction
Pathological scarring, such as hypertrophic scarring, has a significant impact on a patient's quality of life. Complications following pathological scarring include contraction, reduction in range of movement, pruritus, pain and discomfort [1]. In 2014, a literature review showed that 73% of patients with hypertrophic scarring experience pruritus and 68% experience pain [2]. These complications are often long-term, with research suggesting that the impact on the body's function, particularly after a major burn, can last beyond two years [3].

Treatment of pathological burn scars may be cosmetic, conservative or surgical. Laser therapy is a conservative method of treatment that offers a minimally invasive, low-risk approach to the treatment of pathological burn scars. Laser types are classified into ablative carbon dioxide (CO2) lasers, non-ablative fractional lasers and pulse dye lasers (PDLs). Ablative CO2 lasers are used to reduce scar erythema for improved visibility by targeting both the dermal and epidermal layers of the skin, whereas non-ablative and fractional photothermolysis lasers address the thickness and volume of the scar by selectively damaging the dermis [4]. PDLs rely on light at a wavelength primarily absorbed by oxyhaemoglobin to improve scar vascularity and visibility [5]. All forms of laser play an increasingly important role in burn scar management. However, the efficacy of treatment varies, and may depend on the type of laser used, the laser wavelength and, particularly, the timing of initiation of laser therapy [4,6].
The decision of how soon to begin laser therapy has depended upon scar maturation and other characteristics such as patient age, skin type, type of scar and co-morbidities. These factors are commonly used to predict treatment outcomes and prognosis [4]. However, other important factors, such as the timing of initiation of laser therapy, laser type and treatment interval, are also known to affect treatment outcomes, yet there is extensive heterogeneity within the literature surrounding the influence of these factors [7]. Optimal timing for laser therapy was once considered to be when the scar had reached full maturation. However, recent studies have suggested an association between early initiation and a decrease in symptoms and contractures, improvement in mobility and a better overall rehabilitation process, for example with the use of vascular devices in the months following burn or surgical injury [8,9]. With evidence also suggesting that the incidence of adverse events of laser treatment is not affected by the age of the scar at the time of treatment [7], early laser treatment has become a potential method to minimise scar formation. Strengthening the evidence for factors that influence the efficacy of laser therapy would allow more personalised and targeted treatment, depending upon scar maturation and patient characteristics, ultimately improving outcomes.

Recent meta-analyses have shown the efficacy of laser therapy on burn scars [10][11][12][13]. Although a positive outcome was observed in all studies, the individual studies only focused on one particular laser (CO2) and observed significant heterogeneity in their data. No meta-analysis to date has considered the effects of timing of laser therapy on burn scar outcomes in adults, raising the possibility that this factor may be causing the heterogeneity.

The aim of this study was therefore to identify the true effect of laser therapy on burn scar outcomes (VSS/POSAS scores, vascularity, pliability, pigmentation and scar height) through a comprehensive meta-analysis, considering the influence of different times to initiate treatment, types of laser, laser treatment intervals, complications of laser therapy, and the controls used within studies. Through exploration of these factors, it will be possible to further optimise treatment protocols for laser therapy and provide personalised patient care.

This study focused on the adult population only, owing to differences in the physiological and pathological responses to burn injuries between adults and children and potentially different responses to laser therapy [14,15].

Methods
This review was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.

Protocol and registration
The study protocol was registered with PROSPERO (CRD42022347836).

Eligibility criteria
The PICOS inclusion criteria were: (1) human adult patients (>18 years of age) with any post-burn hypertrophic scars; (2) undergoing interventions with laser therapy; (3) compared to themselves before treatment and/or a control group without laser therapy;
(4) assessing objective scar measurement tools (e.g. via ultrasound-guided measurement) and/or subjective Vancouver Scar Scale (VSS) / Patient and Observer Scar Assessment Scale (POSAS) scores for pliability, pigmentation, vascularity and scar height; (5) retrospective, prospective or randomised controlled trial (RCT) study designs. Only studies written in English or Chinese were included. No restriction on date of publication was applied.

Exclusion criteria
The exclusion criteria for this study were: acne scars, surgical scars, articles published solely in abstract form (conference abstracts), article reviews, literature reviews, case reports and animal studies. Case reports were excluded due to their underpowered nature.

Information sources
The databases accessed for the literature search were: PubMed, Google Scholar, EMBASE, Scopus, the Cochrane Database of Systematic Reviews and the University Library of York and Hull. All databases were searched from inception to 25th May 2022.

Study selection
All articles were downloaded into Covidence, a programme used for primary screening and data extraction by researchers conducting standard intervention reviews. Duplicates were deleted and the remaining articles were screened by two authors independently, following predefined criteria. Full texts of included studies were retrieved and further analysed independently, and any discrepancies concerning an article's inclusion/exclusion were resolved through discussion among all authors. Articles written in Chinese were translated into English for inclusion in the title and abstract screening.

Data collection process
Data extraction was completed using a bespoke data extraction form. Data were extracted for the following categories: population (number of patients, age, scar age), intervention (laser type, number of treatments, treatment interval, scar assessment tools used) and study outcomes (overall VSS/POSAS scores, vascularity, pliability, pigmentation, scar height, complications). Two independent reviewers extracted the data from the studies and recorded the mean and standard deviation before and after treatment for the 'early' and 'latent' groups. Any discrepancies or disagreements with regard to data extraction were resolved through discussion with all authors.

For the purposes of the systematic review, the following terms were defined. 'Laser' was defined as a scar therapy utilising photothermal energy to target intra- and extracellular structures within the scar tissue [16]; all types of laser were included (ablative, PDL, non-ablative). 'Hypertrophic burn scars' were defined as pathological scarring due to major burns, characterised by red, raised and rigid scar tissue that contracts and limits normal motion of the skin [17]. The age of the scar was categorised as 'early' (12 months old or less) or 'latent' (more than 12 months old).

Risk of bias in individual studies
To determine the methodological quality and risk of bias of the included articles, full-text articles were assessed using the ROBINS-E tool for non-randomised studies of interventions and the RoB tool for randomised controlled trials [18,19]. These results were presented in Robvis format [20]. Two independent reviewers assessed the risk of bias and any discrepancies between their results were resolved by a third reviewer.
Statistical analysis
The five meta-analyses, testing the effects of early and latent laser therapy on (1) overall scar improvement (assessed by VSS and POSAS, in score points), (2) scar vascularity (score points), (3) scar pliability (score points), (4) scar pigmentation (score points) and (5) scar height (score points/millimetres) in burn scars of adult patients, were performed using the Comprehensive Meta-Analysis (CMA) software, version 3.3.070. The effect size was calculated as the standardised mean difference between before and after intervention (retrospective or prospective studies) or between the differences in delta (before versus after) of control and intervention groups (RCTs). Fixed-effect models were selected when there was no significant heterogeneity, and random-effects models when heterogeneity was significant. Conservative pre-post correlations of 0.05 were assumed [21].

Subgroup analyses were conducted to explore confounding factors that could be driving any heterogeneity in each of the five outcomes. The subgroup analyses considered the effects of characteristics of the study population, treatment methods and duration of the intervention on the main effects. The following subgroups were tested: scar age (early [<12 months] versus latent [>12 months] initiation of treatment); type of laser (ablative, PDL or non-ablative); interval length between laser treatments (<4 weeks, 4-8 weeks, >8 weeks); presence or absence of reported complications (presence: bleeding, swelling, hyperpigmentation, hypopigmentation, pain, blisters, pruritus, erythema or seepage; absence: no complications); and use of a control group (with or without). When an included study did not fit a subgroup category or did not report the information, the study was excluded from that specific subgroup analysis. For all analyses, a p-value < 0.05 was considered significant. The Egger test was used to assess publication bias, again at the p < 0.05 level.
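To make the pooling step concrete: under a random-effects model, each study is weighted by the inverse of its within-study variance plus the between-study variance τ². The sketch below implements the widely used DerSimonian-Laird estimator on hypothetical standardised mean differences; the authors used the CMA software, so the exact routines (and the handling of the pre-post correlation) may differ.

```python
# Minimal sketch of random-effects pooling via DerSimonian-Laird.
# Effect sizes (SMD) and within-study variances below are hypothetical.
import math

smd = [-1.2, -0.8, -1.6, -0.5]        # hypothetical per-study effect sizes
var = [0.10, 0.08, 0.20, 0.05]        # hypothetical within-study variances

w_fixed = [1.0 / v for v in var]                       # inverse-variance weights
pooled_f = sum(w * e for w, e in zip(w_fixed, smd)) / sum(w_fixed)

# Cochran's Q and the DL estimate of between-study variance tau^2
q = sum(w * (e - pooled_f) ** 2 for w, e in zip(w_fixed, smd))
df = len(smd) - 1
c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

w_rand = [1.0 / (v + tau2) for v in var]               # random-effects weights
pooled_r = sum(w * e for w, e in zip(w_rand, smd)) / sum(w_rand)
se = math.sqrt(1.0 / sum(w_rand))
print(f"pooled SMD = {pooled_r:.2f} "
      f"[95% CI {pooled_r - 1.96*se:.2f}; {pooled_r + 1.96*se:.2f}], tau^2 = {tau2:.3f}")
```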
Results
A total of 2,955 papers were exported to Covidence software and subjected to the inclusion and exclusion criteria, yielding eleven papers that could be used for meta-analyses.

Characteristics of the studies
The eleven studies included in the meta-analysis were published between June 2009 and April 2022. The studies utilised a combination of study designs: five were RCTs and six were prospective studies [22][23][24][25][26][27][28][29][30][31][32]. A total of 491 participants were included in the 11 studies; Tan et al. had the largest population size, of 221 [29]. The studies were undertaken in five countries, with China being the most common location. The demographics reported showed an average patient age of 33.6 years, with a 1:2 ratio of men to women. The studies used various lasers as the treatment method. Ablative CO2 lasers were the most common, used in six studies at a wavelength of 10,600 nm. PDL was used in two studies, with the remaining three studies using non-ablative fractional lasers. The treatment duration, treatment interval and number of sessions varied between studies. The studies mostly relied on the VSS or POSAS as an outcome measure. Table 1 shows the characteristics of the included studies.

Quality of studies
Six of the non-randomised studies scored an overall low risk of bias. Most prospective studies had some concerns about bias due to confounding. Five RCTs showed overall low risk of bias, and one showed high risk; the overall high risk in that RCT was due to a high risk in one domain (bias arising from the randomisation process). Figs 2 and 3 present the risk of bias assessments for the non-randomised and randomised studies, respectively.

Evidence synthesis
Our results showed that laser therapy significantly reduced VSS/POSAS scores (Fig 4A). Due to the presence of outliers in these meta-analyses, we tested the reliability of the results with one-study-removed analyses; the exact same mean and 95% CI were found for each of the five outcomes, reinforcing that no single study was driving the overall results. There was no risk of publication bias for the VSS/POSAS, pliability, pigmentation and scar height meta-analyses (2-tailed p-values of the Egger test = 0.06, 0.13, 0.72 and 0.11, respectively); however, there was a significant risk of publication bias for the vascularity meta-analysis (2-tailed p-value of the Egger test = 0.04). Table 2 shows the subgroup analyses for the outcomes tested.

Although both early (<12 months since injury) and latent (>12 months since injury) laser therapy were efficient at improving all outcomes investigated, latent laser therapy was more beneficial for vascularity and scar height than early treatment initiation. Ablative laser was the only laser type tested for the vascularity, pliability and scar height outcomes, and it significantly reduced these outcomes. Non-ablative lasers did not reduce pigmentation, whereas ablative lasers reduced this outcome significantly. For VSS/POSAS scores, significant differences were observed between the three types of laser tested, with PDL the most effective, compared to ablative and non-ablative lasers.

Shorter interval lengths between treatments were better than longer intervals for all outcomes investigated, with the exception of pigmentation, which showed similar reductions for interval lengths of 4 to 8 weeks and >8 weeks. For VSS/POSAS scores, vascularity, pliability and scar height, a better response was seen for interval lengths of 4 to 8 weeks than for >8 weeks, and for VSS/POSAS scores, interval lengths of <4 weeks reduced scores more than intervals of 4 to 8 weeks. Although laser therapy improved all outcomes in individuals with and without complications such as blistering, pain and bleeding, the studies isolating patients without complications tended to show greater reductions in overall VSS/POSAS scores and vascularity than studies including patients with complications.

Studies comparing the effects of laser within the same patient, using an untreated area of scar as the control, tested only the VSS/POSAS and pigmentation outcomes. Sensitivity analyses of these internally controlled studies were conducted to investigate confounding by time: for VSS/POSAS scores, the significant effect was confirmed (Taudorf, 2015 [24]), whereas for pigmentation (Haedersdal, 2009 [22]; Lin, 2011 [23]), the sensitivity analysis did not reproduce the significant effect found in the overall analysis (sensitivity estimate -0.016 [-0.472; 0.440], p = 0.95).
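The Egger test reported above regresses each study's standardised effect (effect divided by its standard error) on its precision (one over the standard error); an intercept that deviates from zero signals small-study effects. A minimal sketch with hypothetical inputs:

```python
# Minimal sketch of Egger's regression test; effect sizes and standard
# errors below are hypothetical placeholders, not the studies' data.
from scipy import stats

effects = [-1.2, -0.8, -1.6, -0.5, -0.9]
ses = [0.32, 0.28, 0.45, 0.22, 0.30]

precision = [1.0 / s for s in ses]                 # x: 1 / SE
z_scores = [e / s for e, s in zip(effects, ses)]   # y: effect / SE

res = stats.linregress(precision, z_scores)
t_int = res.intercept / res.intercept_stderr       # test H0: intercept == 0
p_int = 2 * stats.t.sf(abs(t_int), df=len(effects) - 2)
print(f"Egger intercept = {res.intercept:.2f}, two-tailed p = {p_int:.3f}")
```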
Discussion
The exact mechanism of action of photothermolysis lasers on hypertrophic burn scars is currently unknown [13], but the theory is that they allow new collagen to form in a controlled manner by causing either a photochemical reaction or heating in scars that have formed through abnormal healing processes, with increased collagen and fibronectin synthesis, fibroblast proliferation and neovascularisation [4]. Though the molecular and cellular mechanisms of scar formation (for example, the major involvement of matrix metalloproteinases and their inhibitors) are well known, their effects and functions when induced by laser therapy are not completely understood. It is perhaps this lack of understanding that has led to several trials focussing on laser type, duration and optimal timing being conducted in an endeavour to minimise heterogeneity in outcomes [33,34]. This meta-analysis aimed to address this heterogeneity by considering variables such as timing of treatment after injury, laser type, optimal spacing of laser interventions and complications.

Laser therapy offers a novel short-term conservative treatment for burn scars [4]. Previous conservative methods, including silicone gel therapy and pressure garment therapy, lack extensive supporting evidence [35,36]. For instance, silicone gel therapy is deemed 68% effective at reducing scar height whilst requiring high patient compliance and extensive treatment timelines [35]. Efficacy of pressure garment therapy requires its application for 23 hours per day for a minimum of six months; this is an unrealistic expectation for patients, especially in warmer climates, and dermatitis is a well-recognised complication [36]. Laser therapy, on the other hand, requires only minimal patient contact with healthcare in weekly sessions, whilst physiologically improving burn scars with minimal complications and evidence-based protocols [6].
In this analysis we included 11 studies, involving 491 patients, that investigated five different outcomes of laser therapy on hypertrophic burn scars. The analysis was intended to help clinicians and patients make evidence-based decisions, particularly regarding optimal timing, type of laser and interval length of laser use, when laser therapy is chosen as a method of scar management. The findings showed that laser remains an effective treatment for hypertrophic burn scars, and positive effects were observed whether laser was used before or after 12 months since injury.

Wound healing occurs in three discrete phases of inflammation, proliferation and remodelling [37], and balance of the three phases may allow wounds to heal without excessive fibrosis. For example, the inflammatory phase comprises the release of cytokines and chemokines, as well as the recruitment of fibroblasts and macrophages to restore the skin barrier. The inflammatory stage proceeds to the proliferation stage, which can persist for up to six weeks [38]. The remodelling phase occurs when fibroblasts differentiate into myofibroblasts that contract and decrease the wound size before the scar enters the maturation phase, which typically lasts up to 12 months but has been known to extend beyond this time [37]. Perturbation of collagen production and collagenase synthesis leads to tightly cross-linked, disorganised bundles of collagen, creating a hypertrophic scar [39,40]. It may then be intuitive to use lasers to target this process of disorganised growth in its early stages. For example, in 2018, a systematic review showed positive results for reducing cutaneous scar formation through laser intervention at three months post-injury. The authors found significant improvement with the use of lasers in the inflammatory phase (lasers applied immediately after or during wound closure), the proliferation phase (lasers applied mainly at the time of suture removal) and the remodelling phase. However, the results did not always reach significance, and the population studied did not include patients with hypertrophic burn scars [41]. These results may well have influenced the adoption of early laser interventions in burns patients with hypertrophic scars, though our study also supports their use in more established scars.

Significant reduction of vascularity and scar height was observed with latent laser therapy, while no significant difference was found between early and latent laser therapy in VSS/POSAS scores in particular. This may be attributed to recent evidence showing that hypertrophic scars take significantly more time to mature completely than previously believed [42,43]. A study in 2019 showed that the mean maturation time was 35.76 months for patients <30 years old, 34.64 months for 30-55-year-old patients and 22.53 months for >55-year-old patients. This suggests that some hypertrophic burn scars considered latent in this analysis may not have fully matured and thus should have been considered and analysed in the early group.
Our subgroup analysis showed that laser type and the interval of laser use had a significant impact on the main results. The selection of laser depends on the principle that the targeted tissue has greater optical absorption at a specific wavelength than the surrounding tissue [4]. The subgroup analysis showed that PDL had the greatest effect in improving VSS/POSAS scores. A recent retrospective study showed the effectiveness of PDL, particularly in the early phases of wound healing, in optimising scar formation of hypertrophic burn scars [44]; however, the population of that study comprised children with Fitzpatrick skin types III and IV. PDLs work by targeting haemoglobin in blood vessels, resulting in selective photothermolysis, and they are generally considered safer than ablative lasers but have less penetration depth. PDL has been known to reduce vascularity and thereby reduce erythema, pruritus, pigmentation, hypertrophy and neuropathic pain from hypertrophic scars, and can therefore be useful in the early stages of wound healing when the scar is thinner and more vascular [45][46][47].

In contrast, not much is known about the optimal interval for laser therapy, and long-term studies are needed to determine proper follow-up intervals [3]. Our results showed that shorter intervals significantly reduced VSS/POSAS scores, vascularity, pliability and scar height compared with intervals of >8 weeks. Recurrence is a major problem, particularly with pathological keloid and hypertrophic burn scars, with scar recurrence reported to present as early as two weeks and up to three years, particularly following ablative laser therapy [48,49]. Studies that used laser therapy at shorter intervals may have observed better outcomes owing to starting treatment before the cellular and molecular processes of scar recurrence could occur.

Finally, we investigated whether complications, such as blistering and bleeding, affected the main results. Studies that did not report any complications post laser therapy saw significantly reduced VSS/POSAS scores. Although a significant difference between studies with and without complications was observed in only one outcome, it would seem that the absence of complications post laser therapy may be indicative of improved scar outcomes.

The main limitation of this meta-analysis was the significant study heterogeneity. We have suggested the confounding factors that influence the main results, but other factors, such as patient age, sex, skin type, co-morbidities and the specific location of the burn scar on the body, were not considered as they were not differentiated in the studies. Of particular note, the total number of sessions was an important confounding factor that was not further analysed; this was due to the incomparability of results, as most of the data were given as ranges by the individual studies. Another limitation is that the laser interval and laser type subgroup analyses had limited data, with some of the results based on a single study. Analysis from a single study is not representative of the population and thus presents a selection bias. The small number of studies in these subgroup analyses also prevented further analysis of the data to isolate one outcome in a subgroup within another subgroup (e.g., comparing treatment interval outcomes within the types of laser treatment). It is important to note that subgroup analysis is a form of exploratory analysis with a low level of evidence, as it is based on comparisons across studies.
Significant results for sensitivity analysis of within-study controls were only available for VSS/POSAS scores, with only one study tested. More controlled studies comparing laser therapy on the same patient and the same scar are required to confirm whether the scar improvement observed before and after laser therapy is an effect of laser therapy rather than an effect of time. The small number of studies found for the subgroup analyses affirms the need for further research to confirm the specific hypotheses raised. Specifically, the authors advocate future studies that investigate outcomes of laser therapy through comparison of different initiation times, types of laser therapy and treatment intervals, as well as the long-term effects of laser therapy on scar recurrence. In this way, the true effect of laser therapy may be further understood and used to guide safe clinical practice.

Conclusion
Laser therapy is an effective method of management for hypertrophic burn scars, with either early or latent initiation. This suggests that initiation of laser therapy should be decided after consideration of the patient's factors and tailored accordingly. The type of laser and the interval length between applications influence effectiveness: studies that used PDL observed the greatest improvement in VSS/POSAS scores, and studies that used laser at shorter intervals observed the greatest improvement in VSS/POSAS scores, vascularity, pliability and scar height.

Fig 1 presents these data in the flowchart of study selection. Papers were excluded from the screening process if they had the wrong study design, comparator, patient population or intervention.
Sickness absence and disability pension among Swedish women prior to breast cancer relapse, with a special focus on the roles of treatment and comorbidity

ABSTRACT
Objective: We aimed to determine the longitudinal prevalence and the predictors of sickness absence (SA) and disability pension (DP) in breast cancer (BC) women who eventually developed relapse.

Methods: A total of 1293 BC women, aged 20–63 years, diagnosed between 1996 and 2011, all of whom had developed relapse by 2016, were identified in Swedish registers and were followed from two years before to five years after their primary diagnosis, while they were relapse-free. Annual prevalence of SA and DP was calculated. Logistic regression was used to estimate adjusted odds ratios (AOR) for long-term SA (>30 days) at one (y1) and three (y3) years post-diagnosis.

Results: Prevalence of long-term SA was 68.1% in y1 and 16.3% in y5. Prevalence of DP progressively increased from 16.3% in y1 to 29.0% in y5. Predictors of long-term SA included age <50 years (y1: AOR = 1.79 [1.39–2.29]), TNM stage III (y1: AOR = 1.54 [1.03–2.31]; y3: AOR = 2.21 [1.32–3.72]), metastasis (y1: AOR = 1.64 [1.26–2.12]; y3: AOR = 1.51 [1.05–2.18]), comorbidity (y1: AOR = 2.41 [1.55–3.76]; y3: AOR = 4.62 [2.49–8.57]) and any combination of radiotherapy, chemotherapy and hormonal therapy (y1: AOR = 2.05–5.71).

Conclusion: Among BC women who later developed relapse, those who had higher stages of BC, had comorbidity and received neoadjuvant and/or adjuvant therapy were at significantly higher risk of needing long-term SA after their diagnosis.

Refinements of older treatments and the introduction of new therapies have resulted in improved outcomes (Clarke et al., 2005; The Early Breast Cancer Trialists' Collaborative Group, 2018; Weitz et al., 2005). Despite this, between 20% and 30% of patients will develop loco-regional recurrence or distant metastases in the years following primary treatment (Cardoso et al., 2018; Patrick & Khan, 2015; Voinea et al., 2017).

During primary BC treatment, most patients in Sweden make use of sickness absence (SA) benefits. The reported rates of women with BC in Sweden returning to work within two years post-diagnosis have ranged from 60% to 80% (Bouknight et al., 2006; Hedayati et al., 2013; Johnsson et al., 2007, 2009; Kvillemo et al., 2017). However, compared to those without BC, women with BC have SA rates that remain higher for up to five years after their primary diagnosis, and they also have higher rates of receiving disability pension (DP) benefits during that time (Eaker et al., 2011; Hauglann et al., 2012; Torp et al., 2012).

It is not surprising that women with BC have higher post-diagnosis SA and DP rates than their healthy counterparts. Oncological treatments for BC may cause both acute and long-term side effects. Along with the morbidity of the disease itself, these side effects can impair the physiological and psychological wellness of patients, leading to limitations in their abilities to execute daily activities and participate in social events (Campbell et al., 2012; Shapiro, 2018; Zaidi et al., 2017).
In addition, long-term sequelae associated with BC and its treatment, such as anxiety and depression, fatigue, chronic pain, cognitive impairment and peripheral neuropathy, are known to reduce physical, mental and emotional capacity (Bjerkeset et al., 2020; Colombino et al., 2020; De Iuliis et al., 2015; Dumas et al., 2020; Hedayati et al., 2012; Landeiro et al., 2018; Lundh et al., 2014; Rivera et al., 2018; Wefel et al., 2014; Zomkowski et al., 2018). Deterioration in the sense of physical and emotional well-being, and limitations in the functional capacity of patients with BC, negatively affect their quality of life and ability to work (Zaidi et al., 2017).

Several studies looking at post-diagnosis SA and DP in large cohorts of patients in Sweden with various stages of primary BC have been published (Chen & Alexanderson, 2020; Kvillemo et al., 2017; Lundh et al., 2014). However, we are not aware of any studies that have evaluated patients with BC who at some point in the future experienced a relapse (i.e. loco-regional recurrence or metastasis), focusing specifically on their patterns of use of SA and DP during the interval between their BC diagnosis and their relapse. Because about one in four patients with primary BC do in fact experience a relapse, and these patients are more likely to suffer additional disease-related symptoms and treatment morbidity, a better understanding of the pattern of use of SA and DP in this population would be valuable.

In this study, we aimed to study the patterns over time of the prevalence of SA and DP in women in Sweden with primary BC who at some later time had a relapse, focusing on the period before their relapse. We restricted the study of each patient to the period that started two years before their primary diagnosis and ended either five years after their primary diagnosis or when they relapsed, whichever came first. We also aimed to estimate the impact of various demographic and clinical risk factors on the likelihood that patients in this population would need long-term SA or any DP benefits.

METHODS
This study complied with the Declaration of Helsinki and was approved by the regional ethics review board at Karolinska Institute (Dnr 2012/745-31). According to Swedish legislation, patients registered in national quality registers do not need to provide written informed consent; however, they are informed that their data will be included in registers and that they can opt out at any time.

Study population
This was a population-based prospective cohort study using data initially obtained from two Swedish registers: (i) the BC registry (RBC) for the Stockholm-Gotland healthcare region, which included data on patients who were diagnosed with primary BC from 1 January 1996 to 31 December 2007; and (ii) the National Quality Register for Breast Cancer (NKBC), which included data on patients from the Stockholm-Gotland region who were diagnosed with primary BC from 1 January 2008 to 31 December 2011. For the cohort obtained from these two registers, we then used the National Social Insurance Agency's Microdata for Analyses of Social Insurance (MiDAS) database to access SA and DP benefits data for the interval between 1 January 1994 (two years before any of the patients were diagnosed with BC) and 31 December 2016 (five years after any of the patients were diagnosed with BC).
Data linkage for patients was made possible by the unique national identification number assigned to each resident in Sweden at birth or when establishing permanent residency. We used the RBC and NKBC to obtain information about patient age, BC diagnosis date and tumour characteristics, type of treatment, follow-up (alive or deceased; relapsed or not) and date and type of relapse (loco-regional recurrence or metastasis). When compared to the Swedish Cancer Registry, to which it is obligatory to report all new cancer cases, the two registers that we used have been reported to capture 98% of women with BC in Sweden (Emilsson et al., 2015; Löfgren et al., 2019). We then used the MiDAS database to obtain information about whether SA and/or DP benefits were received at any time between 1994 and 2016, along with the dates those benefits were received and whether the benefits were full or partial.

Study design
We included in the study all women in the RBC and NKBC databases from the Stockholm-Gotland healthcare region who were diagnosed with primary BC between 1 January 1996 and 31 December 2011, had TNM stages 0 to III, were between the ages of 20 years and 63 years at the time of their diagnosis and had complete SA and/or DP benefit data available in the MiDAS database extending from two years before to five years after their primary BC diagnosis. Based on these criteria, 1293 patients qualified for inclusion in the study. The study patients were then followed for at least five years after their diagnosis or until 31 December 2016. All patients were included in the SA and DP calculations during the interval from two years before their primary BC diagnosis to the date of diagnosis. Thereafter, patients remained part of the SA and DP prevalence calculations and risk-factor regression analyses during the period in which they were relapse-free, had not turned 65 years old and had not died.

Demographic and clinical characteristics
For each patient, we recorded data about age at primary BC diagnosis, calendar year of diagnosis, type of neoadjuvant and/or adjuvant oncological treatment (e.g. radiotherapy, chemotherapy, hormonal therapy, unspecified treatment, no treatment and/or missing treatment data) and date and type of relapse (loco-regional or metastasis). We used any SA of more than 30 days during the 12 months before the primary BC diagnosis as a surrogate for the patient having a comorbidity. The TNM classification system was used for tumour staging (Sobin et al., 2011), but if any T, N or M data were unavailable, tumour stage was designated as missing.

Sickness absence (SA) and disability pension (DP) benefits
The Swedish Social Insurance Agency (SSIA) grants SA benefits to those 16 years or older who belong to the workforce and have reduced work capacity due to a disease or injury that is specified in a medical certificate (Swedish Ministry of Health & Social Affairs, 2010). The employer usually provides reimbursement for the first 14 days of SA; the SSIA provides reimbursement after that (Swedish Ministry of Health & Social Affairs, 2010). If an employee is unable to work after 14 days, the SSIA will grant an SA benefit consisting of full (100%) or partial (75%, 50% or 25%) reimbursement of lost earnings. Those whose work capacity is considered permanently reduced by at least one-quarter are entitled to receive full (100%) or partial (75%, 50% or 25%) DP benefits.

Outcomes
The two outcomes investigated were SA benefits and DP benefits.
For each patient, we identified the benefits received at any point between two years before and five years after the primary BC diagnosis, up until 31 December 2016 or until they turned 65 years old, relapsed or died, if one of those occurred earlier. We calculated SA and DP net days by multiplying the level of benefit received (i.e. 25%, 50%, 75% or 100%) by the total number of SA or DP days. SA net days were then grouped into the following categories: 0, 1 to 30, 31 to 90, 91 to 180 and more than 180 net days. We defined post-diagnosis long-term SA as SA longer than 30 net days. DP net days were dichotomised as either 0 or more than 0, with the latter indicating part-time or full-time disability.
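As a concrete illustration of the net-day bookkeeping just described, and of the kind of logistic model used below to obtain adjusted odds ratios, here is a minimal sketch. All values and covariates are synthetic placeholders; the study itself fitted its models in SPSS on register data.

```python
# Minimal sketch: net days = benefit level x gross benefit days, dichotomised
# at >30 net days, then a logistic regression for odds ratios. Synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "age_lt50":  rng.integers(0, 2, n),                  # 1 = younger than 50
    "stage_iii": rng.integers(0, 2, n),                  # 1 = TNM stage III
    "sa_days":   rng.integers(0, 366, n),                # gross SA benefit days
    "extent":    rng.choice([0.25, 0.5, 0.75, 1.0], n),  # reimbursement level
})
df["net_days"] = df["sa_days"] * df["extent"]
df["long_term_sa"] = (df["net_days"] > 30).astype(int)   # primary outcome

X = sm.add_constant(df[["age_lt50", "stage_iii"]])
fit = sm.Logit(df["long_term_sa"], X).fit(disp=0)
odds_ratios = np.exp(fit.params)                         # exponentiated coefficients
ci = np.exp(fit.conf_int().rename(columns={0: "2.5%", 1: "97.5%"}))
print(pd.concat([odds_ratios.rename("OR"), ci], axis=1))
```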
Statistical methods
Results for variables with skewed distributions are presented as medians with interquartile ranges (IQR). Annual SA and DP net day results from two years before diagnosis to five years after diagnosis were calculated and are presented as means with standard deviations. Annual prevalence of patients in each SA and DP net day category was calculated and is presented as frequencies and proportions. During each of the five years of follow-up after the diagnosis of BC, patients were censored (i.e. removed from prevalence and risk calculations) if they: (i) turned 65 years old (because they transitioned into the old-age pension system), (ii) died or (iii) were diagnosed with a relapse (because the aim of the study was to assess the prevalence of and risk factors for SA and/or DP during the period when patients were relapse-free). As a result, the population denominators used for these calculations steadily declined over the post-diagnosis years.

Univariable and multivariable logistic regression analyses were performed to estimate the crude odds ratio (OR), adjusted odds ratio (AOR) and 95% confidence interval (CI) of the primary outcome variable for each demographic and clinical characteristic group. To perform these analyses, we dichotomised the SA net days as either up to 30 days or longer than 30 days, and we used SA longer than 30 net days, indicative of long-term SA, as the primary outcome variable. We did separate regression analyses for the first and third years post-diagnosis. For the adjusted models, age at BC diagnosis and SA net days during the year prior to BC diagnosis were included as continuous variables.

In the regression analysis for the outcome of long-term SA (longer than 30 net days) during the first year after the diagnosis of BC, age, TNM stage and SA net days during the year prior to diagnosis were adjusted for all other variables, except for type of relapse (which is already captured within TNM stage). Type of oncological treatment was only adjusted for age, because of power limitations. Finally, type of relapse was adjusted for all other variables, except for TNM stage (because of its similarity to type of relapse). All 1293 patients were available for the first-year regression analysis.

In the regression analysis for the outcome of long-term SA (longer than 30 net days) during the third year after the diagnosis of BC, patients were excluded if during the previous two years they turned 65 years old, died, experienced a relapse or received any DP benefits. This resulted in 618 patients being available for the third-year regression analysis. In this analysis, age was adjusted for all other variables, except for type of relapse. TNM stage was adjusted for age and SA net days during the year prior to diagnosis. Type of relapse was adjusted for all other variables, except for TNM stage. SA net days during the year prior to diagnosis were adjusted for age. Finally, only crude ORs were presented for type of oncological treatment, because of power limitations. Statistical significance was defined at the 5% (p ≤ 0.05) level. The statistical analysis was performed using SPSS, version 25.

RESULTS
The median age of all patients was 51 (IQR 43 to 57) years; further patient characteristics are shown in Table 1. The annual prevalence rates of the patients in each SA and DP net day category are listed in Table 2. From two years pre-diagnosis to five years post-diagnosis, the proportion of patients on DP for at least a day steadily increased each year, from 13.8% (179 of 1293) to 29.0% (91 of 314).

Risk factors for long-term sickness absence (SA)
For the first year post-diagnosis, the risk of having long-term (more than 30 net days) SA was significantly higher for those patients 50 years old or younger compared to those over 50 years old (AOR = 1.79; 95% CI, 1.39-2.29); who were diagnosed with stage III BC compared to stage I (AOR = 1.54; 95% CI, 1.03-2.31); who eventually developed metastasis compared to loco-regional recurrence (AOR = 1.64; 95% CI, 1.26-2.12); and who had more than 30 days of SA during the year before diagnosis, our surrogate for comorbidity (AOR = 2.41; 95% CI, 1.55-3.76).

DISCUSSION
In a cohort of patients with primary BC stages I to III, who were evaluated while they were relapse-free, the prevalence of long-term SA (longer than 30 days) was 68.1% during the first year after diagnosis, and then it progressively declined until it reached 19.4% during the fifth year, never returning to the pre-diagnosis level of 11.6%. Throughout each of the first four years after diagnosis, the majority of patients with long-term SA actually received it for more than 180 days. In contrast to SA, the prevalence of DP increased over the duration of the study, so that by the end of the study period 29% of the analysed patients were receiving a DP. One year after the diagnosis of BC, the factors that were predictive of long-term SA were age younger than 50 years, high TNM stage, eventual metastasis, comorbidity and receipt of oncological treatment; in a previous study, similar factors were the strongest predictors for SA and DP at one and three years post-diagnosis. However, their study differed from ours in that only 39.3% of their patients had high-stage disease.

TABLE 2 Net sickness absence (SA) and disability pension (DP) days received by female patients before and after diagnosis of primary breast cancer, 1 January 1996 to 31 December 2011, Stockholm-Gotland Region, Sweden.

In their study cohort, the prevalence of long-term SA (longer than 30 days) was 61.2% during the first year post-diagnosis and 20.6% during the third year post-diagnosis, and it eventually returned, five years post-diagnosis, to 10.8%, the level seen before the women were diagnosed with BC. However, once again, only 37.7% of the women in their study had a high disease stage (II through IV), compared to 52.3% of our patients who had a high stage (II and III). Given that our study consisted of a selected cohort with a higher proportion of patients with high-stage BC, it is not surprising that we found a higher prevalence of long-term SA (e.g. 68.1% at one year, 29.1% at three years and 19.4% at five years post-diagnosis) than they did.
This might relate to the fact that patients with higher stages of BC are more likely to receive intensive oncological treatments, have treatment-related sequelae and experience psychological distress, when compared to patients with lower stages (Eaker et al., 2011; Kvillemo et al., 2017; Lundh et al., 2014). Indeed, the differences between our study and theirs would probably have been even greater had not over half the women in our study been diagnosed with BC prior to 2001 and received less toxic polychemotherapy (cyclophosphamide, methotrexate and fluorouracil [CMF]) than the anthracycline- and taxane-based regimens used in later years (Anampa et al., 2015).

FIGURE 1 Sickness absence (SA) and disability pension (DP) net days among female patients with loco-regional recurrence or metastasis after diagnosis of primary breast cancer (BC), 1 January 1996 to 31 December 2011, Stockholm-Gotland Region, Sweden. Net days were calculated by multiplying the level of benefit received (i.e. 0%, 25%, 50% or 100%) by the total number of SA or DP benefit days. Annual net days of SA (diamond) and DP (triangle) from two years before diagnosis to five years after diagnosis are presented in the line graph as means with standard deviations. The number of patients excluded from the analysis each year (because of local recurrence or metastasis, death or turning 65 years old during the previous year) is shown in boxes. For each year, the total number of patients analysed and the proportions of patients with over 30 net days of SA and over one net day of DP are shown along the x-axis.

At least two previous studies have also confirmed our finding that the proportion of patients with long-term SA escalated dramatically during the first year after the BC diagnosis and that this proportion then steadily declined annually during the five years post-diagnosis (Bjerkeset et al., 2020; Kvillemo et al., 2017). However, unlike others, we found that the prevalence of long-term SA never returned to the pre-diagnosis level (Johnsson et al., 2007, 2009; Kvillemo et al., 2017). Once again, this is most likely the result of the intensive oncological treatments, treatment-related sequelae and psychological distress experienced by the large proportion of patients with high-stage BC in our cohort (Eaker et al., 2011; Kvillemo et al., 2017; Lundh et al., 2014).

We used a pre-diagnosis SA of more than 30 days in the year prior to the diagnosis of BC as a surrogate for comorbidity, and we found that comorbidity was a significant predictor of long-term SA at both one and three years post-diagnosis. In Sweden, to certify that a patient is qualified to receive full or partial SA benefits, a clinician is required to complete a medical certificate that identifies one or more diagnoses (with ICD codes) that may reduce the capacity for work (The Swedish Ministry of Health & Social Affairs, 2010). Consequently, SA is considered a reliable indicator of the presence of one or more significant comorbidities (Kivimaki et al., 2003; Marmot et al., 1995). Our findings fit with the current understanding of the role played by comorbidity in both the use of post-diagnosis SA and the delayed ability of patients to return to work after the diagnosis and treatment of BC.
Indeed, multiple studies have shown that comorbidity, manifested as long-term pre-diagnosis SA, is predictive of long-term SA, reduced functional capacity and inability to return to work after a primary BC diagnosis (Chen & Alexanderson, 2020; Kvillemo et al., 2017; Lundh et al., 2014). Furthermore, others have reported a strong association between comorbidity and long-term SA among patients in general (Kivimaki et al., 2003; Marmot et al., 1995). It has even been documented that clinician certification of a health condition severe enough to miss work can be a powerful predictor of mortality (Kivimaki et al., 2003; Marmot et al., 1995). Based on our findings and those of others, comorbidity certainly appears to be a barrier to a timely resumption of functional capacity and return to work after BC treatment has been completed.

TABLE 3 Crude and adjusted odds ratios of long-term (more than 30 net days) sickness absence (SA) among the 1293 female patients during the first year after primary breast cancer (BC) diagnosis, 1 January 1996 to 31 December 2011, Stockholm-Gotland Region, Sweden.

Nevertheless, there are a number of other factors that may also be involved in determining the amount of SA taken by patients in Sweden, including low levels of education, not being born in Sweden, perception of the work situation, level of motivation to return to work, supportiveness of the workplace, BC tumour stage and type of BC treatment (Bouknight et al., 2006; Kvillemo et al., 2017; Johnsson et al., 2007, 2010; Nilsson et al., 2013; Torp et al., 2012). Interestingly, women's attitudes about returning to work and other work-related factors were reported in one study to explain up to half of all SA taken (Johnsson et al., 2007). These findings suggest that SA is a complex phenomenon, influenced by a variety of factors, some of which were not included in the registers we had access to.

In our study, we observed a small, steady post-diagnosis increase in DP prevalence: in the first post-diagnosis year, 16.3% of patients were on DP, and by the fifth year, 29.0% were on DP. Others have noted the same phenomenon, though reporting that DP increased over the first four years post-diagnosis and then showed a slight decline, down to 23.4%, in year five (Kvillemo et al., 2017).

TABLE 4 Crude and adjusted odds ratios of long-term (more than 30 net days) sickness absence (SA) among the 618 female patients during the third year after primary breast cancer (BC) diagnosis, 1 January 1996 to 31 December 2011, Stockholm-Gotland Region, Sweden.

We had also hoped to report on the impact of demographic and clinical risk factors on DP. However, the prevalence of DP in our cohort was too low to adequately power the statistical analysis of which factors were significant predictors of DP.

Strengths and limitations
Our study results contribute to the existing body of knowledge about SA and DP for patients with primary BC in Sweden. Our findings add depth to the understanding of factors that influence SA after a diagnosis of primary BC. High female employment rates and complete insurance coverage of SA and DP in Sweden, and the use of data from high-quality Swedish registers with minimal dropouts, make the internal validity of the study strong (Lundh et al., 2014; Sjövall et al., 2012).
In addition, although the accuracy of the diagnoses used for SA and DP in Swedish registers has not been extensively investigated, one study has reported that the diagnoses used for SA were highly accurate when compared with the diagnoses listed in medical records (Ludvigsson et al., 2016). Another strength of this study is that, when doing the annual prevalence calculations, we censored patients who were no longer at risk for SA or DP as a result of death, turning 65 years of age or developing loco-regional recurrence or metastasis during follow-up. These strengths suggest that our findings can be generalised to women who have been diagnosed with loco-regional recurrence or metastasis after primary BC and who live in countries with comparable employment frequencies and SA and DP benefits.

Our study has some limitations. Despite the rigorous routines used by the SCR and NKBC to obtain data about patients in Sweden with BC, we found that almost 30% of the patients in our study lacked complete information about their BC TNM stages, confirming findings reported in a separate validation study (Löfgren et al., 2019). However, those with missing TNM stage information in our study did not have increased odds of long-term SA during the first and third years post-diagnosis, so the absence of this information did not likely bias our results in that direction. Finally, although the use of pre-diagnosis SA of more than 30 days as a surrogate for comorbidity allowed us to identify comorbidity as a potential predictive factor for long-term SA in patients with BC and relapse, a study using specific comorbidity diagnoses will be necessary to confirm our findings and determine whether certain comorbidities are more predictive than others.

Implications for research and practice
According to the Social Insurance Code in Sweden, patients must have an active disease, specified in a medical sickness certificate, in order to qualify for SA benefits (The Swedish Ministry of Health & Social Affairs, 2010). Although consultations for sickness certification are part of everyday clinical practice for oncologists, well-established policies regarding collaboration with and referrals to other healthcare professionals involved in the sickness absence certification process are lacking (Bränström et al., 2014). Given our findings that comorbidity and high-stage BC increased the risk that women would need long-term SA after their diagnosis, a cohort of women who have both high-stage BC and comorbidities should be studied prospectively to validate our findings. In addition, an effort should be made to implement a structured process to improve the collaboration between general practitioners and oncologists during the follow-up of women with high-stage BC who have comorbidities and are of working age. These women should receive more intensive medical care and rehabilitation during and after completion of their cancer treatment. Furthermore, depending on local expertise and facilities, these patients should be referred to a social worker, nurse practitioner or other qualified healthcare professional to assist them with a smooth return to work after treatment for primary BC.

Conclusions
Women with BC who later develop relapse appear to be a unique group. In particular, those with higher stages of BC, who had comorbidity or who received neoadjuvant and/or adjuvant therapy were at significantly higher risk of needing post-diagnosis long-term SA.
In this group, the prevalence of long-term SA was highest during the first year post-diagnosis and steadily decreased over the next five years, but never returned to pre-diagnosis levels. These women should receive more intensive medical care during and after completion of their cancer treatment, to help address the adverse effects of treatment and to assist with a smooth return to work. Future studies using Swedish national registers to evaluate specific comorbidity diagnoses and the criteria used to grant SA and DP would be beneficial.

DATA AVAILABLE ON REQUEST DUE TO PRIVACY/ETHICAL RESTRICTIONS
The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
Multilevel lumbar transverse process fractures in a professional association football player: a case report

We present a case of multilevel lumbar transverse process fracture in a professional association football player, incurred after a fall from height during competitive play. Traditionally associated with high-impact trauma in the general population, this injury is relatively rare in the context of professional football, where it is more likely to be associated with lower-impact trauma. We outline our experience of the mechanism of injury, treatment options and recovery time, serving as a guide for fellow clinicians treating this condition in practice. In this particular case, the return to play time was 68 days.

INTRODUCTION
Transverse process fracture (TPF) is a rare injury in sport, most often associated with either direct trauma or violent muscular contraction. TPF in the general population has been reported relatively widely in the literature [1][2][3][4][5]. Conversely, coverage of TPF in the context of sport is sparse, especially with respect to the factors informing the sports physician in practice, namely mechanism of injury (MOI), treatment and recovery times. Indeed, it is unlikely that our understanding of TPF injury in the general population can be directly applied to TPF in sport. We present our experience of MOI, treatment and recovery in a case of multilevel lumbar TPF in a professional football player, incurred after a fall from height during competitive play.

CASE REPORT
A 31-year-old professional footballer was playing in an English Premier League tie. During the course of the match, the player attempted an acrobatic 'overhead-kick' clearance, during which he landed heavily on his left side and lower back (Fig. 1). The player was unable to carry on due to ongoing lumbar pain and was substituted soon after. After substitution, lower back and left flank pain persisted, associated with some difficulty in breathing. Initial examination revealed diffuse left lateral lumbar pain on palpation. Abdominal examination was normal, with no evidence of tenderness over the renal angles or spleen. Cardiovascular, respiratory and neurological examinations were all normal, as were bedside observations. Specifically, there was no evidence of saddle anaesthesia. No urinary or bowel-related problems were evident and urinalysis revealed no evidence of haematuria. Indeed, routine blood tests were reassuringly normal. The initial differential diagnosis comprised:
(i) soft tissue contusion/muscle strain
(ii) rib fracture
(iii) lumbar spine bony injury
(iv) pneumothorax
(v) renal trauma
(vi) splenic trauma
(vii) other intraperitoneal visceral injury

A 1-day post-trauma MRI spine scan revealed oedema suggestive of left-sided TPF of the L2 and L3 vertebrae. At the radiologist's recommendation, computed tomography (CT) of the spine was undertaken on Day 2 post-trauma. This confirmed left-sided TPF at levels L2 and L3, with some anterior displacement of the fracture at L2 (Fig. 2). The player was excluded from training, treated conservatively with rest as previously described in the literature [6] and prescribed oral analgesia for symptomatic relief (paracetamol, ibuprofen and codeine). In addition, diazepam was prescribed as a muscle relaxant after the development of lower back muscle spasm 2 days post-trauma. At this time, the player also developed transient paraesthesia of the lateral aspect of his left foot, an area incongruent with the level of the TPF.
This was precipitated by massage while the player was lying prone on the treatment table, and it resolved spontaneously once the spasm had been treated. The player returned to low-level physical activity (gym-based cycling) at Day 21. He returned to outdoor training at Day 57, and the return to play (RTP) time was 68 days post-trauma. The recovery was otherwise uneventful. To date, the player has not experienced any further morbidity associated with this injury and continues to play regular professional sport.

DISCUSSION

TPF has traditionally been associated with high-energy direct trauma or violent muscular contraction, often in the road traffic accident context [1,3,4]. TPF in the general population is more commonly complicated by visceral injury, with or without nerve root injury. While TPF can occur in the athlete, it is probably associated with a lower-energy MOI, is rarely complicated and is commonly associated with a relatively swift recovery [1]. Currently, there is a paucity of research on sport-related TPF, with the existing literature taking the form of epidemiological studies or case reports, detailed as follows.

Dutson [7] outlined a case of TPF of L1 in a trainee association footballer, associated with direct blunt trauma to the player's back from a goalkeeper's knee. The fracture was complicated by traumatic transverse colon rupture requiring a stay in intensive care and a colostomy, which was reversed 12 weeks post-trauma. RTP data were not specified. Brynin and Gardiner [8] detailed a single case of lumbar TPF at L2 and L3, confirmed on CT. The injury was precipitated by a 'spear' in the back during an American football game. The player was precluded from contact sport, with an RTP of 4 weeks without an adverse event. Bali et al. [9] reported a case of multiple displaced lumbar TPFs (L1-5) in a cricket bowler. The player presented with chronic lower back pain with no obvious precipitant. The authors hypothesized that the fractures occurred after repeated small stresses on the spine associated with fast bowling. Gertzbein et al. [10] conducted a retrospective epidemiological study of thoracic and lumbar fractures in skiers and snowboarders over a 5-year period and found 43 instances of isolated TPF, accounting for 29% of all fractures reported. The authors postulated that these occurred secondary to avulsion forces from intense muscle spasm on impact from a fall. Recovery time data were not provided. Finally, Tewes et al. [1] reviewed 29 cases of lumbar TPF in the American NFL (National Football League) and found an average RTP time of 3.5 weeks. Rest was the most common management approach. Average RTP, broken down by the total number of fractures, was 16 days with 1 TPF, 19 days with 2 TPFs and 36 days with 3 TPFs; however, this trend was not statistically significant (P = 0.133). Upon RTP, most players wore flak jackets/padded wraps. The single re-injury reported occurred in a player wearing a flak jacket. The aetiology of injury was identified as 'impact' in 93% and 'torsion' in 7% of players. In terms of complications, five players suffered back spasms, akin to the present case. Additionally, one player sustained a visceral injury in the form of a kidney contusion.

TPF in the general population is associated with high-energy trauma, compared with a relatively low-energy MOI in the sports context. Our understanding of TPF in this context is limited. This case report is intended to add to that understanding in terms of MOI, investigation, management and recovery time.
This information should prove valuable to the sports physician managing this relatively rare injury in the high-stakes professional sports context. Here, the player was treated conservatively with rest, oral analgesia and a muscle relaxant. Return to low-level physical activity was achieved at Day 21, outdoor training at Day 57 and return to competitive play at Day 68. To date, the player in question has had no long-term adverse sequelae associated with this injury.
Mass degeneracy of the heavy-light mesons with chiral partner structure in the half-skyrmion phase

We explore the mass splitting of the heavy-light mesons with chiral partner structure in nuclear matter. In our calculation, we employ heavy hadron chiral perturbation theory with chiral partner structure, and the nuclear matter is constructed by putting skyrmions from the standard Skyrme model onto a face-centered cubic crystal and regarding the skyrmion matter as nuclear matter. We find that, although the masses of the heavy-light mesons with chiral partner structure are split in matter-free space and in the skyrmion phase, they become degenerate in the half-skyrmion phase, in which chiral symmetry is restored globally. This observation suggests that the magnitude of the mass splitting of the heavy-light mesons with chiral partner structure can be used as a probe of the phase structure of nuclear matter.

Although the properties of nuclear matter are difficult to access, studying them is a crucial and interesting task in both particle and nuclear physics, because they bear critically on issues such as the equation of state (EoS) relevant to compact-star matter and chiral symmetry breaking/restoration in dense matter (see, e.g., Ref. [1] and references therein). Among the approaches to nuclear matter, the skyrmion crystal is one in which nuclear matter properties are studied by putting skyrmions onto a crystal structure and regarding the skyrmion matter as baryonic matter [2] (see also Ref. [3] and references therein). The density effect enters by changing the crystal size; for example, in the face-centered cubic (FCC) crystal [4,5] adopted in this paper, ρ = 4/(2L)^3, with ρ and L being the nuclear matter density and the crystal size, respectively. The advantage of the skyrmion crystal approach to nuclear matter is that both the nuclear matter and the medium-modified hadron properties can be treated in a unified way [6].

In the skyrmion crystal approach, when we reduce the crystal size or, equivalently, increase the nuclear matter density, the nuclear matter undergoes a phase transition from the skyrmion phase to the half-skyrmion phase, in which a skyrmion configuration carrying half a baryon number sits at each crystal vertex [7]. It was found that, when the skyrmions are put onto the FCC crystal at low density, in the half-skyrmion phase at high density the crystal vertices at which half-baryons are concentrated form a cubic crystal [4,5]. The order parameter that characterizes this phase transition is the space average of the quark-antiquark condensate ⟨q̄q⟩, which vanishes in the half-skyrmion phase. Note that although the space average of the quark-antiquark condensate vanishes in the half-skyrmion phase, chiral symmetry is still locally broken, since the pion decay constant in baryonic matter, f*_π, which characterizes the chiral symmetry breaking, does not vanish [8], and the quark-antiquark condensate is locally non-zero [9]. At present, the properties of the half-skyrmion phase are not well known beyond those pointed out above.

Since there is a well-known spin-isospin correlation in the Skyrme model, in Ref. [10] we proposed studying the medium-modified mass spectra of the ground states of the heavy-light mesons to probe the structure of the spin-isospin correlation in nuclear matter constructed from FCC-crystal skyrmion matter and from chiral-density-wave nuclear matter.
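The density–crystal-size relation ρ = 4/(2L)^3 quoted above is easy to make concrete. The short Python sketch below is our own illustration, not code from the paper; the comparison value n0 ≈ 0.16 fm^-3 (normal nuclear matter density) is an assumption we add purely for orientation:

```python
# Illustrative sketch: baryon number density of the FCC skyrmion crystal.
# Each FCC unit cell of edge 2L contains 4 skyrmions, so rho = 4/(2L)^3.
# The reference value n0 ~ 0.16 fm^-3 is an assumption added for scale;
# it is not quoted in the text above.

N0 = 0.16  # fm^-3, approximate normal nuclear matter density (assumption)

def fcc_density(L_fm: float) -> float:
    """Baryon density (fm^-3) for an FCC crystal with half-edge L (fm)."""
    return 4.0 / (2.0 * L_fm) ** 3

for L in (2.5, 2.0, 1.5, 1.0):
    rho = fcc_density(L)
    print(f"L = {L:3.1f} fm  ->  rho = {rho:.3f} fm^-3  ({rho / N0:.2f} n0)")
```

Shrinking L drives the system through the skyrmion-to-half-skyrmion transition discussed below, which is why the crystal size serves as the density dial in this approach.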
It was shown that the spin-isospin correlation generates a mixing among the heavy-light mesons carrying different spins and isospins, and that the structure of the mixing reflects the pattern of the correlation, i.e., the remaining symmetry. Furthermore, it was found that the magnitude of the mass modification provides information on the strength of the correlation.

In this work, we focus on the mass spectra of the heavy-light mesons in the half-skyrmion phase. Since the half-skyrmion phase is characterized by the vanishing of the space average of the quark-antiquark condensate, or "chiral condensate", it is convenient to use heavy-light meson fields with the chiral partner structure. Here, we regard the charmed heavy-light mesons with spin-parity quantum numbers J^P = (0^-, 1^-) and J^P = (0^+, 1^+) as chiral partners of each other [11], which should become degenerate when chiral symmetry is restored [12,13]. In our calculation, we take the heavy-quark limit of the heavy-light mesons, so that, in the rest frame of the nuclear matter, they are at rest. In such a case, only the space averages of the relevant fields affect the mass spectra, which enables us to study the global structure of the medium.

What we find in this paper is the following: using the Lagrangian written up to terms including one derivative, and owing to the symmetry structure of the FCC crystal and the arrangement of the nearest two skyrmions to yield the strongest attractive interaction, the mass splitting between the chiral partners is proportional to ⟨φ0⟩ ∝ ⟨q̄q⟩, so that they are degenerate in the half-skyrmion phase. In this sense, the medium-modified magnitude of the mass splitting of chiral partners can be regarded as a probe of the phase structure of the skyrmion matter.

We write the heavy-light meson doublets in the chiral basis, which, at the quark level, are schematically written as H_{L,R} ∼ Qq̄_{L,R}. In the present work, we couple the heavy-light meson fields only to the pion field U(x). Under chiral transformation, the pion field transforms as U → g_L U g_R†, and correspondingly H_L → H_L g_L† and H_R → H_R g_R†. Then, up to one-derivative terms, an effective Lagrangian which preserves heavy-quark symmetry and SU(2)_L × SU(2)_R chiral symmetry can be constructed, where v^µ is the velocity of the heavy-light mesons and g_A1, g_A2 are real parameters. The chiral fields H_{L,R} are related to the heavy-light meson doublets H and G, with quantum numbers (0^-, 1^-) and (0^+, 1^+), respectively, and the effective Lagrangian (2) can be rewritten in terms of the H and G fields. From this Lagrangian we see that, in matter-free space, the ∆_M term accounts for the mass difference between the G and H doublets, whose vacuum masses are estimated by the spin-averaged ones, m̄_H = (m_{0^-} + 3m_{1^-})/4 and m̄_G = (m_{0^+} + 3m_{1^+})/4. Although with present data we can fix the combination g_A1 + g_A2 through the decay D* → Dπ [14], we do not specify its value here, since it will be shown later that neither the g_A1 term nor the g_A2 term modifies the spectrum.

To explore the density dependence of the mass difference between the G and H doublets, we use the skyrmion crystal approach [8], putting skyrmions onto the FCC crystal and regarding the skyrmion matter as baryonic matter. In such an approach, the matter affects the heavy-light meson in the medium through functions of the space-averaged classical configurations of the light meson fields.
For example, for a quantity X, its matter effect enters through the space average over the unit cell,

⟨X⟩ = 1/(2L)^3 ∫_cell d^3x X(x),

where 2L denotes the crystal size. In this work, we construct the skyrmion matter using the standard Skyrme model and, following Ref. [6], take f_π = 93 MeV and e = 4.75, the empirical values that reproduce the pion dynamics.

In our calculation, the medium-modified G and H doublet masses are defined by the poles of the medium-modified two-point functions of the G and H doublets in the rest frame v^µ = (1, 0), in the zero-residual-momentum limit for the external line. Equivalently, this means that one just needs to replace the light meson fields in the Lagrangian (5) with their space-averaged values. Since the space-averaged light meson fields depend on the matter density, the density dependence of the heavy-light meson masses can thus be obtained. In terms of the space-averaged quantities, the effective Lagrangian (5), which is responsible for the medium-modified heavy-light meson masses, takes a reduced form; note that the terms mixing the G and H doublets disappear in the rest frame, since the G and H doublets have opposite parity, and parity is preserved in strong processes and also in our skyrmion crystal approach.

For convenience, we next write the pion field U symbolically as U = φ0 + iτ_a φ_a, with the constraint (φ0)^2 + (φ_a)^2 = 1. The parametrization (10) tells us that φ0 ∝ q̄q. Due to parity conservation, we can conclude ⟨φ_a⟩ = 0. In terms of φ_α (α = 0, 1, 2, 3), the heavy-light meson masses are therefore modified through the quantities ⟨φ0⟩, ⟨∂_iφ_i⟩ and ⟨T_i⟩, where T_i denotes the remaining one-derivative combinations of the pion field entering the g_A1 and g_A2 terms.

Before carrying out the numerical simulation, we first analyze some properties of ⟨∂_iφ_i⟩ and ⟨T_i⟩ based on the symmetry structure of the FCC crystal and the arrangement of the nearest two skyrmions to yield the strongest attractive interaction. In the crystal, due to the periodic structure, we can expand φ_α in Fourier series; for instance,

φ0 = Σ_{a,b,c} β_{abc} cos(aπx/L) cos(bπy/L) cos(cπz/L),

with an analogous expansion for each φ_i (i = 1, 2, 3) with coefficients α_{hkl} and a sine factor in the i-th direction. (Here, differently from Refs. [5,15], we make the Fourier expansion of the fields φ_α (α = 0, 1, 2, 3), which have the same structures as their corresponding unnormalized counterparts.) The Fourier coefficients α and β are constrained by (φ0)^2 + (φ_a)^2 = 1. Due to the FCC structure and the arrangement of the nearest two skyrmions to yield the strongest attractive interaction, the modes appearing in the above expansions are restricted as follows [6]: (F1) a, b, c are all even numbers or all odd numbers; and, in the expansions of φ_i, if h is even then k, l are restricted to odd numbers, while if h is odd then k, l are restricted to even numbers.

By using definition (8), from expansion (14) one can easily check that ⟨∂_iφ_i⟩ = 0. We next consider T_i. From the Fourier expansion (14), and using the restriction (F1), one gets ⟨T_i⟩ = 0; a similar argument applies to the remaining components. We then finally conclude that the one-derivative terms drop out of the space-averaged Lagrangian, and the effective Lagrangian (9) is reduced to a form in which the density affects the heavy-light meson masses only through ⟨φ0⟩ or, equivalently, ⟨q̄q⟩; in particular, the chiral-partner mass splitting becomes M*_G − M*_H = ∆_M ⟨φ0⟩. This conclusion agrees with that obtained in Ref. [16] in matter-free, zero-temperature space.

We plot in Fig. 1 the crystal-size (L) dependence of the heavy-light meson masses. From this figure we see that, in the low-density region (large L), there are two split curves, the upper black line denoting the modified G doublet mass M*_G and the lower red dashed line denoting the modified H doublet mass M*_H; with increasing matter density, the mass splitting becomes smaller, up to a critical density n_{1/2} at which the skyrmion phase goes over to the half-skyrmion phase.
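The vanishing of the space averages rests only on the mode restriction (F1) and parity, so it can be checked numerically. Below is a minimal sketch (ours, not the paper's code) that assumes, as the parity argument above suggests, that φ1 carries a sine in the x-direction, φ1 ~ sin(hπx/L)cos(kπy/L)cos(lπz/L); it evaluates the unit-cell average of ∂φ1/∂x for two (F1)-allowed modes:

```python
import numpy as np

# Numerical sanity check (a sketch under the stated assumptions): under the
# mode restriction (F1), the unit-cell average of d(phi_1)/dx vanishes.
# Assumed mode structure: phi_1 ~ sin(h*pi*x/L) cos(k*pi*y/L) cos(l*pi*z/L).

L, N = 1.0, 81                      # half cell edge; odd grid keeps x = 0 on the grid
x = np.linspace(-L, L, N)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

for h, k, l in [(2, 1, 1), (1, 2, 2)]:   # (F1): h even -> k,l odd; h odd -> k,l even
    # d/dx of sin(h*pi*x/L) gives (h*pi/L)*cos(h*pi*x/L); the y,z factors are unchanged
    dphi1_dx = (h * np.pi / L) * (np.cos(h * np.pi * X / L)
                * np.cos(k * np.pi * Y / L) * np.cos(l * np.pi * Z / L))
    print((h, k, l), f"<d(phi1)/dx> = {dphi1_dx.mean():+.2e}")  # ~0 up to grid error
```

Each allowed mode contains at least one cosine that integrates to zero over the cell, which is the whole content of the selection rule used in the text.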
In the half-skyrmion phase, due to the vanishing of ⟨φ0⟩ ∝ ⟨q̄q⟩, the H and G doublets become degenerate. We note that, since in the present case there is no spin-isospin correlation inside the G doublet or the H doublet in Lagrangian (19), as pointed out in Ref. [10], there is no mixing inside these doublets and no mass splitting between the mesons inside them.

In this paper, by using the mass splitting of the heavy-light mesons with chiral partner structure, we investigated the symmetry pattern of the half-skyrmion phase through the G and H doublets. To write down the effective Lagrangian, we used the chiral basis for the light quarks inside the heavy-light mesons, which is convenient for constructing an effective Lagrangian with chiral partner structure. In our calculation, we considered only the effect of the pseudoscalar meson, the pion, on the heavy-light mesons, and the Skyrme model is constructed from the pion alone, i.e., the standard Skyrme model.

Our result explicitly reveals that, in the half-skyrmion phase, due to the vanishing of the space-averaged quark-antiquark condensate, the H and G doublets, which are regarded as chiral partners of each other, have the same masses. In this sense, the medium-modified mass splitting of the H and G doublets can be used as a probe of the existence of the half-skyrmion phase. It was recently found that, in a finite-temperature system, the degeneracy of chiral partners is due to chiral symmetry restoration [13]. However, in our present cold dense system, although the chiral partners are degenerate in the half-skyrmion phase, chiral symmetry is not restored in this phase.

In the present exploration, we included only the pion effect. The effects of the heavier resonances, such as ρ, ω, σ and so on, which are essential for nuclear matter properties [8,17], on the heavy-light meson spectrum will be reported elsewhere.
Neurog1 and Neurog2 coordinately regulate development of the olfactory system

Background: Proneural genes encode basic helix–loop–helix transcription factors that specify distinct neuronal identities in different regions of the nervous system. In the embryonic telencephalon, the proneural genes Neurog1 and Neurog2 specify a dorsal regional identity and glutamatergic projection neuron phenotype in the presumptive neocortex, but their roles in cell fate specification in the olfactory bulb, which is also partly derived from dorsal telencephalic progenitors, have yet to be assessed. Given that olfactory bulb development is guided by interactions with the olfactory epithelium in the periphery, where proneural genes are also expressed, we investigated the roles of Neurog1 and Neurog2 in the coordinated development of these two olfactory structures.

Results: Neurog1/2 are co-expressed in olfactory bulb progenitors, while only Neurog1 is widely expressed in progenitors for olfactory sensory neurons in the olfactory epithelium. Strikingly, only a remnant of an olfactory bulb forms in Neurog1−/−;Neurog2−/− double mutants, while this structure is smaller but distinguishable in Neurog1−/− single mutants and morphologically normal in Neurog2−/− single mutants. At the cellular level, fewer glutamatergic mitral and juxtaglomerular cells differentiate in Neurog1−/−;Neurog2−/− double-mutant olfactory bulbs. Instead, ectopic olfactory bulb interneurons are derived from dorsal telencephalic lineages in Neurog1−/−;Neurog2−/− double mutants and, to a lesser extent, in Neurog2−/− single mutants. Conversely, cell fate specification is normal in Neurog1−/− olfactory bulbs, but aberrant patterns of cell proliferation and neuronal migration are observed in Neurog1−/− single and Neurog1−/−;Neurog2−/− double mutants, probably contributing to their altered morphologies. Finally, in Neurog1−/− and Neurog1−/−;Neurog2−/− embryos, olfactory sensory neurons in the epithelium, which normally project to the olfactory bulb to guide its morphogenesis, fail to innervate the olfactory bulb.

Conclusions: We have identified a cell autonomous role for Neurog1/2 in specifying the glutamatergic identity of olfactory bulb neurons. Furthermore, Neurog1 (and not Neurog2) is required to guide olfactory sensory neuron innervation of the olfactory bulb, the loss of which results in defects in olfactory bulb proliferation and tissue morphogenesis. We thus conclude that Neurog1/2 together coordinate development of the olfactory system, which depends on tissue interactions between the olfactory bulb and epithelium.

Background

The olfactory system is the part of the central nervous system that is responsible for detecting and processing odors. In vertebrates, the olfactory system consists of three major components: the olfactory epithelium (OE), the olfactory bulb (OB), and the olfactory cortex. Odor molecules are initially detected by olfactory sensory neurons (OSNs) in the OE, which project their axons to the OB, where odor signals are refined and enhanced before being relayed to the piriform/olfactory cortex, where signal processing and odor perception occur. The OB is a ventroanterior protrusion of the cerebrum that serves as an intermediate processing center for olfactory signals. It is comprised of projection neurons and interneurons, each with distinct embryonic origins.
Mitral and tufted cells are glutamatergic projection neurons that arise from dorsal telencephalic (that is, pallial) progenitors between embryonic day (E) 11 and E13 in mouse [1-3]. At E13.5, pallial progenitors also give rise to glutamatergic juxtaglomerular cells, which function as excitatory interneurons [1]. Later, at ~E14.5, inhibitory OB interneurons, including periglomerular cells and granule cells, begin to differentiate in the lateral ganglionic eminences (LGEs) of the ventral telencephalon, migrating tangentially into the OB [4-6]. Smaller numbers of interneurons are also derived from the ventricular zone (VZ) of the OB [7], and from subependymal progenitors lining the lateral ventricles throughout life [8,9].

Development of the OB and OE is intimately intertwined. The OE is populated by OSNs that send pioneer axons to infiltrate the primordial OB beginning at ~E11.5 in mouse [10,11]. Signals derived from pioneer OSNs are thought to reduce relative rates of cell proliferation in the rostral telencephalon, resulting in OB evagination and tissue morphogenesis [11], events that depend on Fgfr1 signaling [12]. There is also evidence that OSN innervation influences neuronal migration in the OB, as revealed by Dlx5, Fezf1 and Arx mutations, all of which display defects in OSN innervation that are accompanied by the generation of a smaller OB and aberrant interneuron migration [13-16].

The proneural genes Neurog1 and Neurog2 encode basic helix–loop–helix transcription factors that specify a dorsal regional identity and glutamatergic neurotransmitter phenotype in the neocortex [17-19]. Mitral, tufted and juxtaglomerular cells are labeled in Neurog1 and Neurog2 lineage traces, indicative of a pallial origin for these OB neurons [1,20]. While Neurog1 mutants have been reported to develop a smaller OB [21], the underlying cellular defects have not been characterized, and the role of Neurog2 in OB development has yet to be assessed. Moreover, while there is a partial loss of OSNs in Neurog1−/− OEs [22,23], it is not known whether the remaining OSNs differentiate normally. Here we find that Neurog1/2 are required in a redundant fashion to specify the identities of glutamatergic OB neurons, including mitral and juxtaglomerular cells. Conversely, we show that only Neurog1 is required for OB morphogenesis and to promote the differentiation of OSNs and their subsequent innervation of the OB. Neurog1/2 thus coordinately regulate development of the olfactory system.

Results

Neurog1 and Neurog2 are co-expressed in glutamatergic lineages in the developing olfactory bulb

The proneural genes Neurog1 and Neurog2 are co-expressed in dorsal telencephalic (that is, pallial) progenitors [18,19,24], including those that give rise to glutamatergic neuronal lineages in the neocortex and OB [1,20]. To begin to assess how Neurog1 and Neurog2 might function together during OB development, we first compared their expression profiles at three key time points: E11.5, prior to the onset of OB differentiation; E12.5, when OB morphogenesis has initiated and mitral cell projection neurons are differentiating; and E13.5, when the first juxtaglomerular cells are born [1-3,25]. At E11.5, Neurog1 transcripts were detected in only a few cells in the VZ of the dorsal telencephalon, including in the primordial OB at the rostral-most edge (Figure 1A-A"). In contrast, Neurog2 was expressed throughout the E11.5 pallial VZ, including in the presumptive OB (Figure 1B-B").
By E12.5 and at E13.5, when the OB is visible as a morphological protrusion [11,26], the number of neocortical and OB VZ cells expressing Neurog1 steadily increased (Figure 1C-C",E-E"), while Neurog2 expression remained widespread throughout the neocortical and OB VZs (Figure 1D-D",F-F"). Notably, at all stages analyzed, Neurog1 was also widely expressed throughout the basal OE (Figure 1A-A",C-C",E-E"), as previously documented [22], whereas Neurog2 expression was limited to a small, ventromedial OE domain (shown at E12.5; Figure 1D"). Immunostaining at E13.5 confirmed that Neurog1 and Neurog2 proteins were indeed co-expressed in pallial progenitors, including in the presumptive neocortex, as previously demonstrated [24], and in the developing OB (Figure 1G).

Recent long-term and short-term fate-mapping studies have indicated that Neurog1 [20] and Neurog2 [1] are expressed in all glutamatergic neuronal lineages in the OB, including mitral and tufted cell projection neurons and juxtaglomerular cells in the glomerular layer (GL). To determine to what extent Neurog1 and Neurog2 were expressed in the same or different OB lineages, we used a Neurog2-GFP knock-in (KI) allele (Neurog2KI) to perform short-term GFP-lineage tracing of Neurog2-expressing cells and their progeny [24]. The vast majority of (if not all) Neurog1-positive (Figure 1H) and Neurog2-positive (Figure 1I) VZ progenitors in the OB co-expressed GFP, suggesting that Neurog1 and Neurog2 are indeed co-expressed within the same OB lineage(s). GFP expression also persisted in Neurog2+/KI OB cells migrating out of the VZ, including those cells that had stopped expressing Neurog1 and Neurog2, allowing the fate of these cells to be assessed with molecular markers (Figure 1J,K,L). GFP+ cells in the mantle layer of the E13.5 Neurog2+/KI OB co-expressed Tbr1 (Figure 1J,J') and Tbr2 (Figure 1K,K'), markers of dorsally-derived glutamatergic neurons [27,28], as recently reported [1]. In contrast, GFP+ cells did not express the ventral-specific regional marker Dlx2 in E14.5 Neurog2+/KI embryos (Figure 1L,L'). These data demonstrate that Neurog1 and Neurog2 are largely co-expressed in pallial progenitors, including those that give rise to Tbr1+ and Tbr2+ glutamatergic neurons in the developing OB (Figure 1M). In contrast, only Neurog1 is expressed to a significant extent in OE lineages (Figure 1M), raising the question of how these proneural genes coordinately regulate development of the olfactory system (Figure 1N).

OB morphogenesis and lamination are disrupted in Neurog1−/− and Neurog1/2−/− embryos

To determine whether Neurog1 and Neurog2 are required for OB development, we used a loss-of-function approach, analyzing Neurog1 [29] and Neurog2GFPKI [24] single and double null mutants. In E18.5 wild-type (Figure 2A) and Neurog2KI/KI mutant (Figure 2C) embryos, the OB was visible as a distinct morphological protrusion of the ventroanterior brain. In comparison, the OB was much smaller in Neurog1−/− embryos (Figure 2B), and a morphologically distinct OB was not apparent in Neurog1−/−;Neurog2KI/KI double mutants (Neurog1/2−/−; Figure 2D). To examine OB development at the cellular level, we first monitored GFP expression from the Neurog2KI allele, which serves as a short-term lineage trace of mitral, tufted and juxtaglomerular lineages [1].
In E18.5 double heterozygotes and Neurog2KI/KI and Neurog1−/− null mutants (the latter maintained on a Neurog2KI/+ background), GFP-labeled cells were detected in the OB VZ and developing mitral cell layer (MCL). In Neurog1−/− OBs, GFP+ cells in the glutamatergic OB lineages were disorganized and formed a less distinct MCL (Figure 2E,F,G). Strikingly, in sections through the Neurog1/2−/− double-mutant forebrain, an OB-like structure (OBLS) with a central ventricle that was surrounded by GFP+ cells was detected in an aberrant location in the ventrolateral brain (Figure 2H). To further characterize the laminar organization of the proneural mutant OBs, E18.5 sagittal sections were stained with H & E. In H & E-stained wild-type (Figure 2I,I') and Neurog2KI/KI (Figure 2K,K') mutant OBs, a distinct VZ, granule cell layer, MCL, GL and outer nerve layer (ONL) were apparent. In contrast, most of the post-mitotic neuronal layers were indistinct in the E18.5 Neurog1−/− OB (Figure 2J,J') and Neurog1/2−/− OBLS (Figure 2L,L'), although a VZ and granule cell layer were discernible in both mutants. We thus conclude that Neurog1 is required for proper growth and lamination of the OB, whereas Neurog1 and Neurog2 are together required for overall OB morphogenesis. We set out to identify the underlying cause(s) of the morphological and laminar defects in these proneural mutants.

Defects in the migration of glutamatergic neurons in Neurog1−/− OBs and in migration and differentiation in Neurog1/2−/− OBLSs

The disruption of lamination in E18.5 Neurog1−/− OBs and Neurog1/2−/− OBLSs suggested that the neuronal subtypes that populate these layers may not differentiate properly. To test this, we first examined glutamatergic OB lineages, which are derived from Neurog1-expressing and Neurog2-expressing pallial progenitors, including projection neurons (mitral and tufted cells) and interneurons (juxtaglomerular cells) (see above, and [1,20]). To label projection neurons in the MCL, we used a panel of dorsal telencephalic-specific markers, including NeuroD6, Tcfap2e, Nrp1, NeuroD1, Reelin, Tbr1 and Tbr2 (Figure 3 and data not shown). Notably, Tcfap2e also labels OB progenitors and is one of the few definitive markers of an OB identity, as it is not also expressed in neocortical lineages [30], unlike the rest of the markers we employed. To unambiguously identify the OBLS in Neurog1/2−/− double mutants, the anterior olfactory nucleus (AON), which lies between the neocortex and OB, was used as a landmark. In E18.5 Neurog2KI/+ embryos, the AON was labeled by GFP (data not shown), indicating that it is also derived from Neurog2-expressing pallial progenitors. In all E18.5 Neurog1/2 single and double mutants, the AON expressed GFP (data not shown), NeuroD6 (Figure 3A,B,C,D) and Tbr1 (data not shown), indicating that AON development is not grossly perturbed by the loss of these proneural genes. In the main OB, expression of NeuroD6, Tcfap2e, Tbr1 and Tbr2 was detected in the OB VZ and MCL in E18.5 wild-type and Neurog2KI/KI null embryos (Figure 3A,C,E,G,I,K,M,O). In contrast, NeuroD6-, Tcfap2e-, Tbr1- and Tbr2-expressing cells were generated but were disorganized in E18.5 Neurog1−/− OBs, occupying ectopic positions in the outermost portion of the OB, where a mitral cell-deficient GL would normally form (Figure 3B,F,J,N).
Strikingly, NeuroD6, Tcfap2e, Tbr1 and Tbr2 expression was also detected in the aberrantly localized OBLS in E18.5 Neurog1/2−/− embryos, although the number of Tcfap2e-positive cells was markedly reduced (Figure 3D,H,L,P). Neurog1/2 are thus required for the lamination of MCL projection neurons in the OB, and may together be required for the differentiation of these cells. We next asked whether Neurog1/2 were required for the differentiation of glutamatergic juxtaglomerular cells in the GL, which include external tufted and short axon cells that are labeled by vesicular glutamate transporter 1 (vGlut1) and vGlut2 [1,31,32]. In E18.5 wild-type (Figure 3Q,U) and Neurog2KI/KI (Figure 3S,W) OBs, vGlut1 labeled a large number of juxtaglomerular cell bodies and their projections, while vGlut2 expression was confined to the ONL in the periphery of the GL. In Neurog1−/− OBs, vGlut1 and vGlut2 staining was strongly reduced in the presumptive GL, and an ectopic cluster of vGlut1/2-labeled cells aggregated in the dorsal OB (Figure 3R,V). Similarly, while scattered vGlut1/2-immunoreactive cells were detected throughout the Neurog1/2−/− OBLS, a distinct GL was not evident in these embryos (Figure 3T,X). We thus conclude that glutamatergic mitral and juxtaglomerular cells are born in normal numbers in Neurog2KI/KI and Neurog1−/− single-mutant OBs, but these cells migrate inappropriately and fail to take up their correct positions in the Neurog1−/− MCL and GL. In contrast, fewer glutamatergic neurons are born in the Neurog1/2−/− OBLS, and these cells also migrate aberrantly.

Neurog1 is upregulated in Neurog2−/− olfactory bulbs

The lack of an apparent defect in the Neurog2−/− OB (at least at the morphological level and in glutamatergic lineages) was surprising, given that fewer glutamatergic neurons are generated in Neurog2−/− single-mutant neocortices. We previously attributed the Neurog2−/− neocortical phenotype to a downregulation of Neurog1 expression in dorsomedial telencephalic domains, such that Neurog2−/− and Neurog1/2−/− embryos are equivalent (that is, both lack Neurog1 and Neurog2 expression) in this part of the developing neocortex [18]. We therefore asked whether Neurog1 expression was similarly lost in the presumptive OB region of Neurog2−/− embryos. Strikingly, we found that Neurog1 was instead upregulated in the Neurog2−/− rostral telencephalon (presumptive OB) at both E11.5 (Figure 4K,L) and, to a lesser extent, E13.5 (Figure 4M,N). In contrast, Neurog1 expression was reduced throughout most of the remainder of the Neurog2−/− dorsal telencephalon, as previously documented [18]. These data are consistent with the idea that Neurog1 may compensate for the loss of Neurog2 in the developing OB.

Relative rates of OB proliferation are elevated in Neurog1−/− and Neurog1/2−/− OBs

Beginning at ~E12.5, the OB is first evident as a distinct rostral protuberance of the telencephalon [11,26]. In our analysis of glutamatergic neuronal markers, we observed a shortening of the proximal-distal telencephalic axis in Neurog1−/− mutants as early as E13.5, while a morphologically distinct OB was not evident in Neurog1/2−/− mutants at any stage analyzed (between E12.5 and E18.5; data not shown). At these early stages, the driving force of OB morphogenesis is thought to be a reduction in proliferation at the rostral edge of the telencephalon, which results in the neocortex ballooning out while the presumptive OB is left behind [11,26].
To determine whether aberrant patterns of proliferation contributed to the morphogenetic defects observed in Neurog1−/− and Neurog1/2−/− OBs, dividing S-phase progenitors were labeled with a 30-minute BrdU pulse, and labeled progenitors were then enumerated in fields of equal size in the presumptive neocortex (dorsal telencephalon) and OB (Figure 5A to K). The presumptive OB was identified at these early stages as the midpoint of the telencephalic continuum surrounding the lateral ventricles. Specifically, the OB is flanked by dorsal and ventral telencephalic domains, both of which have distinct morphological features, and the borders of which were precisely identified by BrdU co-labeling with Tbr2 (dorsal) or Dlx2 (ventral). To confirm that the OB/dorsal telencephalon proliferation ratios were not altered in Neurog1 or Neurog2 single mutants because of a defect in the neocortex (as opposed to the OB), we also compared the ratios of BrdU+ cells in the dorsal versus ventral telencephalon (Figure 5L,M,N). Note that neither Neurog1 nor Neurog2 is expressed in the ventral telencephalon, so proliferation rates should not be altered in this domain in mutants (serving as an internal control). Consistent with the lack of a defect in neocortical cell proliferation in Neurog1/2 single and double mutants, at both E11.5 (Figure 5M) and E13.5 (Figure 5N), the ratios of BrdU-labeled ventral versus dorsal telencephalic progenitors were similar in all genotypes (P > 0.05 for all pairwise comparisons against wild-type). We thus conclude that prospective OB progenitors fail to reduce their relative proliferation rates in Neurog1−/− and Neurog1/2−/− mutants, probably contributing to the observed OB morphogenesis defects.

To further characterize proliferation defects in early OB development, we examined the spatial arrangement of BrdU-labeled S-phase progenitors in the E13.5 VZ with respect to differentiating mitral cells. Early-born mitral cells migrate radially from the OB VZ, using radial glia as a scaffold, while later-born mitral cells shift to a tangential pattern of migration, coursing through the intermediate zone of the OB in close proximity to the tangentially oriented axons of early-born mitral cells [10,33]. Consequently, mitral cells generated at E10 show a bias towards dorsomedial positions, while tangentially migrating cells born at E12 preferentially accumulate in ventrolateral domains. In E13.5 wild-type OBs (Figure 5O) and Neurog2KI/KI OBs (Figure 5Q), Tbr2+ mitral cells had migrated throughout the mantle layer of the OB, lining the OB surface along the entire dorsal-to-ventral axis, but were less abundant in a central zone at the rostral tip. In E13.5 Neurog1−/− OBs (Figure 5P), the distribution of Tbr2+ cells was altered, such that a Tbr2-deficient zone at the rostral tip was not observed, suggestive of early defects in cell migration. These migratory defects were more severe in E13.5 Neurog1/2−/− OBLSs, in which a distinct gap was evident between the BrdU-labeled progenitor zone and the Tbr2+ mantle layer (Figure 5R). Migration defects are thus evident as early as E13.5 in Neurog1−/− and Neurog1/2−/− OBs.

Defects in the differentiation and migration of olfactory bulb interneurons in Neurog1−/−, Neurog2KI/KI and Neurog1/2−/− mutants

In the embryonic neocortex, Neurog1 and Neurog2 regulate a binary fate decision, promoting a dorsal regional identity and glutamatergic neurotransmitter phenotype while repressing an alternative ventral, GABAergic neuronal identity [18,19].
We thus speculated that the reduction in glutamatergic neuronal number in the Neurog1/2−/− OBLS may be due to a similar fate switch. To test this, E13.5 embryos were labeled with Dlx2, which together with Dlx1 is required for the generation of almost all GABAergic and dopaminergic interneurons in the OB [13,34,35]. While Dlx2 was widely expressed in the mantle zone of the E13.5 ventral telencephalon, only a few Dlx2+ cells had infiltrated the wild-type (Figure 5S) and Neurog1−/− (Figure 5T) OBs at this stage. In contrast, Dlx2-labeled neurons were abundant in the E13.5 Neurog1/2−/− OBLS (Figure 5V), lying directly adjacent to the BrdU-labeled progenitor zone in the VZ and filling the gap between the Tbr2+ and BrdU+ zones. Some Dlx2+ cells were also detected in ectopic sites in the Neurog2KI/KI OB (Figure 5U). Interneurons thus appeared to be generated at the expense of glutamatergic neurons in the Neurog1/2−/− OBLS, and possibly also in Neurog2KI/KI OBs. In the neocortex, the ventralization of Neurog2KI/KI and Neurog1/2−/− progenitors arises due to the increased expression of Ascl1 [18,19], a proneural gene that is required for the generation of GABAergic neurons in the ventral telencephalon [36,37] and of a subset of periglomerular cells in the embryonic OB [35] and adult OB [38]. Ascl1 expression was also upregulated in the E13.5 OB VZ in Neurog2KI/KI and Neurog1/2−/− embryos (Figure 6A,B,C,D), consistent with a similar mechanism underlying the misspecification of OB neurons.

To further analyze the ectopic differentiation of OB interneurons, E18.5 OBs were analyzed for the expression of Dlx1, which labels OB progenitors and postmitotic granule and periglomerular cells in the granule cell layer and GL; glutamate decarboxylase 1 (GAD1), which labels all GABAergic OB interneurons in the granule cell layer and GL [39]; calretinin, which labels most granule cells and a subset of periglomerular cells [6]; and TH, which labels dopaminergic periglomerular cells (Figure 6E to T) [6]. In E18.5 Neurog1−/− OBs, a distinct GL was not evident, and instead neurons labeled with Dlx1, GAD1, calretinin and TH were scattered throughout the mantle zone of the OB (Figure 6F,J,N,R). In E18.5 Neurog2KI/KI OBs, the GL was clearly marked by Dlx1, GAD1 and calretinin, but a scattering of ectopic interneurons labeled by these markers was also detected between the MCL and GL (Figure 6G,K,O). While TH+ cells were not located in ectopic sites in E18.5 Neurog2KI/KI OBs, they formed a less compact layer (Figure 6S). Finally, in Neurog1/2−/− OBLSs, there was a striking expansion of the Dlx1, GAD1, calretinin and TH expression domains, which spread out radially from the VZ of the OBLS to reach the pial surface of the brain (Figure 6H,L,P,T). In Neurog2KI/KI OBs and Neurog1/2−/− OBLSs, therefore, a subset of pallial progenitors that should give rise to glutamatergic OB projection neurons are misspecified, instead differentiating into GABAergic interneurons. In contrast, neuronal misspecification defects are not observed in Neurog1−/− OBs, although the migration of GABAergic OB neurons is strikingly perturbed.

Olfactory sensory neurons fail to innervate the olfactory bulb in Neurog1−/− and Neurog1/2−/− embryos

At first glance, the defective migration of OB interneurons in Neurog1−/− and Neurog1/2−/− embryos was unexpected, given that these proneural genes are not expressed in OB interneuron lineages [20].
However, several studies have indicated that OSN innervation is required for OB interneuron migration [13-16], in addition to controlling the proliferation of OB progenitors [11]. Defects in OB interneuron migration could thus be non-cell autonomous in Neurog1−/− and Neurog1/2−/− double mutants. Consistent with this model, Neurog1 is expressed in OE progenitors, where it is required for the differentiation of a subset of OSNs at early stages of development [22], although innervation patterns were not examined. To determine whether OSN innervation was indeed perturbed in the absence of Neurog1 function, we monitored the expression of growth-associated protein 43 (GAP43) and olfactory marker protein (OMP), which mark both the cell bodies and axonal projections of immature (GAP43) and mature (OMP) OSNs [11,46]. In coronal sections through E18.5 wild-type (Figure 8A,E) and Neurog2KI/KI (Figure 8C,G) OBs, GAP43-labeled and OMP-labeled OSN axons emanated from the OE, traversing the cribriform plate to penetrate the ONL, where they wrapped the entire periphery of the OB. In contrast, in E18.5 Neurog1−/− (Figure 8B,F) and Neurog1/2−/− (Figure 8D,H) embryos, GAP43 and OMP labeled a fibrocellular mass (FCM) that did not penetrate the OB. Only a small amount of GAP43 and OMP expression was observed surrounding caudal regions of the Neurog1−/− OB, suggesting that very few OSN axons innervated the mutant OB (Figure 8B,F). As a side note, the term FCM was first coined to describe the extra-toes (that is, Gli3−/−) olfactory phenotype, and refers to an amorphous bundle of OSN axons that fail to extend and penetrate the OB [47]. To assess OSN innervation along the entire rostrocaudal axis, we also examined sagittal sections of E18.5 Neurog1−/− (Figure 9B,F) and Neurog1/2−/− (Figure 9D,H) embryos with calretinin (data not shown), GAP43 (Figure 9A,B,C,D) and OMP (Figure 9E,F,G,H), revealing that defects in OSN axon innervation of the OB were observed at all levels.

OSNs express one of ~1,200 odorant receptors (ORs) in mice, dictating the type of odor they will respond to, with OSNs that express the same OR targeting the identical glomerulus in the OB [48-50]. Notably, the specificity of OSN targeting depends on ORs, which are functionally required to establish a glomerular topographic map in the OB [51-53]. To determine whether OR expression was maintained in Neurog1/2−/− OSNs, we examined the expression of three different ORs (L45, M72, P2) that direct the innervation of distinct glomeruli [54,55]. In coronal sections through E18.5 wild-type OBs (Figure 8I,M,Q) and Neurog2KI/KI OBs (Figure 8K,O,S), L45, M72 and P2 transcripts were detected in OSN axon bundles that had innervated the OB, concentrating in the ventromedial ONL. In contrast, in E18.5 Neurog1−/− embryos (Figure 8J,N,R) and Neurog1/2−/− embryos (Figure 8L,P,T), L45, M72 and P2 were expressed in OSN axons that accumulated in a FCM outside the OB. Neurog1−/− and Neurog1/2−/− OSN axons therefore failed to penetrate the OB, even though they continued to express ORs. Taken together, these data show that Neurog1−/− and Neurog1/2−/− mutant OSNs fail to innervate the OB, despite their expression of several markers of differentiated OSNs. Neurog1 is thus required to promote OSN axonal extension into the ONL of the OB.
Finally, we investigated whether apoptosis might contribute to the small decline in OSN numbers in Neurog1−/− and Neurog1/2−/− mutants by analyzing the expression of activated caspase 3, a marker of apoptosis. In E14.5 wild-type OEs (Figure 10Q), Neurog1−/− OEs (Figure 10R) and Neurog2KI/KI OEs (Figure 10S), only a few scattered activated caspase 3-positive cells were detected, whereas in Neurog1/2−/− embryos (Figure 10T) there was a notable increase in activated caspase 3 immunolabeling in the OE. Apoptosis thus occurs at elevated levels in the Neurog1/2−/− OE only, despite Neurog2 not being expressed in the vast majority of OE progenitors. Strikingly, the increase in OE apoptosis in double mutants phenocopies the OE defects observed upon bulbectomy [59], suggesting that the Neurog1/2−/− OBLS may fail to provide trophic signals to the OE, as discussed further below.

Discussion

The olfactory system consists of the OB, OE and olfactory cortex, which together are responsible for detecting and processing odors (Figure 10U). Here we provide mechanistic insights into how the development of these olfactory structures is coordinated. We first demonstrate that Neurog1 and Neurog2 function redundantly and in a cell autonomous fashion to specify the glutamatergic neuronal identity of OB projection neurons and juxtaglomerular cells, while suppressing an alternative interneuron fate. In contrast, only Neurog1 is required to regulate OSN innervation of the OB, defects in which can perturb the proliferation rate of OB progenitors and the migratory routes of OB neurons (Figure 10V,W). In summary, Neurog1 and Neurog2 play an integral role in coordinately regulating development of the olfactory system, regulating cell fate specification in the OB and OSN differentiation and axonal targeting in the OE.

Neurog1/2 promote a glutamatergic neuronal identity in the olfactory bulb

Glutamatergic mitral, tufted and juxtaglomerular cells are derived from dorsal telencephalic progenitors, as revealed by Neurog1 [20] and Neurog2 (present study and [1]) lineage tracing. Accordingly, we found that fewer glutamatergic OB neurons are generated in the absence of Neurog1/2 function. Nevertheless, a subset of mitral and juxtaglomerular cells differentiate in the Neurog1/2−/− OBLS, suggesting that other genes compensate for the loss of proneural function. Candidate transcriptional regulators that may promote the differentiation of glutamatergic OB neurons in the absence of Neurog1/2 include the cortical selector genes Pax6 [3,60] and Lhx2 [61], both of which are also required for the differentiation of subsets of glutamatergic neuronal lineages in the OB. Consistent with a potential compensatory role for Pax6 in the OB, we previously demonstrated that, in the embryonic neocortex, Neurog1/2 are required for the first wave of neurogenesis (<E14.5), whereas Pax6 drives the second wave (>E14.5) [19].

At first glance, the presence of OB defects in Neurog1−/− and not Neurog2−/− single mutants might suggest that these two transcription factors have distinct functions. However, we show here that Neurog1 is upregulated in the presumptive OB of Neurog2−/− single mutants, probably compensating for the loss of Neurog2. We thus suggest that Neurog1 and Neurog2 are for the most part functionally redundant in the developing OB. Consistent with this idea, severe defects in OB development are observed only in Neurog1/2−/− double mutants.
OB and neocortical projection neurons differ, yet both arise from adjacent pools of dorsal telencephalic progenitors. How does neuronal diversification occur? One possibility is that OSN-derived or OEC-derived signals alter the cell-fate specification functions of Neurog1/2. Consistent with this idea, at ~E11, when mitral cells begin to differentiate, OSN pioneer axons infiltrate the primordial OB [10,11], as do OECs, which wrap OSN axons [57,62-65]. How might OSNs/OECs influence the cell-fate specification properties of Neurog1/2 in the OB? OSNs secrete Fgf8 to non-cell-autonomously reduce OB progenitor cell proliferation [11-16], while OECs produce an unknown chemoattractant that guides OB neuronal migration [66]. One possibility is that the activation of downstream signaling pathways in the OB triggers a change in the cell-fate specification properties of Neurog1 and Neurog2. For instance, modification of Neurog1/2 by phosphorylation might confer on these proneural proteins the capacity to turn on the expression of genes such as Tcfap2e, which is specifically expressed in OB lineages, a possibility that will be investigated in the future.

Neurog1/2 control a binary choice between excitatory and inhibitory lineages in the olfactory bulb

In the region of the dorsal telencephalon that will become the neocortex, Neurog1/2 regulate a binary fate choice between dorsal, glutamatergic and ventral, GABAergic neuronal fates [18,19]. Consequently, in Neurog2KI/KI and Neurog1/2−/− embryos, neocortical progenitors and their neuronal derivatives are misspecified, acquiring a dorsal LGE-like identity [19]. Notably, the dorsal LGE is the ventral telencephalic progenitor zone from which most OB interneurons arise during embryogenesis, including granule cells and periglomerular cells [13,35,43,44,67]. Consistent with the expansion of a dorsal LGE-like progenitor pool in Neurog2KI/KI and Neurog1/2−/− embryos, several interneuron markers were ectopically expressed in the mutant OBs/OBLSs. OB interneuron differentiation is regulated by multiple transcription factors, including Ascl1 and Dlx1/2, which control distinct differentiation pathways [6,35,68]. Here we found that Ascl1 and Dlx1/2 are both upregulated in the Neurog2KI/KI OB and Neurog1/2−/− OBLS from E13.5 of development, as previously reported in the neocortex [18]. Additional transcription factors required for the differentiation of subsets of OB interneurons were also ectopically expressed in the Neurog2KI/KI OB and Neurog1/2−/− OBLS, including Sp8 [41], Pax6 [44,69] and Er81 [70]. By monitoring interneuron marker expression in GFP-labeled OB cells derived from the Neurog2 lineage, we were able to show that the ectopic Sp8-expressing, Pax6-expressing and Er81-expressing interneurons in Neurog2KI/KI and Neurog1/2−/− OBs were derived from pallial progenitors that had undergone a fate switch, as opposed to arising from an increase in the migration of OB interneurons. Neurog1/2 thus play a similar role in regulating a binary fate switch between an excitatory glutamatergic neuronal phenotype and an inhibitory interneuron phenotype in both the OB (present study) and the neocortex [18,19].

Neurog1 regulates OB tissue morphogenesis, proliferation and lamination by controlling OSN innervation of the OB

We show here that Neurog1/2−/− embryos have severe defects in OB morphogenesis, forming an aberrantly localized OBLS in the ventrolateral brain.
Interestingly, similar OB morphological defects are also observed in Pax6 [3,60] and Lhx2 [71] mutants, cortical selector genes that are required to specify dorsal telencephalic regional identities [72,73]. In contrast, the morphogenetic defects observed in the Neurog1−/− OB are more modest, with a reduction in OB size and aberrant lamination of the GL and MCL. Given that the driving force for OB morphogenesis is thought to be a reduction in proliferation in the presumptive OB at the rostral tip of the telencephalon, which is left behind as surrounding neocortical territories expand [11,12], we examined proliferation in Neurog1/2 mutant embryos. We found that proliferation rates do not decline in the presumptive OB versus the neocortex in either Neurog1−/− or Neurog1/2−/− embryos, probably accounting at least in part for the inability of the OB to protrude outwards. Nevertheless, differences in proliferation alone cannot explain why the OBLS morphogenesis defects in Neurog1/2−/− embryos are so much more striking than those observed in Neurog1−/− OBs. We speculate that the added OB neuronal specification defects observed in Neurog1/2−/− (present study), Pax6−/− [3,60] and Lhx2−/− [71] embryos, which are not observed in the Neurog1−/− OB, may alter neuronal migratory routes, hence influencing the aberrant positioning of the OBLS.

Several studies have suggested that the normal reduction in proliferation of presumptive OB versus neocortical progenitors is induced by the innervation of the OB by OSN axons [11]. The first pioneer OSN axons innervate the OB at E11, when OB morphogenesis first begins, but it is not until E13 to E15 that a sizeable number of OSN axons enter the OB, first innervating the ONL and later infiltrating the GL, where they make synaptic contacts with mitral cell dendrites [11,64,74-76]. Consistent with these studies, we found that Neurog1−/− and Neurog1/2−/− OSN axons do not innervate the OB, instead terminating prematurely in a FCM. The FCM formation and lack of OB innervation in Neurog1−/− embryos is also strikingly similar to the phenotypes observed in Arx, Fezf1 and Dlx5 mutants, which also develop a smaller OB with aberrant MCL and GL lamination [13-16]. However, Dlx5, Arx and Fezf1 are not expressed in pallial lineages, but rather in subpallial and/or OSN lineages, where they control the differentiation and/or migration of OB interneurons through cell autonomous and non-autonomous mechanisms [13-16]. Strikingly, the abnormal formation of the MCL and GL in Neurog1−/− OBs more closely resembles the phenotypes observed following the mutation of genes that are expressed in OSN lineages and prevent innervation of the OB, including Fezf1, Dlx5 and Klf7 [16,77,78].

Why do Neurog1−/− OSNs fail to innervate the OB? The basal lamina surrounding the brain is remodeled at E14.5 to allow OSN axon penetration, an event that depends on canonical Wnt signaling [79] and matrix metalloproteinases. In Dlx5 mutants, the defects in OSN penetration of the OB may be related either to defective differentiation of the OSNs, which, similar to Neurog1−/− OSNs, also express markers of differentiated neurons, or to defects in the frontonasal mesenchyme, which also expresses Dlx5 [13]. In Fezf1 mutants, the removal of this basal lamina has been shown to rescue the OSN phenotype, resulting in OSN penetration of the OB. Other possibilities include the loss of a chemoattractant activity in the Neurog1−/− OB itself.
While we did not identify any defects in the expression of the neurotrophin receptors or ligands in the OB or OE of Neurog1/2−/− mutants, Neurog1 has been shown to regulate the OB expression of prokineticin 2 (PK2) [21], a secreted protein that binds G-protein coupled receptors. Notably, PK2-deficient mice phenocopy the Neurog1−/− OB defects, at least in part because PK2 functions as a chemoattractant for OB interneurons born in the ventral telencephalon [80]. Future work will be required to determine whether PK2 also functions as a chemoattractant for OSN axons, and whether Neurog1 function is required in the OB and/or OE for OSN innervation of the OB.

Neurog1 is required for olfactory sensory neuron innervation of the olfactory bulb

Previous analyses of the Neurog1−/− OE revealed that fewer OSNs express a subset of mature neuronal markers at early developmental stages (E12.5), including the pan-neuronal marker SCG10, suggestive of a block in differentiation [22]. However, these defects are only partial, as other OSN markers, such as Ebf1 and Lhx2, are expressed at normal levels in E12.5 Neurog1−/− OSNs. Here we examined the differentiation of Neurog1−/− and Neurog1/2−/− OSNs at a later developmental stage (E18.5), revealing only a minor reduction in the expression of mature OSN markers, including SCG10, GAP43, OMP and the OR genes L45, M72 and P2. The expression of mature OSN markers in the Neurog1−/− OE may be due in part to the maintained expression of Lhx2, which is required to initiate OE differentiation [61], or of Six1, which functions upstream of Neurog1 to regulate OSN differentiation [81].

In addition to the ability of the OE to influence OB development, it has conversely been suggested that the OB can influence the OE. Indeed, bulbectomy results in a loss of OSN marker expression and increased apoptosis in the OE [59]. In this regard, it is interesting that in the Neurog1/2−/− OE there is an increase in apoptosis that is not observed in the Neurog1−/− OE. At first glance this is surprising, as Neurog2 is only expressed in a small dorsomedial domain of the OE (present study), whereas Neurog1 expression is widespread [22]. While Neurog2 cannot rescue the OSN innervation defects observed in Neurog1−/− embryos, we cannot rule out the possibility that Neurog2 initiates the expression of survival signals in the OSN, thereby compensating for the loss of Neurog1 in the OE. However, given the limited expression domain of Neurog2, we do not believe that this is the case. Instead, we suggest that the Neurog1/2−/− OBLS is deficient in a trophic signal that is essential for OSN survival in the OE. While we investigated whether the neurotrophins might be contributing to the death of the OSNs, no defects in Ntrk receptor or ligand expression in the OE were observed in the Neurog1/2 mutants, suggesting that other factors must be involved.

Conclusions

In this article we find that both Neurog1 and Neurog2 are expressed in OB progenitors, where they function redundantly to specify the identities of glutamatergic OB neurons, including mitral and juxtaglomerular cells. Conversely, we show that Neurog1 is required to promote OSN innervation of the OB, and consequently influences OB proliferation and morphogenesis. We thus conclude that the proneural genes Neurog1 and Neurog2 coordinately regulate development of the olfactory system by regulating proliferation, cell fate specification, neuronal migration and axonal innervation.
Animals and genotyping

The generation of the Neurog1 null allele and the Neurog2 GFP knock-in (KI) null allele was previously described [24,29]. Double heterozygous mice carrying null alleles of Neurog1 and Neurog2 KI were maintained on a CD1 background, and males and females were crossed to generate embryos. Mating was confirmed via vaginal plugs, with mouse embryos being staged by considering the plug date as E0.5. Embryos were genotyped as previously described [19,24]. All animal procedures were approved by the University of Calgary Animal Care Committee (Protocol # AC11-0053) in agreement with the Guidelines of the Canadian Council of Animal Care (CCAC).

Histological staining

Whole E18.5 heads were placed in Bouin's fixative and processed for paraffin sectioning as previously described [92]. Sections were deparaffinized in three xylene washes for 3 minutes each, followed by rehydration in a decreasing ethanol series (2× 100%, 2× 95% and 2× 80%) for 3 minutes each. Slides were then immersed in water for 5 minutes before staining in hematoxylin for 3 minutes. The slides were then rinsed in water for 2 minutes and stained in eosin for 30 seconds. Slides were then dehydrated in 3-minute ethanol washes in an ascending series (2× 80%, 2× 95% and 2× 100%). Finally, the tissues were incubated in xylene overnight and mounted in Permount SP15-100 Toluene Solution (Fisher Scientific).

Statistical analysis

Composite photomicrographs of the entire OB were used to count immunoreactive cells from a minimum of three embryos and three sections per embryo. Graphs and statistical tests were generated with GraphPad Prism Software version 5.0 (GraphPad Software Inc., La Jolla, CA, USA). Error bars represent the standard error of the mean. Statistical significance was determined using one-way analysis of variance and a post-hoc Tukey's test.
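The authors ran these tests in GraphPad Prism; purely as an illustration of the same workflow (one-way ANOVA followed by Tukey's post-hoc comparisons), a minimal Python sketch using hypothetical per-genotype cell counts might look as follows:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical immunoreactive-cell counts, one value per embryo
wt = np.array([152, 148, 160])
neurog1_ko = np.array([110, 102, 118])
neurog12_dko = np.array([45, 52, 40])

# One-way ANOVA across the three genotypes
f_stat, p_value = f_oneway(wt, neurog1_ko, neurog12_dko)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's post-hoc test for all pairwise genotype comparisons
counts = np.concatenate([wt, neurog1_ko, neurog12_dko])
groups = ["WT"] * 3 + ["Neurog1-/-"] * 3 + ["Neurog1/2-/-"] * 3
print(pairwise_tukeyhsd(counts, groups))
```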
Building Resilience with Aerobic Exercise: Role of FKBP5

Both preclinical and clinical studies have pointed out that aerobic exercise, at moderate doses, is beneficial at all stages of life by promoting a range of physiological and neuroplastic adaptations that reduce the anxiety response. Previous research on this topic has repeatedly described how the regular practice of aerobic exercise induces a positive regulation of neuroplasticity- and neurogenesis-related genes, as well as better control of HPA axis function. However, limited progress has been made in the integration of neuroendocrine and neuroplastic changes, as well as in introducing new factors to understand how aerobic exercise can promote resilience to future stressful conditions. Resilience is defined as the ability to adapt to stress while maintaining healthy mental and physical performance. Consistent findings point to an important role of FKBP5, the gene expressing FK506-binding protein 51 (FKBP51), as a strong inhibitor of the glucocorticoid receptor (GR) and thus an important regulator of the stress response. We propose that aerobic exercise could contribute to modulating FKBP5 activity, acting as a potential therapeutic approach for mood disorders. In this sense, aerobic exercise is well known for increasing the growth factor BDNF, which by downstream pathways could affect FKBP5 activity. Therefore, our manuscript has the aim of analyzing how FKBP5 could constitute a promising target of aerobic exercise promoting resilience-related phenotypes.

AEROBIC EXERCISE AS A HEALTHY AVENUE TO PROMOTE RESILIENCE

Overwhelming evidence exists that lifelong exercise is associated with a longer healthspan, whereas physical inactivity is the fourth leading contributor to death worldwide [1]. Among its benefits, aerobic exercise causes not only positive effects on physical health but also on psychological well-being. Hence, those subjects who perform exercise regularly suffer from less depression [2], anxiety [3] and cognitive impairments [4]. Similar results have been found in preclinical studies in which, although running exercise is comparable to other forms of stress in terms of corticosterone release, it induces patterns of neuronal activity that correspond to predictable, controllable, and rewarding stimuli, in contrast to negative stressors, such as social isolation or electric shocks [5]. Exercise also induces changes (e.g., in growth factor expression) that could promote enhanced neuroplasticity and may be capable of buffering the detrimental effects of chronic stress [6]. Hence, the increase in local and systemic expression of growth factors, notably brain-derived neurotrophic factor (BDNF), has been commonly associated with improvements in cognitive functioning, as well as in anxiety- and depression-related behaviors [7]. Accordingly, the ability of aerobic exercise to enhance BDNF release and function at the synapse, promoting dendritic spine integrity and activating other cellular pathways, is a cornerstone of the brain processes necessary to repair and reorganize the neuroplasticity altered during the course of mood disorders [8,9]. In addition, other growth factors, such as insulin-like growth factor-1 and vascular endothelial growth factor, have been shown to play an important role in BDNF-induced effects on neuroplasticity, as well as to exert neuroprotective effects of their own, contributing to the beneficial effects of exercise on the brain [10].
On the other hand, exercise appears to have a blunting effect on the hypothalamic-pituitary-adrenal (HPA) axis and the sympathetic nervous system. This blunting of stress responsiveness seems to contribute to reducing emotional, physiological and metabolic reactivity, as well as to increasing positive mood and psychological well-being [6]. Finally, another biological target of aerobic exercise is the immune system. Thus, higher physical activity has been associated with lower inflammatory cytokine responses to a mental stressor, along with greater parasympathetic control [11]. Moreover, regular exercisers also showed attenuated leucocyte trafficking and adhesion molecule expression in response to a mental stressor compared with less physically active individuals [12]. In summary, these findings are consistent with the concept of physiological toughening as a mechanism by which regular exercise can improve stress tolerance by optimizing neuroendocrine and physiological responses [6].

Obviously, the expression of growth factors and neuroplasticity are promising avenues of research with the potential to elucidate the mechanisms of how aerobic exercise works. Concerning this, new lines of research have started to focus on the relationship between the FK506-binding protein 5 (FKBP5) gene and BDNF, because both are expressed in, and affect the functioning of, brain areas such as the hippocampus, amygdala and prefrontal cortex (PFC), which are involved in the control of the stress response [13,14]. Genetic studies revealed associations between stressful life events and alterations in the HPA axis that were mediated, in part, by gene × environment interactions involving FKBP5 and BDNF polymorphisms [9,15,16]. Consequently, would it also be possible that interactions between genes and eustressors, such as aerobic exercise, could improve HPA functioning and promote resilience-related behaviors through epigenetic mechanisms? The present CN perspective has the aim of analyzing the synergistic effects of BDNF and FKBP5, as a still unknown target of aerobic exercise, in promoting resilience to cope with stressful situations.

THE UNKNOWN ROLE OF FKBP5 PROTEIN IN THE POSITIVE EFFECTS INDUCED BY AEROBIC EXERCISE

In the past decade, FKBP5 (OMIM 602623) has emerged as a promising genetic candidate for investigations of vulnerability to mood and anxiety disorders owing to its involvement in regulating the sensitivity of the GR [17]. Elevated FKBP5 levels lead to decreased negative feedback regulation of the HPA axis and GR resistance, which is probably responsible for a dysregulated stress response [18]. Moreover, the expression of FKBP51 correlates with plasma BDNF levels in depressed patients [19]. More precisely, the inhibition of the GR negatively affects BDNF-induced TrkB phosphorylation and its downstream signaling pathways, whereas a short activation of the GR is associated with the long-lasting BDNF-delivered mechanisms required for memory consolidation [20]. Additionally, and as we mentioned before, BDNF is involved in the regulation of synaptic plasticity by pre- and post-synaptic mechanisms. Potential downstream targets of BDNF are the Synapsins, a family of presynaptic phosphoproteins, which affect the proportion of vesicles that are available for release [21]. Several studies have found that BDNF increases Synapsin phosphorylation, thus enhancing the availability of vesicles and facilitating neurotransmitter release [22,23].
Interestingly, the presynaptic vesicle protein Synapsin has been shown to be an important candidate molecule for modulating FKBP5 and reducing stress responsiveness [24]. It has been found that the expression of FKBP51 and Synapsin is regulated in opposite directions not only in the mouse PFC, but also in the PFC of schizophrenic patients, who are generally known for exhibiting altered stress-coping behavior [24]. On the other hand, a recent study revealed a critical role of FKBP51 in mBDNF secretion and suggested the involvement of mBDNF in the performance of stress-coping behavior after the administration of the antidepressant S-ketamine [25]. Specifically, and contrary to our expectations, these authors found that the enhancement of BDNF in the extracellular space after S-ketamine administration was absent in FKBP51-deficient mice. This effect is plausible if we consider that this protein plays a double role in mediating responses to stimuli with both positive (eustressors) and negative (stressors) characteristics [26]. Likewise, the antidepressant effect of paroxetine was related to an enhancement of both BDNF and FKBP5 [19]. Nevertheless, although the mechanism by which FKBP5 is able to modulate mBDNF levels is still unknown, it has been proposed that its interaction with NMDA receptors, as well as with inhibitory synapses in brain regions such as the hippocampus, could affect neuronal activity and consequently BDNF levels [27,28].

Regarding the direct modulation of FKBP5 by aerobic exercise, this is a research field that has scarcely been explored. A recent study found an increase in the gene expression of FKBP5 in relevant limbic areas (e.g., mPFC, insular cortex and hippocampus) after a protocol of wheel-running [29,30]. It is possible that the enhancement of FKBP5 after running is induced by the increase in glucocorticoids (GCs) owing to the stressful but positive nature of aerobic exercise. Previous studies have found that FKBP5 expression can be induced by GCs and that it is a very accurate measure of GR regulation and signaling, constituting an appropriate marker of HPA flexibility [31,32]. Thus, when GCs enter the cytoplasm, they bind to the GR-chaperone complex, favoring the exchange of FKBP5 for FKBP4, which allows GR translocation to the nucleus and promotes the transcriptional activity of many genes involved in the feedback regulation of the HPA axis. Hence, the greater release of GCs caused by exercise is compensated for by the increase in FKBP5 complexes, whose exchange for FKBP4 favors the regulation of the stress axis. Therefore, the increased expression of FKBP5 mediated by GCs is considered an ultrashort, intracellular negative feedback loop that regulates intracellular GR sensitivity [33] (Fig. 1).

In addition, to understand the paradoxical increase of GCs induced by exercise, it has also been described that GCs released into the blood eventually reach the mPFC, elevating dopamine release, upregulating BDNF [34] and inducing control- and coping-related behaviors [35,36]. Additionally, exercise has rewarding effects related to activation of the dopaminergic striatal circuitry and the induction of stress resistance [37]. In contrast, chronic stress and depression have been associated with an overall reduction in dopamine neurotransmission in areas such as the PFC, VTA and nucleus accumbens [38]. In this sense, region-specific effects of FKBP5 have been reported by previous research.
For example, mice lacking the Fkbp5 gene show a stress-induced decline in Synapsin expression in the prefrontal cortex but not in the hippocampus, and selective Fkbp5 silencing in the amygdala was shown to confer resilience to restraint stress exposure [32].

CONCLUSION

Most people are confronted with stressful situations at some point in their lives and do not develop mental disorders as a result. This ability to deal with and overcome adversity involves the complex construct of resilience. Several resilience-promoting avenues have been described, the performance of regular physical activity being one of them. It exerts antidepressant and anxiolytic-like effects by toughening the physiological and neuroendocrine mechanisms involved in the negative feedback of the HPA axis. Hence, we propose a scarcely explored pathway mediated by exercise-induced increases in GCs and BDNF, which through their action on the FKBP5 chaperone could result in the transcription of genes involved in resilient behavior to cope with future stressors. Thereby, studies with animal models carrying mutations targeting BDNF-sensitive GR phosphorylation sites could be an adequate approach to analyze the physiological and behavioral importance of these modifications, as well as the pleiotropic effects of FKBP5 depending on the stressor applied.

CONSENT FOR PUBLICATION

Not applicable.

FUNDING

None.

CONFLICT OF INTEREST

The authors declare no conflict of interest, financial or otherwise.

Fig. (1). Hypothetical mechanisms of the action of stress and exercise in the BDNF- and FKBP5-mediated GR phosphorylation signaling pathway. Aerobic exercise increases BDNF expression, which in turn promotes GR phosphorylation at serine residues through the mitogen-activated protein kinase (MAPK) signaling pathway [39]. Hence, exercise could boost MAPK signaling, enhancing the level of the activated form of this transcription factor, as previous studies have found after only a week of voluntary running [40]. In consequence, activation of a TrkB-MAPK pathway could trigger GR phosphorylation and the expression of genes involved in resilience-related neuroplasticity.
Screening for Psychopathology Using the Three-Factor Model of the Structure of Psychopathology: A Modified Form of the GAIN Short Screener

The goal of this paper is to develop a valid and reliable screening tool for mental health that is based on an empirically and conceptually valid structure of psychopathology. Recently, several studies of the structure of psychopathology found a general factor and three specific factors: internalizing, externalizing and thought disorder. We adapted the previously validated GAIN Short Screener to include the thought disorder factor that was not included in its original version and further developed its internalizing subscale. We conducted exploratory and confirmatory factor analyses of the new adapted measure and produced a 20-item screening tool that parsimoniously represents the three factors. The adapted screener and its subscales were found to have good reliability, stability, and structural validity in Egyptian and Polish samples. Additionally, all its subscales correlated significantly with different trauma types and with cumulative trauma, and negatively with self-esteem. The new adapted measure is the first that is based on robust scientific evidence of the structure of psychopathology and can be used in a broad scope of settings.

Introduction

There is a lack of valid and reliable screening tools for psychopathology that are based on robust conceptual and empirical evidence of the structure of psychopathology. There were no empirically validated conceptual models behind most of the existing measures. Most screening measures for psychopathology targeted either a specific disorder or general psychopathology. Most of the measures that screen for general psychopathology either utilized the diagnostic criteria of mental disorders (e.g., the Harvard Trauma Questionnaire) or targeted general psychopathology in a particular population (e.g., refugees; for instance, cumulative trauma disorders in refugees, Kira et al., 2012). The World Health Organization (WHO) (Beusenberg, Orley, & World Health Organization, 1994) developed a self-reporting questionnaire of 20 questions (SRQ-20) as a screening tool to detect common mental disorders (CMD). Several versions of the Self-Reporting Questionnaire (SRQ) were used in screening and research. The SRQ is not based on empirical or theoretical analysis of the structure of psychopathology. It includes only symptoms related to anxiety and depression. Mood, neurotic and psychotic disorders are also common, and there is a noticeable overlap of symptoms of depression, anxiety, fatigue, or somatic complaints in CMD. However, different versions added other items that represented psychotic symptoms (e.g., Youngmann et al., 2008). One of the measures widely used with refugees and torture survivors is the Harvard Trauma Questionnaire (HTQ) (e.g., Mollica et al., 1992). The HTQ may be a useful tool for measuring some syndromes, but it was not designed to be a comprehensive screening tool for psychopathology. The same critique that targeted early SRQ versions applies to the HTQ, as it does not measure, for example, dissociation, psychosis and other mental health syndromes especially present in multiply traumatized populations (Kira et al., 2012).
Co-morbidity of mental disorders is commonly found in clinical and epidemiological studies (e.g., Kessler, Chiu, Demler, & Walters, 2005; Angold, Costello, & Erkanli, 1999). Research suggests the existence of a general psychopathology factor, which is associated with a high risk of developing a broad range of internalizing, externalizing and psychotic mental disorders (e.g., Lahey et al., 2012). In one study, a general latent factor based on repeated assessments of psychiatric symptoms over a 20-year period explained on average 42% of the variance in disorders (Caspi et al., 2014; Carragher, Krueger, Eaton, & Slade, 2015). In another large multi-ethnic adult sample, a general factor was estimated to explain between 29% and 67% of the variance, depending on the diagnosis (Kim & Eaton, 2015). The general psychopathology factor was associated with lower IQ, higher negative affectivity, and lower effortful control (Neumann et al., 2016). Importantly, the general psychopathology factor showed a significant single-nucleotide polymorphism (SNP) heritability of 38% (Neumann et al., 2016). Most of the studies above used DSM-oriented scales; however, the general psychopathology factor was also replicated in studies using problem scales/items in general population samples (Laceulle, Vollebergh, & Ormel, 2015; Murray, Eisner, & Ribeaud, 2016).

These new advances in discovering the components of the structure of psychopathology gave us an opportunity to develop a psychopathology screening tool for adults and adolescents from various populations, based on the robust, empirically validated conceptual model of psychopathology that represents its three main specific factors: internalizing, externalizing and thought disorder. To develop such a measure, we previously adapted and utilized the GAIN Short Screener (GAIN-SS) (Dennis, Chan, & Funk, 2006) in several studies. The GAIN-SS was developed initially to screen for psychopathology in adults and adolescents and includes measures for externalizing, internalizing and addiction, but does not include a measure of thought disorder. The GAIN-SS is a screener that identifies clients (adults and adolescents) who are likely to have mental health disorders, issues with crime/violence, and issues with substance use. In the first adaptation of the measure, we added items to the internalizing section that are related to posttraumatic stress disorder symptoms, as the original version of the internalizing subscale did not include different PTSD symptoms. We also added a subscale for psychoticism and dissociation using items from the psychoticism/dissociation subscale of the cumulative trauma disorders scale (Kira et al., 2012).

The measure in its initially adapted form included the three primary specific components of psychopathology: internalizing, externalizing and thought disorder (psychoticism) (e.g., Caspi et al., 2014; Laceulle, Vollebergh, & Ormel, 2015). The initially adapted measure included 32 items (internalizing: nine questions; externalizing and substance abuse: 14 items; thought disorder or psychoticism: nine items) (see Appendix 1). The participant is asked to indicate whether the behavior (or feeling) happened in the past month (scored 4), occurred in the last 2-3 months (scored 3), in the last 3-12 months (scored 2), a year or more ago (scored 1), or never happened (scored 0). High scores indicate potentially higher symptoms in these areas.
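As an illustration of the recency-based scoring rule just described (the actual item wording is given in Appendix 1; the response list here is a hypothetical stand-in), a minimal Python sketch:

```python
# Recency categories of the first adapted version, scored 0-4
RECENCY_SCORES = {
    "past month": 4,
    "2-3 months ago": 3,
    "3-12 months ago": 2,
    "1+ years ago": 1,
    "never": 0,
}

def subscale_score(responses):
    """Sum the recency scores over a subscale's items; higher totals
    indicate potentially higher symptoms in that area."""
    return sum(RECENCY_SCORES[r] for r in responses)

# Hypothetical internalizing responses for one respondent (nine items)
internalizing = ["past month", "never", "2-3 months ago", "never",
                 "never", "3-12 months ago", "past month", "never", "never"]
print(subscale_score(internalizing))  # 13
```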
The authors utilized the version that had been previously adapted in several studies (e.g., Kira, Shuwiekh, & Bujold-Bugeaud, 2017; Kira, Shuwiekh, Kucharska, Abu-Ras, & Bujold-Bugeaud, 2017; Kucharska, 2017) and proved to be useful, reliable and valid. The goal was to further develop, refine and evaluate its psychometric properties. Accordingly, we are assessing this previously adapted screener, which measures the three factors identified as the specific components of psychopathology, to make it a more parsimonious and focused screening tool. The goal is to trim the previously adapted version of the GAIN Short Screener. We initially deleted two items from the externalizing subscale to make it more parsimonious. We conducted our current analysis on the remaining 30 items.

Research Questions

1) Does the adapted version of the GAIN Short Screener have adequate reliability and stability?

2) Does it have good construct heuristics, being associated with different trauma types and with cumulative trauma, and negatively associated with self-esteem?

3) Does exploratory and confirmatory factor analysis support the structural validity of the three specific factors of internalizing, externalizing and thought disorder in Western (Polish) and non-Western (Egyptian) samples?

The First Sample (The Egyptian Sample)

1) Participants

Participants completed the same measures that we will describe in the following section.

The Measures Used in the Two Studies

In addition to the modified version of the GAIN-SS, the measures used included the following: The Cumulative Trauma Scale CTS-S (short form) is a measure based on the development-based trauma framework (DBTF) (e.g., Kira, 2001; Kira, Ashby et al., 2013; Kira, Fawzi, & Fawzi, 2013; Kira, Lewandowski et al., 2008; Kira, Lewandowski, Chiodo, & Ibrahim, 2014; Kira, Omidy, & Ashby, 2014). The DBTF identifies and measures different dimensions of individual development that may be affected by stress and traumatic stress (i.e., attachment, personal, collective and role identities, and interdependence). The CTS-S is a 32-item instrument that measures cumulative trauma in terms of occurrence, frequency, type, and appraisal. If a participant denotes that she/he has experienced the traumatic event, then he/she is asked to describe her/his appraisal of its effect on a 7-point Likert-type scale (1 = extremely positive; 7 = extremely negative). The CTS-S includes two general subscales for cumulative trauma dose, occurrence and frequency of experience, and two appraisal subscales, negative and positive appraisal. Four subscales for each of the trauma types can be obtained.

The alpha for the cumulative trauma occurrence scale was .88 in the Egyptian data and .91 in the Polish data. The measure was used to test whether the adapted GAIN Short Screener and its sub-tests would be significantly associated with different trauma types.

The Rosenberg self-esteem scale (RSES) is a 10-item scale that measures global self-esteem (Rosenberg, 2015). Each item is rated on a 4-point Likert-type scale from strongly agree to strongly disagree and scored from 0 to 3. The scale is divided into five positively worded and five negatively worded statements. The RSES has been translated and adapted into various languages, including Arabic.
Rosenberg reported good psychometrics for the scale, with reliability ranging from .85 to .88. In previous Arabic samples, alpha was .75. Test-retest using an independent sample of 35 males with a four-week interval yielded an excellent stability coefficient of .983. In the Egyptian study, its alpha was .72; in the Polish study, its alpha reliability was .78. The measure was used to test whether the adapted GAIN Short Screener and its sub-tests would be negatively associated with self-esteem.

Translation into Polish Procedures

Self-esteem scale: the Polish adaptation was published in 2008 (Dzwonkowska, Lachowicz-Tabaczek, & Laguna, 2008); the scale has good reliability and validity and is widely used in Poland. For the other scales: first, certified Polish translators translated the tools into Polish, which were then back-translated into English, and a third expert compared the initial and final English versions. No significant differences were found in the case of the discrimination scales and the authoritarianism scale. Minor differences were found in the cumulative trauma scale and the GAIN externalizing scale, but the third expert decided that the items have the same meaning, as the words used have a similar semantic field.

Translation into Arabic Procedures

Some of these measures had been previously translated into Arabic and proved to have adequate reliability and validity with Arabic clients in previous studies, as briefly described when introducing them in the measures section. We translated the other measures (modified GAIN and F scale) into Arabic. The committee that translated the measures consisted of three bilingual professionals who conducted the forward translation and two different bilingual professionals who contributed to the reverse translation. The translations were compared, and the differences were discussed until the committee reached a consensus on the final version.

Statistical Analysis Strategy

The data were analyzed utilizing IBM-SPSS 22 and Amos 22 software. We split the Egyptian sample into two sub-samples (N = 261 each). We conducted exploratory factor analysis (principal axis factoring method) of the adapted GAIN Short Screener items in the first Egyptian sub-sample and confirmatory factor analysis on the second sub-sample. Because internalizing, externalizing and thought disorder are assumed to be correlated with a higher second-order factor, we conducted an oblique rotation. We used the scree test (Cattell, 1966) and parallel analysis (O'Connor, 2000) to help determine the number of factors. Confirmatory factor analysis was conducted on the resulting three factors. Following Byrne's (2012) recommendations, the criteria for good model fit were a non-significant χ², χ²/d.f. < 2, comparative fit index (CFI) values > 0.90, and root-mean-square error of approximation (RMSEA) values < 0.06. We investigated the reliability of the sub-scales with Cronbach's alpha. To test predictive validity, we conducted zero-order correlations to explore the linear relationships between the measured constructs.
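Two of the steps above, Cronbach's alpha and parallel analysis, reduce to a few array operations. The authors used SPSS and Amos, so the following Python sketch (assuming an n-respondents × k-items score matrix) is illustrative only:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

def parallel_analysis(data, n_sims=100, seed=0):
    """Retain as many factors as there are observed eigenvalues that
    exceed the mean eigenvalues of random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    observed = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    random_eigs = np.array([
        np.linalg.eigvalsh(np.corrcoef(rng.standard_normal((n, k)),
                                       rowvar=False))[::-1]
        for _ in range(n_sims)
    ])
    return int(np.sum(observed > random_eigs.mean(axis=0)))
```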
Reliability and Correlations

The externalizing, internalizing, and thought disorder subscales correlated significantly with all trauma types and with cumulative trauma. All three subscales correlated negatively with self-esteem. The three subscales were highly correlated with each other. Table 1 provides the zero-order correlations between the mentioned variables in the Egyptian sample. Similar correlations between the three subscales and cumulative trauma and different trauma types were found in the Polish sample. The correlation results provide initial evidence of the predictive validity and construct heuristics of the subscales.

Structural Validity

Exploratory factor analysis of both the Egyptian sub-sample and the Polish sample yielded three factors, with all items loading significantly on the first factor (before rotation), which may validate the one-factor solution of psychopathology obtained in previous studies. The Oblimin rotation produced three clear-cut factors that represented the three constructs (thought disorder, internalizing and externalizing) (Table 2; see also Appendix 1). The three factors accounted for 48.29% of the variance. The first factor loaded on thought disorder items and accounted for 33.67% of the variance. The second factor loaded on internalizing items and accounted for 10.06% of the variance. The third factor loaded on externalizing items and suicidality and accounted for 4.56% of the variance. We deleted the items that had loadings less than .40, as well as those that had cross-loadings (10 items), and reanalyzed the remaining 20 items. The results indicated a clean three-factor solution with a different ordering of factors (Table 3; see also Appendix 2; note that the bolded items in Table 2 either have loadings less than .40 on the factor or have cross-loadings, and were deleted in the second factor analysis). This analysis yielded three factors that accounted for 54.64% of the variance. The first factor included the items related to externalizing and accounted for 35.89% of the variance. The second factor included the items related to internalizing and accounted for 12.22% of the variance. The third factor included the items related to thought disorder and accounted for 6.54% of the variance.

Additionally, we conducted confirmatory factor analysis on the second Egyptian sub-sample and the Polish sample. For the Egyptian sample, the three-factor, twenty-item structure fitted adequately (χ² = 322.117, d.f. = 159, p = .000; CFI = .932; RMSEA = .063). Figure 1 includes the results of the confirmatory factor analysis for the GAIN-20 Short Screener in the Egyptian sample. The confirmatory factor analysis using the Polish sample did not fit adequately in the initial analysis. However, the modification indices strongly suggested moving the suicidality item from the externalizing items to the internalizing items.
Conclusion and Future Directions

We conclude that the modified GAIN-20 is a structurally valid tool for the screening of mental health that is based on rigorous scientific evidence of the structure of psychopathology. The measure's subscales have good reliability and stability in both Western and non-Western populations. The goal of this study was to adapt the GAIN Short Screener to measure the three factors that were found in the scientific studies of the structure of psychopathology. The objective was to establish a parsimonious screening tool that may be used to screen for psychopathology in adults and adolescents. We emphasize that the significance of the current study lies in the fact that it is the first to provide a tool for mental health screening that is based on the scientific evidence of the structure of psychopathology.

Limitations

While the current study was an important first step, it has several limitations. For example, the fact that different traumatic stressors significantly correlated with the three subscales does not provide specific predictive validity information for the subscales. To establish their predictive validity, clinical samples should be used. Our samples were college students, and it was difficult to establish the predictive validity of the sub-scales using non-clinical samples. Additionally, the externalizing subscale could be reconstructed to include more diverse items to be tested in future studies. The current study is an initial step in devising a more accurate screening tool based on the scientific evidence of the structure of psychopathology. An expanded, well-funded study may be needed to develop it further, to increase the representativeness of its items of all aspects of psychopathology, and to establish it as a standard screening measure for psychopathology in the field. Regardless, the modified GAIN-20 screener, in its current form, is a valid and reliable tool to screen for psychopathology based on rigorous scientific evidence of its structure.

Appendix 1

Adapted GAIN Short Screener (A-GAIN-SS-30) for Internalizing, Externalizing, and Thought Disorder (the first modified version).

The following questions are about common psychological, behavioral, and personal problems. These problems are considered significant when you have them for two or more weeks, when they keep coming back, when they stop you from meeting your responsibilities, or when they make you feel like you can't go on. After each of the following questions, please tell us the last time, if ever, you had the problem, by answering whether:

• It was in the past week (5)
• It was in the past month (4)
• 2 to 3 months ago (3)
• 4 to 12 months ago (2)
• 1 or more years ago (1)
• Never (0)

1. When was the last time that you had significant problems with:
a) Feeling very trapped, lonely, sad, blue, depressed, or hopeless about the future?
b) Sleep trouble, such as bad dreams, sleeping restlessly, or falling asleep during the day?
c) Feeling very anxious, nervous, tense, scared, panicked, or like something bad was going to happen?
d) Becoming very distressed and upset when something reminded you of the past?
e) Thinking about hurting yourself, ending your life, or committing suicide?
f) Having unexpected or disturbing memories?
g) Trying to avoid reminders of painful past events?
h) Jumping or being very frightened by sudden loud noises?
i) Feeling out of touch with your surroundings?
2. When was the last time that you did the following things two or more times?
a) Lied or conned to get things you wanted or to avoid having to do something
b) Had a hard time listening to instructions at school, work, or home
c) Had a hard time waiting for your turn
d) Bullied or threatened other people
e) Started physical fights with other people
f) Took something from a store without paying for it
g) Lost your temper
h) Were easily irritated
i) Failed to respect those who may represent authority
j) Got irritated, to the extent that you did not care about safety

3. When was the last time that:
a) you spent a lot of time either getting alcohol or other drugs, using alcohol or other drugs, or recovering from the effects of alcohol or other drugs (e.g., feeling sick)?

Appendix 2

5) Were a bully or threatened other people
6) Thinking about hurting yourself, ending your life, or committing suicide
7) Took something from a store without paying for it

B. When was the last time that you had significant problems with:
8) Feeling very anxious, nervous, tense, scared, panicked, or like something bad was going to happen
9) Sleep trouble, such as bad dreams, sleeping restlessly, or falling asleep during the day
10) Becoming very distressed and upset when something reminded you of the past
11) Trying to avoid reminders of painful past events
12) Feeling very trapped, lonely, sad, blue, depressed, or hopeless about the future
13) Having unexpected or disturbing memories

C. When was the last time that you:
14) Felt as if you are almost two different people?
15) Felt that you do not have enough control over your responses and reactions
16) Felt apathetic, with no emotion
17) Felt people/enemies are following you any place you go
18) Felt you are in two or more different places at the same time
19) Got irritated, to the extent that you did not care about safety
20) Seeing or hearing things that no one else could see or hear

2) Procedure

The research team administered the questionnaire to participants in Arabic from September to November of 2015. While the questionnaires administered to males and females were different, all included the same measures utilized in the current analysis and were combined into one data set. Participation was voluntary. Each participant was informed about the general goals of the study and signed an informed consent to participate. Each person took between 40-50 minutes to complete the questionnaire. The Institutional Review Board of the authors' institution approved the research as part of a cross-cultural study of gender relations among college students.

The Second Sample (The Polish Sample)

1) Participants

The Polish sample combined two data sets (one including only males and the other only females) of 467 college students from two Polish cities: Warsaw and Wrocław. Females were 59.3% of the sample. Study assistants recruited participants via opportunity sampling at university campuses during the breaks between classes. The age range in the sample was 18-34 (M = 22.39, SD = 2.81). All participants were residents of one of the cities and enrolled as students at the time of the data collection. Participants signed an informed consent and received no compensation for their participation in the study.

2) Procedure

The research team administered the questionnaire in Polish from January to February 2016. Each participant was informed
about the general goals of the study and signed an informed consent to participate. Each person took between 40-50 minutes to complete the questionnaire. While the questionnaires administered to males and females were different, all included the same measures utilized in the current analysis and were combined into one data set.

The two data sets included negative and positive appraisals. The test is intended to measure at least seven major trauma types: collective identity trauma (3 items), personal identity trauma (6 items), survival trauma (6 items), attachment trauma (2 items), secondary trauma (7 items), achievement trauma (2 items) and gender discrimination (2 items). Collective identity trauma includes trauma related to exposure to war and torture and discrimination based on race, ethnicity, or national origin. Personal identity trauma includes trauma related to sexual abuse, rape, incest, and being robbed. Attachment trauma comprises abandonment by parents. Survival trauma includes car accidents, life-threatening illnesses, and natural disasters. Achievement or role identity trauma is intended to measure traumatic stressors related to the attainment of life goals, such as success in school or business. Secondary trauma includes trauma related to having witnessed a traumatic event occurring to another individual or group and affecting social interdependence. Gender discrimination includes gender discrimination by parents (family) and gender discrimination by society and institutions. Gender discrimination items are worded to apply to both genders. In response to each item on the measure, participants are instructed to indicate their experience with a traumatic event on a 5-point Likert-type scale (0 = never; 4 = many times). If a participant denotes that she/he has experienced the traumatic event, he/she is then asked to describe her/his appraisal of its effect on a 7-point Likert-type scale (1 = extremely positive; 7 = extremely negative).

Figure 1. Confirmatory factor analysis for the GAIN-20 Short Screener in the Egyptian sample.

Table 1. Zero-order correlations between the modified GAIN's subscales, self-esteem, and different trauma types in the Egyptian sample.

Table 2. Factor structure of the modified GAIN-30 screener in the Egyptian sub-sample.

Table 3. Factor structure of the modified GAIN-20 screener in the Egyptian sub-sample.
Food-Web Structure and Functioning of Coastal Marine Ecosystems: Alvarado Lagoon and Adjacent Continental Shelf, Northern Gulf of Mexico

Departamento de Pesquerías y Biología Marina, Instituto Politécnico Nacional, Centro Interdisciplinario de Ciencias Marinas, Avenida Instituto Politécnico Nacional s/n, Col. Playa Palo de Sta. Rita Sur, Postal Box 592, La Paz, Baja California Sur, ZP 23000, México

Centro de Investigaciones Biológicas del Noroeste, Avenida Instituto Politécnico Nacional No. 195, Col. Playa Palo de Sta. Rita Sur, Postal Box 128, La Paz, Baja California Sur, ZP 23096, México

Facultad de Estudios Superiores Iztacala, Universidad Nacional Autónoma de México, Laboratorio de Ecología, Avenida de Los Barrios No. 1, Los Reyes Iztacala, Tlalnepantla, Estado de México, ZP 54090, México

CONACYT-Facultad de Ciencias del Mar, Universidad Autónoma de Sinaloa, Paseo Claussen s/n, Mazatlán, Sinaloa, ZP 82000, México

INTRODUCTION

Coastal ecosystems are highly productive, vulnerable, and, particularly on tropical coasts, greatly diverse with respect to both species and habitats. They display highly complex food webs, characterized by an extraordinary interchange of species between interconnected ecosystems. A noteworthy advance has been the recognition that both the structure and the function of the trophic network are dynamic properties of the system. A food web describes species interactions and is an important part of community structure [1-5].

Currently, assessing which processes and mechanisms determine the structure of natural communities is an area under intense research [6-18]. Although the processes involved in the regulation of communities, such as predation, competition, mutualism, parasitism, etc., have long been known in the field of ecology, the relative contribution of these processes to the regulation of community structure remains hypothetical in most cases, especially in natural communities.

Network analysis has established that the structure of natural communities is not a random result [6,9,19,20], but rather the result of the precise combination of several processes. Many of these regulatory processes emerge from the interaction between three or more species (higher-order interactions, indirect effects, interaction modifications), so they cannot be observed or inferred by studying the interactions between pairs of species [9,21-25]. In this context, studies based on trophic interactions are the first step in the development of hypotheses about the homeostatic feedback of natural communities; consequently, the comparative analysis of diet composition allows building food-web schemes that serve as a basis for proposing diverse hypotheses on the regulation and dynamics of the system under study.

On the other hand, the study of the mechanisms that determine the structure and dynamics of marine communities under fisheries exploitation is particularly relevant because of the importance of the adequate management of exploited populations; even so, the extensive use of analytical approaches remains common [26-30].

Furthermore, the emergence of large-scale fisheries around the world over the last thirty years has caused a shift in management approaches, resulting in the gradual incorporation of variables not directly related to the stocks under exploitation (e.g.,
the influence of other species that can modify the abundance of commercial species through interspecific relationships such as predation or competition); that is, the incorporation of those variables into dynamic models of fish stocks [31].

The understanding of food-web changes is one of the major issues of modern ecology [32-37]. Analyzing how groups are assembled and develop gives insight into the organization of biological ecosystems. In particular, one of the main focuses of food-web theory is to understand how structural properties change with the scale of the system [38,39]. Different methodological approaches have been proposed to quantify the magnitude of the relationships between species or species groups, and to assess the relative importance each component has for the maintenance of the overall system [40]. The most common approach employed in the last four decades has been biomass-balance (Ecopath) models [41,42]. Habitat damage and the mishandling of resources can increase mortality along the trophic web [43-46] and modify the relationships between functional groups [47,48].

Particular attention has been paid to the development of ecosystem models (mass-balance, Ecopath) that synthesize the trophic interaction patterns of a particular food web. These models have been widely accepted by the international scientific community, having been applied to more than 150 ecosystems around the world [41,42]. Ecopath is a theoretical approach that encourages the development of trophic models of aquatic ecosystems through mass balancing; it also permits the examination of various aspects of the resulting food-web network. The input data required include estimates of biomass, production, consumption, diet and harvests for each group considered [49].

The study of ecological networks has centered on their internal structural and/or functional characteristics (e.g., biological processes, inter- or intraspecific relationships, feeding connections). A noteworthy advance of those lines of investigation has been the acknowledgement that food webs are closely related to the dynamics of the whole system. However, few efforts have been made to study the structure and association of interconnected coastal marine biological communities, in which the exchange of energy (matter) through trophic flows is recognized.

On the other hand, commercial fishing catches have decreased substantially in recent years in the south-central Gulf of Mexico [50]. As a result, human impacts on the local environment are of increasing concern [51,52]. In this sense, Ecopath models represent an alternative modelling approach to evaluate changes, whether structural or functional, in coastal marine ecosystems. Direct and indirect effects of species on others in the system can also be explored, as well as the overall functioning of the ecosystem. However, it is important to note that our knowledge of the direct and indirect effects of species is limited, and it is therefore necessary to develop a greater number of models to find ecosystem attributes that can be used as biological reference points, similar to those used in the models conventionally employed in fisheries biology [53].
The main activity in Mexican waters is the shrimp trawl fishery, with a large bycatch most of the time consisting of juveniles of commercially important teleost fishes, such as croakers, pompanos, snappers, groupers, etc., most of which is discarded. In recent years, fisheries production has declined [50], as have some biological parameters, such as the average size of individual fish [54]. Also, data presented by Abarca-Arenas et al. [55,56] on the bycatch of shrimp trawlers and observations by Cházaro-Olvera et al. [51] suggest a marked increase in the abundance of bycatch, including portunid crabs, in the shrimp fishery of the Gulf of Mexico.

In this study we developed a trophic model to characterize the structure and function of two coastal marine ecosystems, Alvarado lagoon (Mexico) and the adjacent continental shelf, both considered important areas for penaeid shrimps and demersal fish. These ecosystems are characterized by the interchange of biota and, therefore, of matter and energy through feeding relationships, since they are connected through an artificial mouth in the north at Camaronera lagoon and a natural mouth in the southern part of the system. To address this objective, we examined 66 functional groups based on estimates of biomass, production, consumption, diet and harvest for each group, using the Ecopath biomass-balance model, which aids in synthesizing the trophic interaction patterns of a particular ecosystem. This approach ensures a better description of the trophic relationships, energy flows, and transfer efficiency of the food web.

Study Area

Alvarado lagoon is a medium-sized coastal lagoon extending about 17 km along the northwestern central Gulf of Mexico; it is composed of four minor lagoons: Alvarado, Tlalixcoyan, Buen País and Camaronera (Fig. 1). The exchange of water masses with the adjacent sea occurs through an artificial mouth in the north of the Camaronera lagoon (a 40-meter-wide channel with two tubes of 2 meters in diameter each) and through a natural mouth (a shipping channel 0.45 km wide) in the southern part of the system. Alvarado lagoon is shallow, averaging two meters in depth. Within the lagoon the rivers Papaloapan, Blanco and Acula converge, releasing masses of fresh water seasonally [57-59]. A great interaction with the adjacent system is recognized, which contributes to its high biological productivity. Since 2003 the Alvarado lagoon system has been recognized as a RAMSAR site, and it is believed to sustain the biggest population of manatees (Trichechus manatus) in Veracruz State [60].

The adjacent continental shelf of Alvarado is located in front of the coastal plain of Veracruz, Mexico (Fig. 1). The environment of the continental shelf is influenced by fresh water from nearby rivers (i.e., the Papaloapan, Coatzacoalcos, and Panuco) that drain into several coastal lagoons and estuaries [61]. One of the largest is the Alvarado lagoon, which includes an adjacent platform composed primarily of clay and sand [51,62]. These particular hydrobiological conditions explain the elevated levels of organic material and nutrients reported for the zone [63]. In recent years, the continental shelf off Alvarado has been subjected to considerable environmental stress resulting from human settlement along the coast (increasing the discharge of wastewater) and a variety of economic activities undertaken in the region (i.e., fishing, the transportation and extraction of crude oil, and recreational activities) that have caused habitat fragmentation [64,65].
Three distinct and well-defined seasons are recognized in the study area: a hot, dry spring (March-May); a hot, rainy summer (June-September); and the period between October and February, which is characterized by strong northerly winds (> 80 km h⁻¹), limited precipitation (20-60 mm), and cooler temperatures (< 22 °C).

Trophic Model of Biomass Balance

The Ecopath model of biomass balance [41,66-69] is based on the assumption that the production of a given group of prey (i) is equal to the biomass lost via fishing or exportation, predation (natural mortality), or other sources of mortality. Biomass balance can be expressed using the following equation:

B_i (P/B)_i EE_i − Σ_j B_j (Q/B)_j DC_ij − Y_i = 0

where B_i is the biomass of functional group i during a particular period for i = 1…n functional groups; P/B is the biomass production rate, which is equal to the total instantaneous mortality rate (Z) at equilibrium [70]; and EE is the ecotrophic efficiency (the portion of production that is consumed, fished, or exported). Y_i is the catch per unit of time and space (Y_i = F_i B_i, where F_i is the instantaneous mortality rate due to fishing), B_j is the biomass of predator j, Q/B is the consumption/biomass rate, and DC_ij is the portion of the diet of a given predator (j) occupied by a particular type of prey (i).

For each component included in the biomass-balance model, the following data must be provided: production/biomass (P/B), consumption/biomass (Q/B), the portion of the habitat area occupied by the group, the biomass over the entire habitat area (t km⁻²), diet composition, and mortality due to commercial fishing. The construction of the model does not require that all parameters be input for all groups or trophic components. Ecopath relates the production of a given group to the remaining groups via the alimentary components, permitting the estimation of any missing parameter (a minimal numerical sketch of this balance is given after this section). This process is based on the assumption that the production of a particular group ends up in some part of the system.

Functional Groups

The model is composed of 66 functional groups (Annex S1): marine mammals (two groups), marine birds (one group), fish (38 groups), crustaceans (five groups), mollusks (six groups), polychaetes (two groups), echinoderms (one group), other invertebrates (meiobenthos, two groups), zooplankton (two groups), primary producers (four groups), detritus (two groups), and shrimp bycatch (one group). Table 1 summarizes the data for each of the input parameters for each group: commercial catch (Y, t km⁻² year⁻¹), biomass (B, t km⁻²), production/biomass (P/B, year⁻¹; equal to total mortality, Z), consumption/biomass (Q/B, year⁻¹), ecotrophic efficiency (EE), and diet. Annex S2 presents the parameters used to estimate Q/B for the fish groups, while Annex S3 is the adjusted diet matrix.
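As a minimal numerical sketch of the master equation above (assuming all inputs except EE are known for every group; in practice Ecopath can solve for any one missing parameter per group), the ecotrophic efficiencies follow directly:

```python
import numpy as np

def ecotrophic_efficiency(B, PB, QB, DC, Y):
    """Solve the Ecopath master equation for EE_i when the other
    inputs are known:
        B_i*(P/B)_i*EE_i = Y_i + sum_j B_j*(Q/B)_j*DC_ij
    DC[i, j] is the fraction of predator j's diet made up of prey i.
    """
    B, PB, QB, Y = map(np.asarray, (B, PB, QB, Y))
    predation = DC @ (B * QB)  # total consumption of each prey i
    return (Y + predation) / (B * PB)

# Toy three-group example (hypothetical values, t km^-2 and year^-1)
B = [1.0, 5.0, 20.0]                 # predator, forage fish, zooplankton
PB = [0.5, 2.0, 10.0]
QB = [3.0, 8.0, 30.0]
DC = np.array([[0.0, 0.0, 0.0],      # nothing eats the top predator
               [0.9, 0.0, 0.0],      # forage fish: 90% of predator diet
               [0.1, 1.0, 0.0]])     # zooplankton eaten by both
Y = [0.1, 1.0, 0.0]                  # landings
print(ecotrophic_efficiency(B, PB, QB, DC, Y))
```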
Pedigree of the Input Data

A model's pedigree is a summary of the uncertainty associated with its information sources [41,42]. A qualification (confidence) can be assigned to each data point input in the model (B, P/B, Q/B, catch, DC), based on the source of that data. For each input datum used in a given model, a choice can be made to describe the kind of data used (e.g., sampling-based, high precision; sampling-based, low precision; approximate or indirect method; guesstimate; from another model; estimated by Ecopath), and thus the confidence we can have in these data. By rating the confidence of each input data point, the model's global pedigree can be calculated as the average of the individual values [48]. The global pedigree value can be used for comparison with other models [69]. A model's pedigree is a measure of its quality based on the trustworthiness of the input data. The pedigree index P is calculated as this average:

P = (1/n) Σ_i Σ_j l_ij

where l_ij is the pedigree rating for model group i and parameter j, and n is the total number of functional groups.

Model Statistics

The biomass-balance model uses several different statistics to describe the structure of the ecosystem in energetic terms, including total flows, consumption flows, respiration flows, exportation, detritus, and net primary production. Ecopath estimates two global indices: 1) the omnivory index, which represents the average diet breadth of the consumers based on the average consumption of each consumer, and 2) the connectance index, which estimates the proportion of realized trophic ties with respect to the total number of possible connections. Moreover, Ecopath includes a routine for estimating the average trophic level of the commercial catch.

The trophic structure was aggregated into a Lindeman spine, an analysis of discrete trophic levels (TL) sensu Lindeman [71], as proposed by Ulanowicz [72]. In this routine, the system was aggregated into a linear food chain in which import (on TL I only), consumption by predators, export, flow to detritus, respiration, and throughput were calculated for each TL. The detritus compartment was separated from the primary producers to show the amount of energy that flows through it. These flows were also represented by means of a flow diagram showing the trophic interactions between all groups within the ecosystem. The transfer efficiency (TE) is defined as the fraction of the total flows at each trophic level that is either exported or transferred to other TLs through consumption. The mean TE is calculated as a geometric mean of the TE in trophic levels II-IV [73].

Finn's Cycling Index

Finn's Cycling Index (FCI) [74] is the fraction of the ecosystem's throughput that is recycled. This index utilizes the Leontief matrix to assess the amount of material cycling within an ecosystem, and is calculated as:

FCI = (Σ_i z''_ii) / TST

where TST is the Total System Throughput, the summed term Σ_i z''_ii is the recycled flow, and z''_ii is the total flow from i that returns to i (without recycling through i en route) over all pathways of all lengths. The FCI varies from zero (no cycling) to 1 (full cycling), and is also an indicator of the system's maturity [75,76].
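Under one common formulation (Finn, 1980), the FCI can be computed from a compartment flow matrix via a Leontief-type inverse; conventions differ somewhat between authors, so this Python sketch is illustrative only:

```python
import numpy as np

def finn_cycling_index(flows, imports):
    """Finn's Cycling Index, one common formulation.
    flows[i, j] = internal flow from compartment j to compartment i;
    imports[i]  = external input to compartment i.
    Assumes every compartment has nonzero throughflow.
    """
    flows = np.asarray(flows, dtype=float)
    T = flows.sum(axis=1) + np.asarray(imports, dtype=float)  # throughflows
    G = flows / T[np.newaxis, :]             # G[i, j] = flows[i, j] / T_j
    N = np.linalg.inv(np.eye(len(T)) - G)    # Leontief-type inverse
    cycled = T * (np.diag(N) - 1.0) / np.diag(N)  # recycled throughflow
    return cycled.sum() / T.sum()
```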
Exploitation Status of the Fishery

The fisheries impacts were assessed by analyzing the mean trophic level of the catch (TLC); the exploitation rates (F/Z); the relative consumption of total production, representing the proportion of total production that is consumed within the system by all the functional groups; fishing mortalities (F); the gross efficiency of the fishery (GE, catch/net primary production); and the percentage of Primary Production Required (PPR), used to evaluate the sustainability of fisheries [77,78].

Mixed Trophic Impact

This analysis allows the estimation of the relative impact of a change in the biomass of one group on the other components of the ecosystem, under the assumption that the diet composition remains constant [79]. Two components without this kind of relation would have zero impact on each other [80].

In the mixed trophic impact analysis approach, the positive effect that a prey (i) has on its predator (j) can be shown as:

$$g_{ij} = \frac{Q_{ji}}{\sum_k Q_{jk}}$$

where Q_ji is the consumption of prey i by predator j, k represents all the prey of j, and g_ij ranges from 0 to 1 (Leontief, 1951). Conversely, the negative impact of a predator upon its prey [81] is given by

$$f_{ij} = \frac{Q_{ji}}{\sum_m Q_{mi}}$$

where m represents all the predators of the prey species i.

Key Species Index

We also calculated the key species index (KS) [82] in order to identify the most ecologically relevant species in the system, that is, the functional groups or species with a disproportionately high global effect relative to their biomass. Because every impact can be quantitatively positive or negative, a measure of the overall effect of each species or functional group (ε_i) must be determined using the following equation:

$$\varepsilon_i = \sqrt{\sum_{k \neq i} m_{ik}^2}$$

where m_ik corresponds to the elements of the MTI matrix and quantifies the direct and indirect effects the (affecting) species or group i has on any (affected) group k of the food web. The effect of the change in a group's biomass on the group itself (i.e., m_ii) is not included. The contribution of the biomass of every species or functional group with respect to the total biomass of the network was estimated using the following equation:

$$p_i = \frac{B_i}{\sum_k B_k}$$

where p_i is the portion of the biomass B_i of each group with respect to the sum of the total biomass. To balance the overall effect against the biomass, the keystone index (KS_i) for each species or functional group was calculated as KS_i = log[ε_i (1 − p_i)], which integrates the two previous equations. This index assigns high values of functional keystoneness to those species or functional groups that have low biomass and a high overall effect (a numerical sketch follows at the end of this section).

Maturity Indices

Several network analysis indices are also produced by Ecopath, which are useful for determining an ecosystem's structure, maturity, and stability [74,83]. These indices are Total System Throughput (TST), Ascendency (A), development capacity (C), and system overhead, which is based on ascendency and capacity. We also estimated the flows from primary producers and detritus. Ascendency represents a measure of the average mutual information, that is, the uncertainty associated with the route a given unit of biomass (or energy) follows within the system based on the total possible routes available. The development capacity is the upper limit of the ascendency measure, and can be calculated as:

$$C = TST \cdot H$$

where H is defined as the statistical entropy, calculated as:

$$H = -\sum_{i} Q_i \log Q_i$$

where Q_i is the probability that a particle of energy will pass through i in terms of the total flows of the ecosystem [84,85]. The overhead is the difference between the development capacity and the ascendency [84,85].
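Returning to the key species index above, a compact sketch of its computation is given below; it assumes the ε_i and p_i definitions just stated (Libralato et al. [82]) and a precomputed MTI matrix, with hypothetical values used purely for illustration.

```python
import numpy as np

def keystone_index(MTI, biomass):
    """KS_i = log10(eps_i * (1 - p_i)), where eps_i is the overall (direct + indirect)
    effect of group i, excluding its effect on itself, and p_i its biomass share."""
    M = np.asarray(MTI, float).copy()
    np.fill_diagonal(M, 0.0)                 # drop m_ii: the group's effect on itself
    eps = np.sqrt((M ** 2).sum(axis=1))      # overall effect eps_i over all impacted groups k
    p = np.asarray(biomass, float) / np.sum(biomass)
    return np.log10(eps * (1.0 - p))

# Hypothetical 3-group MTI matrix and biomasses (t km^-2)
mti = [[0.0, 0.6, -0.3],
       [-0.4, -0.1, 0.5],
       [0.1, -0.2, 0.0]]
print(keystone_index(mti, biomass=[0.5, 12.0, 40.0]))
```

Under this formulation, a low-biomass group with large off-diagonal impacts (the first row here) earns the highest keystoneness, matching the interpretation given above.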
Model Balance and Pedigree

The input values and those estimated under the model's assumption of biomass balance are shown in Table 1. The model's pedigree index was PI = 0.47.

Structure of the Trophic Web and Analysis of Flows

The trophic level of the ecosystem's components fluctuated between 1 and 4.18 (Table 1). The ecosystem's apex predators (TL > 4) include Trichiuridae (TL = 4.02), Synodontidae (TL = 4.08) and coastal sharks (TL = 4.18). Several groups of fish (Sphyraenidae, Serranidae), marine mammals, and marine birds, as well as the cephalopods (squid and octopus), occupied higher trophic levels (> 3.5). Primary producers, detritus, and the by-catch of shrimp fishing had TL = 1. Ecotrophic efficiency values varied from 0.003 for marine mammals to 0.99 for penaeid shrimp (lagoon). The ecotrophic efficiency of the majority of groups was less than 0.75 (Table 1).

Fig. (2) illustrates a Lindeman spine of the trophic chain and shows that trophic level II (TL II) consumption is higher through the detritus chain (D) than through primary production (PP), in a ratio of 18:1 (D:PP). Most of the level II flows can be attributed to zooplankton (the dominant primary consumers) and to meiobenthos and polychaetes (the dominant detritophages). Flows in trophic level III derive from herrings and polychaetes (lagoon). In the highest trophic levels, flows may be attributed to Synodontidae and Muraenidae.

System Bioenergetics

The average transfer efficiency in the ecosystem was 13%. The average trophic level of commercial takes was 2.80, with penaeid shrimp of the shelf (TL = 2.63) and penaeid shrimp of the lagoon (TL = 2.95) being the most highly exploited fishing resources.

Analysis of Biomass, Flows, and Commercial Landings

Biomass was largely concentrated in the lower trophic levels and attributable to both pelagic and benthic groups such as benthic primary producers (lagoon), meiobenthos (shelf), herrings (shelf), zooplankton (shelf) and phytoplankton (shelf and lagoon) (Table 1). The energy budget of the Alvarado lagoon and adjacent continental shelf ecosystem can be broken down as follows: flows to consumption (42.1%), respiration (22.1%), and detritus (35.6%) (Table 1). Export flows and commercial catch contributed < 0.1% of the Total System Throughput flow. The lower trophic levels (~1 to 2.5) are strongly negatively related with respiration and production, whereas in groups of higher trophic level this trend remains but is not as marked (Fig. 3). The magnitude of the y-intercept reflects the magnitude of energy expenditure in the ecosystem. Moreover, the slopes (bP = -4.46 and bR = -4.48) indicate that production and respiration decrease proportionally as trophic level increases. The groups with the highest production and respiration rates (energy expenditure) are zooplankton (shelf and lagoon), meiobenthos (shelf), polychaetes (shelf), herrings (shelf) and meiobenthos (lagoon). Consumption was dominated by zooplankton (shelf and lagoon), meiobenthos (shelf and lagoon), polychaetes (shelf), and herrings (shelf).

Flow and Biomass Indicators

Flow and biomass indicator statistics for the Alvarado lagoon-adjacent continental shelf model are shown in Table 2.
The ratio of total primary production to total biomass (excluding detritus) was relatively high (TPP/TB = 15.27). The ratio of total primary production to total respiration (TPP/TR) was nearly 1.65. The percentage of recycled flows in the ecosystem is greatly reduced when those derived from detritus are excluded (from 9.62% to 1%). Recycled flows, expressed using Finn's index [74], account for 9.63% of the total flows. The average length of the recycling route, or the average number of groups that recycled flows passed through, was 4.5. The average omnivory index was 0.23.

Mixed Trophic Impacts (MTI) and Key Species Index

The MTI index shows how an increase in the biomass of a given functional group affects the abundance of the other groups. For example, marine mammals (dolphins) and sharks have an adverse effect on nearly every group in the system (e.g., marine mammals, marine birds, batoids, filefish, etc.) and a positive effect on very few groups (sand bass and catfish). The MTI also indicated that zooplankton had a positive impact on pelagic groups and an indirect positive impact on sharks, because coastal sharks largely feed on medium pelagics while zooplankton constitutes a major portion of the diet of the medium pelagic group. However, zooplankton showed a significantly negative impact on themselves, which may be due to the presence of a large proportion of carnivorous zooplankton. It is noteworthy that the life histories of common zooplankton organisms (e.g., copepods) reveal that zooplankters are herbivorous only at juvenile stages, while they are frequently omnivores or carnivores as adults. A moderate negative impact of zooplankton on phytoplankton also indicated the presence of a smaller proportion of herbivorous zooplankton in the ecosystem. The positive impact of detritus was evident on most of the functional groups, and this points to the importance of detritus in the Alvarado lagoon-adjacent continental shelf ecosystem, especially for groups living in the benthic environment (i.e., snappers, grunts, groupers, shrimps, and other crustaceans) and for cephalopods. In turn, there was a significant positive impact on detritus, since other crustaceans (mostly crabs and shrimps) and cephalopods fed largely on it. The indirect positive impact of zooplankton on sharks shown by Ecopath is likely due to the diet selectivity of sharks, which feed on pelagic groups (herrings, jacks, mackerels, needlefish) and demersal groups (croakers, Mullidae, snappers, grunts) for which zooplankton represents a major portion of the diet. Among the fish groups, demersal species showed a negative impact on most of the groups. Most fish groups had very minimal impact on themselves, either positive or negative. However, all other functional groups at the lower trophic levels except detritus had a negative impact on themselves, indicating competition for the same resources within the group. Detritus had neither a positive nor a negative impact on itself in the Alvarado lagoon-adjacent continental shelf ecosystem. Based on these results, it is not possible to identify any kind of control of the food web, top-down or bottom-up (Fig. 4).

Maturity Indices

The development capacity (C) was 41,068 flowbits, while the ascendency (A) was 11,029 flowbits. Ascendency is an indicator of the amount of information in the system, and the development capacity is the upper limit of the ascendency. Thus, the ratio between the two (A/C) reflects the present state of the ecosystem, which is currently at 26.8% of capacity (developing). The overhead (O) was 30,038 flowbits, with an information content of 1.46 bits.
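As a quick internal consistency check, the reported maturity numbers reproduce each other; the short snippet below uses only the values quoted in the text (small discrepancies are rounding).

```python
A, C = 11_029.0, 41_068.0                   # ascendency and development capacity, in flowbits
print(f"A/C = {A / C:.1%}")                 # -> 26.9%, the 'developing' state reported above
print(f"overhead = {C - A:,.0f} flowbits")  # -> 30,039, matching the reported 30,038 up to rounding
```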
DISCUSSION

The Ecopath model presented here synthesizes the biological and ecological information available for two coupled coastal marine ecosystems: the Alvarado lagoon, Mexico, and the adjacent continental shelf. Several of the species included in the model use both systems to complete some of the stages in their life cycles, whether for reproduction, breeding, protection or feeding [86-88]. The exchange of biomass of these species between one system and the other is clear, evidencing the need for a nested model such as the one presented here as a first approach. The model offers important information regarding the ecosystem's structure, function, and energy flows, providing a means for comparison with other ecosystems in terms of the system's energy base and maturity indicators, following Odum [75,89,90].

Based on the results of this study, detritophages and non-primary producers are responsible for the transfer of energy toward higher levels. However, the ecotrophic efficiency of detritus was relatively low (EE detritus shelf = 0.42, EE detritus lagoon = 0.47). This could be interpreted as excessive production of detritus, such that only a small proportion is consumed within the system and, therefore, exportation and accumulation may occur. A similar pattern has been observed for upwelling systems, where primary producers tend to have low ecotrophic efficiency ratings [91-93]: they produce such large quantities that little of this biomass is effectively used by the other trophic levels (Table 3).

Most functional groups making up the trophic structure of the system occupied intermediate trophic levels (II-III), a pattern previously reported for other continental shelf ecosystems [94-96]. This pattern can be attributed to a strong reliance on primary producers [95,97].

All ecosystems, whether aquatic or terrestrial, generally include four basic functional groups (autotrophs, primary consumers, secondary consumers, and decomposers). Energy and materials flow from autotrophs to degraders via a web of biotic interactions and trophic relations. The production of biomass at each trophic level, the magnitude and velocity with which energy and materials flow through the web, and the complexity of that web vary. However, the majority of the energy and materials are concentrated in the autotroph group, decreasing as they flow towards primary consumers, secondary consumers, and degraders [98]. Some previous works have reported structures dominated by functional groups from higher trophic levels [95,99]. Others have reported that different organisms inhabiting coastal environments use resources from both marine and terrestrial sources. These sources vary by location, and may include energy from detritus and nutrients (released during their decomposition or uptake) from both local and external sources (macrophytes and phytoplankton), as well as fresh water and estuarine phytoplankton, macrophytes and terrestrial sediments [100-102].
The ecotrophic efficiency of most groups was less than 0.5 (Table 1), suggesting that they are not exposed to high mortality rates due to predation or high rates of exploitation due to fishing, or rather that the present exploitation rates do not affect their biotic potential. Functional groups considered forage species (small or intermediate-sized pelagic or demersal species), including Mullidae, meiobenthos, and other crustaceans, had ecotrophic efficiencies > 0.90. Excluding macrophytes (shelf), the majority of the groups with low ecotrophic efficiencies (< 0.1) were apex predators in the trophic web. The low ecotrophic efficiency of some benthic groups can be attributed to their abundance (polychaetes, B = 6.28 t km⁻² year⁻¹; other crustaceans, B = 3.48 t km⁻² year⁻¹; bivalves, B = 2.98 t km⁻² year⁻¹; echinoderms, B = 2.29 t km⁻² year⁻¹) and the reduced levels of predation to which they are subjected. In particular, meiobenthos play a relatively more important role in the transfer of energy, possibly as a result of their higher turnover rates [62]. As a result, the metabolic requirements of meiobenthos (secondary production and respiration) may be greater than those of the macrofauna, particularly in ecosystems where the ratio of macrofauna biomass to meiofauna biomass is less than 5:1 [103,104]. Thus, the importance of studies on the role of meiofauna has been recently acknowledged [62, 105-109], as their role in the benthic trophic web is considered analogous to that of zooplankton in pelagic systems [110].

Interestingly, zooplankton are responsible for approximately 80% of phytoplankton mortality, yet they have a relatively high ecotrophic efficiency (EE = 0.77). The reason is that this group serves as forage for several components of the trophic web. Thus, zooplankton have an important role in controlling the transfer of energy, facilitating the flow of the elevated production of phytoplankton toward higher trophic levels [111]. These in turn prey intensively on zooplankton (e.g., the cephalopod L. pealei, and some teleost groups such as Clupeidae and Gerreidae).

The Lindeman spine diagram (Fig. 2) shows that the lower levels quantitatively dominate flows and biomass in the system, and highlights the role of detritus as the primary source of energy. This finding is in agreement with that reported by several other works [95,96] for other Gulf of Mexico ecosystems. In contrast to Odum's findings [75], the predominance of the detritus route does not appear to reflect the ecosystem's maturity. Together, the omnivory index (SOI = 0.233), connectance (CI = 0.125), and ascendency (A = 11,029 flowbits ≈ 26.9%) suggest that the system is a developing ecosystem.

The more intensive use of detritus may be related to: ocean circulation patterns created by cyclonic disturbances in the area (which facilitate the re-suspension and confinement of sediments and nutrients), nutrients from water discharged by the various rivers in the region (i.e., Coatzacoalcos, Papaloapan, Blanco, Acula), elevated phytoplankton production (a considerable proportion of which flows toward detritus), benthic biomass (composed of several detritivores), and the resuspension of sediments by trawl fishing [58,62,63,112].
The Lindeman spine (Fig. 2) shows the biomass (fluxes) that each component obtains from the previous trophic level; it also shows the biomass (energy) leaving via other processes, such as respiration or export, and the net production passed on to the next higher level. The average transfer efficiency (13%) is consistent with the value proposed by Lindeman [71] and supported by Christensen and Pauly [113] for several coastal marine ecosystems. Most of the output comes from the lowest trophic levels (~95%), while the remaining 5% of the flows derive from the highest trophic levels.

The TPP/TB ratio was relatively high, possibly indicating that the ecosystem is in an advanced state of eutrophication (Table 3). Similar values have been observed for other systems with clear signs of environmental deterioration. For example, Barausse et al. [114] reported a TPP/TB ratio equal to 14.5 for the Adriatic Sea, while Heymans et al. [92] published a TPP/TB ratio equal to 16.2 for the northern Benguela upwelling ecosystem. Our interpretation of the TPP/TB ratio (Table 3) for the Alvarado lagoon-adjacent continental shelf ecosystem as evidence of eutrophication is in accordance with the findings of Caso et al. [115] and Guentzel et al. [58] that the Gulf of Mexico displays varying levels of eutrophication. However, it is important to exercise caution when using this indicator, as it is influenced by the functional groups employed to build the model.

Fig. (5). Keystoneness for the functional groups of the Alvarado lagoon and adjacent continental shelf model. For each functional group, the keystone index is reported against the overall effect. Overall effects are relative to the maximum effect measured in the trophic web. The species are ordered by decreasing keystoneness.

The TPP/TR ratio is also used to assess damage related to human activities. In the first phases of ecosystem development, the TPP/TR should be > 1, as production exceeds respiration; in contrast, systems subjected to organic contamination should have ratios < 1, while in mature systems, where there is a balance between energy production and maintenance costs, the ratio is close to 1 [73,89]. The SOI is also one of the highest reported for the Gulf of Mexico [94]. If we consider the SOI a measure of the feeding strategies displayed by a given system, we can infer that the trophic web of the study area is composed of functional groups (tropho-species) with relatively broad trophic niches or, alternatively, of components with a certain degree of feeding plasticity, permitting them to adapt to food resource variability. Several works have presented contrasting theories regarding whether omnivores stabilize or destabilize trophic webs; some studies, and the supporting empirical evidence, suggest that omnivores destabilize the trophic web [116-120]. Other theoretical studies that include the strength of trophic interactions suggest that omnivory may have a stabilizing effect when trophic connections are not strong [121,122]. Still other studies suggest that some trophic webs undergo structural changes as a result of invasion by exotic species [123] and exploitation by higher trophic levels [46,124].

On the other hand, the connectance index was lower than the average value reported for other coastal ecosystems in Mexico (Table 2). This suggests that the trophic web of the system is probably not highly interconnected, considering that it realizes only 12% of the maximum number of possible connections.
The mixed trophic impacts and key species indices suggest that the groups with the greatest influence on the system belong to the lower trophic levels (meiofauna, detritus, phytoplankton, and zooplankton), significantly affecting groups of fish and invertebrates (trophic levels II-III). However, based on these results it is not possible to identify any kind of control of the food web, top-down or bottom-up. The importance of benthic diversity has been underlined in numerous studies [62,125] and is clearly observed in this study: the benthos in the Alvarado Lagoon represents a major trophic resource and plays an important role in the biogeochemical budget of such a shallow system. Finally, it is also important to mention the role of surplus phytoplankton production and detritus, which is exported to the adjacent continental shelf through the trophic web of different shared functional groups [126-129].

Except for coastal sharks and marine mammals (shelf), predator functional groups (mainly teleost and cephalopod species) do not seem to significantly impact their prey, suggesting that higher trophic levels do little to move biomass toward the interior of the ecosystem. Before they were fished intensively, apex predators like coastal sharks, rays/skates, groupers and snappers were present in greater quantities in the Gulf of Mexico [130-133]. Thus, they presumably had a greater impact on the trophic web, although this suggestion requires further supporting data. The results presented here suggest that the ecosystem is dominated by small, fast-growing organisms that are resistant to anthropogenic effects, including meiofauna, small demersal fish (Mullidae), and some cephalopod species (Octopus spp. and L. pealei). These organisms have biological traits (high turnover rates) that allow them to recover quickly under considerable fishing pressure and moderate eutrophication, compared to longer-living organisms with slower metabolisms. Moreover, these organisms are able to increase their numbers quickly in response to predation.

The system's ascendency is in line with values reported for other areas of the continental shelf of the Gulf of Mexico, i.e., Campeche Sound [95]. The ascendency/capacity ratio, considered a measure of organization and efficiency, may also be seen as a measure of ecosystem maturity and an indicator of the system's resilience to perturbation [84,134]. The A/C ratio is similar to the value reported for other continental shelves [94]. Lower values indicate that a given ecosystem is immature and better able to resist external perturbations [135,136]. However, the A/C ratio should be considered with care, as some authors have found it to be negatively correlated with maturity [76,137]. The maturity indices suggest that the system is a developing and relatively stable system that can continue to resist human (primarily fishing) or natural impacts without substantial modification of its structure and function, and that relatively few ecosystem components will be greatly affected in the near future. However, over the long term, if commercial fishing takes exceed the biotic potential of the species of commercial interest, these and other groups not targeted by extractive activities may decline.
Finally, we should emphasize that our model was constructed using information available from published sources; thus, the results may change as more and better data become available and as our methodological techniques improve. Although our study did not include dynamic simulation, we are confident that the model presented here will serve as the basis for identifying gaps in available data and highlighting new areas for investigation. Based on our results, we also recommend working with models that couple (nest) subsystems, because the individual functioning of each one influences the operation of the other in terms of trophic flows in both directions (interchange of species or life stages), which can have substantial implications for the sound management of resources under an ecosystem approach. Moreover, this model may be used for dynamic and spatial simulations to consider the simultaneous use of resources as well as a variety of economic practices, such as fishing, tourism, crude oil extraction, restoration, etc.

Fig. (1). Study area. Continental shelf of the southwest Gulf of Mexico showing the main commercial shrimp fishery area (shaded area).

Table 1. Input parameter estimates and Ecopath mass-balance solution for the Alvarado lagoon and adjacent continental shelf model. Values in regular type were derived from local data or literature sources. Bold values were calculated by Ecopath. Italic values for ecotrophic efficiency were estimated by the user to allow Ecopath to estimate the biomass required. Y = Catch, P/B = Production/Biomass, Q/B = Consumption/Biomass, EE = Ecotrophic Efficiency, TL = Trophic Level, P/Q = Production/Consumption, R/B = Respiration/Biomass, R/A = Respiration/Assimilation, P/R = Production/Respiration, FD = Flux to Detritus, NE = Net Efficiency, OI = Omnivory Index.

Fig. (2). The Lindeman spine diagram. Shows the trophic aggregation in nine discrete trophic levels for the Alvarado lagoon and adjacent continental shelf model.
The Changes of Angiogenesis and Immune Cell Infiltration in the Intra- and Peri-Tumoral Melanoma Microenvironment

Malignant melanoma (MM) urgently needs the identification of new markers with better predictive value than the currently used clinical and histological parameters. Cancer cells stimulate the formation of a specialized tumor microenvironment, which reciprocally affects uncontrolled proliferation and migration. However, this microenvironment is heterogeneous, with different sub-compartments defined by their access to oxygen and nutrients. This study evaluated microvascular density (MVD), CD3+ lymphocytes (TILs) and FOXP3+ T-regulatory lymphocytes (Tregs) on formalin-fixed paraffin-embedded tissue sections using light microscopy. We analyzed 82 malignant melanomas, divided according to the AJCC TNM classification into four groups, pT1 (35), pT2 (17), pT3 (18) and pT4 (12), and 25 benign pigmented nevi. All parameters were measured in both the central areas of tumors (C) and at their periphery (P). A marked increase in all parameters was found in melanomas compared to nevi (p = 0.0001). There was a positive correlation between MVD, TILs, FOXP3+ Tregs and the vertical growth phase. The results show that MVD, TILs and FOXP3+ Tregs substantially influence the cutaneous melanoma microenvironment. We found significant topographic differences of the parameters between the central areas of tumors and their boundaries.

Introduction

Cutaneous malignant melanoma (CMM) is highly aggressive, with a poor prognosis and high resistance to therapy. Further, prognosticators remain controversial and are generally based on the evaluation of the mitotic rate, regression, tumor-infiltrating lymphocytes (TILs) and growth phase [1]. Hence, there is an urgent need to identify new markers with more reliable predictive values than traditional clinical and histological parameters. Currently, potential reliable markers are a theme of intensive research. In malignant melanoma, as in other solid cancers, tumor-stroma interactions involving multiple complex cellular and molecular factors substantially affect biological behavior [2,3]. Interactions between melanoma cells and other cell types in the microenvironment are mediated by endocrine and paracrine communication or through direct contact via cell-cell and cell-matrix adhesion, and gap or tight junctional intercellular communication. Within a tumor, there are subcompartments with different microenvironmental milieus defined by their access to oxygen and nutrients. Therefore, different cancer cells within a tumor face different microenvironments [4]. A hallmark of solid tumors is abnormal vasculature, known as tumor angiogenesis, which is characterized by the new formation of vascular channels that enhance tumor cell proliferation, local invasion and distant metastasis. Tumor angiogenesis is an uncontrolled and, over time, unlimited process, involving the transition from the avascular to the vascular phase [5,6]. Tumor angiogenesis enhances the supply of oxygen and nutrients to solid tumor cells, enabling them to grow more rapidly and easily when vessels are formed in close proximity. It has been documented that new blood vessel formation is required after tumors attain a size of 1-2 mm [5]. Melanoma neovascularization has been correlated with poor prognosis, ulceration and an increased rate of relapse [6].
Recent studies showed that an effective marker for in vivo tumor angiogenesis is nestin, an intermediate filament protein that is considered a marker of endothelial proliferation [7]. Furthermore, it is also a marker of neuroectodermal stem and progenitor cells, because it is abundantly expressed in proliferating cells during embryonic development [8,9]. Thymus cell antigen (CD90/Thy1) has been identified as a novel marker for activated blood, as well as lymphatic, vessels. CD90/Thy1 is a glycosylphosphatidylinositol-anchored, strongly glycosylated protein that is expressed on the cell surface and belongs to the immunoglobulin superfamily. It was originally identified as a thymocyte antigen and is a pan T-cell marker in mice. It is also known to be expressed by neurons and fibroblasts [10]. The molecule is expressed exclusively on endothelial cells (EC) at sites of inflammation or tumors, showing signs of activation. In contrast, there was no expression of Thy1 on the cell surface of resting EC in healthy tissues [11-13]. Today, it is considered an activation-associated molecule mediating the adhesion of human dermal microvascular endothelial cells to tumor cells. The mechanisms of tumor cell adhesion to the endothelium and the subsequent invasion into the surrounding tissue share similarities with the interactions occurring during leukocyte extravasation at sites of inflammation [11,13]. The morphological gold standard for assessing the neovasculature in human tumors has become microvascular density (MVD). This method requires the use of specific markers that highlight the vascular endothelium using immunohistochemical procedures. MVD in primary tumors is significantly associated with metastasis and poorer prognosis in several tumors, and is most predictive in those tumors that induce significant angiogenesis, namely carcinomas of the breast and prostate and hematological malignancies [4]. An integral component of the tumor microenvironment is an inflammatory infiltrate with a wide range of effects, which can act as a double-edged sword. On the one hand, immune cells have been reported to regulate malignant cells; on the other hand, they may also have tumor-promoting effects. It has been reported that the infiltration of different human malignancies, e.g., ovarian, colorectal and breast cancers, with CD8+ T lymphocytes is associated with a favorable prognosis [14]. Natural killer cells, dendritic cells and macrophages may also be considered independent good prognostic indicators in different human cancers [14,15]. Conversely, malignant cells have been documented to create an immunosuppressive microenvironment. In this way, immune cells may help them escape immune surveillance and promote tumor progression. Increasing attention is currently paid to regulatory T-cells (Tregs), a subpopulation of CD25+ CD4+ T lymphocytes with suppressive functionality [16]. The forkhead transcription factor FOXP3 has been identified as a key regulator in the development and proper function of these cells, and it is also their only definitive marker [11,17]. In healthy individuals, Tregs are necessary for maintaining immunological tolerance and preventing autoimmune diseases. Activation of Tregs has been shown to lead to inhibition of cytotoxic CD8+ T lymphocytes and NK cells [17]. However, the role of Tregs in cancer development and progression is not clear.
A large number of studies have shown that Tregs promote tumor growth by inducing host tolerance against tumor antigens, dampening the T-cell-mediated immune response against the tumor cells and enabling tumor cells to evade anti-tumor immunity. FOXP3 expression in cancers is thus associated with worse overall survival. Moreover, therapeutic inhibition of Tregs was shown to weaken their immunosuppressive effect and improve the course of the disease [14,18]. In malignant melanomas, FOXP3+ Tregs are thought to be predictive of patient survival as a marker of early metastatic propagation [14]. The objectives of this study were to evaluate MVD with a focus on nestin- and CD90-positive vessels, and to quantify FOXP3+ Tregs in comparison to the numbers of CD3+ tumor-infiltrating lymphocytes. To examine topographic differences, two distinct areas were analyzed in each lesion: the central area and the peripheral one, at the edge of the tumor adjacent to normal tissues.

Results

All obtained results, with the Mann-Whitney U-test statistical analysis, are summarized in Table 1.

Microvascular Density with Anti-Nestin Antibody

The microvascular density was quite low in benign nevi, ranging from 0 to 26 (median 4/mm²). A marked increase was observed in the group of melanomas, with MVD from 2 to 78, median 10 in the center and 22 at the edge, confirming a significantly higher density of nestin-positive vessels (p = 0.0001) both in the center and at the edge of tumors (Figure 1; Scheme 1). A positive correlation (p = 0.0001) was found between MVD at the tumor periphery and the depth of invasion, with median values of 17, 21, 34 and 31 for the pT1, pT2, pT3 and pT4 groups, respectively. Central areas exhibited very similar MVD values in each group, with medians of 10-14 and no statistical significance.

Microvascular Density with Anti-CD90 Antibody

No CD90-positive vessels were detected in nevi. In melanomas of the pT1 and pT2 stages, we found only individual vessels, both in the center and at the periphery (median zero). The significant increase (p = 0.0001) in CD90+ vasculature found in advanced tumors involved predominantly intra-tumoral vessels. Medians were 3 (center) and 1 (periphery) for pT3, and 5.5 (center) and 1 (periphery) for pT4 (Figure 2, Scheme 2).

Tumor-Infiltrating Lymphocytes

The numbers of CD3+ T lymphocytes in nevi ranged from 1 to 158, with a median of 38 inside the lesion and 22 at the edge. In melanomas, there was a significant increase, from 2 to 1330 elements per mm² (p = 0.0001), with a median of 141 in central areas and 234 at the periphery. A significant increase in CD3+ tumor-infiltrating lymphocytes was found in pT2, pT3 and pT4 versus pT1 melanomas (p = 0.0005) (Scheme 3). The peripheral area revealed even lymphocytic numbers, without any variations.

Scheme 3. Evaluation of median values (y-axis) and error bars with the standard deviation of CD3+ T lymphocytes in the center (C) and at the periphery (P) of the microenvironment in different stages of melanomas (pT1-4) and benign nevi (x-axis).

FOXP3+ Tregs were rare in pigmented nevi, with a median of five cells in the center and one cell at the periphery. The numbers significantly increased in melanomas (p = 0.0001), from 1 to 192, with a median of 30 in the center and 10 at the periphery (Scheme 4).
We also found differences in Tregs among the individual melanoma groups: the median Treg counts were 22 (center) and 6 (periphery) for pT1, 55 and 15 for pT2, 58 and 16 for pT3, and 23 and 3.5 for pT4 (Figure 3). We found significantly higher numbers of Tregs in pT2 versus pT1 melanomas (p = 0.015) and in pT3 versus pT1 (p = 0.03). Surprisingly, in the pT4 group, a decrease in Tregs was observed in the center as well as at the periphery. The CD3+/FOXP3+ Treg ratio showed a significant shift toward Tregs in the pT2 and pT3 groups at the periphery of lesions (p = 0.005) (Scheme 5). No associations were found with lymph node status or distant metastases.

Discussion

It has been determined that cancer progression is not solely determined by the characteristics of the tumor, but also by the host response [19]. CD8+ T-cells can unquestionably be heralded as one of the principal subsets of T-cells constitutively mediating an effective antitumor response. Activated cytotoxic T lymphocytes can mediate specific destruction of tumor cells by the release of perforin and several types of granzymes, which are loaded in modified lysosomes [20,21]. CD4+ T lymphocytes are also an integral part of immunity, but their specific role in the antitumor response remains unclear. They are known to facilitate cytotoxic T-cell (CTL) induction, although these cells have also been shown to be able to eliminate tumor cells in the absence of CD8+ T lymphocytes [22]. CD4+ T-cells have been documented to maintain a CTL response, too. During the last decade, a possible negative regulatory role of CD4+ T-cells has been described, and the existence of regulatory T-cells has been identified [14,17]. These cells represent about 6% of CD4+ T-cells and are present in peripheral blood and within the tumor environment. Antigen-specific activation and cell-cell contact were required for these clones of Tregs to exert suppressive activity. The presence of Tregs at tumor sites suggests that they could have a profound effect on the inhibition of T-cell effector responses against human cancers [17]. Besides anti-inflammatory cytokines, Tregs inside the tumor may repress immunity via other mechanisms. For example, they may inhibit T-cell proliferation. Whether the regulatory cells naturally exist in the host or whether they initially arrive as helper T-cells and only convert later is not altogether clear. Anti-tumor lymphocytes migrating to the tumor site may become compromised or may adversely adapt to the suppressive environment to promote growth instead of regression [16]. In agreement with these data, recent studies have revealed that the type, not the quantity, of tumor-infiltrating cells seems to be a more critical determinant of prognosis. Since cancer is a disease caused by an array of various types of mutations, differences in T-cell subsets are not altogether surprising. Melanoma is one of those tumors known to possess the ability to elicit a profound immune response. Some data show that the induction of a strong immune response in patients with melanoma may improve survival [18,23]. Numerous immune-based therapies (involving cytokines, antibodies, cancer vaccines, adoptive immunotherapy and combinations of these therapeutic agents and modalities) are the focus of studies on alternative therapeutic approaches.
Although cancer vaccines and adoptive T-cell transfer have been shown to increase the levels of circulating tumor antigen-specific T-cells, these approaches produce clinical responses in only a few patients [17]. Recent studies have suggested that the presence of FOXP3+ Tregs in the tumor microenvironment, the expression of inhibitory ligands on melanoma cells, the secretion of immunosuppressive factors by melanoma cells and the activity of nutrient-catabolizing enzymes may contribute to the resistance of the tumor to immune destruction. It has been reported that high numbers of circulating Tregs are associated with rapid tumor progression in experimental animal models of melanoma and in patients with melanoma. In these patients, the presence of FOXP3+ cells in the primary tumor has also been associated with a higher frequency of metastases in the sentinel lymph node [12,24]. On the other hand, blocking the normal mechanisms responsible for the downregulation of immune responses has been shown to improve melanoma outcome efficiently [14]. In our study, we focused on evaluating the FOXP3+ Tregs, as well as the CD3+/Treg ratio. While the density of these cells was very low in benign nevi, we confirmed their increase in melanomas, both inside and at the tumor edge. It was postulated that a major determinant of immune cell infiltration may be the stage of disease, where the host immune response may decrease with increasing tumor growth [25]. In agreement with this, FOXP3+ Tregs were increased in the pT2 and pT3 melanoma stages in our study, with the most pronounced changes in the CD3+/Treg ratio at the periphery of tumors. The increase in Treg density may represent a mechanism of tumor resistance to immune destruction, creating an immunosuppressive melanoma microenvironment. Surprisingly, pT4 melanomas exhibited lower Treg values and a high CD3+/Treg ratio, particularly at the periphery. We suggest that low numbers of FOXP3+ cells accompanied by high TIL numbers may paradoxically be a feature of tumor progression, as was described for colorectal carcinomas [26]. The presence of cytotoxic T lymphocytes in advanced tumors may be a consequence of the greater production of abnormal peptides resulting from altered DNA repair, a typical feature of the genetic instability of malignancies [26,27]. Moreover, genetically unstable tumors are often HLA class I-negative and might escape T-cell-mediated immune killing [19]. It has been well documented that angiogenesis is crucial for cutaneous melanoma progression, where melanoma neovascularization has been correlated with poor prognosis and an increased rate of relapse [6]. A possible explanation is that the increased vasculature enhances the chance for tumor cells to enter the circulation. Moreover, newly formed vessels or capillaries have leaky and weak basement membranes, through which tumor cells can penetrate more easily than through mature vessels [28]. Angiogenesis is a complicated and dynamic process, whose measurement in tissue provides only a snapshot rather than a complete view of the tumor. Despite its limitations, microvascular density (MVD) counting has become the morphological standard for assessing the neovasculature in human tumors, with prognostic and predictive impact [4]. MVD seems to correlate with outcome, especially in high-grade tumors.
It is widely assumed that tumors with high MVD are good candidates for clinical trials of antiangiogenic therapies, whereas tumors with typically low MVD are thought to be poor candidates for such clinical trials. Nevertheless, despite the initial confirmatory publications, numerous reports fail to show a positive association between increasing tumor vascularity and worse tumor outcome [4,29]. One has to consider that the heterogeneous methodologies used to calculate MVD among different studies might play a role. However, other factors have to be considered, too, such as tumor topography and functional changes in the endothelium. Topography is important in the differentiation of tumor vessels into those supplying the invading tumor edge and those serving the inner tumor area. As adhesive interactions between tumor cells and endothelium are critical steps in tumor metastasis, it is not surprising that functionally and phenotypically changed endothelium may substantially contribute to cancer progression. In accordance with previous explorations, we confirmed significantly higher MVD counts in melanomas versus benign tissue [30]. Moreover, we found markedly enhanced vascularization in advanced pT3 and pT4 melanomas. As far as the predictive role of MVD is concerned, we cannot confirm any direct association, as none of our tumors formed distant metastases or relapsed within a five-year follow-up. However, a lack of correlation between MVD and tumor outcome was described in sinonasal, oral and canine melanomas, too [14,31]. In our study, we focused on activated, proliferating endothelium, using anti-nestin and anti-CD90/Thy1 antibodies to highlight it, instead of the widely used CD31 or CD34 [28,32]. In this study, we found higher MVD of nestin-positive vessels in melanomas versus nevi, especially in advanced tumors. Although areas of hot spots were not infrequently seen within the inner tumor area, they usually predominated at the tumor edge, the zone of tumor/normal tissue interaction. Peripheral tumor areas are composed of typical capillaries derived from pre-existing vessels. Central areas of tumors, on the other hand, are at least partly made up of tube-like endothelial structures, known as vasculogenic mimicry (VM), that are generated directly by the tumor cells [15]. The molecular mechanisms that underlie VM are not fully clear, but metalloproteinases (via their cleavage of laminin), E-cadherin (by promoting adherence of the VM channel wall to tumor cells), tumor cell dedifferentiation and the tumor microenvironment have been shown to play a role in VM. A three-stage phenomenon among VM channels, mosaic blood vessels and endothelium-dependent blood vessels has been proposed, in which all three patterns participate in the tumor blood supply. These facts may explain why therapeutic strategies targeting endothelial cells have no effect on tumor cells [6]. They may also partly explain why MVD measurement is not a direct predictor of response to anti-angiogenic therapy [4]. A good candidate for the detection of functionally altered vessels seems to be CD90/Thy1. This molecule plays an important role in the adhesion of tumor cells to the endothelium and is associated with a specific interaction with the αvβ3 integrin on melanoma cells. This interaction mediates the binding of melanoma cells to the endothelium. Blocking αvβ3 reduced the adhesion of αvβ3-expressing melanoma cells to the level of melanoma cells lacking αvβ3 [13].
In addition to blood vessels, CD90/Thy-1 was found to be highly expressed on lymphatic endothelial cells [11,12]. We found no CD90 expression on the endothelium of normal skin and nevi. Similarly, early-stage pT1 and pT2 melanomas had only very low numbers of CD90+ vessels. Advanced melanomas in the pT3 and pT4 groups showed a significantly higher density of CD90-positive vessels, especially in central regions. These findings confirm phenotypically and functionally altered vascularization, especially in advanced-stage melanomas, and suggest a potential negative prognostic role of the protein in the disease.

Experimental Section

Archival cases of 82 cutaneous malignant melanomas and 25 benign pigmented compound or intradermal nevi were evaluated. Adult patients of both sexes, aged from 42 to 69, were included. The melanomas were divided according to the AJCC TNM classification for melanoma staging into four groups: pT1 (n = 35 melanomas), pT2 (n = 17 melanomas), pT3 (n = 18 melanomas) and pT4 (n = 12 melanomas) [33]. The corresponding H&E slides were first reviewed by the pathologist for confirmation of diagnosis and adequacy of the material. All selected tissue samples were formalin-fixed and paraffin-embedded. The study was performed on 5 µm-thick tissue sections by an indirect immunohistochemical method, stained in an automated immunostainer (VENTANA BENCHMARK XT, Ventana Medical System, Tucson, AZ, USA), in which all steps of the procedure were done. After deparaffinization, rehydration and blocking of endogenous peroxidase activity, all sections were incubated with a primary antibody at room temperature. We used monoclonal mouse anti-nestin antibody (Millipore, Darmstadt, Germany, clone 10C2, Cat. #MAB5326, dilution 1:75, incubation time 20 min), monoclonal rabbit anti-FOXP3 antibody (Novus Biologicals, Cambridge, UK, clone SP97, NBP2-12498, dilution 1:150, incubation time 20 min), anti-CD3 (DakoCytomation, Glostrup, Denmark, polyclonal rabbit anti-human, Code 1580, dilution 1:50, incubation time 32 min) and rabbit monoclonal anti-CD90 antibody (RabMAbs, Abcam, Cambridge, UK, clone EPR3133, ab133350, dilution 1:100, incubation time 28 min). No primary antibody needed an antigen retrieval step. For detection, we used the VENTANA detection kit (VENTANA iVIEW™ DAB Detection Kit, Ventana Medical System, Tucson, AZ, USA, Catalogue No. 760-091), which is standardized to detect mouse IgG, IgM and rabbit IgG antibodies, without any further requirements on dilution or titration of the solutions. The kit includes a streptavidin-horseradish peroxidase complex conjugated to the biotin-bound secondary antibody, as well as a hydrogen peroxide substrate and DAB (diaminobenzidine) for visualization. The whole set of cases was used for each analyzed marker. All parameters were evaluated by light microscopy, counting capillary lumens and FOXP3+ and CD3+ lymphocytes per unit area of 1 mm² in a "hot spot", a field with the highest capillary density or the highest lymphocytic infiltrate. We counted at least two fields for each tumor. Both the central areas of tumors (C) and their periphery (P) were measured. The differences between malignant and benign melanocytic lesions were evaluated. In the group of melanomas, the obtained data were compared with the depth of invasion and with lymph node and distant metastasis status. The results were statistically evaluated using the Mann-Whitney U-test and the Kruskal-Wallis test with Bonferroni correction. p-values of 0.05 or less were considered statistically significant.
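For readers who want to reproduce the group comparisons, a minimal sketch of the tests described above is shown below; the counts are hypothetical and SciPy is assumed, but the test choices (Mann-Whitney U for two groups, Kruskal-Wallis plus Bonferroni-corrected pairwise tests across several) mirror the analysis.

```python
from itertools import combinations
from scipy import stats

# Hypothetical per-case FOXP3+ Treg counts per mm^2 in hot spots
groups = {
    "nevi": [2, 5, 7, 1, 4],
    "pT1":  [18, 25, 22, 30, 19],
    "pT2":  [48, 60, 55, 52],
}

# Two-group comparison: Mann-Whitney U-test (melanoma stage vs. nevi)
u, p = stats.mannwhitneyu(groups["pT1"], groups["nevi"], alternative="two-sided")
print(f"pT1 vs nevi: U = {u}, p = {p:.4f}")

# Multi-group comparison: Kruskal-Wallis, then pairwise tests with Bonferroni correction
h, p_kw = stats.kruskal(*groups.values())
pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)  # Bonferroni-adjusted significance threshold
for a, b in pairs:
    _, p_pair = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
    print(f"{a} vs {b}: significant = {p_pair < alpha}")
```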
Conclusions

In summary, the results show that MVD, TILs and FOXP3+ Tregs are substantially involved in the alteration of the cutaneous melanoma microenvironment. More marked changes were observed especially in advanced stages of the disease. We also confirmed that there are significant topographic differences of the parameters between the central areas of tumors and their boundaries. However, further studies are needed to establish the analyzed parameters as unequivocal prognostic and predictive factors in melanoma.
Stable Labeled Isotopes as Internal Standards: A Critical Review

Over the past few years, stable labeled isotopes (SILs) have played a critical role in bio-analysis, nearly replacing the use of structural analogues as internal standards. SILs are now the first choice of researchers/chemists when selecting an internal standard for day-to-day analysis, to avoid process and analytical variation. However, although SILs are widely used in analytical labs today, they have challenging issues, such as matrix effects, recovery, and ionization problems. The purpose of this review is to outline both the advantages and disadvantages of stable labeled internal standards, drawing on recent publications. Even when SILs were used in an analytical methodology, chemists have observed method issues, which in some cases were later resolved by replacing the SILs with structural analogues.

Introduction

SIL internal standards are used in a wide range of analyses, including analyses of small and large molecules, quantification of metabolites, and the determination of the in vivo metabolism of certain molecules. For liquid chromatography tandem mass spectrometry (LC-MS/MS) applications, there are two different types of internal standards that can be used: structurally related compounds or analogues, and isotopically labeled compounds containing, for example, deuterium (D), 13C, or 15N [1]. Commercially available SIL internal standards provide structural information, which enables researchers/chemists to better understand analyte fragmentation patterns in LC-MS/MS, and also metabolism during in vivo administration. Today, commercial sources can provide custom-designed SIL internal standards, which are easy for researchers to use.

Applications of SIL Standards

The switch from analogue internal standards to SILs for LC-MS/MS analysis has been proven to reduce variation in mass spectrometry results, such as ionization issues, and has also improved the accuracy and precision of the analysis of both small and large molecules. For example, Stokvis et al. [2] observed an improved performance in the assay of a novel anticancer drug, Kahalalide F, using an SIL internal standard. In another application of SIL standards, Freisleben et al. [3] described in their work the use of synthesized, labeled folic acid vitamers as internal standards in stable isotope dilution assays. Pawlosky & Flanagan [4] similarly contributed to this research area by developing a negative mode electrospray ionization (ESI) LC/MS method for the quantitative determination of folic acid in fortified foods with the aid of a stable labeled folic acid (13C5) internal standard, which helped diminish variation produced by the sample extraction procedure. The use of stable labeled macromolecules, such as peptides and proteins, as internal standards for large biomolecule assays is also becoming more widely available. These stable labeled macromolecules (e.g., peptides and oligonucleotides) are produced using labeled starting materials and automated synthesizers. In addition, internal standards of macromolecules and proteins can be produced using recombinant, fermentation, and semi-synthetic approaches, which incorporate 13C and 15N building blocks into the biosynthetic process. Ong & Mann [5] discussed in their review the characterization of complex protein mixtures using MS, and explained how the post-harvest incorporation of stable isotopes can be achieved through chemical or metabolic processes in living cells.
Using these methods, peptides can be distinguished by the predictable mass difference between the native and isotopic versions of the biomolecule. This isotopic harvesting process helps to quantify peptides and provide precise functional information about them using mass spectra. In another application of SIL standards in the study of biomolecules, Hsu et al. [6] described in their paper a strategy for labeling the N-terminus and the ε-amino group of lysine with a stable isotope using a reductive amination procedure in the presence of a formaldehyde reagent. In another work, Palermo et al. [7] demonstrated the practical application of SILs in their methodology using gas chromatography to profile 3-oxo-4-ene urinary steroids, using a series of D-labeled cortisone and hydroxycortisol internal standards. Guo et al. [8] also demonstrated the use of SIL internal standards to study complex biomaterials, analyzing the metabolome by quantifying target metabolites with amino groups using LC-MS/MS. The authors were able to achieve this by introducing stable isotope tags onto the amine groups of the metabolites by reductive amination in the presence of formaldehyde. A similar strategy was investigated with approximately 20 amino acids and 15 amines. Creek [9] also showed that novel metabolites and their pathways can be identified using SIL standards in metabolomics. Stable isotopes can also be used for bioavailability and bioequivalence studies. Being less toxic, D and 13C are well suited for such research in humans, as well as for in vivo studies as a pharmacological tool. These isotopic drugs can be administered concomitantly by different routes, including oral and parenteral, and in different forms, such as solid and solution dosages. Concomitant administration reduces variability and also enables the use of a single assay. This process also minimizes drug exposure and discomfort for the volunteer. Using this single-dose administration method also makes it easier to compare two different routes or dosage forms. The technique is also well suited for "pulse" administration, and the kinetics from a single dose during multiple-dose or chronic dosing regimens can be compared with single-dose kinetics [10]. In another study, Heck et al. [11] described a methodology to compare the bioavailability of two commercially available brands of imipramine hydrochloride relative to an SIL internal standard. In this work, each formulation was compared with an SIL drug that was consumed orally at the same time the tested formulation was ingested. In their publication "Stable-isotope methodology for the bioavailability study of phenytoin during multiple-dosing regimens," Kasuya et al. [12] determined accurate clearance values of unlabeled phenytoin at steady-state plasma concentrations by comparison with the plasma concentrations of a small amount of intravenously administered SIL phenytoin (DPH-d10), analyzing the plasma samples using a highly sensitive and specific gas chromatography-mass spectrometry (GC-MS) method. In another study, Gilbert et al. [13] studied the bioavailability of the drug timolol in dogs using both oral and ophthalmic formulations. In this work, the authors quantified the amount of timolol in plasma and urine samples in the presence of the drug's SIL internal standard (13C3 and 2H9) using LC with atmospheric-pressure chemical-ionization (APCI) tandem MS.
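The quantitative principle shared by these applications is internal-standard normalization: the analyte signal is divided by the signal of the co-extracted SIL internal standard, and the resulting area ratio is read against a calibration line, so that losses during extraction and variations in ionization largely cancel out. A minimal sketch of this workflow follows; the peak areas and concentrations are hypothetical.

```python
import numpy as np

# Calibration standards of known analyte concentration (ng/mL), each spiked with
# the same amount of SIL internal standard before extraction
conc         = np.array([1.0, 5.0, 10.0, 50.0, 100.0])
area_analyte = np.array([980, 5100, 10300, 49800, 101500], dtype=float)
area_istd    = np.array([20100, 19800, 20500, 20000, 19900], dtype=float)

ratio = area_analyte / area_istd               # normalization cancels recovery/ionization variation
slope, intercept = np.polyfit(conc, ratio, 1)  # linear calibration: area ratio vs. concentration

# Unknown sample spiked with the same amount of internal standard
unknown_ratio = 31500 / 20200
print(f"estimated concentration: {(unknown_ratio - intercept) / slope:.1f} ng/mL")
```

This normalization is exactly why a truly co-eluting SIL internal standard corrects matrix effects, and also why the retention-time shifts discussed in the next section can undermine it.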
[14] conducted an investigation of the autoinduction of carbamazepine metabolism in younger children (10 to 13 years old) using an SIL version of carbamazepine (carbamazepine-d4). Many researchers have shown that SIL internal standards are the best choice to correct for recovery, matrix effects, and variability in ionization during the extraction and analysis of an analyte in a complex matrix using LC-MS/MS. For example, Häubl et al. [15] found that matrix effects were corrected during the ionization of an unclean sample of mycotoxins in the MS source in the presence of a 13C-labeled internal standard. In addition, the authors found that these SIL standards also improved the accuracy and precision of the determination of the mycotoxin deoxynivalenol by LC-MS/MS and LC-MS. Sheppard & Henion [16] developed a quantitative method for determining the concentration of EDTA in human plasma and urine. In their method, the samples were prepared with the addition of a 13C SIL internal standard and extracted using an automated anion-exchange solid phase procedure. The authors then analyzed and quantified the extracted samples labeled with the isotopic internal standard using capillary electrophoresis/ion spray tandem MS. Berg & Strand [17] demonstrated that the ion suppression effects of drugs in biological samples during analysis using LC-MS/MS can be significantly reduced using 13C SIL internal standards. Berg et al. [18] also evaluated and determined the amount of amphetamines in biological samples using reverse phase ultra-high performance LC-MS/MS in the presence of isotopically labeled internal standards, including 13C and D. SIL internal standards that use isotopes such as 13C, 15N, and 18O were expected to behave more like their respective unlabeled analytes than the more classically used D-labeled internal standards. Consistent with this hypothesis, the authors found that the data from samples which used 13C-labeled internal standards were more promising for analytical purposes than those with D-labeled standards. Fierens et al. [19] described a methodology for the quantitative LC-MS/MS analysis of urinary C-peptide in the presence of a D- and 14C-labeled peptide as an internal standard.

Limitations of SIL Internal Standards

Despite the many uses of SIL internal standards in analytical applications, many researchers and scientists still face the challenges of matrix effects, recovery, and ionization issues even when stable isotopes are used. In addition, some researchers have reported a slight change in the retention of the analyte with the use of SILs, which can lead to ion suppression. For example, during their study of the determination of carvedilol enantiomers by LC-MS/MS, Wang et al. [20] observed a matrix effect in two specific lots of human plasma in spite of using a D-labeled carvedilol internal standard. This observation was further verified by diluting the extracted sample with the mobile phase and by post-column infusion of an extracted plasma blank. From these experiments, the S-enantiomer of carvedilol and its respective deuterated internal standard were shown to suffer a matrix effect through ion suppression in two different lots of human plasma. The authors also observed that the deuterated internal standard caused a slight change in the retention time of the analyte, which resulted in different ion suppression between the analyte and the SIL internal standard.
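One way such matrix effects are commonly quantified, not described in this review but widely used, is the post-extraction addition scheme attributed to Matuszewski and co-workers: responses are compared for a neat standard (A), blank matrix spiked after extraction (B), and matrix spiked before extraction (C). A sketch with illustrative numbers:

```python
def matrix_figures(a_neat, b_post, c_pre):
    """Matrix effect, recovery, and process efficiency in percent."""
    me = 100.0 * b_post / a_neat   # <100% = ion suppression, >100% = enhancement
    re = 100.0 * c_pre / b_post    # extraction recovery
    pe = 100.0 * c_pre / a_neat    # overall process efficiency
    return me, re, pe

# Example: mean peak areas from several lots of plasma (made-up numbers).
print(matrix_figures(a_neat=100000.0, b_post=82000.0, c_pre=74000.0))
# -> (82.0, ~90.2, 74.0): 18% suppression, ~90% recovery
```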
The authors concluded that this slight change in the presence of the SIL was significant enough to change the area ratio and affect the accuracy of the method. Liang et al. [21] extensively studied the phenomena of ion suppression in ESI, as well as ion enhancement in APCI, while monitoring selected ions of the analytes and their respective labeled internal standards during the analysis of nine different drugs. The authors showed that ion suppression in ESI mode was due to the co-elution of the labeled internal standards with the analyte. Additionally, other factors that could cause ion suppression, such as the analyte's structure and concentration, matrix effects, and flow rate, were investigated apart from the ESI data. The authors observed that seven out of the nine analytes and their corresponding co-eluted labeled internal standards showed ion enhancement. This mutual ion suppression or enhancement between the analyte and the labeled internal standard may affect the sensitivity, linearity, accuracy, and reproducibility of quantitative analysis using LC/MS or LC-MS/MS. However, the authors concluded that their calibration curves were linear and the response factor was constant when an appropriate concentration of the internal standard was added to the desired calibration sample ranges. Remane et al. [22] extensively investigated the ion suppression and enhancement effects of fourteen different SIL internal standards in the presence of their native analogues using APCI and ESI. Multi-analyte quantification (of different classes of drugs) measured during a single run is a common procedure in many clinical and forensic toxicology labs. The authors found that both ion suppression and enhancement were influenced by the concentration of the native analyte, with ion suppression increasing with the concentration of the analyte, particularly for ESI analysis. However, ion enhancement effects were observed in solutions prepared in methanol and analyzed using APCI, with one exception, which occurred when plasma extracts were used under these conditions. Eleven SIL internal standards showed relevant ion suppression in ESI mode, but only one analyte showed suppression effects when APCI was used. The authors concluded from this study that researchers should ensure that the internal standards selected for multi-analyte quantification in matrix samples are free of ion suppression and enhancement effects to avoid incorrect quantification. If not, a different ionization technique should be considered.

Conclusion

SIL internal standards have distinct advantages and disadvantages; however, these materials still play a vital role in analysis. SIL internal standards are the first choice for quantitative bioanalytical LC/MS assays, as they generally produce better results. Despite this, SIL internal standards may not necessarily be appropriate for all quantitative bioanalytical methodologies using LC/MS. D-labeled internal standards can behave differently from their native analytes, sometimes displaying different retention times and recoveries.
In addition, the use of SIL internal standards, being structurally and chemically nearly identical to their corresponding analytes, may inadvertently cover up problems in the assay, such as stability, recovery, and ion suppression issues. While SIL internal standards are useful in many aspects of validation experiments for different analytical methods, as well as for routine analysis, these compounds are not always available or can be very expensive, in which case structural analogues can be useful alternative standards [23].
HUMAN LABOR AND ITS IRREGULAR NATURE AS A SOCIAL AND AXIOLOGICAL VALUE

The intention of the authors of this publication was to present reflections on human labor as a social and axiological value, as well as to show various aspects resulting from its irregular nature. These reflections show that this type of working time may be a kind of incentive for the employee and a factor motivating him to increase his effort and improve efficiency. In addition to the above-mentioned issues, the authors present selected theoreticians' perspectives related to human work in various historical and cultural periods. Moreover, the presented analyses show that a person's labor may constitute a distinctive social and psychological value. The authors' intention was also to draw the reader's attention to proper time management and methods of effective time management. Learning about these methods and implementing them in the employee environment will allow for more effective functioning and rational management of working time, as well as of any other type of time. Finally, the so-called circle of time management rules, consisting of six elements, is presented.

Statement of the problem in a general form and its connection with important scientific or practical tasks. The social environment and social reality are related to human work, which is a specific field of scientific research characterized by constant transformation. Since ancient times, human labor has remained an extremely interesting topic both in theory and in life practice. Labor as such has changed over the centuries, and so has its understanding and the meaning given to it.

Analysis of the latest studies and publications, on which the authors rely, which consider this problem and approaches to its solution. The presented perspectives on understanding human work, and the fact that it has accompanied man from the very beginning of his existence, have resulted in an ongoing discourse bringing together various theorists dealing with this issue. The studies of Bera R. [1] and Kwiatkowski S. [2] show that labor has become an integral part of our lives; thanks to it, we meet our needs, ambitions, and aspirations. It is slowly becoming the highest value, even a luxury good (due to the rationalization of employment and the development of technology, robotics, and automation), the basic meaning and purpose of human activity, and even a great addiction. Wołk Z. [3], Rycak M. [7; 10], and Machol-Zajda L. [9] share the view that labor is a source of satisfaction and a criterion of desired behavior and social prestige; moreover, they are inclined to attribute to the work process the rank of a rudimentary element of mental and social health. Further development of the theory of labor productivity based on the improvement of time management is found in the works of Covey S. [11], Doran G. [12], and Seiwert L. [13], and the implementation of the principles of social responsibility in modern business in Ushenko N., Blyzniuk V., Dniprov O., Ridel T., and Kurbala N. [4].

Formulation of the goals of the article (statement of the task). The purpose of the article is to present reflections on human labor as a social and axiological value, as well as to show various aspects arising from its irregular nature.

Presentation of the main research material. Despite the ongoing processes in the labor environment resulting from civilization changes, globalization, and dislocation processes, R. Bera, recalling the words of H. Selye, notes in one of his studies that "(...)
labor remains the main field of activity, without which it is difficult to imagine human life. (...) It gives the individual the opportunity to pursue his own interests, talents and skills, and dynamizes his individual physical, intellectual, spiritual, cultural and moral potential" [1, p. 24-25]. This is why modern society is called a working society. In turn, S.M. Kwiatkowski argues that "Labor and profession in the industrial era (the second wave civilization, as defined by A. Toffler) have become uniquely understood determinants of the way of life, action, and human development, as well as the basis for assessing people in everyday interpersonal contacts. Who is who determines our attitude towards other people" [2, p. 53]. Z. Wołk, on the other hand, refers to labor through the prism of its functions, writing: "Professional labor is a significant form of human activity due to the fact that it fulfills the following functions: it is a form of human activity, determines his professional and social position, and is a source of income enabling the fulfillment and development of needs" [3, p. 62].

When defining the concept of labor, it is worth quoting the sentence of Cz. Strzeszewski, who states: "Labor is a free, although naturally necessary, human activity, arising from a sense of duty, combined with effort and joy, aimed at creating socially useful spiritual and material values." It is worth mentioning in this context that labor has many important features, namely [2, p. 25]:
- it is a natural need and a source of satisfaction of individuals,
- it is a moral value and a source of many other values,
- it is a specific form of survival of individuals and groups,
- it is the basis for social integration,
- it has therapeutic significance,
- it is a key factor in good quality of life and health,
- it may be a burden and a cause of suffering for individuals,
- it does not always have market value (non-market labor, e.g., working at home),
- it is the basis of income, existence, and development of employees and their families.

John Paul II drew attention to this very important aspect, among others in his encyclical Laborem exercens, in which he emphasizes the problem of human subjectivity and man's primacy in relation to things. In the same encyclical, the Pope states: "(...) labor is the basic dimension of human existence on earth (...)". "There is no doubt that human labor has its ethical value, which is directly related to the fact that the one who performs it is a person, a conscious and free being, i.e., a subject who decides about himself." It follows that man is the source of the ethical value and dignity of work. "The basis for determining the value of human labor is not primarily the type of activity performed, but the fact that the one performing it is a person. The sources of labor dignity should be sought not primarily in its objective dimension, but in its subjective dimension." Therefore, research in the field of the social responsibility [4] of the person, business, and the state, which deepens the understanding of the motivation of human behavior and labor activity, is currently also relevant.

Irregular working hours are referred to as task-based working hours, namely the working time of an employed person which is expressed primarily in the number of tasks [5]. However, in this quantitative working time distribution, there are also components of the distribution of working hours. As A. Kamińska emphasizes, the employee's tasks must be adequate and consistent with the five-day working week [6, p. 814],
and statutory standards relating to working time (40 hours per week) should be taken into account [5]. The essence of irregular working hours is the fact that the employee can independently allocate the number of tasks entrusted to him within a specific time. Rycak M. points this out, among others, stating that "Task-based working hours are therefore a unique, flexible type of working time" [7, p. 132].

Therefore, the analysis of the references shows that working hours may be determined by the size of tasks. This means that working time is the time necessary to perform the tasks entrusted to the employee. According to the definition written in the encyclopedia of management, we are then dealing with a task-based working time system [8]. It is worth mentioning that the irregular working hours system does not apply when the employee performs his work in the plant and the effect of his actions is subject to constant control by his superiors [9, p. 60-61]. We may use a task-based working time system in justified cases. These are [5]:
- the type of job,
- the organization (if the employees do not have rigid time standards for staying at work),
- the workplace,
- work whose performance usually depends on the personal involvement of the employee (e.g., creative work).

The analysis of the references shows that such an employee is not bound by specific working hours, but his working time is bound by the so-called daily and average weekly norms. A given employee decides how many hours and on what days he or she works, and his or her goal is to prepare and complete a given task. It is also worth mentioning that, in accordance with Art. 140 of the Labor Code [5], the employer introduces irregular working hours after agreement with the employee.

When analyzing irregular working hours, it should be noted that the issue of controlling employees, understood as focusing on their presence, fades into the background. Focusing on presence may be a factor that demotivates effective effort, as the employer must then devote most of his time to checking the attendance and activities of his employees. During normal working hours, employees divide their work into longer stages. Instead of trying to do everything as quickly as possible, they simply make sure they are "seen at work". Meanwhile, showing up at the company does not mean that you work effectively. Standardized working hours only make sense when you deal with, for example, external clients. Wherever possible, employees should be held accountable for tasks over time, not time per task. However, if the company has many employees and complicated procedures are involved, irregular working hours may lead to chaos. Therefore, such an undertaking should be carefully considered. It may also be worth establishing certain specific hours at which all employees are expected to be present.

It follows that irregular working hours are a kind of incentive and motivator for increased employee effort, provided self-discipline is implemented into the employee's schedule. After analyzing the references relating to time management, it is worth mentioning that when an employee is allowed to work in an irregular system, one must avoid the mistake, typical of some employers, of excessive concentration on the so-called activity of their employees. Some employers want employees to send them information every day (e.g.,
via text messages or e-mails) on what stage their task is at, what they have to do, how many phone calls they have had, and how many contacts they have made. We do not think this is an effective way to encourage work. Rather, it is a classic waste of time for both our subordinates and ourselves, which additionally confirms us in the blissful belief that we are working and have everything under control.

It is also worth mentioning attempts to "manage employees' time". These attempts stem from several misconceptions:
- the boss knows better what his employee should do;
- the boss has the right to demand that the subordinate fulfill his various (not always important) requests.
This employer's policy does not allow people to truly manage their time or even manage their own working time independently. Yet employers should rather care about their subordinates being able to manage their own time; this definitely improves the quality of their work. The employer may require them, for example, to record their daily activities and the course of their working day. This is not about control, but about possible self-assessment and help in better organizing one's time, or managing oneself in time. Analyzing the category of time from a sociological and cultural perspective, it should be mentioned that modern people often cannot properly and rationally use and manage time. It is one thing if the employer agrees on irregular working hours; it is worse when he imposes such a work system on us and we are unable to:
- organize it well,
- clearly define the tasks that we have to perform on a given day,
- determine our own needs,
- formulate goals,
- set priorities,
- motivate ourselves internally to work.
In such a situation, we lose valuable moments that should be spent on achieving goals.

It is worth noting that poor time management and the neglect of important matters result in serious imbalance for many people. Labor often consumes a disproportionately large part of their lives, which affects family and neglected friends, not to mention recreation and sports. The paradox is that the work that absorbs and overwhelms us is probably neither as efficient nor as effective as it would be if we organized our lives better. If we often work late or take work home, if we do not even take breaks for lunch, if we are constantly bothered by too many responsibilities, or if we are always in a hurry to meet a set deadline, we should find a moment to calmly think about organizing our own work. You simply cannot maintain productivity working ten hours a day, six days a week. After all, there is a limit to the time during which you can still work effectively. It is much more important to make better use of your time than to try to work even longer.

In order to function more effectively and learn to manage our time properly, especially if we have irregular working hours, we should remodel our behavior and try to implement effective time management methods. These include (see also the sketch after this list):
1) The 60/40 rule: a realistic planning method, in which only 60% of the time of the entire day is planned, with time reserved for unplanned activities in daily life.
2) The ALPEN method, whose first step is A (Aufgaben): a to-do list of all planned activities, tasks, and meetings.
3) Parkinson's law: the more time we allow for performing a given job (activity), the more time it will take us.
4) The Pareto principle: 80% of the results come from 20% of the expenditure, which means that with less effort you can achieve much better results. You should focus on activities that bring maximum results. The formulated goal should be specific, measurable, achievable, realistic, and time-related (SMART) [12].
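As a small illustration of the first of these rules, the 60/40 split of a working day can be computed directly. A minimal sketch; the eight-hour day is only an example:

```python
def plannable_minutes(workday_hours=8.0, planned_share=0.6):
    """60/40 rule: schedule only 60% of the day; keep the remaining
    40% as a buffer for unplanned and spontaneous activities."""
    total = workday_hours * 60.0
    return planned_share * total, (1.0 - planned_share) * total

planned, buffer = plannable_minutes()
print(f"plan {planned:.0f} min of tasks, hold {buffer:.0f} min in reserve")
```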
It is also worth analyzing the circle of time management rules, which is constituted by six elements [13, p. 41-43] (see Fig. 3; for each stage the figure lists working techniques and the expected successes and results: for Stage 2, Planning, the techniques include annual, monthly and weekly plans, time management principles, the above-mentioned ALPEN method, and working with a schedule, while the expected successes and results across the stages include preparation to achieve goals, optimal division and use of available time, reduction of time spent on subsequent activities, checklists and ready-made forms, faster reading, better organization of conferences, setting meeting times, protection against disruptions, fewer interruptions at work, and a reduced number of documents [13]).

Conclusions from this study and prospects for further research in this direction. The intention of the authors of this publication was to present reflections on human labor as a social and axiological value, as well as to show various aspects resulting from its irregular nature. The above reflections show that this type of working time may be a kind of incentive for the employee and a factor motivating him to increase his effort and improve efficiency. In addition to the above-mentioned issues, the authors presented selected theoreticians' perspectives related to human work in various historical and cultural periods. Moreover, the presented analyses show that a person's labor may constitute a distinctive social and psychological value. The authors' intention was also to draw the reader's attention to proper time management and methods of effective time management. Learning about these methods and implementing them in the employee environment will allow for more effective functioning and rational management of working time, as well as of any other type of time. Finally, the so-called circle of time management rules, consisting of six elements, was presented.
The kinetic Monte Carlo simulation scheme of the homoepitaxial growth of GaAs(001) for heterostructural growth on GaAs(001) substrate

A simulation scheme for the heterostructural growth of compound semiconductors is presented, based on the kinetic Monte Carlo method. The scheme is designed to be as simple as possible in order to apply it to any heteroepitaxial growth on a GaAs(001) substrate. The parameters used in the simulation are determined with first-principles calculations so as to reproduce experimental RHEED intensity curves for homoepitaxial growth of GaAs(001).

Introduction

Nowadays, the GaAs(001) surface is very important as a substrate for quantum dot (QD) growth with InAs. In order to use quantum dots of the InAs/GaAs(001) system for electronic and optical devices, it is important to control the size and position of the quantum dots on the atomic scale. However, such control is still very difficult in real laboratories. In order to control the size and location of QDs, optimization of the heteroepitaxial growth conditions is important. Thus, we should investigate the mechanism of QD formation. Though it is known that the strain of InAs layers on the GaAs substrate is significant in determining QD size, we should also know the mechanism that determines the location of QDs on the atomic scale. Using in situ fast-scan STM [1], it is becoming possible to observe the growth process of QD formation directly in laboratories. Thus, in order to investigate the mechanism that determines the location of QDs, a growth simulation for heteroepitaxial growth on the InAs/GaAs(001) surface is required as a reference for experimental investigation. We can investigate an area of less than 1 nm × 1 nm very accurately by using first-principles calculations. However, since the diffusion length of adatoms on the surface is longer than the size of such an area, first-principles calculations alone are not enough to investigate epitaxial growth phenomena. One very powerful computational method for investigating epitaxial growth is the kinetic Monte Carlo (kMC) method [2]. kMC is a kind of Monte Carlo simulation used to investigate the time development of a system having multiple dynamical processes using random numbers. kMC is well known from studies of the growth of Si(001) [3] and GaAs(001) [4-8]. The purpose of this paper is to present a similar kMC simulation scheme for the heteroepitaxial growth of III-V compound semiconductors.

Model

For the homoepitaxial growth of GaAs(001), Itoh [8] presented a very complicated simulation model based on the zincblende (001) structure. However, his approach is constrained to only the β2(2×4) reconstructed structure of the GaAs(001) surface. Such an overly detailed approach would not be adequate for the purpose of simulating heteroepitaxial growth on a GaAs(001) substrate. Even for homoepitaxial growth of GaAs(001), the well-known reconstructed structures such as β2(2×4), c(4×4)α, c(4×4)β, c(8×2), (4×6), etc., appear only after growth has stopped. Since we do not know much about the detailed atomic processes of heteroepitaxial growth, it is better to assume in the simulation model that the lattice structure is simply zincblende without any reconstruction. For adatom dynamics, anisotropy of diffusion and anisotropic incorporation into islands can be included in the simulation as a hopping barrier energy from one site to another. The hopping barrier energies for Ga and As should be tuned to reproduce the RHEED intensity oscillations observed in experiments.
Heteroepitaxy can be treated as a simulation with three atomic species: Ga, As, and In. The strain effect between the substrate and the adlayer can be included in the hopping barrier energy. In this article, we present a simulation model for heteroepitaxial growth on a GaAs(001) substrate with a set of adjusted parameters for Ga and As which can reproduce surface step density or RHEED intensity oscillation curves for homoepitaxial growth of GaAs(001).

Model for homoepitaxial growth of GaAs(001)

In order to find the best conditions for epitaxial growth, the dynamical behavior of Ga and As atoms on the surface is required. This dynamical behavior can be simulated using the kinetic Monte Carlo (kMC) method. In the kMC method, we can rather easily simulate the time evolution of the growth as a stacking of atomic processes. The key parameters in the kMC method are the hopping barrier energies of each atom from a site to a neighboring site on the surface. The algorithm of the kMC method was presented by Bortz et al. [9], and it was applied to MBE by Maksym [2]. The advantage of this algorithm is that we can simulate time-dependent phenomena consisting of several time-dependent events occurring in parallel. Under thermodynamic equilibrium, the migration rate R is

R = R0 exp(−E_b / kT),

where R0 is the prefactor, k is the Boltzmann constant, T is the substrate temperature, and E_b is the barrier energy for an adatom hop to a neighboring site. E_b should be determined to reproduce experiments. Typically, the prefactor R0 is taken to be nearly the inverse of the lattice vibration frequency. The rate of arrival of atoms on the surface is also determined. The summation of the rates over all migrations and arrivals gives the total event rate. At each step, the event occurring in that step is chosen using a random number. In this study, the model of the kMC simulation is based on the work of Kawamura [3], who employed a realistic simulation scheme with the diamond-structure (001) surface instead of the simple SOS model. Here we extend his model to the zincblende-structure (001) surface. The extension mainly consists of using multiple atomic species. The barrier energy is defined by the environment of the atom, so that the barrier energy for an isolated adatom on the terrace differs from the barrier energy of an atom incorporated at a step edge. For the case of homoepitaxial growth of GaAs(001), the migration barrier energies for Ga and As are assumed to be defined by the numbers of first and second nearest atoms of each atom, in order to compare them with first-principles calculations in the near future. Thus, the barrier energy is defined as

E_b = n1 E1 + n2 E2,

where E1 and E2 are the binding energies for the first and second nearest atoms, and n1 and n2 are the numbers of occupied first and second nearest atomic sites during growth. Since we should treat Ga and As individually in the simulation, we consider the barrier energy for both Ga and As atoms. E_Ga-As is the effective Ga-As binding energy for hopping. The barrier energies due to the second nearest neighbours are different for Ga and As: E_Ga-Ga is the Ga barrier energy due to second-nearest-neighbour Ga atoms, and E_As-As is the As one. It should be noted that the energies E_Ga-As, E_Ga-Ga, and E_As-As do not correspond directly to bond-breaking energies, because the migrating atom is still adsorbed on the surface even at the highest barrier energy position.
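The event-selection loop described above (the algorithm of Bortz et al. [9]) can be summarized in a few lines. A minimal sketch assuming a flat event list and Arrhenius rates; the function and variable names are illustrative, the two barrier values reuse numbers quoted later in this paper, and the temperature corresponds to the 556 °C substrate used below:

```python
import math, random

KB = 8.617333e-5  # Boltzmann constant in eV/K

def kmc_step(events, rng, R0=1.0e13, T=829.0):
    """One kMC step: pick an event with probability proportional to
    its rate, then advance the clock by an exponential waiting time.

    events: list of (label, barrier_eV) for every possible hop.
    """
    rates = [R0 * math.exp(-eb / (KB * T)) for _, eb in events]
    r_total = sum(rates)

    chosen = events[-1][0]              # fallback guards float rounding
    target = rng.random() * r_total
    acc = 0.0
    for (label, _), r in zip(events, rates):
        acc += r
        if acc >= target:
            chosen = label
            break

    # 1 - random() avoids log(0); waiting time ~ Exp(r_total).
    dt = -math.log(1.0 - rng.random()) / r_total
    return chosen, dt

rng = random.Random(0)
events = [("Ga hop", 0.928), ("As hop", 0.728)]
print(kmc_step(events, rng))
```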
For the GaAs(001) surface, especially the β2(2×4) surface, the anisotropy of islands during growth is well known, so we should include anisotropy of the movement and incorporation of adatoms. The anisotropy observed in the morphology has two causes: dimer-dimer correlation and anisotropy of the hopping rate itself. The dimer-dimer correlation can be included in the same way as in the model of Kawamura [3] for the Si(001) surface: the dimer formation energy E_2D and the dimer-dimer correlation energy along the dimer row direction, E_2DR, are included. The intrinsic anisotropy of adatom hopping on a terrace is also included, as E_Ga_anisotropy and E_As_anisotropy, where the barrier energy is determined for each hopping path. This was not included in the model of Kawamura [3]. In our kMC model, the rate of migration is determined independently for each migration path, in contrast to the model of Kawamura [3]. Thus, the hopping barrier energy for Ga and As is assumed to be as follows:

E_Ga = n_1Ga E_Ga-As + n_2Ga E_Ga-Ga + n_2DGa E_2DGa + n_2DRGa E_2DRGa + E_Ga_anisotropy
E_As = n_1As E_Ga-As + n_2As E_As-As + n_2DAs E_2DAs + n_2DRAs E_2DRAs + E_As_anisotropy

Parameter adjustment for the kMC model

For the surface of GaAs(001)-β2(2×4), the anisotropy of the islands during growth is known to be very strong. The islands elongate along the dimer direction, in contrast to the Si(001) surface, where islands with a similar dimer structure elongate along the dimer row direction. Because Ga layers and As layers are treated separately in the zincblende (001) surface structure, the mixed-dimer structure of the c(4×4) surface cannot be modeled in our simulation. In order to reproduce the anisotropic island growth elongated along the dimer direction for the GaAs(001)-β2(2×4) surface, we set E_2DAs > 0 and E_2DRAs = 0. Under this assumption, only the anisotropy of the islands is included, and the detailed dimer formation and the atomic trough appearing in the β2(2×4) structure are excluded. Therefore, we can also apply our simulation model to the As-dimer c(4×4) structure. In our simulation model, we assume that there are very local and very temporary Ga-rich regions on the surface where As adatoms diffuse and adsorb. Since local and temporary structures during growth are not required to be thermodynamically stable, because of the thermally non-equilibrium conditions, quasi-stable structures can appear. We assume that a local and temporary Ga-rich region can consist of two or three Ga dimers. We assume that the hopping motion of As on such a Ga-rich region can be emulated by the hopping motion of an As adatom on the Ga-terminated GaAs(001)-β2(4×2) surface, which is considered to have a similar anisotropy to GaAs(001)-β2(2×4). In a kMC simulation, we must determine the barrier energies by a trial-and-error method or obtain them from other methods such as first-principles calculations. In this paper, we determine the barrier energies by trial and error, adjusting the surface step density curve to the RHEED intensity oscillation curve measured in experiment. In a recent study [10], the surface step density oscillation curve was shown to correspond closely to the RHEED intensity oscillation curve, though one should be careful about the diffraction condition, which can affect the relative phase of the RHEED oscillation. E_Ga_anisotropy and E_As_anisotropy are determined directly from first-principles calculations: E_Ga_anisotropy is 0.3 eV and E_As_anisotropy is 0.2 eV for the dimer row direction.

Model for heteroepitaxial growth of InAs/GaAs(001)

To apply the kMC simulation to heteroepitaxial growth, we should treat multiple atomic species in the calculation.
Since the crystal structure itself is the same for InAs and GaAs, the atomic site definition is the same zincblende (001) surface with a slightly different lattice constant. Namely, the hopping barrier energies for In, Ga, and As adatoms are defined as follows:

E_In = n_1In E_In-As + n_2In-In E_In-In + n_2In-Ga E_In-Ga + n_2DIn E_2DIn + n_2DRIn E_2DRIn + E_In_anisotropy
E_Ga = n_1Ga E_Ga-As + n_2Ga-Ga E_Ga-Ga + n_2Ga-In E_Ga-In + n_2DGa E_2DGa + n_2DRGa E_2DRGa + E_Ga_anisotropy
E_As = n_1As-In E_As-In + n_1As-Ga E_As-Ga + n_2As E_As-As + n_2DAs E_2DAs + n_2DRAs E_2DRAs + E_As_anisotropy

Since the substrate GaAs(001) surface has anisotropy, we assume that E_2DAs = 0 and E_2DRAs > 0. Since the InAs wetting layer on GaAs(001) has no anisotropy of island growth, we can assume that E_2DAs = 0 and E_2DRAs = 0 there. In the real simulation, the diffusion of In adatoms on the InAs wetting layer on GaAs(001) and on the intrinsic InAs(001) surface is predicted by first-principles calculations to be different [11]. Thus, the hopping barrier energy for an In adatom should be calculated with different parameters for an In adatom on the wetting layer and on a thick InAs island (quantum dot).

Comparison with experiment

We adjust the parameters for Ga and As as a basis for the heteroepitaxial growth simulation. In Fig. 1, we show the adjusted negative surface step density curves for homoepitaxial growth of two vicinal surfaces of GaAs(001)-β2(2×4), vicinal toward the [110] and [01̄1] directions (the A-surface and B-surface), where the substrate temperature is 556 °C and the supplied beam intensities are set to 0.4 ML/sec for Ga and 2.0 ML/sec for As, assuming As2. The corresponding experiment is the RHEED intensity oscillation curves measured by Shitara et al. [12], where the beam intensity is 0.4 ML/sec for Ga and 2.0 ML/sec for As using an As2 beam source. The typical difference between the A-surface and the B-surface is that growth is more step-flow-like for the B-surface, especially at 556 °C. In other words, the oscillation is larger for the A-surface. The other important feature is that the oscillatory behavior appears for up to three periods. The oscillation strength decreases with increasing growth time, and the oscillation seems to be very weak at the fourth period. The curves shown in Fig. 1 are those best adjusted to the RHEED intensity oscillation curves of ref. 12. The parameters used in Fig. 1 are shown in Table 1. This set of parameters can also be used in heteroepitaxial growth based on the GaAs(001)-β2(2×4) substrate surface.

Comparison with first-principles calculations

In our kinetic Monte Carlo scheme, the adjusted hopping barrier energy for Ga on the truncated As-terminated GaAs(001) surface is calculated using Table 1 as 2E_Ga-As + 4E_Ga-Ga = 0.928 eV. Similarly, the adjusted barrier energy for an As adatom on the truncated Ga-terminated surface is 0.728 eV. This agrees with first-principles calculations [7,13,14] in that As adatoms are more mobile than Ga adatoms. In first-principles calculations, the GaAs(001) surface is not the truncated surface but a reconstructed surface. For example, under slightly As-rich conditions, the most stable reconstructed surface structure is known to be the GaAs(001)-β2(2×4) surface, where the easiest diffusion path is in the trough of this reconstructed structure along the As dimer direction. The diffusion barrier energy along this path is 1.2 eV in an earlier first-principles calculation [15].
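The three formulas above can be transcribed directly into a barrier-energy routine. A sketch; the binding-energy values below are placeholders rather than the published Table 1, and the anisotropy term is applied per hopping path as stated in the text:

```python
# Placeholder binding energies in eV -- not the published Table 1 values.
E = {("In", "As"): 0.15, ("In", "In"): 0.10, ("In", "Ga"): 0.12,
     ("Ga", "As"): 0.172, ("Ga", "Ga"): 0.146, ("Ga", "In"): 0.12,
     ("As", "Ga"): 0.172, ("As", "In"): 0.15, ("As", "As"): 0.096,
     "2D":    {"In": 0.0, "Ga": 0.0, "As": 0.05},
     "2DR":   {"In": 0.0, "Ga": 0.0, "As": 0.0},
     "aniso": {"In": 0.0, "Ga": 0.3, "As": 0.2}}

def barrier(species, n1, n2, n2d, n2dr, along_dimer_row):
    """Hopping barrier for an In, Ga or As adatom.

    n1, n2: dicts of first- and second-nearest-neighbour counts by
    species; n2d/n2dr: dimer and dimer-row correlation counts.
    """
    e = sum(c * E[(species, s)] for s, c in n1.items())
    e += sum(c * E[(species, s)] for s, c in n2.items())
    e += n2d * E["2D"][species] + n2dr * E["2DR"][species]
    if along_dimer_row:                  # anisotropy applies per hop path
        e += E["aniso"][species]
    return e

# Ga adatom on a truncated As-terminated terrace: 2 As first neighbours,
# 4 Ga second neighbours, hopping along the dimer row direction.
print(barrier("Ga", {"As": 2}, {"Ga": 4}, 0, 0, along_dimer_row=True))
```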
The hopping barrier energy for a Ga adatom in the trough of the GaAs(001)-β2(2×4) surface is calculated as 2E_Ga-As from the two As atoms forming the As dimer at the bottom of the trough, 4E_Ga-Ga from the four Ga atoms under the As dimer in the third layer, and 2E_Ga-Ga from the Ga atoms forming the wall of the trough at the same height as the Ga adatom in the trough. The calculated value for the Ga adatom in the trough is 1.22 eV. Similarly, we obtain 0.92 eV for an As adatom in the trough of the Ga-terminated GaAs(001)-β2(4×2) surface. The first-principles calculation gives 1.1 eV [7]. Therefore, the parameter set of Table 1 gives diffusion barrier energies similar to those of first-principles calculations. Another example is the barrier energy for an As adatom on the GaAs(001)-ζ(4×2) surface, which was formerly the accepted reconstructed surface structure of GaAs(001) under Ga-rich conditions. The barrier energy for an As adatom on this surface from a first-principles calculation [14] is 0.5 eV along the [01̄1] direction. The most stable position for the As adatom is the AA site in ref. 14. At the AA site, the As adatom has one first-nearest-neighbour Ga atom, two second-nearest-neighbour As atoms in the topmost layer, and one second-nearest-neighbour As atom connected with the first-nearest Ga. Thus, using the parameter values in Table 1, we obtain a barrier energy for As of 0.461 eV. This value is also very similar to the value obtained from the first-principles calculation.

Extension to heteroepitaxial growth

After checking the diffusion barrier parameters for Ga and As for homoepitaxial growth of the GaAs(001) surface, simulation of heteroepitaxial growth on a GaAs(001) substrate becomes possible. For example, for InAs/GaAs(001), we should determine E_In-As, E_In-In, E_In-Ga, E_2DIn, E_2DRIn, and E_In_anisotropy. According to the discussion of the previous section, first-principles calculations of the diffusion barrier energy for In adatoms on the GaAs(001) surface [11] will be very helpful for determining these parameters.

Conclusion

The kinetic Monte Carlo simulation scheme for heteroepitaxial growth of InAs/GaAs(001) is presented. The parameters for the diffusion of Ga and As adatoms on the surface are set to be nearly equal to the results of first-principles calculations. The parameters are checked in that the RHEED intensity curves for homoepitaxial growth of two vicinal surfaces of GaAs(001) can be reproduced in the simulation. By adding diffusion parameters for the indium adatom, we will be able to perform heteroepitaxial growth of InAs/GaAs(001).

Table 1. The adjusted parameters for homoepitaxial growth of GaAs(001) used in the simulation of Figure 1.
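Although Table 1 itself is not reproduced in this extraction, the four barrier energies quoted above are mutually consistent and let one back out the pairwise binding energies, under the assumption (suggested by the geometry described in the text) that a truncated-surface site has two first- and four second-nearest neighbours and that the trough adds two more second-nearest wall atoms:

```python
# Inferred from the barriers quoted in the text; the published Table 1
# may round differently.
E_trunc_Ga, E_trough_Ga = 0.928, 1.22   # eV
E_trunc_As, E_trough_As = 0.728, 0.92   # eV

E_GaGa = (E_trough_Ga - E_trunc_Ga) / 2        # = 0.146 eV (trough wall)
E_GaAs = (E_trunc_Ga - 4 * E_GaGa) / 2         # = 0.172 eV
E_AsAs = (E_trunc_As - 2 * E_GaAs) / 4         # = 0.096 eV

# Cross-checks against the other two values quoted in the text:
print(E_trunc_As + 2 * E_AsAs)   # As in trough: 0.920 eV (text: 0.92 eV)
print(E_GaAs + 3 * E_AsAs)       # As at AA site: 0.460 eV (text: 0.461 eV)
```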
A Joint Deep Learning and Internet of Medical Things Driven Framework for Elderly Patients

Deep learning (DL) driven cardiac image processing methods manage and monitor the massive medical data collected by internet of things (IoT) based wearable devices. A joint DL and IoT platform, known as Deep-IoMT, extracts accurate cardiac image data from noisy conventional devices and tools. Besides, smart and dynamic technological trends have caught the attention of every sector, including healthcare, which is enabled by portable and lightweight sensor-enabled devices. Their tiny size and resource-constrained nature restrict them from performing several tasks at a time. Thus, energy drain, limited battery lifetime, and a high packet loss ratio (PLR) are the key challenges to be tackled carefully for ubiquitous medical care. Sustainability (i.e., longer battery lifetime), energy efficiency, and reliability are the vital ingredients for wearable devices to empower a cost-effective and pervasive healthcare environment. Thus, the contribution of this paper is six-fold. First, a novel self-adaptive power control-based enhanced energy-aware approach (EEA) is proposed to reduce energy consumption and enhance battery lifetime and reliability. The proposed EEA and the conventional constant TPC are evaluated by adopting real-time data traces of static (i.e., sitting) and dynamic (i.e., cycling) activities and cardiac images. Second, a novel joint DL-IoMT framework is proposed for the cardiac image processing of remote elderly patients. Third, a DL driven layered architecture for IoMT is proposed. Fourth, a battery model for IoMT is proposed by adopting the features of the wireless channel and body postures. Fifth, network performance is optimized by introducing sustainability, energy drain, PLR, and average threshold RSSI indicators. Sixth, a use-case for cardiac image-enabled elderly patient monitoring is proposed. Finally, it is revealed through experimental results in MATLAB that the proposed EEA scheme performs better than constant TPC by enhancing energy efficiency, sustainability, and reliability during data transmission for elderly healthcare.

I. INTRODUCTION

Cutting-edge technologies such as deep learning (DL) and the internet of things (IoT) are bringing a revolution in cardiac image-driven elderly patient monitoring. Cardiac image processing approaches, in association with IoT driven portable devices, are promoting emerging and supportive real-time healthcare platforms at remote locations. In the meantime, electronics and wireless communication technologies have entirely reshaped the medical world by promoting intelligent and small sensors that can be used on or in the human body. The integration of these sensors with emerging healthcare technologies is the paradigm shift towards highly sustainable, smart, and pervasive medical cities and homes to serve elderly patients at remote locations [1], [2]. Body sensor networks (BSNs) are an instrumental and potential candidate for increasing research and development in the medical sector to further improve the healthcare platform.
Besides, BSNs comprise a large number of heterogeneous biological sensors, and these sensing nodes measure and wirelessly transmit abnormal changes in a patient's vital signs or physiological signals such as temperature, heartbeat, brain signals, and blood pressure, as shown in Fig. 1. We explain the architecture of BSN based smart and sustainable healthcare, in which wearable sensors sense the data and transmit it through the wireless channel to the base station (BS). Further, it is sent to the eHealth care centers where data servers are present and can be accessed to diagnose and monitor the patients. At present, it is essential to provide high-quality healthcare facilities due to the increase in population, chronic diseases, and lack of health awareness. Cardiac image, IoMT, and DL based applications in the healthcare industry are rapidly evolving due to state-of-the-art technological trends and practices. Besides, they provide ease and comfort with 24-hour medical facilities to everyone without any constraint on his/her normal daily life routine. However, due to their small size, light weight, and power-constrained nature, these devices face the severe problem of battery charge drain and hence shorter lifetime and lower energy efficiency. Many researchers have proposed distinct techniques/methods for energy optimization and battery lifetime extension, e.g., medium access control (MAC), physical layer, network topology-oriented, and transmission power control (TPC) methods. But a smart and sustainable healthcare platform is still a cornerstone to be developed. The main contributions of this research are:
• First, a novel self-adaptive power control-based enhanced energy-aware approach (EEA) is proposed to reduce energy consumption and enhance battery lifetime and reliability. The proposed EEA and the conventional constant TPC are evaluated by adopting real-time data traces of static (i.e., sitting) and dynamic (i.e., cycling) activities and cardiac images.
• Second, a novel joint DL-IoMT framework is proposed for cardiac image-driven remote elderly patients.
• Third, a DL driven layered architecture for IoMT is proposed; this helps in analyzing the medical image processing mechanism.
• Fourth, a battery model for IoMT is proposed by adopting the features of the wireless channel and body postures.
• Fifth, network performance is optimized by introducing sustainability, energy drain, PLR, and average threshold RSSI indicators.
• Sixth, a use-case for cardiac image-enabled elderly patient monitoring is proposed.
The rest of the sections are arranged as follows. Section II presents detailed related works. A novel joint DL-IoMT framework is proposed in Section III. Dynamic wireless channel modelling is addressed in Section IV. The system architecture with detailed functionality is presented in Section V. A DL driven layered architecture for IoMT is proposed in Section VI. Section VII proposes a battery model for IoMT. Section VIII proposes a novel energy-efficient algorithm. Experimental results are discussed in Section IX. Finally, Section X concludes the paper.

II. EXISTING WORKS

The most relevant research work is presented here. Gao et al. [1] propose an energy-saving scheme for medical images acquired through capsule endoscopy in BANs, which controls energy consumption by adaptively adjusting the transmission power, but they do not consider other parameters such as reliability and latency. Besides, their work does not consider the received signal strength indicator (RSSI) for system performance examination.
An adaptive TPC algorithm for energy saving in medical image-based health monitoring systems was presented in [2]; the authors used real-time channel datasets and found that the dynamic nature of the wireless link strongly impacts energy and reliability, but their proposed adaptive TPC method saves more energy by compromising reliability for healthcare applications. Given the sensitive nature of medical information, however, it is important to develop a reliable, sustainable, and delay-tolerant methodology. In [3], Obaidat et al. examine WBAN performance by capturing the packet reception ratio (PRR) and its concurrence with RSSI, building performance benchmarks for resource management. In healthcare, it is essential to analyze static on-body channel characterization and link quality for 2.4 GHz medical healthcare platforms [4]. Cheour et al. present an overview of the routing protocols and power management techniques for global and local systems [5]. Sodhro et al. present various power-efficient and battery charge optimization strategies for media transmission within a novel framework for heart-attack patients, but do not consider a TPC-enabled strategy [6]. Energy saving is the cornerstone of a sustainable and smart healthcare system adopting TPC-driven techniques [7], [8]. The adaptive energy-saving mechanism has more advantages than traditional methods in medical applications; besides, it has been tested on real-time datasets with dynamic TP levels [9]. Xiao et al. develop novel TPC algorithms with a vast experimental set-up for energy saving in BANs [10]. A unique technique for telemedicine systems that optimizes medical QoS could be useful in different medical scenarios [11]. Won et al. present a TPC based energy saving technique in wireless networks [12]. The introduction of the notion of an energy-aware and battery lifetime extension approach for wearable devices during media transmission in WBSNs plays a significant role [13], [14]. Sodhro et al. develop a battery-friendly strategy for charge optimization in wireless capsule endoscopy. All the aforementioned researchers mostly focus on energy-saving techniques using different methods in wireless systems, WBANs, and WSNs, but very few focus on energy optimization using TPC, and those that do adopt TPC approaches are often too simplified to consider real-time channel datasets for static and dynamic body postures together with network metrics such as standard deviation, packet loss ratio (PLR), and RSSI [15]. Chenfu et al. develop a novel energy-saving mechanism for transmission that works merely for the internal circuitry but not for other parts of the transceiver [16]. An innovative framework is proposed in [17]; it is based on four different methods and algorithms that jointly adjust the TPC and duty cycle of BSNs to optimize energy consumption. This framework is evaluated through Monte Carlo simulation, and the authors claim that it saves more energy at an acceptable PLR due to its self-adaptive nature. Youming et al. design a scheduling strategy based on a game hierarchy for resource allocation in wireless communication [18]. Two different algorithms for IoT based smart cities are stated in [19]. The first algorithm adaptively adjusts the bandwidth and power of the tiny nodes by a hybrid approach to optimize energy consumption, while the second algorithm controls the delay during media transmission. Moreover, Tanwar et al. devise an IoT based smart home for elderly citizens [20].
Along with power-efficient communication in IoT, researchers across the globe have highlighted that medical data processing, data security, and the development of smart home automation systems are also key concerns for IoT [21]-[26].

III. PROPOSED DEEP-IOMT FRAMEWORK

The proposed framework comprises three essential parts. First, cardiac image and vital sign signal data analytics, which involves several wearable devices, i.e., edge devices, mobile cell phones, and sensor nodes. Second, deep learning (DL), which plays the key supporting role in examining the features and classes of the data in correlation with internet of medical things (IoMT) networks. Third, IoMT, a medical healthcare platform with a key focus on pervasive and smart healthcare (see Figure 4). IoMT is the network of wearable devices for classifying data patterns with a focus on error estimation, because DL techniques are intelligent and adaptive techniques for identifying distinctive and promising data types. IoT devices for wearable healthcare are the key role players for examining data and human nervous system signals such as vital sign signals. So, cardiac images, collected big data analytics, IoT, and DL are the key factors for wise and intelligent decision making.

IV. WIRELESS CHANNEL FEATURES

The performance of wireless links is examined by properly evaluating the received signal strength, which is the main parameter for analyzing cardiac image data quality to exploit the stability and reliability of the medical system. It is computed by taking an average over the incoming data packets while adopting sitting and cycling body features. RSSI is associated with the transmission power (TP) and distance, while here only TP is considered, with minimum and maximum levels of −25 dBm and 0 dBm, respectively (Figure 2 shows body posture detection in smart healthcare systems). It is assumed that if the RSSI value is −100 dBm then the packet will be dropped, which indicates the worst channel condition, and if a −88 dBm threshold is adopted then better link quality will be obtained. In this experimental set-up, real-time datasets of cardiac images and body postures from NICTA [2], [25] are considered, which support the measurement of path loss and the data analysis of static and dynamic body postures, respectively. A high frequency such as 2.4 GHz promotes a large PLR, unlike low-frequency bands. Besides, sitting and cycling body postures need different frequencies and hence yield smaller and larger PLR, as shown in Figure 2. In other words, it can be claimed that the channel features efficiently characterize the power and reliability requirements.

V. SYSTEM BLOCK DIAGRAM

Transmitter nodes generate the cardiac image and body posture data packets in a periodic fashion, storing them in a buffer before transferring them to the receiver node. The aggregated RSSI is estimated by considering the transmission power requirement. After the short inter-frame space period (pSIFS), the receiver node forwards the acknowledgment (ACK) to the transmitter node. We assume that all the packets are transmitted successfully to the receiver node, as shown in Fig. 3.

VI. PROPOSED DL DRIVEN LAYERED ARCHITECTURE FOR IOMT

In IoMT there is continuous transmission of media such as medical imaging and capsule endoscopy data among patients and doctors, so more charge is consumed. The main challenge for the recent, emerging, and innovative digital imaging world is the heterogeneous technological platform without a stable communication/content delivery environment.
Also, due to the lack of high interoperability among heterogeneous technological trends, there are chances of lower throughput and higher delay while transferring medical imaging information. For instance, IoT driven sensor devices are considered the paradigm shift transforming the landscape of medical imaging from patients' homes to hospitals. Also, feedback from physicians, patients, and medical staff/nurses will be transferred accordingly for proper examination and monitoring of critical events. The pervasive and smart medical platform is revolutionized by device-to-device (D2D) communication. IoT enabled internet of healthcare vehicles (IoHVs) are the backbone of ubiquitous medical care in urban and rural areas to facilitate end-users. These portable devices, on the one hand, bring convenience to the medical world while, on the other hand, they consume more battery charge and power and thus have shorter battery lifetimes. It is necessary to develop power and battery-charge aware methods in IoMT for facilitating the aging society at cost-effective rates, while medical imaging and media streaming contents are vital indicators for presenting a better and clearer picture of emergency patients. This section proposes the layered framework of IoMT, which is illustrated in Figs. 5(a) and (b). The proposed DL driven layered structure comprises four layers. The detailed explanation of the layers is as follows.

Layer 1: This layer defines the patient having wearable devices attached to the human body. The wearable devices take medical data such as ECG, temperature, and EEG from the patient. Even if the patient is in motion or sleeping, the devices measure the data whenever the medical condition worsens.

Layer 2: This layer defines the connectivity, i.e., how communication will occur. The patient can send the data to the doctor through Wi-Fi or Zigbee connections. The connectivity must be reliable so that the data is transferred properly. If the link is lost, it produces a delay in the treatment of the patient.

Layer 3: The rapid proliferation of IoMT devices is playing a remarkable role in collecting medical image data, because desktop computers are not efficient and accurate for data collection, clustering, and analysis. Besides, there are more chances of getting unfiltered and raw data. The medical cloud for storing the patient's data/information is one of the emerging healthcare entities for emergency content backup. That information will be used by physicians and hospital staff for predicting future medical image related diseases. So, this layer connects the patient data with doctors so that doctors can view that information and give proper treatment.

Layer 4: This layer defines the doctor's side, or hospitals, where a doctor can have access to the patient's medical data and records.

The proposed battery model enhances the battery lifetime of low power sensors in IoMT. The proposed algorithm improves the battery lifetime by taking the recovery effect of the battery into account and is a battery-aware method. The recovery effect of a battery is the process of giving some idle time to the battery so that the remaining charge can be utilized. The system model further explains the details of the model.

VII. PROPOSED BATTERY MODEL FOR IOMT

The transmission of human physiological signals, such as the electrocardiogram (ECG), blood pressure (BP), temperature, etc., through wearable devices is the emerging trend in today's pervasive healthcare sector.
The critical challenge is to optimize the battery charge, the power drain, and hence the lifetime of IoT-driven portable devices, because the small size and resource-limited nature of handheld devices make frequent replacement and recharging of the battery a cumbersome task. Moreover, the discharging process of a battery is non-linear. Battery behavior involves two factors: the rate capacity effect and the recovery effect. The rate capacity effect relates the maximum capacity of the battery to the load: a 1C rate means the battery delivers its full capacity of charge in one hour, and the C-rate is inversely related to time, so 2C corresponds to a half-hour discharge. Fig. 6 shows the C-rate versus the percentage of capacity. Rakhmatov presented an analytical battery method based on electrochemical reactions and the diffusion equation; in its standard form, the stored charge α, the battery non-linearity β, the current profile i(t) (mA), and the battery lifetime L are related by

α = ∫₀^L i(τ) dτ + 2 Σ_{m=1}^∞ ∫₀^L i(τ) e^(−β²m²(L−τ)) dτ,  (1)

while the battery cost function σ(t) over time t explains its features for computing the charge drain,

σ(t) = l(t) + u(t) = ∫₀^t i(τ) dτ + 2 Σ_{m=1}^∞ ∫₀^t i(τ) e^(−β²m²(t−τ)) dτ.  (2)

For easier understanding and analysis, the model is transformed into a discrete form by expressing the load as a series of current values I₁, I₂, …, I_N, where I_k denotes the current of the k-th task at time t_k with inter-arrival period Δ_k = t_{k+1} − t_k. The two key parts of the battery cost are the linear charge l(t) and the non-linear, non-negative unavailable charge u(t), which are consumed fully and partly, respectively, while transferring medical images to and from hospitals. If an idle time slot is introduced, the unused charge is converted back into available charge through the charge recovery principle. When the continuous operation of the battery is analyzed, the lifetime L is considered; once the unavailable charge exceeds the actually stored quantity, the charged battery state cannot be recovered. Because of the diffusion mechanism of the Li-ion battery, the discharge process is non-linear and the entire charge cannot be transferred to the load. During discharge, the charges attached to the battery electrodes are consumed first and are continuously replaced by charges farther from the electrodes, a process that continues until the electrode charges are completely depleted. The remaining charges far from the electrodes stay unusable and never reach the electrode surfaces. The idle time required to recover these unusable charges is known as the recovery effect, whose process is explained in Fig. 6. The battery can deliver energy because of the active elements attached to its electrodes. When all the active elements close to the electrodes are depleted, the remaining charge in the battery is still diffusing, so the battery cannot supply power to the load and effectively sleeps. If idle time is given, the active components move toward the electrodes and the battery can again supply maximum power until all involved elements participate. The battery lifetime can thus be extended by properly exploiting the recovery effect, which remarkably extends the lifetime compared with traditional methods.
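To make the discrete form of eqs. (1) and (2) concrete, the following minimal Python sketch evaluates σ(t) for a piecewise-constant load and searches for the lifetime L. The α, β, and load values are illustrative assumptions, not parameters taken from this paper; only the model structure follows the Rakhmatov formulation above.

```python
import numpy as np

def sigma(t, loads, beta, M=10):
    """Apparent charge drawn by time t (discrete Rakhmatov model, eqs. (1)-(2)).

    loads: time-sorted list of (t_start, t_end, current_mA) tasks.
    Returns linear charge l(t) plus unavailable charge u(t), in mA*s.
    """
    total = 0.0
    for t0, t1, i_k in loads:
        if t0 >= t:
            break                            # tasks are sorted; later ones inactive
        te = min(t1, t)                      # portion of this task completed by t
        total += i_k * (te - t0)             # linear term l(t)
        for m in range(1, M + 1):            # truncated series for u(t)
            b = (beta * m) ** 2
            total += 2.0 * i_k * (np.exp(-b * (t - te)) - np.exp(-b * (t - t0))) / b
    return total

def lifetime(alpha, loads, beta, t_max=3600.0, dt=1.0):
    """Smallest t with sigma(t) >= alpha, i.e. the battery lifetime L."""
    for t in np.arange(dt, t_max, dt):
        if sigma(t, loads, beta) >= alpha:
            return t
    return None

# Illustrative only: a 40 mA burst, an idle slot (recovery), another burst.
loads = [(0, 600, 40.0), (600, 900, 0.0), (900, 1800, 40.0)]
print(lifetime(alpha=40000.0, loads=loads, beta=0.3))
```

Note how the idle task contributes nothing to l(t) while the exponential terms of earlier bursts decay, which is exactly the recovery effect described above.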
Different types of medical imaging sensors worn on or implanted in the human body collect health data for examining and monitoring emergency events, such as brain tumors and endoscopy, in IoMT. These implanted sensors are Li-ion battery-powered systems that are tiny, lightweight, computationally simple, and of small charge-storage capacity [8]. Various research works have addressed battery charge-drain optimization and lifetime extension of the entire network by decreasing the energy consumption of the battery [18]. Self-recovery is the effect whereby a Li-ion battery, when given some period of idle time, can convert unusable charge into available charge [18]; by properly scheduling the recovery time, the working time of the battery and the node can be prolonged [18]. The recovery effect of the battery is modeled as a finite state machine, as the battery changes state to recover the available charge, with state transitions depending on the input voltage levels given to the system.

VIII. PROPOSED ENHANCED ENERGY-AWARE APPROACH
We propose a novel TPC-driven enhanced energy-aware algorithm (EEA) for cardiac image-based elderly patient monitoring systems. It adapts the transmitter power levels by considering the ACK from the receiver node and the temporal variations in the wireless channel. The proposed EEA is a reorientation of the adaptive power control algorithm of [2], but the two use different power-allocation strategies. The key inputs of the proposed EEA are the lowest (i.e., initial/first) and latest (i.e., most recent) RSSI samples, as shown in Fig. 7; both are considered because of the dynamic wireless channel and the algorithm's adaptive power-allocation mechanism. The weighted average RSSI and the threshold RSSI are denoted RSSI̅ and RSSI_th, respectively, where RSSI_th operates between the lower fixed threshold TR_L and the higher variable threshold TR_H^var. Wireless channel performance is categorized by assigning weights α₁ (good quality) and α₂ (bad quality). The change in TP level, the path loss, the transmitter-receiver distance, the RSSI deviation, the interference, and the fading are denoted ΔP, PL, d, S, I, and Fa, respectively. Eq. (3) computes the aggregate RSSI value by considering the various factors that affect the reliability of the wireless channel and hence the energy drain and battery lifetime of the sensor nodes. Eq. (4) allocates the power level according to the needs of the receiver node using the channel coefficients, the targeted RSSI, the path loss, and the RSSI variation with transmitter-receiver distance. Power levels are then adapted using eq. (5) according to the fluctuations in the wireless channel, the deviation in the RSSI value, and the receiver's requirement. The dynamic higher threshold is calculated from the fixed lower threshold and the RSSI variation, as in eq. (6), and the deviation in RSSI is calculated in eq. (7) from the total RSSI samples and the lower and higher thresholds. Sustainable and smart elderly healthcare is essential in today's medical world, and the lowest RSSI samples always help to recall lost recent RSSI samples reported in the ACK from the receiver. Constant TPC can achieve somewhat higher reliability than the proposed EEA, but the EEA compensates by adapting its RSSI threshold dynamically; a sketch of this adaptive loop follows.
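The sketch below illustrates one plausible reading of the EEA control loop. Since eqs. (3)-(7) are not reproduced in the text above, the weighting of the first/latest samples, the form of the deviation term, the variable higher threshold, and the 1 dB step size are all assumptions rather than the authors' exact formulation; only the TP range of −25 to 0 dBm and the −88 dBm lower threshold come from the paper.

```python
# Speculative EEA sketch: step transmit power up on weak links, down on strong ones.
POWER_LEVELS = list(range(-25, 1))           # TP range stated in the text, dBm

def weighted_rssi(first, latest, a1=0.7, a2=0.3):
    """Blend the first (lowest) and latest RSSI samples; a1/a2 are assumed weights."""
    return a1 * latest + a2 * first

def adapt_power(tp, rssi_samples, tr_low=-88.0):
    """One EEA iteration over a window of RSSI samples from the ACKs."""
    first, latest = rssi_samples[0], rssi_samples[-1]
    avg = weighted_rssi(first, latest)
    deviation = max(rssi_samples) - min(rssi_samples)   # stand-in for the eq. (7) term
    tr_high = tr_low + deviation                        # variable higher threshold
    if avg < tr_low and tp < max(POWER_LEVELS):
        tp += 1                                         # weak link: raise TP by one level
    elif avg > tr_high and tp > min(POWER_LEVELS):
        tp -= 1                                         # strong link: lower TP, save energy
    return tp

tp = -15
for window in [[-92, -90, -89], [-86, -84, -83], [-80, -79, -78]]:
    tp = adapt_power(tp, window)
    print(tp)                                           # -14, -15, -16
```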
One of the drawbacks of constant TPC is that its power adaptation mechanism is fixed and complex, which is not appropriate for emergency and delay-sensitive healthcare applications.

IX. EXPERIMENTAL RESULTS AND DISCUSSION
Experimental results of the proposed EEA and constant TPC are obtained using real-time cardiac image datasets from NICTA [21], with the average values of RSSI and TP considered for the static and dynamic body postures, i.e., sitting and cycling. We adopted 0.5 km/h and 1.5 km/h for sitting and cycling, respectively, assuming the confined mobility of elderly patients. Data packets are transmitted at a specific TP level every second, while the RSSI values of the transmitted signal are recorded at the receiver. Less power is drained during static postures and more during dynamic postures, owing to the deviation in the wireless link; in general, the dynamic body posture gives slightly higher RSSI deviation and PLR than the static one. Body postures are related to the channel characteristics, which strongly affect the performance of both the proposed EEA and the conventional TPC method. Figure 8(a), (b) presents the overall energy drain and PLR during the sitting and cycling scenarios for the proposed EEA and constant TPC, respectively. Both power drain and PLR are higher for cycling than for sitting. Moreover, constant TPC yields more power drain with less PLR, whereas the proposed EEA yields the opposite. Sustainability and reliability are both degraded by higher energy dissipation and PLR, and it is vital to tackle these for smart and pervasive healthcare. Fig. 9 presents histograms for the proposed EEA and the typical constant TPC method, showing that the former consumes less energy than the latter as the number of sensor nodes increases. Figure 10 presents the received RSSI values with the associated TP levels over a specific time interval for the proposed EEA and the constant TPC method under static and dynamic body postures. Figure 10(a), (c) reveals the power consumption of the proposed EEA and constant TPC for the static and dynamic postures: the proposed EEA keeps the TP level lower, at about −21 dBm, than the constant TPC method, which consumes more TP at about −15 dBm. The experimental results show that constant TPC exhibits a non-linear relationship between power drain and PLR, unlike the proposed EEA, which saves more energy with an acceptable PLR for smart and sustainable healthcare applications. The extracted results reveal that the proposed EEA is a strong candidate for an intelligent and sustainable pervasive healthcare platform. On the contrary, constant TPC does not follow the features of the wireless channel, so it consumes more power for less PLR and vice versa; in other words, conventional constant TPC is not appropriate for emergency and delay-sensitive medical applications. Power levels must be allocated fairly whenever the RSSI falls below the lower threshold, whereas in that situation the constant TPC method increases TP needlessly without accounting for the channel behavior. Hence, the proposed algorithm reduces the PLR with greater energy saving in both the sitting and cycling postures. Moreover, the proposed EEA adopts a varying higher threshold to track the RSSI and hence the channel fluctuation. Figure 10(b) and (d) reveal RSSI values of −87 dBm and −90 dBm for constant TPC and the proposed EEA, respectively.
The results show that the proposed EEA maintains a stable RSSI level with an acceptable PLR, unlike its counterpart, the constant TPC method, which shows a less stable RSSI level and lower reliability for both dynamic and static body postures, as given in Table 1. Table 2 presents the simulation parameters adopted for the Monte Carlo experimental setup used to obtain the desired results in terms of energy, sustainability, and reliability optimization. The proposed test-bed setup is a cornerstone for empowering the smart and pervasive healthcare platform. The experimental results reveal that the proposed EEA performs better at a reasonable PLR, unlike the constant TPC method. Therefore, we can claim that the proposed EEA reduces energy dissipation with acceptable PLR and high sustainability (i.e., battery lifetime), compared with constant TPC with its higher power drain, lower PLR, and low sustainability.

X. CONCLUSION AND FUTURE RESEARCH
Elderly patient monitoring through cardiac images, to portray a big, clear, and accurate picture of an emergency scenario, is vital for pervasive medical care. This paper contributes in four distinct ways. First, a novel self-adaptive power-control-based EEA is proposed to reduce energy consumption and enhance battery lifetime and reliability; the proposed EEA and the conventional constant TPC are evaluated using real-time data traces of static (i.e., sitting) and dynamic (i.e., cycling) activities and cardiac images. Second, a novel joint DL-IoMT framework is proposed for cardiac-image-driven monitoring of remote elderly patients. Third, network performance is optimized by introducing sustainability, energy drain, PLR, and average threshold RSSI indicators. Fourth, a use case for cardiac-image-enabled elderly patient monitoring is proposed. The proposed EEA is evaluated on real-time datasets of cardiac images and two body postures: case 1, static (sitting), and case 2, dynamic (cycling). Furthermore, the performance of resource-constrained sensor devices is examined by considering the average values of transmission power and RSSI threshold for both the proposed EEA and the traditional constant TPC, for pervasive and economical medical care. Extensive experimental results reveal that the proposed EEA enhances energy efficiency, reliability, and sustainability, and hence battery lifetime, unlike its counterpart, constant TPC. Hence, the proposed EEA is a suitable candidate for smart, sustainable, and reliable healthcare for elderly patients. In the near future, we will focus on proposing a cardiac pattern recognition platform in association with national healthcare and clinical sectors; developing secure as well as efficient elderly patient monitoring prototypes is also one of our future tasks.
Investigation of predictors for invasive pulmonary aspergillosis in patients with severe fever with thrombocytopenia syndrome
Patients with severe fever with thrombocytopenia syndrome (SFTS) have been confirmed to have immune dysfunction and are prone to invasive pulmonary aspergillosis (IPA), which is directly related to increased mortality. The aim of this study was to investigate the predictors for IPA in SFTS patients, with results expected to be helpful for early identification of IPA and initiation of anti-fungal therapy. The study reviewed laboratory-confirmed SFTS patients in two tertiary hospitals in Shandong province (Qilu Hospital of Shandong University and Shandong Public Health Clinical Center) from April 2021 to August 2022. The enrolled patients were further divided into an IPA group and a non-IPA group. Demographic characteristics, clinical manifestations and laboratory parameters of the IPA group and non-IPA group patients were analyzed and compared to identify the independent predictors for IPA by univariate analysis and multivariable logistic regression analysis. The sensitivity and specificity of the independent predictors were evaluated by receiver operating characteristic (ROC) curve analysis. In total, 67 SFTS patients were enrolled, with an average age of 64.7 (± 8.4) years. The incidence of IPA was 32.8% (22/67). Mortality in the IPA group was 27.3% (6/22), significantly higher than in the non-IPA group. Univariate analysis showed that uncontrolled diabetes, central nervous system symptoms, platelets < 40 × 10⁹/L, CD4+ T cells < 300/μL and CD8+ T cells < 400/μL were risk factors for the development of IPA. These factors were further analyzed by multivariable logistic regression analysis, which indicated that uncontrolled diabetes, platelets < 40 × 10⁹/L, CD4+ T cells < 300/μL and CD8+ T cells < 400/μL could be recognized as independent predictors for IPA in SFTS patients. In conclusion, IPA is a serious complication for SFTS patients and increases mortality; it is necessary to identify predictors of IPA early to improve the survival of SFTS patients.
Owing to the high mortality, SFTSV has been included in the list of priority target pathogens requiring urgent attention by the World Health Organization (WHO)5. SFTS patients can have multiple systemic complications, among which invasive pulmonary aspergillosis (IPA) is one of the most serious. IPA, a common type of invasive aspergillosis, usually occurs in immunocompromised patients with neutropenia, transplantation, hematological malignancy or long-term use of corticosteroids6. Severe SFTS patients have been confirmed to have immune dysfunction including leukopenia and reduction of immune cells7. Some studies have reported that SFTS patients are prone to IPA, which is directly related to increased mortality8-10. Therefore, early identification of IPA in SFTS patients is necessary. To improve the survival of SFTS patients, this study was performed to confirm the predictors for IPA in SFTS patients in the early stage of their disease. The results are expected to be helpful for the early identification of IPA and the initiation of anti-fungal therapy.
Methods
Study design.
To confirm the predictors for IPA in SFTS patients, we analyzed the demographic features, clinical manifestations and laboratory parameters of 67 laboratory-confirmed SFTS patients from two tertiary hospitals (Qilu Hospital of Shandong University and Shandong Public Health Clinical Center) in Shandong province between April 2021 and August 2022. This was a purely retrospective study: the clinical course, without any additional intervention, was reviewed to analyze the relationship between related risk factors and the occurrence of IPA. This study was approved by the Medical Ethical Committee of Qilu Hospital of Shandong University (2021-120), and written informed consent was acquired from every enrolled patient or their guardians. All methods were performed in accordance with the Declaration of Helsinki and the relevant guidelines and regulations.
Patient enrollment and grouping. SFTS patients were diagnosed and enrolled according to the following criteria: (i) clinical presentation with acute fever and thrombocytopenia; (ii) serum positive for SFTSV RNA detected by real-time polymerase chain reaction (RT-PCR) assay. The enrolled SFTS patients were further divided into an IPA group (n = 22) and a non-IPA group (n = 45). The diagnostic criteria for IPA were based on the 2019 European Organization for the Research and Treatment of Cancer/Mycosis Study Group (EORTC/MSG) consensus11: (i) compatible signs and symptoms of IPA (such as cough, expectoration or wheezing); (ii) abnormal findings on CT scan of the lungs (such as patchy shadows, air crescent sign or cavity formation); (iii) mycological evidence: positive culture for Aspergillus from deep sputum or bronchoalveolar lavage. Patients who met all three, or the last two, criteria were diagnosed with IPA. The exclusion criteria were: (i) an uncured IPA before SFTSV infection; (ii) incomplete data because of death or other reasons. Patients who met any exclusion criterion were excluded. The flow chart of patient selection is shown in Fig. 1.
Data collection. Related data of the enrolled SFTS patients, including demographic features, clinical manifestations and laboratory parameters, were collected and sorted from their electronic medical records. Among the clinical manifestations, uncontrolled diabetes and central nervous system (CNS) symptoms were defined as follows: uncontrolled diabetes is the condition in which fasting blood glucose remains higher than 7.0 mmol/L despite treatment with oral medicine or subcutaneous injection of insulin; CNS symptoms are the presence of restlessness, lethargy or coma. All authors had access to information that could identify individual participants during or after data collection.
Statistical analysis. Statistical analysis was conducted using SPSS software (version 26.0). Categorical variables were represented by rates. Measurement data with normal and abnormal distributions were compared by the t-test and the Wilcoxon rank sum test, respectively. Enumeration data were compared by the chi-square test or Fisher's exact probability test. Univariate analysis was performed to assess the relevance of demographic features, clinical manifestations and laboratory parameters to the occurrence of IPA. Factors with P < 0.05 in univariate analysis were further analyzed using multivariable logistic regression analysis to identify the independent risk factors for IPA (a minimal sketch of this pipeline follows).
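As a hedged illustration of this pipeline (the study itself used SPSS 26.0, not Python), the sketch below fits a multivariable logistic model and derives a ROC cut-off with Youden's index. The DataFrame, variable names and random values are hypothetical stand-ins for the patient table, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
df = pd.DataFrame({                              # toy stand-in for the patient table
    "ipa":             rng.binomial(1, 0.33, 200),
    "uncontrolled_dm": rng.binomial(1, 0.15, 200),
    "platelets":       rng.normal(60, 30, 200),
    "cd4":             rng.normal(400, 150, 200),
    "cd8":             rng.normal(500, 200, 200),
})

# Multivariable logistic regression on factors that passed the univariate screen
X = sm.add_constant(df[["uncontrolled_dm", "platelets", "cd4", "cd8"]])
fit = sm.Logit(df["ipa"], X).fit(disp=0)
print(np.exp(fit.params))                        # odds ratios
print(fit.pvalues)                               # P values per predictor

# ROC analysis of one predictor; Youden's J picks the reported cut-off value
fpr, tpr, thr = roc_curve(df["ipa"], -df["platelets"])   # low platelets -> IPA
j = np.argmax(tpr - fpr)
print("AUC:", auc(fpr, tpr), "platelet cut-off:", -thr[j])
```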
Receiver operating characteristic (ROC) curve analysis was used to evaluate the sensitivity and specificity of the independent risk factors to predict IPA. For all analyses, a P-value less than 0.05 was considered statistically significant.
Ethics approval and consent to participate. This study was approved by the Medical Ethical Committee of Qilu Hospital of Shandong University (2021-120). Written informed consent was acquired from every enrolled patient or their guardians.
Results
Demographic characteristics. During the study period from April 2021 to August 2022, a total of 67 laboratory-confirmed SFTS patients were enrolled. The average age of the patients was 64.7 (± 8.4) years, and 34 (50.7%) were male. Among the 67 patients, 22 (32.8%) were diagnosed with IPA and assigned to the IPA group. The average time from onset of SFTS illness to IPA diagnosis was 9.2 (± 2.3) days. The average age of these patients was 66.1 (± 7.2) years, including 11 (50.0%) male and 11 (50.0%) female patients. The non-IPA group comprised 45 (67.2%) patients with an average age of 64.0 (± 8.9) years, including 23 (51.1%) males and 22 (48.9%) females. The differences in age and gender between the two groups were not statistically significant (P = 0.341 and P = 0.932, respectively).
Clinical manifestations. As shown in Table 1, the mortality of SFTS patients in the IPA group and non-IPA group was 27.3% (6/22) and 8.9% (4/45), respectively; the difference was statistically significant (P = 0.047). Diabetes was the underlying disease most relevant to invasive aspergillosis. The incidences of diabetes among SFTS patients in the IPA group and non-IPA group were 40.9% (9/22) and 31.1% (14/45), respectively. The rate of uncontrolled diabetes in the IPA group was significantly higher than in the non-IPA group (31.8% vs 0.4%, P = 0.002). The average body temperature was 38.8 (± 0.6) °C in the IPA group and 38.9 (± 0.6) °C in the non-IPA group (Table 1); the difference was not statistically significant (P = 0.493). In this study, the incidence of CNS symptoms in the IPA group and non-IPA group was 72.7% (16/22) and 20.0% (9/45), respectively, a statistically significant difference (P < 0.001).
Laboratory parameters. Associated laboratory parameters, including WBC, neutrophils, platelets, CD4+ T cells and CD8+ T cells, were collected during the first seven days after onset of illness, and the most severe values were selected for analysis. The results are shown in Table 1. The counts of WBC, neutrophils and platelets were obviously reduced. The average counts of WBC and neutrophils in the IPA group were 3.4 (± 2.7) × 10⁹/L and 1.6 (± 1.3) × 10⁹/L, respectively, and in the non-IPA group 2.9 (± 1.4) × 10⁹/L and 1.6 (± 1.0) × 10⁹/L, respectively; the differences were not statistically significant (P = 0.463 and P = 0.999). The average platelet count in the IPA group was statistically lower than in the non-IPA group (33.9 (± 17.6) × 10⁹/L vs 76.4 (± 58.8) × 10⁹/L, P = 0.002).
Multivariable logistic regression analysis. Five variables were statistically different (P < 0.05) in the univariate risk assessment. To test the independence of each variable in promoting IPA, multivariable logistic regression analysis was performed.
As shown in Table 2, the results of the multivariable logistic regression analysis indicated that uncontrolled diabetes, platelets < 40 × 10⁹/L, CD4+ T cells < 300/μL, and CD8+ T cells < 400/μL were associated with the occurrence of IPA in SFTS patients.
Receiver operating characteristic curve analysis. As shown in Table 3 and Fig. 2, ROC curve analysis was used to evaluate the sensitivity and specificity of the independent factors to predict IPA in SFTS patients. The cut-off value of platelets to predict IPA incidence in SFTS patients was 45 × 10⁹/L, with a sensitivity of 81.8% and a specificity of 73.3%. The cut-off value of CD4+ T cell counts to predict IPA was 319 cells/μL, with a sensitivity of 90.9% and a specificity of 73.3%. The cut-off value of CD8+ T cell counts to predict IPA was 395 cells/μL, with a sensitivity of 81.8% and a specificity of 75.6%.
Discussion
Our previous study on the risk factors associated with fatal outcome showed that pulmonary infection was significantly associated with the risk of death among SFTS patients12. In this study, the mortality of SFTS patients in the IPA group was significantly higher than in the non-IPA group. This result indicated that IPA could be considered a risk factor for fatal outcome and that early identification of IPA among SFTS patients is necessary. In this study, the incidence of IPA in SFTS patients was 32.8%, similar to the studies of Bae et al. and Xu et al., with incidences of 20% and 31.9%, respectively8,10. These results indicated that SFTS patients are more prone to aspergillus infection. Several factors might contribute to the increased susceptibility to IPA, including median age > 60 years, uncontrolled underlying disease, leukocytopenia and neutropenia, severe complications, impaired immune functions and excessive inflammatory response. The aim of this study was to confirm predictors for IPA early in the course of the disease by analyzing the demographic features, clinical manifestations and laboratory parameters of SFTS patients. Patients with diabetes are prone to aspergillus infection because a high level of blood glucose is conducive to the growth of aspergillus, inhibits leukocyte chemotaxis, reduces the phagocytosis of phagocytes, and decreases complement production13. Diabetes may be considered a risk factor for the development of aspergillus infection and should be added to the list of well-known risk factors for invasive aspergillosis14. In our study, diabetes was a common underlying disease in SFTS patients, with a rate of 34.3%. Although the proportion of patients with diabetes in the IPA group was similar to that in the non-IPA group, uncontrolled diabetes was more common in the IPA group, with a rate of 31.8%. The difference in the incidence of uncontrolled diabetes between the two groups was statistically significant, suggesting that uncontrolled diabetes might be a risk factor for the development of IPA. The multivariable logistic regression analysis further confirmed that uncontrolled diabetes was an independent predictor for IPA. This conclusion was not consistent with the study by Xu et al.15, in which diabetes was recognized as a risk factor for IPA by univariate analysis but not as an independent risk factor by multivariable logistic regression analysis. The reason might be that the previous study analyzed diabetes itself as a risk factor but did not analyze how well the diabetes was controlled.
CNS symptoms are among the most common severe complications in SFTS patients and have been confirmed to be associated with fatal outcome2,12. In our study, the incidence of CNS symptoms was 37.3%. Sixteen patients in the IPA group had CNS symptoms, significantly more than in the non-IPA group. Patients with CNS symptoms might have intestinal flora disturbance and bucking, which would increase the risk of pulmonary infection16. The significant difference in the incidence of CNS symptoms between the two groups indicated that CNS symptoms could be considered a risk factor for IPA; however, the multivariable logistic regression analysis did not support them as an independent predictor for IPA. Leukopenia, mainly neutropenia, is a well-known risk factor for IPA. In invasive aspergillus infection, neutrophils are the major immune cells of non-specific immunity and exert anti-infection effects through chemotaxis, opsonization and phagocytosis; persistent neutropenia is therefore a high risk for deep aspergillus infection. Platelets exert an anti-fungal effect by adhering to the cell wall of the hyphae to block aspergillus germination and hyphal elongation17. When activated platelets are removed from the blood, thrombocytopenia may result in invasive aspergillus infection18. SFTS patients are characterized by leukopenia and thrombocytopenia, which may be high risk factors for IPA. In the present study, the counts of WBC and neutrophils were obviously reduced in both the IPA and non-IPA groups, with no significant difference between the two groups. Therefore, the results did not suggest that reduced WBC and neutrophils were risk factors for IPA among SFTS patients. The reason might be that the leukopenia and neutropenia were transient, unlike in patients with hematologic tumors19. Although thrombocytopenia was seen in both groups, it was more pronounced in the IPA group, and the difference was significant. Furthermore, the proportion of patients with platelets < 40 × 10⁹/L in the IPA group was higher than in the non-IPA group. These results indicated that thrombocytopenia might be a predictor for IPA, which was confirmed by the multivariable logistic regression analysis. Notably, when the cut-off value of platelets was 45 × 10⁹/L, the sensitivity and specificity for predicting IPA were 81.8% and 73.3%. Most SFTS patients are elderly, with a median age of 63 years, and immune function decreases with increasing age, which is associated with the severity and mortality of the disease20. In addition, SFTSV infection can also damage the immune function of SFTS patients by changing the distribution of lymphocytic sub-populations7. These reasons put patients at high risk of aspergillus infection. In this study, age was not an independent predictor for IPA, but the damage to immune function due to SFTSV infection could be considered one. Generally, cell-mediated immunity plays a powerful role in protection against invasive fungal infection. CD4+ T cells, as antigen-presenting cells, and CD8+ T cells, as cytotoxic cells, constitute an important immune defense barrier against fungal infection.
In this study, CD4+ and CD8+ T cells decreased obviously in the IPA group, with average levels of 196 (± 107)/μL and 287 (± 263)/μL, respectively. The decreased T lymphocytes meant that the number of active T cells was insufficient to participate in the cellular immune response, causing lower cellular immune function among SFTS patients and increasing the risk of IPA. The multivariable logistic regression analysis suggested that CD4+ T cell counts < 300/μL and CD8+ T cell counts < 400/μL could be considered independent predictors for the development of IPA. These results were consistent with a previous study21, in which decreased lymphocytes were also considered a predictive factor for IPA among SFTS patients. Based on the ROC curve analysis, the sensitivity and specificity for predicting IPA were higher when CD4+ T cell counts were < 319/μL and CD8+ T cell counts were < 395/μL. The present study, however, had some limitations. First, based on the EORTC/MSG criteria, the diagnosis of IPA in this study was categorized as probable for lack of histopathologic evidence, and false positives might be present among patients diagnosed with IPA. In a previous study22, a clinical algorithm for diagnosing IPA by discriminating Aspergillus colonization from invasive disease in ICU patients with Aspergillus-positive cultures was established, and the algorithm demonstrated 61% specificity and 92% sensitivity. Similar diagnostic methods are needed to discriminate colonization in SFTS patients who are positive for Aspergillus culture and to increase the rigor of the research. Second, corticosteroids have been applied clinically to treat SFTS patients because of their ability to suppress the systemic inflammatory response and alleviate cytokine storm; however, inappropriate application of corticosteroids in SFTS patients may cause secondary infection23, which contributes to the development of IPA. Use of corticosteroids was not included in the analysis owing to the low rate, low dosage and short duration (< 3 days) of use among the enrolled SFTS patients. Third, rapid replication of the virus may result in imbalance of immune regulation, which makes SFTS patients susceptible to IPA; comparison of viral load between the IPA group and non-IPA group was not performed because the viral load examination was missing in most patients. Further investigations of these limitations are needed to confirm whether these factors are predictors for the development of IPA among patients with SFTS.
Conclusion
Patients with SFTS need to be monitored for the possibility of secondary IPA. Our study confirmed that uncontrolled diabetes, platelets < 45 × 10⁹/L, CD4+ T cells < 319/μL and CD8+ T cells < 395/μL could be considered independent predictors for the development of IPA. Identification of these predictors may prompt physicians to initiate early diagnostic examinations for aspergillus infection and to start initial treatment, which contributes to improving the outcomes of SFTS patients.
Data availability
The databases used and analyzed during the current study are available from the corresponding author on reasonable request.
The sinoatrial node extracellular matrix promotes pacemaker phenotype and protects automaticity in engineered heart tissues from cyclic strain
SUMMARY
The composite material-like extracellular matrix (ECM) in the sinoatrial node (SAN) supports the native pacemaking cardiomyocytes (PCMs). To test the roles of the SAN ECM in the PCM phenotype and function, we engineered reconstructed-SAN heart tissues (rSANHTs) by recellularizing porcine SAN ECMs with hiPSC-derived PCMs. The hiPSC-PCMs in rSANHTs self-organized into clusters resembling the native SAN and displayed higher expression of pacemaker-specific genes and a faster automaticity compared with PCMs in reconstructed-left ventricular heart tissues (rLVHTs). To test the protective nature of SAN ECMs under strain, rSANHTs and rLVHTs were transplanted onto the murine thoracic diaphragm to undergo constant cyclic strain. All strained-rSANHTs preserved automaticity, whereas 66% of strained-rLVHTs lost their automaticity. In contrast to the strained-rLVHTs, PCMs in strained-rSANHTs maintained high expression of key pacemaker genes (HCN4, TBX3, and TBX18). These findings highlight the promotive and protective roles of the composite SAN ECM and provide valuable insights for pacemaking tissue engineering.
INTRODUCTION
[11][12][13][14] Although these electrical impulse-generating cells constitute the functional component of an engineered biopacemaker tissue, the extracellular matrix (ECM) scaffold is an equally important ancillary component that supports and maintains the automaticity in the PCMs. The decellularized ECM derived from a tissue of interest is, in theory, the most suitable matrix scaffold for engineering that same tissue, as it bestows the native microenvironment on the resident cells.15 It provides not only biochemical cues but also the native tissue architecture and biophysical properties, including mechanical strength and stiffness, to support cell survival, differentiation, and function.16 The ECM from the left ventricle (LV) has been studied extensively and used in engineering contractile heart tissues with working CMs,[17][18][19][20][21][22][23][24] but the effects of the SAN ECMs on hiPSC-PCMs are unknown. Based on the reported biomechanical properties and biochemical compositions unique to the porcine SAN ECM in comparison with the LV counterpart, we had proposed a composite material model describing how the SAN ECMs may shield the resident PCMs from cyclic mechanical strain and provide critically conducive biochemical and biomechanical signals in protecting and sustaining their pacemaking function.25
Taking a stepwise reverse engineering approach to test our proposed model, we engineered reconstructed-SAN heart tissues (rSANHTs) by recellularizing the decellularized porcine SAN ECMs with hiPSC-PCMs. For rigorous experimental design, hiPSC-PCMs were also seeded in porcine LV ECMs to generate reconstructed-LV heart tissues (rLVHTs) as controls. In contrast to the hiPSC-PCMs in rLVHTs, those in the rSANHTs exhibited phenotype and function that resembled the native PCMs. To test the mechanical insulating property of the composite SAN ECM, paired rSANHTs and rLVHTs were subjected to repeated mechanical strain at a novel in vivo cardiac-like site, the murine thoracic diaphragm, with a human fetal-like heart rate. Strained hiPSC-PCMs in the SAN ECM continued to express high pacemaking genes and automaticity, in contrast to those in the LV ECM. Our data demonstrated that the SAN ECMs guided the hiPSC-PCMs to arrange into an SAN-like cellular organization, promoted PCM phenotype and function in these cells, and, most importantly, protected and sustained the self-organization and pacemaking phenotype of hiPSC-PCMs, similar to resident PCMs in the SAN, even under cyclic mechanical strain. The findings highlight the importance of incorporating a composite SAN-like ECM in the design of engineered biopacemakers.
Generation of rSANHTs and rLVHTs
To test the effects of ECMs on human PCMs, hiPSC-PCMs were differentiated using our recently published small molecule protocol11 that temporally inhibited both canonical Wnt signaling with IWR1 and Nodal signaling with SB431542 (SB) at the cardiac mesoderm stage, from differentiation day 3-5 (Figure 1A), resulting in ~78% CMs identified by positive expression of cardiac troponin T (cTNT) on day 20 post-differentiation (Figure 1B). While the IWR1+SB-differentiated hiPSC-CMs expressed pacemaking hyperpolarization-activated cyclic nucleotide-modulated (HCN)4 channels (Figure 1C), they were heterogeneous in CM subtypes, including ventricular-, atrial-, and pacemaker-like CMs, as classified by optically recorded action potentials (APs) using the voltage-sensitive dye FluoVolt in single cells (Figure 1D). Notably, the yield of the PCM-like fraction from the IWR1+SB-differentiated cultures was 3-fold higher compared with the culture differentiated by an established protocol26 with IWR1 alone (26% vs. 8%, p < 0.01 by χ² test; Figure 1E). Although not all hiPSC-differentiated CMs are classified as the PCM subtype by APs, we had demonstrated by flow cytometry that nearly all IWR1+SB-differentiated CMs are positive for the pacemaking genes HCN4, T-box (TBX)3, TBX18, and short stature homeobox (SHOX)2, with ~50%-100% higher expression than the IWR1 protocol.11 Most importantly, our modified protocol induced hiPSC-CMs with a faster frequency of automaticity in culture than the control IWR1 protocol (median: 162 vs. 60 bpm, p < 0.01; Figure 1F; Video S1). These IWR1+SB-differentiated hiPSC-PCMs were used to recellularize the decellularized SAN ECMs (Figures S1 and S2) to fabricate the rSANHTs (Figure 2A). The same cells were seeded in the LV ECMs (Figures S1 and S2) to generate the rLVHTs as controls for a head-to-head, direct comparison of the ECM effects on hiPSC-PCMs (Figure 2A). Spontaneous contractions over a broad range of frequencies, from 6 to 140 bpm, were observed in the hiPSC-PCMs within 2-4 days of rHT construction (Videos S2 and S3). Automaticity of the engineered tissues could be maintained for >100 days in culture.
rSANHTs structurally resemble the native SAN
Whole-mount rSANHTs and rLVHTs immunostained with the CM marker cTNT 14 days after culture show that hiPSC-PCMs in the rSANHTs self-organized into densely compact but randomly oriented clusters, whereas those in the rLVHTs formed bundles aligned along the ECM (Figure 2B). The cellular organization in both rHTs resembled their respective native counterparts. Morphometric analysis of nuclei from high-resolution fluorescence images (Figure S3A) demonstrated that the hiPSC-PCMs in the rSANHTs were significantly less elongated compared with those in the rLVHTs (median aspect ratio: 1.51 vs. 1.88, p < 0.05), recapitulating the features observed in the native SAN compared with the native LV (median aspect ratio: 1.60 vs. 2.10, p < 0.01; Figure 2C) and the native atrial myocardium (median aspect ratio: 2.43, p < 0.01 compared with the native SAN; Figure S3C). Although not statistically significant, cells in the rSANHTs were oriented in a less uniform direction compared with cells in the rLVHTs (median angle: 1.42 vs. 0.93 rad; Figure 2D), following a trend similar to that observed between the native SAN and the adjacent atrial myocardium (median angle: 2.04 vs. 0.25 rad, p < 0.01; Figure S3D) or the native SAN relative to the native LV (median angle: 2.04 vs. 0.83 rad, p < 0.01; Figures 2D and S4). Hence, the quantified morphometry suggests that hiPSC-PCMs in the rSANHTs were organized structurally similar to PCMs in the native SAN and distinctly differed from those in the rLVHTs. Of note, the cellular organization on the SAN and LV ECMs closely follows the respective ECM scaffold organization revealed by wheat germ agglutinin staining (Figure S5).
The SAN ECM promotes pacemaker gene expression in hiPSC-PCMs
To determine the effects of the SAN ECM on transcriptional changes in hiPSC-PCMs relative to those in the LV ECMs, we conducted quantitative real-time PCR analysis on whole rHTs for a select group of pro-pacemaking genes after 2 weeks of in vitro culture (Figure 3A). Our results showed that the transcription factors associated with SAN development, TBX18 and Islet (ISL)1, were 6- and 3-fold higher in the rSANHTs relative to the rLVHT controls, respectively. The pacemaker channels HCN4 and HCN1 were both 2-fold higher in the rSANHTs compared with the rLVHTs, albeit only ISL1 and HCN1 (p < 0.01 and p < 0.05, respectively) reached statistical significance (Figure 3B). Pituitary homeobox (PITX)2 and TBX3 transcription were comparable between the rHTs, but SHOX2 was marginally increased in the rSANHTs (Figure 3B). Protein expression of select pacemaking genes and a ventricular-specific marker was assessed by immunostaining followed by image analysis (Figures 3C and 3D). Immunostaining analysis of cTNT+ hiPSC-PCMs revealed stronger protein expression of HCN4 channels and TBX18 transcription factors in the rSANHTs compared with the rLVHTs. TBX3 protein expression was comparable between the two rHTs, consistent with the transcript analysis. Immunofluorescent staining of the ventricular-specific myosin light chain (MLC)2v showed a relatively low abundance of the protein in both rHTs. Our findings suggest that using the SAN ECM as a scaffold for the hiPSC-PCMs provides a suitable natural microenvironment for their growth and development, supported by the expression of pacemaking genes.
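For readers unfamiliar with how qPCR fold changes like those in Figure 3B are typically derived, the sketch below shows the standard 2^-ΔΔCt relative-quantification calculation. The Ct values and the GAPDH reference gene are illustrative assumptions, not the authors' data or their exact workflow.

```python
import numpy as np

def fold_change(ct_gene, ct_ref, ct_gene_ctrl, ct_ref_ctrl):
    """2^-ddCt: expression of a gene in the sample (e.g., rSANHT)
    relative to the control (e.g., rLVHT), normalized to a reference gene."""
    d_ct_sample = np.mean(ct_gene) - np.mean(ct_ref)        # dCt in the sample
    d_ct_control = np.mean(ct_gene_ctrl) - np.mean(ct_ref_ctrl)  # dCt in control
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Hypothetical triplicate Ct values for TBX18 against a GAPDH reference;
# chosen so the result lands near the ~6-fold difference reported above.
print(fold_change(ct_gene=[24.1, 24.3, 24.0], ct_ref=[18.0, 18.1, 17.9],
                  ct_gene_ctrl=[26.8, 26.7, 26.9], ct_ref_ctrl=[18.1, 18.0, 18.2]))
```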
The SAN ECM promotes functional pacemaking phenotype in hiPSC-PCMs
To evaluate the contractile function of hiPSC-PCMs in the rHTs, we utilized the MUSCLEMOTION algorithm27 to generate contractile traces and particle imaging velocimetry (PIV)28,29 to accurately quantify the maximum contractile displacement of hiPSC-PCMs (Figure 4A). At 2 weeks post-construction of the rHTs, PIV revealed a maximum displacement of ~2 μm for the CM clusters in the rSANHTs and a higher displacement reaching 5 μm in the CM bundles of the rLVHTs (Figure 4B; Video S4). Additionally, we observed a >3-fold reduction in the mean contraction amplitude in the rSANHTs compared with the rLVHTs (median: 6,761 vs. 35,389 a.u., p < 0.01; Figures 4C and 4D). The contractions in the rSANHTs also exhibited slower kinetics compared with the LV counterpart, as indicated by an increased time-to-peak (median: 504 vs. 151 ms, p < 0.01) and relaxation time (median: 431 vs. 302 ms, p < 0.01).
To investigate the electrophysiology and automaticity of the hiPSC-CMs in the rHTs, we utilized a genetically engineered hiPSC line encoding an ultrasensitive calcium indicator, GCaMP6f,30 which enabled us to monitor intracellular calcium dynamics as a surrogate for AP recording in individual CMs as well as in the engineered heart tissues (Figure 4E). The GCaMP6f-hiPSC line was chosen to avoid possible uneven outside-in loading of calcium probes in the rHTs. We differentiated GCaMP6f-PCMs using the same IWR1+SB protocol and then constructed rSANHTs and rLVHTs. We recorded calcium transients (CaTs) optically (Figure 4F) and observed spontaneous and robust CaTs in both rHTs over the 2 weeks of in vitro culture (Videos S5 and S6). CaTs of GCaMP6f-PCMs in the rSANHTs, compared with those in the rLVHTs, exhibited a statistically faster frequency of spontaneous CaTs (median: 52 vs. 35 bpm, p < 0.01) that were smaller in amplitude (median ΔF/F0: 59 vs. 140%, p < 0.01; Figure 4G; Table S1; three biological replicates), consistent with a PCM-like functional phenotype. Collectively, these functional data on contractions and CaTs suggest a weaker contractile function but more robust automaticity, hallmark features of PCMs, in the hiPSC-PCMs residing in rSANHTs compared with those in the rLVHTs. This is consistent with the notion that the native SAN ECM microenvironment promotes functionally pacemaker-like hiPSC-CMs.
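As an aside on how CaT frequency and normalized amplitude (ΔF/F0) are typically extracted from optical GCaMP recordings like those above, the following sketch uses simple baseline estimation and peak detection. The percentile baseline, prominence, and refractory distance are assumed parameters, not the authors' analysis settings, and the demo trace is synthetic.

```python
import numpy as np
from scipy.signal import find_peaks

def cat_metrics(trace, fs_hz):
    """Return beats-per-minute and median dF/F0 (%) of a fluorescence trace."""
    f0 = np.percentile(trace, 10)            # crude baseline fluorescence estimate
    dff = (trace - f0) / f0                  # normalized transient amplitude
    peaks, _ = find_peaks(dff, prominence=0.2, distance=int(0.25 * fs_hz))
    bpm = len(peaks) * 60.0 * fs_hz / len(trace)
    amp = 100.0 * np.median(dff[peaks]) if len(peaks) else 0.0
    return bpm, amp

fs = 50.0                                    # frames per second
t = np.arange(0, 10, 1 / fs)
# Synthetic 1 Hz GCaMP-like trace: baseline of 1.0 with sharp upward transients
demo = 1.0 + 0.6 * np.clip(np.sin(2 * np.pi * 1.0 * t), 0, None) ** 4
print(cat_metrics(demo, fs))                 # ~60 bpm, dF/F0 near 60%
```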
SAN ECM preserves pacemaking phenotype in hiPSC-PCMs subjected to cyclic mechanical strain in vivo
Cyclic mechanical stretch is constantly imposed on all CMs in the heart. Naturally, it has been commonly employed as a strategy to enhance the maturation of contractile hiPSC-CMs through the activation of mechanotransduction signaling that can direct the CM phenotype, including cellular hypertrophy and myofilament alignment,31,32 which are notably counter to the key features of PCMs. In this study, we directly tested the mechanical protective effects of the SAN ECM on the resident PCMs that were proposed in our comprehensive report on the SAN ECM.25
Paired rSANHTs and rLVHTs were contralaterally transplanted onto the thoracic diaphragm of immune-deficient NOD-SCID gamma (NSG) mice and subjected to a cardiac-like microenvironment with cyclic mechanical strain in vivo for 2 weeks (Figure 5A). This novel in vivo test site in small animals was specifically chosen to impose a cyclic strain, through the mouse diaphragm contracting continuously at a respiratory rate of 80-230 bpm, on the transplanted rHTs. The imposed straining rate is comparable to that of the human fetal heart but without any electrical overdrive suppression of the hiPSC-PCMs by the host cells, owing to the lack of electrical coupling between skeletal and cardiac myocytes.33 Two weeks post-transplantation, strained-rSANHTs showed an upregulation of the pro-pacemaking transcription factors TBX3 (10.6-fold), ISL1 (1.3-fold), and SHOX2 (2.2-fold), and of the pacemaking HCN1 (2.3-fold) and HCN4 (2.3-fold) channels compared with the rLVHT controls, albeit only ISL1 and TBX3 showed statistically significant changes (p < 0.01 in both cases; Figure 5B). There was no significant change in cTNT, MLC2v, KCNA5, or TBX18 between the rHTs. PITX2 expression was upregulated in the rSANHTs relative to the rLVHTs but did not reach statistical significance.
To assess the pacemaking protein expression in hiPSC-PCMs, whole-mount strained-rHTs were immunostained for proteins of interest in conjunction with cTNT and quantified by image analysis. The cTNT+ hiPSC-PCMs in the strained-rHTs maintained their self-organized clusters in the rSANHTs and the aligned bundles in the rLVHTs (Figures 5C-5F). Striated myofilaments were observed in both rHTs but were less ordered in the rSANHTs than in the rLVHTs. Some hiPSC-PCMs from the transplanted rLVHTs had integrated into the host skeletal muscles, as indicated by the presence of cTNT+ cells in the host diaphragm (Figure 5C). In contrast, fewer cells infiltrated the host skeletal muscle from the rSANHTs, and the hiPSC-PCMs maintained a compact organization similar to the native SAN tissue. The origin of the cTNT+ PCMs in the rHTs was confirmed by immunostaining with a human nuclear antigen antibody (Figure S6). Semi-quantifications showed that TBX18, TBX3, and HCN4 proteins were significantly upregulated, by roughly 5.1-, 2.4-, and 18.4-fold, respectively, in the cTNT+ cells in the rSANHTs compared with the rLVHTs (Figures 5C-5E). The inset, with a magnified view of the dashed-line area in the TBX18 image, clearly demonstrates that the nuclear expression of TBX18 is limited to the rSANHT graft and is not present in the surrounding host tissues. Connexin (CX)43, responsible for electrical coupling between working CMs in the LV, exhibited 10-fold lower expression in hiPSC-PCMs in the rSANHTs compared with those in the rLVHTs (Figure 5F). Overall, our data suggest that in a mechanically active environment similar to the human heart, the SAN matrix preserves a pacemaker-like gene expression profile in cyclically strained hiPSC-PCMs, whereas those in the LV matrix are unable to maintain the pacemaking gene expression.
SAN ECM protects automaticity of hiPSC-PCMs from mechanical strain in vivo
To assess the electrophysiological function of the hiPSC-PCMs in the transplanted rHTs after 2 weeks of cyclic strain, we recorded CaTs of the strained hiPSC-PCMs expressing GCaMP6f to indirectly evaluate automaticity (Figure 6A). Paired rLVHTs, recellularized with the same batch of differentiated GCaMP6f-PCMs as the rSANHTs and engrafted on the contralateral diaphragm, were used as controls. Within 2 h of tissue extraction, 100% of the in vivo strained-rSANHTs (n = 6 of 6) generated spontaneous CaTs with a broad range of spontaneous firing rates (4-122 bpm). In contrast, only 33% of the strained-rLVHTs (n = 2 of 6) displayed spontaneous CaTs (p = 0.014 by χ² test; Figure 6B). Spontaneous CaTs were optically recorded with an epifluorescence microscope from each whole-mount rHT in Tyrode's solution (Figure 6C; Videos S7 and S8). For rHTs without detectable CaTs, a multiphoton confocal microscope was used to confirm the absence of cyclic fluorescent signals. The frequency of spontaneous CaTs of hiPSC-PCMs in the strained-rLVHTs with automaticity was significantly slower than in the strained-rSANHTs (median frequency: 37 vs. 61 bpm, p < 0.001 by Student's t test; Figure 6D), suggesting that robust automaticity is preserved in hiPSC-PCMs recellularized in the SAN ECM after cyclic strain. The amplitude of the CaTs recorded from the strained-rSANHTs was significantly smaller than that from the rLVHTs (median ΔF/F0: 13 vs. 58%, p = 0.005 by Student's t test; Figure 6E; Table S2). An rLVHT with a slow frequency of automaticity could be captured by electrical pacing to exhibit 1:1 CaTs at the stimulation frequency (Figure 6E). The presence of hiPSC-CMs was confirmed by immunostaining for cTNT in all strained-rLVHTs, including those that failed to generate spontaneous CaTs. Retention of drug sensitivity in the strained-rSANHTs after transplantation was demonstrated by applying the well-characterized drugs isoproterenol and nifedipine to elicit the responses of a β-adrenoceptor agonist and a calcium channel blocker, respectively. We observed a 20% increase in CaT frequency and an 80% increase in the normalized amplitude (ΔF/F0) upon administration of 500 nM isoproterenol (Figure 6G), in agreement with the action of isoproterenol as a positive chronotrope and inotrope. Administration of 100 nM nifedipine induced a negative inotropic effect, as indicated by the complete cessation of CaTs (Figure 6H; Video S9). Our data demonstrated that only the SAN ECM, not the LV ECM, is able to retain the pacemaking function in the resident hiPSC-PCMs in a cyclically straining environment.
DISCUSSION
The limitations of electronic pacemakers have sparked interest in hiPSC-based biopacemakers as a potential alternative.
[10][11][12][13][14] Although improving the differentiation of hiPSC-PCMs is a critical first step for engineering a biopacemaker, the microenvironment that shapes the resident CM phenotype, by providing mechanical support and appropriate mechanotransduction in the resident cells, may be equally critical in maintaining automaticity in the PCMs. Considering that the native ECM should be most supportive of its resident cells, we had extensively characterized the properties of the SAN ECM by examining the ultrastructure using scanning electron microscopy, the stiffness by atomic force microscopy, the biochemical composition by mass spectrometry, and the ECM protein spatial distribution relative to the resident CMs by immunostaining, which demonstrated distinct biochemical and biomechanical differences compared with the LV.25 The SAN ECM is composed of >95% tensile-bearing collagens surrounding the resident CMs, compared with 74% in the LV. This is reflected in the 3-fold higher Young's modulus and the denser and more abundant fibrillar collagen network in the decellularized SAN than in the LV ECM. Based on the ECM protein distribution in the tissues, the SAN ECM exhibits a composite material-like organization, with regions of high elastin spanning between the stiffer collagen network that immediately surrounds the PCM clusters, whereas there is minimal protective tensile-bearing collagen around the working CMs and minimal elastin interspersed between the working CM bundles (Figure 7A). Based on this composite organization, we hypothesized that under active strain, the hiPSC-PCMs residing in the protective enclosure of the stiff, tensile-bearing collagen in the endomysial space would experience less strain, as the collagen aids in resisting the strain, while the elastic perimysial region would undergo deformation to dissipate the strain (Figure 7B). This is in contrast to the LV ECM, which does not provide the resident CMs with the same protective mechanisms, resulting in greater imposed strain on the working CMs. In this study, we directly tested this notion by determining the ability of the SAN ECM to promote and preserve the pacemaking phenotype in hiPSC-PCMs under cyclic strain, constructing rSANHTs that are directly compared with rLVHTs.
SAN ECMs induce hiPSC-PCMs to form SAN-like cellular organization and morphology
Self-organization is a hallmark of cardiac development, a process instructed by complex yet tightly regulated signaling events in which ECMs play an essential role.34 In our study, the rSANHTs exhibited structural organization and cellular morphology that mimic the native SAN tissue and PCMs, respectively. Self-organization of hiPSC-PCMs in the rSANHTs is likely induced by (1) the ECM proteins serving as ligands with affinity for specific integrin isoforms expressed in the hiPSC-CMs and (2) the physical cues from the matrix geometry. Indeed, recellularization of whole hearts has been reported to guide endothelial cells to home to the inner lining of the blood vessels and hiPSC-CMs to the matrix scaffold regions where the ventricular CMs had resided.35
Thus, following the blueprint presented by the ECMs, hiPSC-CMs may preferentially bind and conform to the matrix regions where the native CMs had previously resided and not to the residential regions of the fibroblasts, resulting in the observed cellular organizational and morphological differences between the rHTs. The presence of islands of PCMs in the rSANHTs is consistent with our recent publication reporting PCM clusters surrounded by fibroblast clusters in the porcine SAN.36 The minimal cell spreading and lack of preferential alignment of the hiPSC-PCMs recellularized in the SAN ECMs, in contrast to the clear alignment of the same cells in the LV ECMs, as indicated by the cellular aspect ratio and angle of alignment, is consistent with studies demonstrating the regulation of hiPSC-CM morphology by patterned substrates.37,38 The findings support the use of native ECMs as a blueprint for directing recellularized cells to organize and exhibit the morphology of the native tissues.
SAN ECMs promote pacemaking phenotype and function in hiPSC-PCMs
Beyond cellular organization and morphology, the SAN ECMs promoted a pro-pacemaking gene profile in the recellularized hiPSC-PCMs, as demonstrated by both the transcript analysis by qPCR and the protein expression from immunostaining of the rSANHTs compared with the rLVHTs. Functional data on contractility and CaTs also indicate that the pacemaking phenotype of hiPSC-PCMs is better maintained by the SAN ECMs. Mechanistically, the SAN ECMs may modulate both the gene expression and the current source-sink balance between the resident cell types.
First, the gene expression in hiPSC-PCMs can be modulated by the chemical and mechanical cues of the microenvironment. Indeed, HCN4, a pacemaking channel responsible for automaticity by driving the membrane clock of pacemaking function,39 is upregulated in the rSANHTs. The transcription factors TBX3 and TBX18, known to inhibit the atrial CM phenotype and to promote the PCM phenotype of the SAN head, respectively,40,41 are also higher in the hiPSC-PCMs of the rSANHTs than in the controls. Hence, upregulation of these pacemaking genes likely contributes to the robust automaticity in hiPSC-PCMs recellularized in the SAN ECMs. Using mass spectrometry, we had previously reported >3-fold higher fibrillar collagens but >3-fold lower basement membrane-associated glycoproteins and non-fibrillar collagens in the SAN ECM relative to the LV ECM.25 Consequently, the integrin isoform-to-ligand binding pairs are likely different between the hiPSC-CMs residing in the rSANHTs and rLVHTs, activating distinct mechanotransduction pathways and resulting in differential cellular phenotypes. Indeed, differential phosphorylation of CX43 in atrial and ventricular tissues has been shown to be regulated by the specific integrin isoform that binds to the ECM ligand.42 The SAN ECM, which is stiffer than the LV ECM, may also affect mechanotransduction signaling and modulate gene expression through the nuclear translocation of transcription co-regulators and the chromatin state.43 In the rSANHTs, the abundant fibrillar collagens may be responsible for the clustering of the hiPSC-PCMs. This cell clustering, in contrast to the spread-out morphology in the rLVHTs, may lead to less cell-ECM contact and fewer engaged integrins, in agreement with the reported PCM-like electrophysiology with a fast frequency of automaticity in β1 integrin-deficient embryonic stem cell-derived CMs.44
Second, the cellular organization in the rSANHTs, dictated by the ECM blueprint, may affect the automaticity through the current source-sink balance between the PCMs and non-PCMs that results from their cluster organization. PCMs are the current source for initiating spontaneous APs. Non-pacemaking cells, such as fibroblasts and working CMs, can act as a sink that draws current from the source, leading to suppressed automaticity in the PCMs to which they are electrically coupled.45 Bressan et al. reported that collagen III is essential for the proper development of a functional SAN in the chick embryo by maintaining proper cell-cell electrical coupling.46 Therefore, in contrast to the LV ECM, the porcine SAN ECM blueprint, with a higher abundance of fibrillar collagens I and III in the perimysium,25 could direct the cell density and cell type organization (the ratio of PCM to non-PCM clusters) to establish an optimal source-sink balance that is conducive to automaticity.

SAN ECMs protect automaticity of hiPSC-PCMs from in vivo cyclic strain
Working CMs in the LV mature and acquire increased cell size and an abundance of aligned myofilaments as a result of rising longitudinal strain due to hemodynamic load during cardiac development.47,48 Accordingly, mechanical stretch mimicking the physiological environment of the heart has been employed as a strategy to mature working hiPSC-CMs by promoting cellular hypertrophy, alignment, organization, and functional properties of the adult contractile CMs, including the expression of ion channels and gap junctions.31,32,49,50 However, the concurrent loss of automaticity, which is initially innate in all immature CMs, in the maturing neonatal LV CMs51 suggests that repeated mechanical stretch could negatively affect the PCM phenotype and function. Moreover, the lack of hypertrophy and myofilament development in PCMs in the SAN suggests that the PCMs may be protected from mechanical strain. We proposed a model for the protective mechanisms of the SAN ECM that may be responsible for reducing the strain experienced by the resident PCMs under cyclic contractions (Figure 7). Indeed, in agreement with our proposed model, the hiPSC-PCMs in the strained-rSANHTs remained self-organized similar to the native SAN, with robust PCM gene expression and function. The high pacemaking gene expression in the rSANHTs (i.e., HCN4, TBX3, and TBX18), but more importantly the corresponding downregulation in the rLVHTs (i.e., HCN4 and TBX3), supports the role of the SAN ECM in sustaining the pacemaking phenotype in hiPSC-PCMs under in vivo cyclic strain (Figure 7B).
Our rigorous experimental design, with the LV ECMs serving as the control, demonstrates that the pacemaking phenotype and gene expression in hiPSC-PCMs cannot be retained by just any cardiac-derived ECM, but specifically by the SAN ECM. We theorize that the composite material-like organization of the SAN ECM, with elastin fibers interspersed between PCM clusters, would undergo mechanical deformation instead of the PCMs, which are mechanically insulated in the stiff collagen matrix enclosure, thereby minimizing mechanotransduction signaling from stretch in the PCMs (Figure 7B). The preserved pacemaking function in the rSANHTs and the lack of progression to a mature working CM phenotype, which typically coincides with an increase in postnatal mechanical stress,48 may be attributed to a reduced mechanical strain in the hiPSC-PCMs seeded in the SAN ECM. This notion is also consistent with the retention of pacemaking function in adult rabbit PCMs in vitro with inhibited mechanofeedback from contractions.52 Additionally, cyclic stretch has been shown to upregulate CX43 in neonatal rat CMs and human embryonic stem cell-derived CMs.53,54 Therefore, the low CX43 expression in the rSANHTs compared with the rLVHTs indicates that the PCMs in the rSANHTs may be experiencing reduced mechanical strain in contrast to those in the rLVHTs. Collectively, the data support our proposed model that the SAN ECM, through its composite nature, may be preserving the pacemaking function in hiPSC-PCMs by shielding the cells from cyclic strain (Figure 7).

In summary, the current study provides new insights into the protective nature of the composite SAN ECM and presents a novel strategy for subjecting a human cell-based biopacemaker to cyclic strain in small animals without electrical overdrive suppression. This study highlights the importance of the SAN matrix scaffold, not just any cardiac ECM, in retaining the PCM properties of hiPSC-PCMs subjected to a cardiac-like cyclic strain. Hence, shielding the PCMs from the imposed cyclic mechanical strain should be included as a consideration in the design of engineered biopacemakers. While we have reported the unique biochemical and biophysical properties of the SAN ECM,25,36 the discrete factors in the matrix scaffold and the precise signaling pathways that are responsible for preserving the pacemaking function in hiPSC-PCMs under cyclic strain warrant further study.
Limitations of the study
One major limitation in this study is the use of a mixture of CM subtypes in recellularizing the ECMs. Although our PCM differentiation protocol increased the PCM fraction by up to 3-fold compared with the established CM differentiation protocol, the PCMs are still only ~30% of the total CMs. With future improvement in cardiac subtype differentiation, the resulting rSANHT could exhibit an even faster frequency of automaticity without working CMs, which may act as a current sink and depress the pacemaking function. Additionally, the loss of automaticity in the engineered LV tissue constructs could be due to the maturation of the working CMs on the LV ECM. This, however, still supports our hypothesis that the SAN ECM promotes and preserves the pacemaking phenotype, whereas the LV ECM supports the contractile phenotype. One other limitation in the study is the use of Ca²⁺ transient recordings as a surrogate for APs, rather than direct AP measurement, owing to the difficulty of detecting the low fluorescence intensity of voltage-sensitive ArcLight against the high background autofluorescence stemming from collagen in the ECMs. The electrophysiological assessment could be improved as new, brighter genetically encoded voltage-sensitive indicators become available.

STAR★METHODS
RESOURCE AVAILABILITY
Lead contact: Additional information and requests for resources and reagents should be directed to the lead corresponding author, Deborah K. Lieu, at dklieu@ucdavis.edu.
Materials availability: Additional information and requests for materials should be directed to the Lead Contact.
• Detailed information on some experimental procedures is available online in Supplemental Information. Source data for Figures 1 to 6 and Figures S3 and S4 are included in Data S1. Additional data can be requested from the corresponding author.
• Any additional information required to reanalyze the data reported in this work is available from the lead contact upon request.

Mice: NOD-SCID IL-2Rγ null (NSG, The Jackson Laboratory, IMSR_JAX:005557) mice ~12 weeks of age were used as bioreactors for in vivo cyclic strain testing of our rHTs. All animal usage and care followed the protocol approved by the Institutional Animal Care and Use Committee (IACUC) of the University of California, Davis and adhered to the guidelines of the National Institutes of Health.

METHOD DETAILS
Isolation and decellularization of porcine SAN and LV: Porcine hearts were chosen for their translational value, given their similar physiology and ECM composition to those of humans,56 to minimize species mismatch, and given the precedent of the FDA-approved porcine small intestine submucosa scaffold for clinical use. Fresh hearts of 6-month-old market hogs were obtained from the UC Davis Meat Laboratory. As previously described,25 the SAN region was identified and manually dissected under a microscope (Figures S1A and S1B) and verified by trichrome staining (Figure S1C) and immunostaining for HCN4 channels to identify the pacemaking CMs (Figure S1D). For all experiments, the LV tissues dissected with consistent myocardial alignment served as the control. Tissues of 300-μm-thick slices were decellularized to obtain SAN and LV matrix scaffolds as previously described25 (Figure S2).
Fabrication of rSANHTs and rLVHTs with hiPSC-PCMs: To generate rSANHTs and rLVHTs, decellularized SAN and LV ECMs were each spread out in a well of a 24-well plate and dried to enable attachment to the culture surface. Prior to recellularization, ECMs were rehydrated overnight at 37°C in CM culture medium. HiPSC-PCMs of day 7-12 post-differentiation were seeded on a SAN or an LV ECM at 2.6 × 10⁶ cells/cm². Cultured rHTs were assessed 14 days post-construction for phenotype and function as described below.

Cyclic mechanical straining of rHTs in vivo: Heterotopic transplantation has been shown to allow vascularization of tissue constructs within two weeks and survival for at least 6 months,57 suggesting the feasibility of utilizing the NSG mice as bioreactors for long-term in vivo testing of our rHTs. To impose cyclic mechanical strain at a frequency comparable to the human fetal heart rate (110-160 beats/min), the rHTs were heterotopically transplanted onto the thoracic diaphragm (80-230 breaths/min) of ~12-week-old NSG mice. No electrical overdrive suppression of the human PCMs by the host diaphragm is expected because CMs do not electrically couple with skeletal myocytes.33 Each pair of rSANHT and rLVHT 2-3 days post-fabrication was transplanted contralaterally onto the diaphragm from an abdominal access for side-by-side comparison, with the cell-seeded side facing the diaphragm. Fibrin gel was used to secure the rHTs at the transplantation site. Engrafted rHTs were harvested 2 weeks after transplantation and analyzed for cardiac and pacemaker markers through immunostaining (Table S3) and transcription analysis (Table S4), or for functional assessment by calcium transient (CaT) recording as described for in vitro rHTs.

Assessment of gene expression and morphometry of hiPSC-PCMs in rSANHTs and rLVHTs: Transcript expression of rSANHTs for pacemaker genes of interest was analyzed by quantitative real-time PCR (Table S4) using SYBR Green, normalized to GAPDH with rLVHTs as control, using a ΔΔCT method (a worked sketch of this calculation is given below). Protein expression of rHTs was assessed by whole-mount immunostaining for pacemaker and CM genes. Stained images were also analyzed for cell morphometric parameters, including area, major axis, and minor axis, by ImageJ to determine the aspect ratio and level of cellular elongation. The angle of alignment of CMs to myofibrils in radians was plotted in MATLAB using a custom script.55

Contractile assessment of rSANHT and rLVHT: Contractile function of the regional PCMs in rSANHT or rLVHT attached to a dish and of whole rHTs in suspension was measured optically at 37°C with 5% CO₂ in an on-stage incubator using an Observer Z1 microscope (Zeiss) at 10× magnification with an EMCCD camera (Photometrics). Spontaneous contractions of regional cells or clusters were quantified using the MUSCLEMOTION software.27 The contractions were analyzed for frequency, peak amplitude, time-to-peak, and relaxation time.
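The ΔΔCT quantification referred to above is the standard Livak calculation; the sketch below shows the arithmetic. All CT values are invented for illustration, and the function name is ours rather than part of any analysis package.

def relative_expression(ct_gene_sample, ct_gapdh_sample,
                        ct_gene_control, ct_gapdh_control):
    """Fold change of a gene in the sample (rSANHT) relative to the control
    (rLVHT), each normalized to GAPDH, by the 2^(-ddCT) Livak method."""
    d_ct_sample = ct_gene_sample - ct_gapdh_sample
    d_ct_control = ct_gene_control - ct_gapdh_control
    return 2.0 ** (-(d_ct_sample - d_ct_control))

# Example with invented CT values: a gene detected 2.5 cycles "earlier"
# (after GAPDH normalization) in rSANHT than in rLVHT.
fold = relative_expression(24.0, 18.0, 27.0, 18.5)
print(f"fold change (rSANHT vs. rLVHT): {fold:.2f}")   # ~5.66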
Figure 2. Generating rHTs that structurally mimic the native SAN and ventricular myocardium. (A) An illustration depicting the rHT construction processes. (B) Representative images of whole-mount native (n)SAN, nLV, decellularized (d)SAN, dLV, rSANHTs and rLVHTs stained for cTNT and DAPI. The rectangular dashed outline in the bright-field tissue image indicates the location of the stitched fluorescence image shown. A magnified inset image is shown for the dashed square region in each stitched fluorescence image. (C) Quantification of the aspect ratio for cell nuclei relative to the cell body in the native tissues and rHTs. *p < 0.05, **p < 0.01 by one-way ANOVA followed by Bonferroni post hoc test.

Figure 4. Functional assessment of the hiPSC-PCMs in the rSANHTs and rLVHTs. (A) A schematic overview of contractility analysis approaches in assessing regional contractions of hiPSC-PCMs in the rHTs 2 weeks after recellularization. (B) Representative maximum displacements of rHTs calculated by particle imaging velocimetry (PIV) analysis. (C) Representative contractile traces of the hiPSC-PCMs in rSANHTs and rLVHTs quantified by MUSCLEMOTION. Traces were normalized to arbitrary units (a.u.). (D) Quantification of contraction amplitude, time-to-peak, and relaxation time of the regional hiPSC-CMs in rSANHT (n = 8) and rLVHT (n = 10). (E) A schematic overview of the experimental workflow for CaT assessment in rSANHTs and rLVHTs.

Figure 5. Phenotype of hiPSC-PCMs in cyclic strained-rSANHT and rLVHT. (A) A diagram of the experimental design for in vivo cyclic straining of rHTs and a representative gross morphology of extracted rSANHT and rLVHT transplanted on the thoracic diaphragm after cyclic mechanical strain of 2 weeks. (B) Bar graph showing relative transcript expression of the rSANHTs to the rLVHTs after in vivo cyclic strain. Data are presented as mean relative fold changes with SEM of rSANHT vs. rLVHT with GAPDH normalization (n = 3). **p < 0.01 by Student's t test. (C-F) Immunostaining and quantification of relative protein expression in cyclic strained-rSANHTs and rLVHTs for TBX18, TBX3, HCN4, and CX43 (magenta). Recovered transplants were co-stained with the general CM marker cTNT (yellow) and nuclear counterstain DAPI (cyan). The orange dashed lines in (C) delineate the boundary between the grafted rHTs and host skeletal muscle tissues. White asterisks in the images indicate the host skeletal muscle tissues revealed by the differential interference contrast (DIC) overlay. Relative fluorescence intensities of protein expression were quantified from four confocal
Unified Description of Polarized and Unpolarized Quark Distributions in the Proton

We propose a unified new approach to describe polarized and unpolarized quark distributions in the proton based on the gauge-gravity correspondence, light-front holography, and the generalized Veneziano model. We find that the spin-dependent quark distributions are uniquely determined in terms of the unpolarized distributions by chirality separation without the introduction of additional free parameters. The predictions are consistent with existing experimental data and agree with perturbative QCD constraints at large longitudinal momentum $x$. In particular, we predict the sign reversal of the polarized down-quark distribution in the proton at $x=0.8\pm0.03$, a key property of nucleon substructure which will be tested very soon in upcoming experiments.

Introduction.-Understanding how the spin of the proton originates from its quark and gluon constituents is one of the most active research frontiers in hadron physics [1,2]. A key challenge is to determine the polarized parton distribution functions (PDFs), Δq(x, Q), which describe the difference of the probability density between helicity-parallel and helicity-antiparallel quarks in a proton. Here x is the light-front longitudinal momentum fraction of the proton carried by quarks of flavor q. The PDFs represent the universal frame-independent distribution functions of the proton which are measured in deep inelastic lepton-proton scattering (DIS) at spacelike momentum transfer Q. Since they are determined by the fundamental dynamics of color confinement, they are nonperturbative quantities. It is thus challenging to derive the quark distributions from first principles. However, the x-dependence at large x and the magnitude of the PDFs in the x → 1 limit are constrained by perturbative QCD (pQCD) [3,4]. These important constraints [3,5], which are first-principles predictions of pQCD, predict helicity retention at x ∼ 1; i.e., the helicity of a quark carrying a large momentum fraction will tend to match the helicity of its parent nucleon: the helicity asymmetry Δq(x, Q)/q(x, Q) is predicted to approach 1 as x → 1, where q(x, Q) is the unpolarized PDF. Precise measurements of Δq(x, Q) from polarized lepton-proton DIS are now available [1,2]. Although the expected increase of Δu/u toward 1 as x → 1 is observed, Δd/d is surprisingly found to remain negative in the experimentally covered region of x ≲ 0.6 [6-12], without any indication of a sign reversal at large x-values. Global pQCD analyses of the experimental data extrapolated to large x also favor negative values of Δd/d at x ∼ 1 [13-17], as do Dyson-Schwinger equation calculations [18].
This contradiction with the pQCD constraint at x → 1 challenges our confidence in understanding the large-x behavior of the polarized PDFs. In this letter, we present a novel approach to polarized quark distributions based on light-front holographic QCD (LFHQCD) [19] and the Veneziano duality [20] to calculate Δq(x). This approach provides for the first time a unique determination of the polarized quark distributions from the unpolarized quark distributions using nonperturbative color-confining dynamics. Our determination of Δq(x) provides an accurate description of the available experimental data and agrees with the pQCD constraints in the x → 1 limit. In particular, the value of x for the sign reversal of Δd(x)/d(x) is predicted, a key prediction which will be tested in upcoming experiments [21,22].

Recently, we introduced a new approach for deriving PDFs as well as generalized parton distributions (GPDs) from LFHQCD [50]. It incorporates both Regge behavior at small x and inclusive counting rules at large x. This approach can simultaneously produce the nucleon and pion unpolarized PDFs with minimal parameters, keeping the predictive power with the universality of the reparametrization function. Motivated by these successes, we will extend the formalism here to polarized distributions; no additional parameters will be required.

Formalism.-We first briefly review the derivation of unpolarized proton PDFs from the holographic expression of its spin-non-flip Dirac form factor F₁(t), where t = -Q² is the square of the transferred momentum. The contribution from a twist-τ Fock state in the light-front Fock expansion of the proton eigensolution, a component with effectively τ constituents, to the Dirac form factor is given by [19,51]

F_τ(t) = c_{V,τ} F_{V,τ}(t) + c_{V,τ+1} F_{V,τ+1}(t),  (1)

with

F_{V,τ}(t) = (1/N_{V,τ}) B(τ - 1, 1/2 - t/(4λ)).  (2)

The subscript V indicates the coupling to a vector current. λ is the universal mass scale in LFHQCD, which can be fixed by hadron spectroscopy; the fit to the ρ/ω trajectory gives √λ = 0.534 GeV. The c_{V,τ} and c_{V,τ+1} are coefficients to be determined, N_{V,τ} is a normalization factor, and B(x, y) is the Euler beta function. The two terms in Eq. (1) correspond to the contributions from the two chiral components, Ψ₊ and Ψ₋, of the bulk field solution [19]. Eq. (2) has the same structure as a generalization of the Veneziano amplitude B(1 - α(s), 1 - α(t)) [20] to a non-strong process [52,53], here electron-nucleon scattering. This amounts to replacing the s-dependence 1 - α(s) by a constant, which determines the asymptotic behavior of the form factor for large negative values of t [52,53]. Our framework thus incorporates nonperturbative analytic structures found in pre-QCD studies, such as Regge trajectories and generalized Veneziano amplitudes. The t-dependence in Eq. (2) can be rewritten as 1 - α_V(t) with the Regge trajectory [50]

α_V(t) = 1/2 + t/(4λ).  (3)

This is just the ρ/ω trajectory emerging from LFHQCD for vector mesons with massless quarks [30]. The quark mass correction is negligible for u and d quarks; for the strange quark contribution, the φ trajectory shifts the intercept to α_φ(0) ≈ 0.01 [54]. The GPDs at zero skewness ξ, obtained from the integral representation of B(x, y), are [50]

H^q_τ(x, t) = q_τ(x) exp[t f(x)],  (4)

where the unpolarized PDF q_τ(x) and the profile function f(x) are related by a universal reparametrization function w(x),

f(x) = (1/(4λ)) log[1/w(x)].  (6)

The function w(x) obeys the boundary conditions

w(1) = 1,  (7)
w'(1) = 0.  (8)
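The analytic structure described above can be probed numerically. The sketch below evaluates the Veneziano-type building block implied by the reconstructed Eq. (2), B(τ - 1, 1 - α_V(t)) with α_V(t) = 1/2 + t/(4λ): it locates the timelike poles (the ρ/ω tower) and shows the power-law falloff at spacelike t. Here √λ = 0.534 GeV is taken from the text, while the choice τ = 3 (the valence Fock state) is ours for illustration.

import numpy as np
from scipy.special import beta

lam = 0.534**2  # GeV^2, from sqrt(lam) = 0.534 GeV quoted in the text

def alpha_V(t):
    return 0.5 + t / (4.0 * lam)

def F(t, tau=3):
    # Normalized so that F(0) = 1, with N_tau = B(tau - 1, 1/2).
    return beta(tau - 1, 1.0 - alpha_V(t)) / beta(tau - 1, 0.5)

# Timelike poles sit where 1 - alpha_V(t) is a non-positive integer,
# i.e. t = 4*lam*(n + 1/2): the rho/omega radial tower.
print("vector tower M^2 (GeV^2):", [round(4 * lam * (n + 0.5), 3) for n in range(3)])

# Power-law falloff at large spacelike momentum transfer t = -Q^2:
for Q2 in (5.0, 10.0, 20.0):
    print(f"Q^2 = {Q2:5.1f} GeV^2   F = {F(-Q2):.3e}")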
Then for a twist-τ state, the unpolarized PDF is

q_τ(x) = c_τ q̂_τ(x) + c_{τ+1} q̂_{τ+1}(x),   q̂_τ(x) = (1/N_τ) [1 - w(x)]^{τ-2} w(x)^{-1/2} w'(x).  (9)

Now, we turn to the polarized distributions, for which the coupling of an axial current, rather than a vector current, is needed. Since the current operator differs by a γ₅, the axial form factor follows Eq. (1), but with a sign flip from the contribution of the chiral-odd component:

A_τ(t) = c_{A,τ} F_{A,τ}(t) - c_{A,τ+1} F_{A,τ+1}(t),  (10)

where the subscript A indicates the coupling to an axial current. F_{A,τ}(t) has the same structure as F_{V,τ}(t), but with the Regge trajectory replaced by the axial one,

α_A(t) = t/(4λ),  (12)

emerging from LFHQCD [30]. The coefficients in (10) and those in (1) are related since they correspond to the same state. Thus, apart from the sign flip in the second term in (10), they have the same value relative to the normalization factors, as given by

c_{V,τ}/N_{V,τ} = c_{A,τ}/N_{A,τ}.  (13)

Since the normalization convention is arbitrary, we set N_{V,τ} = N_{A,τ} = N_τ, and therefore identify the coefficients as c_{V,τ} = c_{A,τ} = c_τ [55]. Following the same procedure, we express the Δq(x) for a twist-τ state as

Δq_τ(x) = c_τ Δq̂_τ(x) - c_{τ+1} Δq̂_{τ+1}(x),  (14)

where

Δq̂_τ(x) = (1/N_τ) [1 - w(x)]^{τ-2} w'(x).  (15)

At large x, we expand w(x) near x = 1 according to the boundary conditions (7) and (8), and find that q_τ(x) and Δq_τ(x) have the same behavior, where higher powers of (1 - x) are suppressed. For both the q(x) (9) and the Δq(x) (14), the function is dominated by the first term at large x, unless its coefficient c_τ vanishes. Then the helicity asymmetry at x → 1 is consistent with the pQCD constraint [3,5]. The spin-aligned and spin-antialigned distributions are linear combinations of the unpolarized and polarized distributions:

q^↑(x) = [q(x) + Δq(x)]/2,   q^↓(x) = [q(x) - Δq(x)]/2.  (16)

We find, in the large-x limit,

q^↑_τ(x) ∝ (1 - x)^{2τ-3},   q^↓_τ(x) ∝ (1 - x)^{2τ-1}.  (17)

The two helicity distributions tend respectively to a pure contribution from a single chiral component, Ψ₊ or Ψ₋, of the bulk field solution. Eqs. (21) and (22) provide the asymptotic normalization, which can be used to derive the same relation as in Eq. (13). From Eq. (17), q^↑(x) and q^↓(x) decrease as (1 - x)^{2τ-3} and (1 - x)^{2τ-1}, respectively. For the valence state τ = 3, they behave as (1 - x)³ and (1 - x)⁵, consistent with pQCD up to logarithmic corrections [3,4]. At small x, w(x) has the linear x-dependence w(x) ∼ x. Thus Δq(x) decreases faster than q(x) with decreasing x, and the helicity asymmetry behaves as

Δq(x)/q(x) ∝ x^{1/2},  x → 0,  (23)

where the exponent 1/2 is given by the difference between the intercepts of the vector and axial Regge trajectories (3) and (12); the intercepts are shifted by a negligible amount when u and d quark mass corrections are included. When x → 0, the helicity asymmetry goes to zero, which indicates that the helicity correlation between a quark and its parent nucleon disappears. This result is a natural expectation [3], because the constituents and the nucleon have infinite relative rapidity for x ∼ 0. This property is confirmed by the experimental data [56].

Numerical results.-Up to now, all results have been derived for arbitrary twist-τ components without any specific choices for the coefficients c_τ or for w(x), as long as the general boundary conditions are fulfilled. In order to obtain quantitative predictions for the polarized distributions, the values of the coefficients c_τ are required. We will determine them via the proton's Dirac form factor. If only valence states are considered, we can express the Dirac form factors of the u and d quarks in terms of F_{V,τ}(t) (Eqs. (25) and (26)), where the quark number sum rule has been applied, with N_τ = B(τ - 1, 1/2) normalizing F_{V,τ}(0) to 1. The sea quark constituents, beyond the valence state, are encoded in higher Fock states with additional quark-antiquark pairs.
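To make the w(x) construction concrete, the sketch below builds the twist-τ building block q̂_τ(x) of the reconstructed Eq. (9) from a specific reparametrization function. The form w(x) = x^(1-x) exp(-a(1-x)²) is the one adopted in Ref. [50], but the value a = 0.5 is a placeholder rather than the fitted parameter, so the output is illustrative only. The script checks the valence normalization and the (1 - x)^(2τ-3) falloff discussed above.

import numpy as np
from scipy.special import beta as B
from scipy.integrate import quad

a = 0.5  # placeholder, not the fitted value

def w(x):
    # w(x) = x^(1-x) * exp(-a (1-x)^2): w(0) = 0, w(1) = 1, w'(1) = 0
    return x**(1.0 - x) * np.exp(-a * (1.0 - x)**2)

def dw(x):
    # analytic derivative via log w(x) = (1-x) log x - a (1-x)^2
    return w(x) * (-np.log(x) + (1.0 - x) / x + 2.0 * a * (1.0 - x))

def qhat(x, tau=3):
    # twist-tau building block of the reconstructed Eq. (9)
    return (1.0 - w(x))**(tau - 2) * w(x)**(-0.5) * dw(x) / B(tau - 1, 0.5)

norm, _ = quad(qhat, 0.0, 1.0)       # equals 1 by construction
print(f"normalization: {norm:.6f}")

for x in (0.95, 0.99):               # large-x falloff ~ (1-x)^(2*tau-3)
    print(f"x = {x}: qhat = {qhat(x):.3e}, (1-x)^3 = {(1.0 - x)**3:.3e}")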
In this work, we will truncate the Fock expansion of the nucleon state up to only one quark-antiquark pair, which is a twist-5 state. As a simplifying procedure to include the sea quark contributions, we can add sea-quark terms to Eqs. (25) and (26); this assumes that the quark number sum rule is saturated by the contribution from the valence quarks. One can also include the intrinsic strange contribution as in Ref. [54]. We will compare three situations: i) only the valence state contribution; ii) including the contribution from the uū and dd̄ pairs; iii) also including the contribution from the intrinsic strange sea, taking results from our previous work [54]. We fix the coefficients by matching to the Dirac form factor [57], as listed in Table I. Since the electromagnetic form factors only measure the difference between quark and antiquark contributions, namely c_{τ,u} ≡ u_τ - ū_τ and similarly for the d quark, contributions to u_τ and ū_τ cannot be uniquely separated. However, a lower boundary can be derived from the positivity bounds q^↑(x) ≥ 0 and q^↓(x) ≥ 0. With the asymptotic relations (21) and (22), this requirement is fulfilled by the minimal sea contribution, and similarly for d̄. This constraint is stronger than that utilized in Ref. [54], where only the sum q^↑(x) + q^↓(x) ≥ 0 is required. Since the sea quark distributions are not separately constrained by electromagnetic form factors, one needs other physical observables that are sensitive to the quark and antiquark contributions individually to determine them separately. Instead of attempting a full separation, which is beyond the purpose of this work, we use the relation of the isovector axial charge,

g_A = ∫₀¹ dx [Δu⁺(x) - Δd⁺(x)],  (30)

to constrain the non-minimal sea quark. The value of the isovector axial charge, g_A = 1.2732(23), is precisely determined by neutron weak decay [58]. As shown in Table I, its value evaluated with a minimal sea component, g_{A,min}, is smaller than the experimental value. Obtaining the value of g_A with the minimal shift u_τ → u_τ + δ_{τ,u}, ū_τ → ū_τ + δ_{τ,u} (and similarly for the d quark) implies a positive shift δ_{τ=5,u} and/or δ_{τ=6,d}. Therefore, we satisfy the relation (30) by the shifts δ_{τ=5,u} and δ_{τ=6,d}, and take the variation between them as part of the theoretical uncertainty.

Figure 1 (caption, partial): comparison with the global fit [15] and experimental data [6-12]. Three sets of parameters (see Table I) are determined from the Dirac form factor and unpolarized valence distributions. The bands represent the variation with different approaches to saturating the axial charge g_A. The blue dashed curve is the valence-state contribution without saturating the axial charge.

For the universal reparametrization function w(x), we take the same form as in [50], with the parameter "a" fixed by the first moment of the unpolarized valence quark distributions. One can in principle adopt any parametrization form that fulfills the boundary conditions (7) and (8), and the predictive power is kept by the universality of w(x) for all PDFs.

Figure caption (partial): comparison with the global fit [15] and experimental data [10-12]. The bands have the same meaning as in Fig. 1.

For comparison with measurements, we evolve the PDFs from Q = 1.06 GeV, which is the matching scale determined from the study of the strong coupling constant [59]. As shown in Figs. 1-3, our numerical results are in good agreement with the global fit [15] and measurements [6-12]. The isovector combination Δu⁺ - Δd⁺, where u⁺ and d⁺ stand for u + ū and d + d̄, is the distribution relevant to the relation of the axial charge (30).
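Relation (30) is a first-moment constraint, so imposing it amounts to a single quadrature. The sketch below shows that arithmetic with invented placeholder shapes for Δu⁺ and Δd⁺; it is not the paper's fit, only a demonstration of how the g_A constraint is evaluated.

from scipy.integrate import quad

def delta_u_plus(x):
    return 3.0 * x**0.5 * (1.0 - x)**3    # invented placeholder shape

def delta_d_plus(x):
    return -1.2 * x**0.5 * (1.0 - x)**3   # invented placeholder shape

# Eq. (30): g_A = integral over x of [du+(x) - dd+(x)]
g_A, _ = quad(lambda x: delta_u_plus(x) - delta_d_plus(x), 0.0, 1.0)
print(f"toy g_A = {g_A:.4f}   (experiment: 1.2732(23))")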
The dashed blue curve in Fig. 1 is the contribution from the valence state only; the difference with the full results, cases I, II and III, which include saturation of the axial charge, is noticeable. This is consistent with the analysis of the Pauli form factor in [60], which demonstrates the significance of the sea quarks in describing spin-related quantities. As shown in Fig. 2, the variation of our predictions for each flavor from the three sets of coefficients is large, since the sea quark coefficients are not well constrained by the procedure discussed above. Furthermore, the truncation of the Fock state up to five-quark states, which allows only one pair of sea quarks, may potentially result in greater theoretical uncertainties for each individual flavor. The relation Eq. (30) provides an important constraint, but it still leaves some flexibility, such as adding the same term to the uū and dd̄ contributions. Since the goal of this letter is to introduce a new approach to the study of polarized PDFs, we will leave this issue to more detailed future investigations. Most importantly, the critical region for the upcoming Jefferson Lab spin program [21,22] is the large-x region, which is dominated by the valence state and is thus much less affected by the variation of the sea. As observed in Fig. 3, the predictions with the three sets of coefficients are consistent and very similar in the large-x region. As we have analytically demonstrated above, our approach supports the pQCD prediction that the helicity asymmetry approaches 1 in the large-x limit and follows the power behavior (1 - x)². In particular, the sign reversal of the d-quark helicity distribution in the proton is robustly predicted to be close to x ∼ 0.8.

Summary.-We have presented a new approach to the prediction of spin-dependent quark distributions from nonperturbative color-confining dynamics. With all parameters fixed by the nucleon Dirac form factor and unpolarized quark distributions, our predictions for the polarized distributions agree with existing data. Our analytic results for Δq(x)/q(x) are consistent with the large-x behavior predicted by pQCD. Our analysis also supports the pQCD prediction of helicity retention at x ∼ 1; this fundamental prediction has been challenged by Dyson-Schwinger equation calculations, but it has not yet been constrained by existing data. In the large-x region, where the valence state dominates, we predict that the d-quark helicity will flip its sign at x ∼ 0.8, regardless of the procedure used to include the sea quark contribution. This prediction will be tested soon [21,22]. The analytic behavior at large x and the agreement with existing data reinforce confidence in the pQCD prediction, which can be implemented in global analyses such as that of Ref. [61]. In addition, the relation between the unpolarized and polarized distributions can be tested by simultaneous fits to unpolarized and polarized PDFs.

We would like to thank J.-P. Chen for helpful discussions. This work is supported in part by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics under contracts No. DE-AC05-06OR23177 and No. DE-FG02-03ER41231.

* liutb@jlab.org
Constitutive relations for compressible granular flow in the inertial regime

Granular flows occur in a wide range of situations of practical interest to industry, in our natural environment and in our everyday lives. This paper focuses on granular flow in the so-called inertial regime, when the rheology is independent of the very large particle stiffness. Such flows have been modelled with the µ(I), Φ(I)-rheology, which postulates that the bulk friction coefficient µ (i.e. the ratio of the shear stress to the pressure) and the solids volume fraction φ are functions of the inertial number I only. Although the µ(I), Φ(I)-rheology has been validated in steady state against both experiments and discrete particle simulations in several different geometries, it has recently been shown that this theory is mathematically ill-posed in time-dependent problems. As a direct result, computations using this rheology may blow up exponentially, with a growth rate that tends to infinity as the discretization length tends to zero, as explicitly demonstrated in this paper for the first time. Such catastrophic instability due to ill-posedness is a common issue when developing new mathematical models and implies that either some important physics is missing or the model has not been properly formulated. In this paper an alternative to the µ(I), Φ(I)-rheology that does not suffer from such defects is proposed.
In the framework of compressible I-dependent rheology (CIDR), new constitutive laws for the inertial regime are introduced; these match the well-established µ(I) and Φ(I) relations in the steady-state limit and at the same time are well-posed for all deformations and all packing densities. Time-dependent numerical solutions of the resultant equations are performed to demonstrate that the new inertial CIDR model leads to numerical convergence towards physically realistic solutions that are supported by discrete element method simulations.

1. Introduction
The original incompressible µ(I)-rheology was proposed (GDR MiDi 2004; Jop, Forterre & Pouliquen 2006) to describe granular flow in which the particles are rigid and the solids volume fraction φ is constant and uniform. The key physical insight behind the theory was that, under these circumstances, the only non-dimensional groups are the bulk friction coefficient µ (the ratio τ/p of shear stress to pressure) and the inertial number

I = γ̇ d / √(p/ρ*),  (1.1)

where d is the grain diameter, γ̇ is the shear rate, p is the pressure and ρ* is the intrinsic grain density. When forming a constitutive law from these groups, dimensional analysis implies that one group must be a function of the other, i.e.

τ/p = µ(I).  (1.2)

In reality granular flow is compressible and the solids volume fraction φ may vary. With compressibility there are multiple non-dimensional groups, which greatly complicates the possible rheology. Nevertheless, GDR MiDi (2004), da Cruz et al. (2005) and Pouliquen et al. (2006) found that, in steady-state simulations and experiments, φ depended on just the inertial number,

φ = Φ(I),  (1.3)

where Φ is a monotonically decreasing function of I. Note that (1.3) implies Bagnold scaling (Bagnold 1954) for the pressure (i.e. it scales with the square of the strain rate). Specifically, if Ψ(φ) is the inverse function of Φ(I), then (1.1) may be rearranged to yield

p = ρ* (γ̇ d / Ψ(φ))².  (1.4)

This is consistent with the discrete element method (DEM) simulations of Chialvo, Sun & Sundaresan (2012) in the inertial regime, in which the solids volume fraction φ is less than a critical volume fraction φ_c, namely the jamming point (Liu & Nagel 1998). For φ > φ_c, the stiffness of the particles becomes important and the rheology changes to either a 'quasi-static' or an 'intermediate' regime, which both depart from the Bagnold scaling. In general, the critical volume fraction φ_c is dependent on the polydispersity of the grain-size distribution as well as the interparticle friction. These observations led to the compressible µ(I), Φ(I)-rheology (GDR MiDi 2004; Pouliquen et al. 2006; Forterre & Pouliquen 2008) in which the scalar relation (1.3), as well as (1.2), is assumed to hold even in non-steady situations. Although it might seem that compressibility would reduce the tendency to ill-posedness, in fact the compressible µ(I), Φ(I)-rheology is even more prone to ill-posedness in time-dependent calculations than the incompressible µ(I)-rheology (Heyman et al. 2017). Such instability is perhaps not surprising since the equations are not always dissipative (see appendix B). To demonstrate this ill-posedness, § 3 presents computations for a one-dimensional gravity-free shear cell in which the µ(I), Φ(I)-rheology blows up catastrophically once the mesh size is refined sufficiently, even though the initial conditions are just a small perturbation of the steady solution. The purpose of this paper is to develop a viable alternative to µ(I), Φ(I)-rheology that preserves its successes.
This alternative is based on the compressible I-dependent rheology (CIDR) introduced recently in Barker et al. (2017). After briefly recalling the general formulation of CIDR, specific yield and dilatancy functions are introduced, which ensure that the theory is well-posed and reduces to µ(I), Φ(I)-rheology in the correct limits. Specifically, the new inertial CIDR equations recover both the µ(I) and Φ(I) relations (1.2)-(1.3) when the flow is isochoric (div u = 0), which in many geometries corresponds to steady flow. Sample computations with the one-dimensional inertial CIDR equations in a gravity-free shear cell converge to the steady-state solution, even for initial conditions that are a long way from steady state. Moreover, these computations agree well with transient DEM simulations of the same flows. In §§ 2 and 3 the µ(I), Φ(I)-rheology is described and computations that blow up because of ill-posedness are presented. Sections 4 and 5 introduce CIDR for the inertial regime and show that computations converge to steady state, and these computations are supported by DEM simulations described in § 6. The final two sections discuss a number of related issues and summarize the overall conclusions. Appendix A derives conditions for ill-posedness of the one-dimensional µ(I), Φ(I)-rheology in a gravity-free shear cell, appendix B relates the CIDR equations to the thermodynamic analysis of Goddard & Lee (2018) and appendix C describes the formulation of the DEM simulations.

2. µ(I), Φ(I)-rheology
In a continuum formulation, dense granular flow is described by the solids fraction φ, the velocity vector u and the symmetric Cauchy stress tensor σ. In two dimensions (to which this paper is restricted) this formulation constitutes six scalar unknown functions of position and time. These satisfy governing equations including three conservation laws: one for conservation of mass,

∂φ/∂t + div(φu) = 0,  (2.1)

and two momentum conservation laws,

ρ* [∂(φu)/∂t + div(φ u ⊗ u)] = div σ,  (2.2)

where ρ* is the intrinsic density of the grains and body forces have been neglected. In addition to the three conservation laws, three constitutive relations are needed to close this system. To write these constitutive laws, it is convenient to decompose the stress tensor into a pressure term p = -σ_ii/2 plus a trace-free tensor τ, called the shear-stress tensor or the deviatoric stress tensor, such that

σ = -p 1 + τ.  (2.3)

The constitutive law for the incompressible µ(I)-rheology (Jop et al. 2006; Forterre & Pouliquen 2008; Barker et al. 2015) is formulated in terms of the (total) strain-rate tensor

D = (∇u + (∇u)ᵀ)/2,  (2.4)

but for the compressible generalizations of the µ(I)-rheology (GDR MiDi 2004; Pouliquen et al. 2006; Forterre & Pouliquen 2008; Heyman et al. 2017; Goddard & Lee 2018) it is also useful to define the deviatoric strain-rate tensor

S = D - (1/2)(div u) 1,  (2.5)

where S_ij is trace-free. The alignment constitutive law, which is imposed in both µ(I), Φ(I)-rheology and CIDR, states that

τ/‖τ‖ = S/‖S‖,  (2.6)

where the notation ‖T‖ = √(tr(T²)/2) denotes the second invariant of any symmetric tensor T. In particular, the eigenvectors of the deviatoric stress tensor and the deviatoric strain-rate tensor are parallel. This matrix equation is in fact equivalent to just one scalar equation and relies upon the implicit assumption that ‖S‖ ≠ 0, i.e. that the material is actually shearing. The other two constitutive laws in the µ(I), Φ(I)-rheology specify the magnitude of the deviatoric stress ‖τ‖ and the solids fraction φ:

‖τ‖ = µ(I) p,  (2.7)
φ = Φ(I),  (2.8)

in terms of the pressure p and the inertial number

I = 2‖S‖ d / √(p/ρ*),  (2.9)

where µ(I) is the bulk friction coefficient and d is the grain diameter. The constitutive laws (2.6)-(2.8) are referred to as µ(I), Φ(I)-rheology.
Commonly used functional forms for µ(I) and Φ(I) in the constitutive laws (2.7) and (2.8) are

µ(I) = µ_s + (µ_d - µ_s)/(1 + I_0/I),  (2.10)
Φ(I) = φ_c - a I,  (2.11)

where µ_s, µ_d, I_0, φ_c and a are constant material parameters (GDR MiDi 2004; Jop et al. 2006; Forterre & Pouliquen 2008; Trulsson et al. 2013). As shown in figure 1, these functions provide a good fit for the DEM simulations performed in this paper (see appendix C). Table 1 lists the best-fit parameters extracted from the steady-state DEM simulations. Note that figure 1 includes data for multiple cases with differing particle stiffness and system size, which shows that the parameters do not depend on the details of the DEM simulations. Incidentally, the form Φ(I) = φ_c - (φ_c - φ_min)/(I*/I + 1), where φ_min and I* are constants, might be preferable to (2.11), which becomes negative for large I. However, the simpler form (2.11) is adequate for most practical purposes, because I rarely becomes large enough to drive Φ(I) negative. Since (2.11) is a monotonically decreasing function, equation (2.8) can be inverted to write the inertial number as a function of the solids volume fraction,

I = Ψ(φ),  (2.12)

which, for the specific function (2.11) used here, gives

Ψ(φ) = (φ_c - φ)/a.  (2.13)

From this it is also possible to determine an equation of state for the pressure by substituting (2.12) into the inertial number (2.9) and solving for p to give

p = ρ* (2‖S‖ d / Ψ(φ))²,  (2.14)

which, for fixed φ, exhibits the Bagnold scaling in that the pressure scales with the square of the strain rate, as in (1.4). Note that this pressure tends to infinity as φ → φ_c, behaviour that is discussed in § 7.2.

3. Catastrophic failure with the µ(I), Φ(I)-rheology
As mentioned in the introduction, it follows from Heyman et al. (2017) that the dynamic equations of µ(I), Φ(I)-rheology for two-dimensional flow are always ill-posed. In this section it is demonstrated that ill-posedness can contaminate even the simplest of problems: flow in a two-dimensional shear cell that depends on only one spatial variable.

3.1. Equations for one-dimensional flow in a shear cell
In a planar shear cell granular material is confined from above and below by two parallel flat frictional walls, whose relative motion, in the absence of gravity, provides the only driving for the flow. This is a popular idealized geometry for the study of granular rheology, and the set-up can be either volume controlled, by fixing the wall separation distance H, or pressure controlled, by applying a normal force at the walls. In the following investigations H will be fixed and solutions will be restricted to those which are invariant in the shearing direction x and depend only on the vertical coordinate z and time. As such, this reduces the two-dimensional problem to one spatial dimension in the interval 0 < z < H. As the flow is taken to be compressible, both components of the velocity u = (u, w) may be non-zero and depend on z and t. For this special class of flows, the conservation laws (2.1)-(2.2) simplify to

∂φ/∂t + ∂(φw)/∂z = 0,  (3.1)
ρ* [∂(φu)/∂t + ∂(φuw)/∂z] = ∂τ_xz/∂z,  (3.2)
ρ* [∂(φw)/∂t + ∂(φw²)/∂z] = ∂(τ_zz - p)/∂z,  (3.3)

since all x-derivatives vanish. In addition, the deviatoric strain rate reduces to

S = ( -(1/2)∂w/∂z   (1/2)∂u/∂z ;  (1/2)∂u/∂z   (1/2)∂w/∂z ),  (3.4)

and its second invariant becomes

‖S‖ = (1/2) √((∂u/∂z)² + (∂w/∂z)²).  (3.5)

It follows that in the shear cell the equation of state (2.14) reduces to

p = ρ* d² ((∂u/∂z)² + (∂w/∂z)²) / Ψ(φ)²,  (3.6)

which provides an explicit expression for the pressure.
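The constitutive functions (2.10)-(2.14) are simple enough to code directly. The sketch below uses placeholder parameter values of typical magnitude rather than the best-fit values of table 1, and demonstrates the two properties just noted: Bagnold scaling of the pressure with strain rate at fixed φ, and the divergence of p as φ → φ_c.

import numpy as np

mu_s, mu_d, I_0 = 0.38, 0.64, 0.28   # placeholder friction parameters
phi_c, a = 0.60, 0.35                # placeholder dilatancy parameters
rho_star, d = 1.0, 1.0               # grain density and diameter (scaled out)

def mu(I):              # (2.10): bulk friction coefficient
    return mu_s + (mu_d - mu_s) / (1.0 + I_0 / I)

def Phi(I):             # (2.11): steady-state solids volume fraction
    return phi_c - a * I

def Psi(phi):           # (2.13): inverse of Phi
    return (phi_c - phi) / a

def pressure(phi, gamma_dot):   # (2.14) with gamma_dot = 2*||S||
    return rho_star * (gamma_dot * d / Psi(phi))**2

# Bagnold scaling: at fixed phi, doubling the shear rate quadruples p.
print(pressure(0.55, 1.0), pressure(0.55, 2.0))   # ratio = 4
# And p diverges as phi -> phi_c:
for phi in (0.55, 0.59, 0.599):
    print(f"phi = {phi}: p = {pressure(phi, 1.0):.3e}")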
The alignment condition (2.6) implies that the relevant components of the deviatoric stress are

τ_xz = ‖τ‖ (∂u/∂z)/√((∂u/∂z)² + (∂w/∂z)²),   τ_zz = ‖τ‖ (∂w/∂z)/√((∂u/∂z)² + (∂w/∂z)²),  (3.7)

which, using the constitutive law (2.7) and equations (3.5) and (2.12), implies

τ_xz = µ(Ψ(φ)) p (∂u/∂z)/√((∂u/∂z)² + (∂w/∂z)²),   τ_zz = µ(Ψ(φ)) p (∂w/∂z)/√((∂u/∂z)² + (∂w/∂z)²).  (3.8)

Equations (3.6) and (3.8) may then be substituted into the conservation laws (3.1)-(3.3) to obtain a system of three partial differential equations (PDEs) for φ, u and w that are first order in time and second order in space. For a given set of initial conditions φ(z, 0), u(z, 0), w(z, 0) the three PDEs must be solved subject to the boundary conditions that there is no slip at the walls,

u(0, t) = 0,  w(0, t) = 0,  u(H, t) = V_0,  w(H, t) = 0,  (3.9)

where V_0 is the velocity of the top wall. It is easily verified that the steady linear shearing solution

u = V_0 z/H,  w = 0,  φ = φ_0,  (3.10)

where φ_0 is the average initial solids volume fraction, satisfies (3.1)-(3.3) given (3.6), (3.8) and the boundary conditions (3.9). It is therefore anticipated that any set of initial conditions will tend to this solution.

3.2. Blow up and grid dependence with the µ(I), Φ(I)-rheology
For one-dimensional flow in a shear cell, the physical variables are non-dimensionalized with the scalings

ẑ = z/d,  t̂ = V_0 t/d,  û = u/V_0,  ŵ = w/V_0,  p̂ = p/(ρ* V_0²),  τ̂ = τ/(ρ* V_0²),  (3.12)

where the hats indicate non-dimensional variables. Equations (3.1)-(3.3) with the relations (3.6) and (3.8) therefore reduce to the non-dimensional system

∂φ/∂t̂ + ∂(φŵ)/∂ẑ = 0,  (3.13)
∂(φû)/∂t̂ + ∂(φûŵ)/∂ẑ = ∂τ̂_xz/∂ẑ,  (3.14)
∂(φŵ)/∂t̂ + ∂(φŵ²)/∂ẑ = ∂(τ̂_zz - p̂)/∂ẑ,  (3.15)

where p̂ = ((∂û/∂ẑ)² + (∂ŵ/∂ẑ)²)/Ψ(φ)² and τ̂_xz, τ̂_zz are given by (3.8) with p replaced by p̂, so that the grain size d, the grain density ρ* and the wall velocity V_0 scale completely out of the system. The conservation laws (3.13)-(3.15) are solved by the numerical method of lines (Schiesser 2012). This method discretizes the spatial derivatives in the PDEs, which generates a system of coupled ordinary differential equations. These are then integrated forward in time using MATLAB's ODE15s routine. Two different methods are used to approximate the spatial derivatives: the first is a finite difference method using first-order differences, while the second method uses a Chebyshev spectral scheme (Canuto et al. 1988; Trefethen 2000). The non-dimensional height of the cell Ĥ is chosen to equal 30 (i.e. the physical height of the cell is 30 grain diameters). The initial conditions are plotted in figure 2 and are identical to the non-dimensionalized steady solution, except that the vertical velocity has a small smooth imperfection in the centre of the domain (3.16), where ε is a small parameter that sets the amplitude of the imperfection. It should be noted that this form does not satisfy the boundary conditions exactly, but does so within numerical precision. Provided ε is not too small, the ill-posedness condition (A 19), outlined in appendix A, is satisfied in the centre of the domain, as shown in figure 2(d), and it is therefore anticipated that numerical problems will develop. Confirming this anticipation, various snapshots of the non-dimensional vertical velocity are shown in figure 3. (The full time evolutions of û(ẑ, t̂), ŵ(ẑ, t̂) and φ(ẑ, t̂) are shown in supplementary movies 1-3.) In the high-resolution plots (figure 3a,b), the growth of noise causes the numerical method to break down at the indicated time; at this point integration tolerances can no longer be met. Note that the misbehaviour originates in the centre of the domain, where the ill-posedness condition (A 19) is satisfied. Moreover, although the two solutions blow up at similar times, their spatial structure is completely different. In contrast, a low-resolution (N_z = 47) simulation using the finite difference scheme does not blow up and in fact decays towards the steady state, as shown in figures 3(c) and 4(a). (The same occurs for the low-resolution run with the Chebyshev spectral scheme; see figure 4a.)
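The method-of-lines procedure described above is straightforward to reproduce. The sketch below is a minimal Python analogue, with scipy's BDF integrator standing in for MATLAB's ODE15s and with central differences for the spatial derivatives; the fluxes follow the relations (3.6), (3.8) and (3.13)-(3.15) as reconstructed above, and the parameter values, the Gaussian stand-in for the perturbation (3.16) and the crude boundary treatment are our own choices. Consistent with the discussion in the text, whether such a computation decays or blows up depends on the resolution and on the numerical diffusion of the scheme.

import numpy as np
from scipy.integrate import solve_ivp

# Placeholder constitutive parameters (same illustrative values as above).
mu_s, mu_d, I_0, phi_c, a = 0.38, 0.64, 0.28, 0.60, 0.35
mu = lambda I: mu_s + (mu_d - mu_s) / (1.0 + I_0 / I)
Psi = lambda phi: (phi_c - phi) / a

H, Nz = 30.0, 47                        # coarse grid, as in the text
z = np.linspace(0.0, H, Nz)
dz = z[1] - z[0]
ddz = lambda f: np.gradient(f, dz)      # central differences, one-sided at walls

def rhs(t, y):
    phi, m, n = y.reshape(3, -1)        # conserved variables: m = phi*u, n = phi*w
    u, w = m / phi, n / phi
    u[0], w[0] = 0.0, 0.0               # no-slip data (3.9), scaled wall speed 1
    u[-1], w[-1] = 1.0, 0.0
    uz, wz = ddz(u), ddz(w)
    shear = np.sqrt(uz**2 + wz**2) + 1e-12
    p = (shear / Psi(phi))**2           # equation of state (3.6), non-dimensional
    tau_xz = mu(Psi(phi)) * p * uz / shear   # alignment + yield, (3.8)
    tau_zz = mu(Psi(phi)) * p * wz / shear
    dphi = -ddz(phi * w)                          # (3.13)
    dm = -ddz(phi * u * w) + ddz(tau_xz)          # (3.14)
    dn = -ddz(phi * w * w) + ddz(tau_zz - p)      # (3.15)
    dm[[0, -1]] = 0.0                   # crude: hold wall momenta at their data
    dn[[0, -1]] = 0.0
    return np.concatenate([dphi, dm, dn])

phi0 = 0.55 * np.ones(Nz)               # uniform initial packing (placeholder)
u0 = z / H                              # steady linear shear (3.10)
w0 = 1e-3 * np.exp(-0.5 * (z - H / 2)**2)   # Gaussian stand-in for (3.16)
y0 = np.concatenate([phi0, phi0 * u0, phi0 * w0])

sol = solve_ivp(rhs, (0.0, 1.0), y0, method="BDF", rtol=1e-6, atol=1e-8)
print("status:", sol.status, "  time reached:", sol.t[-1])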
This is exactly the behaviour that one might expect of a well-posed model, but here it is entirely due to the higher numerical diffusion in the low-resolution scheme. On grid refinement this diffusion is no longer sufficient to suppress the underlying ill-posedness of the equations, and instabilities develop. For all values of N_z above a certain threshold (see figure 4b), the initial perturbation is amplified indefinitely, causing the method to fail. The time t̂* at which the Chebyshev spectral discretization fails is plotted in figure 4(b) as a function of the number of grid points N_z. Higher spatial resolution computations resolve higher wavenumber instabilities that grow at a faster rate, leading to earlier failure.

4. Inertial compressible I-dependent rheology (iCIDR)
The CIDR is a general framework that retains the conservation laws (2.1)-(2.2) and the alignment condition (2.6), but it modifies the other two constitutive equations. The constitutive law for the deviatoric stress (2.7) is replaced by assuming that there is a yield condition such that, if material is deforming, then

‖τ‖ = Y(φ, p, I),  (4.1)

where Y is called the 'yield function'. In addition, it is assumed that the density evolves dynamically according to a flow rule that is reminiscent of critical state soil mechanics (Schofield & Wroth 1968; Jackson 1983):

div u = f(φ, p, I) ‖S‖,  (4.2)

where f is called the 'dilatancy function'. Barker et al. (2017) showed that the CIDR equations are linearly well-posed provided the yield and dilatancy functions satisfy three conditions (4.3)-(4.5): a first-order compatibility PDE relating Y and f, together with two inequalities, one of which requires ∂Y/∂I > 0.

Figure 4 (caption, partial): (a) The computations use the method of lines with a finite difference (dashed lines) and a Chebyshev (solid lines) discretization at both high (red lines) and low (blue lines) spatial resolution. The time t̂* = 0.01052 at which the Chebyshev scheme becomes non-convergent is indicated. (b) The final time t̂* before convergence failure is plotted for the Chebyshev spectral discretization as a function of the number of grid points N_z. The same plot (not shown) for failure of the finite difference computation is qualitatively similar, but differs in detail.

A key result of this paper is the introduction of new constitutive functions which are motivated by the inertial regime. These are constructed to satisfy both the well-posedness conditions (4.3)-(4.5) and the observed asymptotic steady-state behaviour, known as the µ(I) and Φ(I) relations (1.2)-(1.3). These relationships are derived from isochoric (constant volume) flows in steady state (see figure 1 and appendix C). For the purpose of deriving Y and f they may be conveniently stated as follows:

Y = µ(Ψ(φ)) p  and  f = 0  when  I = Ψ(φ),  (4.6)

where Ψ(φ) is the inverse function of Φ(I). Even with these constraints there are infinitely many possible choices for the yield function and dilatancy function. What might be described as the simplest acceptable choice for these functions is now proposed. The starting point for this new theory is the relation Y = µ(Ψ(φ))p in (4.6). However, this cannot lead to a well-posed theory, as ∂_I Y = 0 and thus (4.4) is not satisfied. Taking instead the simplest non-trivial I-dependence gives the yield function (4.7), which reduces to the µ(I), Φ(I)-rheology when I = Ψ(φ). Figure 5 is a useful aid for comparing this formula with µ(I), Φ(I)-rheology. The corresponding dilatancy function is then found through substitution of (4.7) into the well-posedness PDE (4.3). Taking contributions from both the homogeneous and particular solutions of (4.3) and imposing (4.6) gives the corresponding expression for the dilatancy function f (4.8). This theory will be termed 'inertial CIDR' or iCIDR, since, as will be shown, it captures the Bagnold scaling.
In addition to satisfying (4.3) and (4.6), it is readily verified that the iCIDR constitutive functions (4.7)-(4.8) satisfy the inequalities (4.4)-(4.5). As such, the dynamic equations which result from iCIDR are guaranteed to be linearly well-posed for all deformations and for all values of the solids volume fraction. In µ(I), Φ(I)-rheology I is tied rigidly to φ through the I = Ψ(φ) relation (2.12), whereas for iCIDR I evolves through the flow rule (4.2) given the dilatancy function (4.8). Substituting (4.8) into (4.2) yields a quadratic equation for I (4.9), which has a positive root (4.10). The right-hand side of (4.10) is a function of the solids volume fraction φ and the ratio div u/‖S‖. Equation (2.9) can be used to determine an equation of state for the pressure,

p = ρ* (2‖S‖ d / I)²,  (4.11)

with I given by (4.10). This equation differs from (2.14) only in that Ψ(φ) is replaced by I defined by (4.10). Since (4.10) depends on the velocity gradient only through the ratio div u/‖S‖, which is unchanged by scaling the velocity gradient, (4.11) satisfies Bagnold scaling (Bagnold 1954). According to (4.9), if ‖S‖ → 0, then I tends to either zero or infinity, depending on the sign of div u. Hence, it seems that the iCIDR constitutive laws may break down if ‖S‖ = 0. However, the inertial number I can be bypassed by calculating an alternative expression for the pressure. Substituting (2.9) into (4.9) and solving for p gives a closed-form expression (4.12) which is well defined for all deformation rates. Note that p is strictly positive unless

‖S‖ = 0  and  div u ≥ 0,  (4.13a,b)

in which case the pressure is zero. Thus, in the absence of shear, there is no pressure if the grains are diverging from one another, but there is a finite, positive pressure if they are converging. Moreover, using the alignment condition (2.6), the yield function (4.7) and the definition of I, it follows that the deviatoric stress is given by τ = Y S/‖S‖ evaluated with (4.12), equation (4.14), which is also well defined for all deformation rates. Finally, note that if φ → φ_c, then Ψ(φ) → 0 and the pressure tends to infinity (provided there is some straining).

5. One-dimensional flow in a shear cell with the iCIDR rheology
The response of the iCIDR rheology is now tested in the one-dimensional gravitationless shear cell, as studied in § 3.2 for the µ(I), Φ(I)-rheology. In this geometry the equation of state (4.12) implies relations for the pressure and the deviatoric stresses (4.14). Substituting these expressions into the conservation laws (3.1)-(3.3) and using the scalings (3.12) implies that the non-dimensional iCIDR equations are (5.3)-(5.5), where the pressure is given by (5.6). The pressure (5.6) can be substituted directly into (5.4)-(5.5) to produce a system of three equations for φ, û and ŵ. Unlike the incompressible µ(I)-rheology and the incompressible Navier-Stokes equations, in which the pressure must be determined globally as part of solving the equations, all terms in this system (5.4)-(5.5) are explicitly specified locally. This makes the development of numerical methods much simpler. Figure 6(a) and supplementary movies 4 and 5 show numerical solutions according to iCIDR for the one-dimensional gravity-free flow of § 3, specifically for the initial conditions (3.16). Note that the solution converges smoothly to steady state, the same steady state as for the µ(I), Φ(I)-rheology (3.10), without any sign of the catastrophic resolution-dependent blow-up seen before. The simulations are computed using the method of lines with the finite difference discretization for both N_z = 47 and 201 grid points.
Note that the solutions in figure 6(a) lie directly on top of one another! This shows that these solutions are grid converged. The fact that the iCIDR equations are mathematically well-posed suggests that they can handle larger perturbations from steady state than (3.16). An interesting test case is an initial condition with |∂ẑû| = 0 at at least one point. This is interesting because the function χ, defined in (A 18), is infinite when |∂ẑû| = 0, so the µ(I), Φ(I)-rheology is strongly unstable even at low resolution. Following this idea, the results of a simulation with the sinusoidally perturbed initial condition (5.7) are plotted in figure 7. As figure 7(d) shows, with the parameters φ_0 = 0.78, â = 0.16 and b̂ = 0.1, there is a large central region where χ > 0 and two points where χ is infinite. This problem is therefore very strongly ill-posed for the µ(I), Φ(I)-rheology. Ill-posed behaviour has been verified numerically; indeed, for these initial conditions, even for the coarse grid with N_z = 47, the solution blows up (not shown). By contrast, with the iCIDR equations (5.3)-(5.6) there is no catastrophic blow-up and the simulations smoothly evolve from the initial conditions towards the steady state on the order of one time unit. The solids volume fraction starts out independent of ẑ, but it develops a non-trivial profile which changes sign and then decays much more slowly. The latter evolution is responsible for the local maximum of |ŵ| near t̂ = 3 shown in figure 7(e).

6. Comparison of iCIDR with DEM simulations
The DEM calculations detailed in appendix C and figure 1, which were used to recover the steady µ(I) and Φ(I) relations, are also capable of verifying the time-dependent characteristics of the iCIDR solutions. Here new DEM simulations are initialized with the same procedure that was used to obtain the steady linear shearing solution (3.10), but then the velocity fields are replaced with the sinusoidally perturbed profiles (5.7) discussed in the previous section. As expected, this results in a decay from these applied fields back towards the steady solution. Figure 8 shows that the iCIDR solutions differ by less than 2.5 % relative error from the DEM simulations throughout the dynamics. Figure 8(a-c) shows that iCIDR captures the spatial variation of the three flow variables φ, û and ŵ. Figure 8(d) indicates that it also captures complex details of the time evolution. Although more testing is needed, this agreement indicates that iCIDR correctly represents significant aspects of the rheology.

7. Discussion of related issues
7.1. Remarks on ill-posedness
Issues of ill-posedness in µ(I)-rheology were first raised in Barker et al. (2015). It was shown that, although the dynamic equations derived from the incompressible µ(I)-rheology are mathematically well-posed for a large range of inertial numbers, the system is ill-posed when I is too high or too low. The following remarks may help reconcile this ill-posedness with the fact that problems of practical interest (e.g. Lagrée et al. 2011; Staron et al. 2012) have apparently been successfully solved numerically using µ(I)-rheology. In the first place, ill-posedness effects may be masked in simulations performed on a low-resolution grid, as in § 3 of this paper. Specifically, numerical diffusion may be sufficient to suppress the instability. Ill-posedness may become apparent only through careful comparison of progressively mesh-refined simulations; see figure 4 of Barker et al. (2017) and figure 21 of Martin et al.
(2017); in these papers certain spurious flow features continue to become ever more finely scaled as the grid size gets smaller. It is sometimes suggested that low-resolution solutions of an ill-posed model might be sufficient for some practical purposes. However, such approaches are scientifically flawed, because the results rely on numerical diffusion to regularize the problem, which is dependent on both grid size and numerical scheme, and is often not known precisely. In our view it is far better to try to understand what physics is missing in the model, and only compute solutions when a well-posed theory has been formulated. Other issues are the following. (i) In some problems, such as column collapse (Lagrée et al. 2011), the ill-posed region of parameter space may only be active for a short period of time. In such cases careful comparison of numerical results at different spatial resolutions, including some very fine grids, may be needed for non-convergence to become apparent (Barker et al. 2015; Martin et al. 2017). (ii) Ill-posedness may also be partially suppressed by attempts to remove the singularity in the viscosity at low strain rates. Many numerical codes do this by introducing an upper bound on the viscosity, which implies that the material reverts to a Newtonian fluid for slow flows. However, these procedures are ad hoc in nature, and there is no guarantee that ill-posedness is suppressed completely. In this subsection attention has been focused on the consequences of ill-posedness that may be seen through computations. However, on the mathematical side, it can be shown that the linearized equations with initial data of the form |sin x|^p cannot be solved on any non-zero interval {0 < t < t_0} unless p is an even non-negative integer. The singularities of the initial conditions at x = nπ, n = 0, ±1, ±2, . . . (which are extremely mild if p is large) make such solutions impossible.

7.2. Behaviour at the boundary of and outside the inertial regime

Regarding the boundary of the inertial regime, note in (2.14) and (5.11) that p tends to infinity as φ → φ_c. In DEM simulations, Chialvo et al. (2012) found that this growth was cut off and blended into the pressure from the intermediate regime through the formula

p_blend = p_inert p_itm / (p_inert + p_itm).

Thus the pressure tends to p_itm as φ → φ_c, where the limit pressure depends on the elastic modulus. Although it is beyond the scope of the present paper, it is anticipated that a more complete version of CIDR would remain valid across regime boundaries. Granular flow at densities φ > φ_c (provided the strain rate is not too large) corresponds to what is called the quasi-static regime (Otsuki & Hayakawa 2009; Chialvo et al. 2012; Singh et al. 2015). In this regime stresses may remain non-zero even as the strain rate tends to zero, i.e. a static yield stress exists; the scale of these static stresses is set by k/d, where k is the spring constant in DEM simulations. CIDR is a general theory that can model granular material outside, as well as inside, of the inertial regime. In particular, the version of CIDR in § 2(e) of Barker et al. (2017), which was motivated by critical state soil mechanics (Schofield & Wroth 1968), has a non-zero static yield stress, as in the quasi-static regime in DEM simulations. (In that paper, to facilitate comparison with Silbert et al. (2001), the stress scale was chosen as ρ* g d, i.e. dependent on gravitational acceleration g. This was an unfortunate choice with no intrinsic significance.
A more appropriate choice would have been k/d, where k is the spring constant in DEM simulations.)

7.3. Extensions to the theory

Inertial CIDR is also able to incorporate non-monotonicity of the µ(I) function (DeGiuli & Wyart 2017), which is crucial for modelling hysteretic effects, such as coexisting static and moving regions, in depth-averaged avalanche models (Daerr & Douady 1999; Pouliquen & Forterre 2002; Edwards & Gray 2015; Edwards et al. 2017; Russell et al. 2019). Non-monotonic µ = µ(I) functions are problematic in the incompressible µ(I)-rheology, because they imply that the theory is always ill-posed in regions of decreasing friction (Barker et al. 2015). For iCIDR, however, having a non-monotonic µ = µ(I) function is not a problem, because it is formulated in terms of the solids volume fraction, i.e. µ = µ(Ψ(φ)), and so it does not affect the well-posedness conditions (4.3)-(4.5). At present iCIDR is explicitly a local theory which cannot account for the observed role of fluctuations that inspired the non-local theories of Pouliquen & Forterre (2009), Kamrin & Koval (2012) and Bouzid et al. (2013). Inclusion of these effects is an important direction for future work.

The iCIDR equations, introduced in this paper, provide a continuum model for fluid-like inertial flows of rigid spherical particles that lie below a critical volume fraction, above which the compressibility of the grains becomes important (Otsuki & Hayakawa 2009; Chialvo et al. 2012). One striking aspect of the iCIDR theory is its simplicity: it is a minimal extension of µ(I), Φ(I)-rheology with no extra variables, no extra parameters, no extra evolution equations beyond conservation of mass and momentum and no extra boundary conditions. While retaining the success of µ(I), Φ(I)-rheology for steady inertial flow, iCIDR is well-posed, thermodynamically sound (Onsager symmetric and dissipative) and agrees well with transient DEM simulations in a one-dimensional gravity-free shear cell. Inertial CIDR is very well suited to numerical calculations because the pressure is defined by a local equation of state. This contrasts with incompressible theories, in which the pressure can only be found globally as part of solving the equations. The numerical simulations presented in this paper are therefore a particularly encouraging proof of concept and it is hoped that other existing numerical methods can similarly be modified in order to bring the advantages of the iCIDR formulation to a wide range of practical applications.

Appendix A. Conditions for ill-posedness of the one-dimensional system

Although there are some technical complications, effectively the dynamic equations of µ(I), Φ(I)-rheology for two-dimensional flow are always ill-posed (Heyman et al. 2017). As in Barker et al. (2015), the ill-posedness has a directional character: plane-wave solutions (in all space) in certain directions suffer uncontrolled growth while those in other directions decay smoothly to uniform flow. If, as in § 3, solutions depending on only one spatial variable are sought, these one-dimensional equations may be either well-posed or ill-posed, depending on whether the one independent direction retained corresponds to one of the stable or unstable directions. In this appendix, conditions for the ill-posedness of the one-dimensional system (3.1)-(3.8) are derived. The equations (3.1)-(3.3) together with (3.6) and (3.8) are linearized around a base flow (φ_0, u_0, w_0) in the dependent variables (φ, u, w) by introducing small perturbations (φ̆, ŭ, w̆) and then freezing the base-flow coefficients.
In the linearization the highest-order derivatives of the perturbed variables are retained in each equation, together with the convective derivatives. This leads to linearized equations of the form (A 2), where a_i and b_ij are constant coefficients derived from the pressure (3.6) and the deviatoric stresses (3.8). The linearized system (A 2) admits normal-mode solutions in which ξ is the wavenumber, λ is the temporal growth rate and φ̃, ũ and w̃ are constant amplitudes. Substituting these into (A 2) reveals that λ must be an eigenvalue of the resulting coefficient matrix. The imaginary terms −i w_0 ξ on the diagonal shift the eigenvalues λ, but do not affect stability or well-posedness, which are governed by the real part of λ. The vertical component of the base-state velocity, w_0, is therefore set to zero in what follows. Denoting the bottom-right 2 × 2 matrix of terms multiplying −ξ² by Γ, the eigenvalues are calculated by solving the characteristic equation (A 6), whose coefficients are determined by Γ. To test for ill-posedness, the large-wavenumber limit ξ → ∞ is taken. Balancing the order of terms in (A 6), it is clear that λ(ξ) ∼ ξ², so the substitution λ = Λξ² is made. Then the terms of maximal order in (A 6), i.e. O(ξ⁶), have the coefficient Λ³ + (tr Γ)Λ² + (det Γ)Λ. Thus, to leading order, either Λ = 0, or Λ is a solution of Λ² + (tr Γ)Λ + det Γ = 0, i.e. one of the two roots (A 9). To determine the signs of these roots, the coefficients in (A 2) are now evaluated. If the superscript 0 notation is dropped, the coefficients in (A 2) are given by (A 10), where all values are evaluated with the base-state fields (A 11). From (A 10) and (A 5) it follows that the sign relations (A 12)-(A 13) hold, and the two roots in (A 9) are real (the discriminant (tr Γ)² − 4 det Γ is non-negative). Since λ_+ ∼ Λ_+ ξ² as ξ → ∞, the system is linearly ill-posed if the larger root Λ_+ is positive, which occurs if either (a) tr Γ < 0 or (b) det Γ < 0. It may be seen from (A 12) and (A 13) that if tr Γ < 0 then det Γ < 0, so it suffices to consider only case (b). Thus, det Γ < 0 is a sufficient condition for ill-posedness. Conversely, if det Γ ≥ 0 then tr Γ ≥ 0, and both roots in (A 9) are non-positive, so the one-dimensional system is linearly well-posed.
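The resulting criterion reduces to a sign check on the 2 × 2 matrix Γ. A minimal numerical sketch follows; the matrices below are arbitrary illustrative numbers, since the physical entries (A 5)/(A 10) are not reproduced here.

```python
import numpy as np

def posedness(Gamma):
    """Classify linear (ill-)posedness from the high-wavenumber roots of
    Lambda**2 + tr(Gamma) * Lambda + det(Gamma) = 0 (the roots (A 9))."""
    tr, det = np.trace(Gamma), np.linalg.det(Gamma)
    if det < 0:                          # sufficient condition derived above
        lam_plus = 0.5 * (-tr + np.sqrt(tr**2 - 4.0 * det))
        return f"ill-posed: Lambda_+ = {lam_plus:.3f} > 0, growth ~ exp(Lambda_+ xi^2 t)"
    # det >= 0 implies tr >= 0 here (the text shows tr < 0 forces det < 0),
    # so both roots are non-positive and the system is linearly well-posed.
    return "linearly well-posed: both roots non-positive"

# Arbitrary illustrative matrices only; physical entries come from (A 10).
print(posedness(np.array([[2.0, 0.5], [0.5, 1.0]])))   # det > 0 -> well-posed
print(posedness(np.array([[1.0, 3.0], [3.0, 1.0]])))   # det < 0 -> ill-posed
```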
Appendix C. Details of the DEM calculations

Two-dimensional DEM simulations (Cundall & Strack 1979) were performed in a shear-box geometry, both to confirm the well-established steady behaviour and to explore transient flows. The domain is a rectangle in (x̂, ẑ) space, with all units non-dimensionalized with the scalings (3.12), such that 0 ≤ x̂ < L̂ and 0 ≤ ẑ < Ĥ. Boundary conditions and the system size are then designed to suppress confinement effects so that the volume can be taken to be a representative bulk element. Periodicity is enforced in the x̂ direction and Lees-Edwards boundary conditions (Lees & Edwards 1972) are applied at the bottom ẑ = 0 and top ẑ = Ĥ of the domain. There is no gravity applied to the system and the only driving is provided by the difference in horizontal velocity between the top and bottom, V_0, as prescribed by the Lees-Edwards algorithm. Details of the precise DEM simulation algorithm can be found in Otsuki & Hayakawa (2011), as an identical method is employed here. Normal forces f^(n) between particles are calculated from a linear spring-dashpot arrangement with an associated spring constant k^(n) and viscous dissipation constant η^(n). Tangential forces f^(t) may either stick or slip, depending on whether a Coulomb friction criterion, with particle friction constant µ_p, is satisfied. Stick interactions are defined as those with |f^(t)| < µ_p |f^(n)| and, like the normal force, are calculated from a linear spring-dashpot with parameters k^(t) and η^(t). Interactions with greater computed tangential forces are labelled as slip events and the tangential force is truncated to |f^(t)| = µ_p |f^(n)|. In this paper parameters are chosen which give k^(n)/p > 10⁴, so that calculations are in the rigid-particle regime of da Cruz et al. (2005). Unless stated otherwise the values µ_p = 0.4, k^(n) = 10⁴ and k^(t) = 0.5 k^(n) are used. Particle interactions with these values are very short-lived, so results are insensitive to the viscous dissipation. The tangential dashpot is therefore neglected (η^(t) = 0) and η^(n) = 4.2 is chosen, as in Silbert et al. (2001), so that the particles have a restitution coefficient of e = 0.9. To select a mean packing fraction φ_0, the system size along the x̂ direction is determined as L̂ = Ā N/(Ĥ φ_0), where Ā is the average grain area. The shear cell is then populated with N particles, with the density ρ*, mean diameter d and V_0 set to unity in order to match the non-dimensional iCIDR equations. In order to avoid crystallization effects, the individual particle diameters are chosen randomly from a discretized distribution. Here an even spread over 0.8d, 0.9d, d, 1.1d and 1.2d is taken, so that the number of particles of each diameter is N/5. These particles, which are initially elastic but not frictional, are randomly distributed in the domain. This results in overlaps which would normally cause very large elastic forces, so firstly there is a period during which the total kinetic energy of all particles is scaled to a small constant value so that the system can reach an equilibrium state. The arrangement which results from this procedure has almost uniform packing density and very small overlaps. Then, the interaction algorithm is altered so that the particles are approximately rigid (very large k^(n)) and frictional. The true simulation begins after the velocity fields are prescribed. The steady-state µ(I) and Φ(I) relations found in previous works (e.g. da Cruz et al. 2005) are first confirmed in order to obtain macroscopic rheological parameters. Both curves are derived from the same set of experiments, in which the solids volume fractions φ_0 take values in the range 0.76-0.8. This range is expected to lie in the inertial regime and, due to the Φ(I) relation, each packing corresponds to a unique inertial number. Once the system has reached a steady state, the flow fields are coarse-grained. This is achieved by averaging in the x̂ direction and then averaging within bins which discretize ẑ into boxes of height 2d. Each run is then repeated 10 times in order to calculate error estimates. The solids volume fraction φ and the two velocity components û and ŵ are clearly defined in the DEM data and allow the inertial number to be readily calculated from its definition (1.1). Calculation of the bulk friction coefficient µ requires the macroscopic stress components to be defined. As in Silbert et al. (2001), this involves a sum over all particles α in the sampling volume of contact contributions r̂_αβ ⊗ f̂_αβ and kinetic contributions m̂_α δû_α ⊗ δû_α, where r̂_αβ is the centre-to-centre vector, f̂_αβ is the total force between particle pairs and δû_α = (û_α − V_0 ẑ_α/Ĥ, ŵ_α) is the velocity fluctuation of particle α. From the stress decomposition (2.3) these stress components are used to calculate the pressure p, deviatoric stress τ and hence µ across the domain. As these quantities are all found to be invariant of ẑ, the mean value is taken for the µ(I) data presented in figure 1(a).
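A minimal sketch of the binning and stress assembly just described, assuming precomputed particle and contact arrays; the per-contact r ⊗ f convention and the force-sign handling are illustrative assumptions rather than the exact expression used in the paper.

```python
import numpy as np

def coarse_grain_stress(pos, vel, mass, pairs, r_ab, f_ab, V0, H, L, nbins):
    """Bin-averaged 2-D stress from DEM data: one r (x) f contribution per
    unique contact plus a kinetic term m * du (x) du per particle, divided
    by the bin area.  `pairs`, `r_ab`, `f_ab` are assumed precomputed, and
    the per-contact convention and force signs are illustrative."""
    bin_height = H / nbins
    area = L * bin_height                         # 2-D "volume" of each bin
    sigma = np.zeros((nbins, 2, 2))
    # Kinetic term: fluctuations about the linear shear profile u = V0*z/H.
    du = vel - np.column_stack([V0 * pos[:, 1] / H, np.zeros(len(pos))])
    for a in range(len(pos)):
        b = min(int(pos[a, 1] / bin_height), nbins - 1)
        sigma[b] += mass[a] * np.outer(du[a], du[a])
    # Contact term, assigned to the bin containing the contact midpoint.
    for (i, j), r, f in zip(pairs, r_ab, f_ab):
        k = min(int(0.5 * (pos[i, 1] + pos[j, 1]) / bin_height), nbins - 1)
        sigma[k] += np.outer(r, f)
    sigma /= area
    p = 0.5 * np.trace(sigma, axis1=1, axis2=2)   # pressure (2-D convention)
    tau = sigma - p[:, None, None] * np.eye(2)    # deviatoric part, gives mu
    return p, tau
```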
The volume fraction is also found to be constant at steady state, so that the Φ(I) relation, plotted in figure 1(b), is simply verified using the mean packing density Φ = φ_0.
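Finally, the spring-dashpot contact law with Coulomb truncation described above is compact enough to state in code. A minimal sketch using the stated parameter values (k^(n) = 10⁴, k^(t) = 0.5 k^(n), µ_p = 0.4, η^(n) = 4.2, η^(t) = 0); the function arguments and the overlap/stretch bookkeeping are assumptions about how a particular DEM code tracks contacts.

```python
import numpy as np

def contact_forces(delta_n, vn, xi_t, kn=1.0e4, eta_n=4.2, kt=0.5e4, mu_p=0.4):
    """Linear spring-dashpot normal force with Coulomb-limited tangential
    force (eta_t = 0, as stated above).  delta_n: normal overlap; vn: normal
    relative velocity; xi_t: accumulated tangential spring stretch."""
    fn = kn * delta_n + eta_n * vn        # normal spring-dashpot
    ft = kt * xi_t                        # tangential spring, no dashpot
    if abs(ft) >= mu_p * abs(fn):         # Coulomb criterion exceeded:
        ft = np.sign(ft) * mu_p * abs(fn) # slip event, truncate the force
    return fn, ft

fn, ft = contact_forces(delta_n=1.0e-4, vn=0.0, xi_t=5.0e-4)
print(fn, ft)   # ft is truncated from 2.5 to mu_p * |fn| = 0.4 (a slip event)
```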
2019-06-26T14:37:24.998Z
2019-07-15T00:00:00.000
{ "year": 2019, "sha1": "fc04e2e63c719e0f62c3675d007c2a1dea329cfb", "oa_license": "CCBY", "oa_url": "https://www.pure.ed.ac.uk/ws/files/137078841/Schaeffer2019.pdf", "oa_status": "GREEN", "pdf_src": "Cambridge", "pdf_hash": "36dd90e4615c96f0791dc48a54f60ed9f488f1d6", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics" ] }
10647136
pes2o/s2orc
v3-fos-license
Posterior Hox gene reduction in an arthropod: Ultrabithorax and Abdominal-B are expressed in a single segment in the mite Archegozetes longisetosus

Background

Hox genes encode transcription factors that have an ancestral role in all bilaterian animals in specifying regions along the antero-posterior axis. In arthropods (insects, crustaceans, myriapods and chelicerates), Hox genes function to specify segmental identity, and changes in Hox gene expression domains in different segments have been causal in the evolution of novel arthropod morphologies. Despite this, the roles of Hox genes in arthropods that have secondarily lost or reduced their segmental composition have been relatively unexplored. Recent data suggest that acariform mites have a reduced segmental component of their posterior body tagma, the opisthosoma, in that only two segments are patterned during embryogenesis. This is in contrast to the observation that in many extinct and extant chelicerates (that is, horseshoe crabs, scorpions, spiders and harvestmen) the opisthosoma is comprised of ten or more segments. To explore the role of Hox genes in this reduced body region, we followed the expression of the posterior-patterning Hox genes Ultrabithorax (Ubx) and Abdominal-B (Abd-B), as well as the segment polarity genes patched (ptc) and engrailed (en), in the oribatid mite Archegozetes longisetosus.

Results

We find that the expression patterns of ptc are in agreement with previous reports of a reduced mite opisthosoma. In comparison to the ptc and en expression patterns, we find that Ubx and Abd-B are expressed in a single segment in A. longisetosus, the second opisthosomal segment. Abd-B is initially expressed more posteriorly than Ubx, that is, into the unsegmented telson; however, this domain clears in subsequent stages, and expression remains in the second opisthosomal segment.

Conclusions

Our findings suggest that Ubx and Abd-B are expressed in a single segment in the opisthosoma. This is a novel observation, in that these genes are expressed in several segments in all studied arthropods. These data imply that a reduction in opisthosomal segmentation may be tied to a dramatically reduced Hox gene input in the opisthosoma.

Background

Hox genes are highly conserved transcription factor-encoding genes that regulate a large suite of transcriptional targets in all bilaterian animals [1,2]. The conserved role of each Hox gene in specifying distinct body regions along the antero-posterior axis has caused this set of genes to be targets of evolutionary change throughout animal evolution [1,3-6]. Changes in Hox gene function and expression domains have been shown to have led to a wide array of morphological novelties, for example, the limbless bodies of snakes [7], the evolution of the tetrapod limb [8] and the repression of limbs in the insect abdomen [9]. Despite the general observation that changes in Hox gene expression domains correlate with the generation of new morphologies, a relatively less explored phenomenon is how Hox genes are expressed and utilized in body regions that have been secondarily reduced. The arthropods, including insects, crustaceans, myriapods and chelicerates (arachnids and horseshoe crabs), display a wide degree of morphological variation on their relatively modular and segmented body plan. The origin of this morphological diversity has been due, in large part, to changes in Hox expression domains and targets throughout their evolution [6,10-12].
In the arthropods, Hox genes act to specify the distinct identities of the developing body segments (for example, head versus abdominal). The relationship of Hox genes to the developing segments is further exemplified in the well-studied model arthropod, Drosophila melanogaster. In D. melanogaster, segments are established via the partitioning of the blastoderm into discrete segmental units by the activation of the gap genes, which subsequently activate the pair-rule and segment polarity genes. The gap and pair-rule gene expression domains are then used to establish the Hox gene domains within each segment [13,14]. It has also been shown that segment polarity genes directly interact with Hox genes to elicit segmental identity in the D. melanogaster abdomen [15]. The body plan of chelicerate arthropods is comprised of two main body regions, the anterior prosoma and the posterior opisthosoma. The segments of the prosoma in chelicerates comprise the chelicerae, pedipalps, and four pairs of walking legs. The opisthosoma is more variable and contains the segments bearing the book gills in horseshoe crabs, the chemosensory pectines in scorpions and the book lungs and spinnerets in spiders. In contrast to the morphological variation of the opisthosoma seen throughout many chelicerate groups, the Hox gene Ultrabithorax (Ubx) has been shown to have a conserved early expression boundary in the second opisthosomal, or genital, segment [16-18]. The Hox gene abdominal-A (abd-A) is expressed in more posterior opisthosomal segments in chelicerates, having an anterior boundary in the third opisthosomal segment, which overlaps with Ubx expression. The most terminally expressed Hox gene in the chelicerate opisthosoma is Abdominal-B (Abd-B), which usually overlaps with the expression of Ubx and abd-A in the posterior opisthosomal segments [16,18-21]. Previously, we have shown that the mite Archegozetes longisetosus patterns only two segments in the opisthosoma via the expression of orthologues of the segment polarity genes hedgehog (hh) and engrailed (en) [22], indicating a large degree of segmental fusion or loss in comparison to the ancestral chelicerate opisthosoma, which was likely comprised of twelve segments [23]. To determine whether a reduction in posterior segmentation in A. longisetosus resulted in changes in Hox gene utilization in the mite opisthosoma, the expression patterns of the A. longisetosus orthologues of Ubx and Abd-B (Al-Ubx and Al-Abd-B, respectively) were followed. Also, the expression patterns of the segmentation gene patched (ptc), which encodes an Hh receptor in all other arthropods studied [24-26], were followed to determine whether the Al-hh and Al-en expression patterns were unique, or whether A. longisetosus truly patterns only two segments. The results of this study suggest that A. longisetosus does indeed pattern only two segments in the opisthosoma during embryonic development, and also that Al-Ubx and Al-Abd-B are both expressed only in the same single segment, a novel observation for any arthropod studied thus far. These data, in conjunction with the observation that acariform mites have lost an abd-A orthologue [22,27], suggest that Hox gene input in the mite opisthosoma has been reduced either as a cause of, or a consequence of, segmental reduction.

Methods

Archegozetes longisetosus cultures

Mites were reared on a plaster-of-Paris/charcoal substrate in plastic jars to maintain appropriate humidity. Wood chips were added to the jars to promote oviposition.
Mites were fed with brewer's yeast. No ethical approval was needed, as A. longisetosus is not subject to any animal care regulations.

Embryo fixation and staining

To collect early-stage embryos (that is, germ band and early segmentation stage), adults were dissected in 1X PBS using a sharpened tungsten needle and sharp forceps. Laid late-stage embryos (that is, post-germ band stage) were collected from the culture chambers with a needle. Embryos of all stages were pooled and dechorionated in 50% bleach for one minute. Fixation occurred in an n-heptane solution over 4% formaldehyde in PBS for 45 minutes. Embryos were devitellinized by placing them into an n-heptane solution chilled on dry ice, subsequently adding room-temperature methanol and then shaking vigorously for one minute to rupture the membrane. Embryos were rehydrated in graded methanol/PBS solutions and placed in PBS with 0.1 μg/mL 4′,6-diamidino-2-phenylindole (DAPI) for one minute in the dark. A detailed protocol is available from the authors.

Gene cloning and identification

cDNA was constructed from A. longisetosus total embryonic RNA using the SMARTer RACE cDNA Kit (Clontech, Madison, WI, USA). All gene fragments were amplified using this cDNA as a template in rapid amplification of cDNA ends (RACE) PCR reactions. The A. longisetosus orthologue of the segment-polarity gene patched (ptc) was cloned by using the primer Arlo.ptc.GSP2.1 (GTGTGTGCATTCTTGGCGGCAGCAATTATTCC) in a 3′ RACE reaction. The resulting fragment was subsequently used in a nested 3′ RACE reaction using the primer N.Arlo.ptc.GSP2.1 (AGGTGTTTTGCTCTTCAGGCTGCAATTCTC). Both of these primers were designed against a fragment retrieved from an expressed sequence tag screen. The resulting 2,976 bp sequence consists of a large 1,816 bp 5′ UTR, a 981 bp coding sequence and a 179 bp 3′ UTR (GenBank: KF155150). The coding sequence encodes a 326 amino acid protein, which contains the diagnostic Eukaryotic Sterol Transporter (EST) family domain [see Additional file 1: Figure S1A]. Al-en was cloned and sequenced as described in [22]. The full-length mRNA sequence of the A. longisetosus Ultrabithorax orthologue (Al-Ubx) was retrieved using both 3′ and 5′ RACE reactions. For the 3′ RACE reactions, the primer Ubx.Rtry.GSP2 (GCTGCAGCTGAAGCACATCAGGCCTACC) was used in an initial RACE PCR reaction. The resulting product was used in a subsequent nested 3′ RACE reaction using the primer N.Ubx.Rtry.GSP2 (CTTTACGACGGAGCGACCAGTCAAGCAT). For the initial 5′ RACE reaction, the primer Ubx.Rtry.GSP1 (CGCCTGTGCCTGTCTCTCCTGTTCGTTT) was used. The resulting product was used in a subsequent nested RACE PCR using the primer N.Ubx.Rtry.GSP1 (CGACCTCTTCGACGCAGACCGTTGGCAC). All four of the aforementioned primers were designed using the deduced Al-Ubx coding sequences of the A. longisetosus Hox cluster (unpublished data; we have dense sequence coverage for the Hox cluster region relevant to this paper). This Ubx orthologue was the only one retrieved, and it matches the genomic sequence in the cluster, indicating that A. longisetosus likely has only one Ubx orthologue. The resulting 3′ and 5′ nested RACE PCR reactions were cloned into the pGEM T Easy vector and sequenced. The resulting sequences were assembled using PHRAP to construct the full-length sequence of the Al-Ubx mRNA (GenBank: KF155151). The 1,759 bp Al-Ubx mRNA sequence consists of a 134 bp 5′ UTR, an 816 bp coding sequence, and an 809 bp 3′ UTR.
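The reported lengths are internally consistent (for example, 1,816 + 981 + 179 = 2,976 bp for Al-ptc), and such bookkeeping can be checked computationally. A minimal sketch using Biopython; the sequence string is a placeholder, since the real CDS is available under the GenBank accession given above.

```python
from Bio.Seq import Seq

# Placeholder in-frame CDS with the reported Al-ptc length (981 bp); the
# real sequence is under GenBank accession KF155150.
cds = Seq("ATG" + "GCT" * 325 + "TAA")    # start + 325 codons + stop = 981 bp

protein = cds.translate(to_stop=True)
assert len(cds) == 981                    # reported coding-sequence length
assert len(protein) == 326                # 327 codons minus the stop codon
print(f"{len(cds)} bp CDS -> {len(protein)} aa protein")
```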
The deduced amino acid sequence of Al-Ubx has a typical Ubx homeodomain and the diagnostic C-terminal UbdA motif [see Additional file 1: Figure S1B]. Fragments of the A. longisetosus Abd-B orthologue (Al-Abd-B) were cloned from embryonic cDNA using primers developed from genomic Hox cluster sequences. The primers used were AbdB.Rtry.Gsp1 (TAGCCTGTGGAGCACCGGTCCATTCCAG) in the 5′ RACE reaction and the primer AbdB.Rtry.Gsp2 (GGCCAAACACTCCATATCTCAGCAAAGCGG) in the 3′ RACE reaction. The resulting fragments were used in subsequent nested RACE PCR reactions, using the primers N.AbdB.Rtry.Gsp1 (GGTGAGTAGTTGCACCAGGCCGCTGCCG) and N.AbdB.Rtry.Gsp2 (CAGCGGCCTGGTGCAACTACTCACCATA) in the 5′ and 3′ reactions, respectively. These primers are overlapping reverse-complements of one another. The resulting fragments were cloned into the pGEM T Easy plasmid and sequenced. This Abd-B orthologue was the only one retrieved, and it matches the genomic sequence in the cluster, indicating that A. longisetosus likely has only one Abd-B orthologue. The resulting sequences were assembled using PHRAP to construct the full-length sequence of the Al-Abd-B mRNA (GenBank: KF155152). The overlapping 3′ and 5′ RACE Abd-B products resulted in a 2,323 bp mRNA, consisting of a 227 bp 5′ UTR, a 1,191 bp coding sequence, and a 905 bp 3′ UTR. The deduced amino acid sequence of Al-Abd-B consists of a typical Abd-B homeodomain [see Additional file 1: Figure S1C].

Results

patched (Al-ptc) expression

The earliest observed expression of Al-ptc was in the early germ band stage (Figure 1A-E), in the prosomal segments that will eventually bear the chelicerae, pedipalps and the first two pairs of walking legs. The segments of the third and fourth pairs of walking legs have not formed at this stage. In all studied arthropods, ptc is initially expressed in a single, broad stripe in each segment, which then splits into two stripes in more mature segments, with expression of the segment-polarity gene en situated between these two stripes [24-26]. The early expression patterns of Al-ptc show this double-striped pattern, by which an anterior stripe is expressed in the middle of the developing segment, and a posterior stripe is expressed just posterior to the segmental boundary (Figure 1A-D′; see [22] for early prosomal Al-en expression). As this stage is the earliest that was observed, it is unclear whether these prosomal Al-ptc expression patterns began as single broad stripes. Al-ptc was also expressed in a continuous stripe anterior to the two cheliceral Al-ptc stripes, in a region taken to be the ocular segment (Figure 1D-E). Al-ptc is also expressed in a broad growth zone, possibly populated with undifferentiated segmental tissue (Figure 1B-C′, F). This growth zone is bifurcated in the ventral midline, possibly due to the presence of neuroectodermal tissue of the ventral sulcus (Figure 1D-D′). Al-ptc expression in later stages followed the previously reported expression patterns of the A. longisetosus orthologues of hh and en [22], by which the first opisthosomal segment appears initially, followed by the appearance of the segment bearing the anlagen of the fourth pair of walking legs, which is then followed by the appearance of the final second opisthosomal segment.
Following the early Al-ptc expression in the early germ band (Figure 1A-F), Al-ptc was expressed in stripes in the cheliceral and pedipalpal segments, as well as the segments of the first three pairs of walking legs (Figure 1G-H′); however, the 'older' segments, that is, all prosomal segments excluding the third walking leg segments, had a more pronounced stripe of Al-ptc expression in the middle of the segment (Figure 1G), and the posterior stripes were undetected. However, this may be an artifact of our methodology, as in late-stage D. melanogaster embryos the anterior ptc stripe of expression is much more pronounced than the posterior stripe [26]. At this stage, no opisthosomal expression of Al-ptc was observed, with the posterior-most expression at the posterior limit of the anlagen of the third pair of walking legs (Figure 1H-H′). Following this stage, a broad single stripe of Al-ptc expression was observed in the region of the first opisthosomal segment (O1) (Figure 1I-J), similar to the patterns of Al-hh and Al-en observed in [22]. Also, in the same manner as Al-hh and Al-en expression, a stripe of Al-ptc appeared anterior to the O1 stripe in the following stage (Figure 1K). Subsequently, the broad stripe of Al-ptc expression in O1 split into two stripes (Figure 1L), likely to facilitate Al-en expression, as has been observed in a myriapod [24], a fly [26] and a spider [25]. Following this stage, Al-ptc was expressed in a broad stripe demarcating the second opisthosomal segment. Also at this stage, the Al-ptc expression in the fourth walking leg segment remained in a single stripe (Figure 1M). Whether this is due to the resolution of our images or to a different patterning mechanism needs to be explored further. In later stages, in which the opisthosoma began to move more anteriorly, forming the caudal bend (see [22] for morphological movements), Al-ptc expression in the fourth walking leg segment was reduced, with two broad stripes remaining in the opisthosoma, demarcating the first and second opisthosomal segments, respectively (Figure 1N-O). Observations of expression patterns are complicated in later stages by the formation of the caudal bend. Therefore, it is unknown when the Al-ptc stripe of the second opisthosomal segment splits, nor is it known when the Al-ptc stripe of the fourth walking leg segment splits. Further study of these questions using laser-scanning fluorescent confocal microscopy needs to be conducted, as ptc genes in other arthropod species are expressed in single-cell-wide domains [24-26], and the non-fluorescent detection methods prove problematic in ascertaining the small domains in A. longisetosus (personal observations).

Ultrabithorax (Al-Ubx) expression

The single A. longisetosus Ultrabithorax orthologue (Al-Ubx) was expressed only during the later parts of opisthosomal segmentation (Figure 2). At the earliest stage of Al-Ubx expression, Al-Ubx is expressed in a small ventral domain that coincides with the boundaries of the second opisthosomal segment delineated by the expression patterns of Al-ptc (Figure 1K-O) and Al-en (Figure 2A-D). In subsequent stages, following the completion of the formation of the caudal bend, Al-Ubx expression remained in this small domain (Figure 2E-H). This expression domain initially looked broader in comparison to earlier stages (compare Figure 2A to 2E).
However, under close inspection, this is due to the 'rolling over' of the opisthosoma during the formation of the caudal bend (compare Figure 2C, G and H). Therefore, the expression of Al-Ubx seen in Figure 2E-F and H is being viewed through posterior tissue that has folded over the Al-Ubx-expressing cells (see [22] for a review of the morphogenesis of the caudal bend). Thus, the above data suggest that Al-Ubx is expressed only in the second opisthosomal segment. These data, in conjunction with the segmentation gene data, suggest that the segments of the opisthosoma reduce in size in later stages.

Abdominal-B (Al-Abd-B) expression

The A. longisetosus Abdominal-B orthologue (Al-Abd-B) was also expressed only in later stages in the opisthosoma (Figure 3). In comparison to Al-Ubx, Al-Abd-B was initially expressed in a broader ventral domain. This domain of expression coincides with the boundaries of the second opisthosomal segment as well as the unsegmented telson (Figure 3A-E and L). In later stages, following the completion of the formation of the caudal bend, Al-Abd-B is expressed in a much smaller domain and is expressed weakly in the telson (compare Figure 3A to 3F). Also at this stage, the darker, more anterior expression pattern now sits at the anterior-pointing region of the opisthosoma and appears to be situated in the same segmental domain (that is, the second opisthosomal segment) in which Al-Ubx is expressed at this stage (Figure 3G-I; compare Figure 2F and H to Figure 3G and M). In subsequent stages, Al-Abd-B is restricted to the second opisthosomal segment and all expression has been removed from the telson (Figure 3K-K′ and N). These data indicate that, like Al-Ubx, Al-Abd-B is expressed only in the second opisthosomal segment; however, Al-Abd-B is initially expressed in the telson until the later stages of the formation of the caudal bend.

Discussion

Al-ptc expression provides evidence that the wg/en segmentation pathway is conserved in mites

In the fly D. melanogaster, terminal segmental boundaries are generated by the Engrailed/Wingless autoregulatory loop [28], whereby en-expressing cells activate hh expression and signaling. Hedgehog signaling proteins bind to the Ptc receptor proteins on the anteriorly adjacent cells to activate the expression of wingless, which encodes a signaling molecule that in turn binds to the Frizzled-2 receptor in the en-expressing cells, thereby stabilizing en expression. This signaling pathway has been shown to be conserved in a number of other arthropods (see [29] for review). patched expression has been observed in three arthropod groups: in the fly D. melanogaster [26], the millipede G. marginata [24] and the spider Parasteatoda tepidariorum [25]. In D. melanogaster, ptc is initially ubiquitously expressed in the blastoderm. In later stages, ptc is expressed in a segmental manner in cells abutting en-expressing cells, via the repression of ptc by en. In subsequent stages, ptc is repressed by an unknown factor, resulting in two thin stripes of ptc expression per segment, with en-expressing cells in the middle of these two stripes [26]. This two-striped expression pattern of ptc surrounding a stripe of en expression is also seen in the ventral germ band of the millipede G. marginata [24] and in the developing segments of the spider P. tepidariorum [25]. In all three of these species, ptc is initially expressed in broad stripes prior to splitting into two stripes per segment.
The data presented for Al-ptc expression (Figure 1) suggest that this mode of ptc expression is conserved throughout the arthropods. Previous data on en and hh expression in A. longisetosus indicate that the En/Wg signaling loop acts in A. longisetosus to pattern terminal segmental boundaries [22,30]. However, the expression patterns of other components of this pathway (for example, wingless, cubitus interruptus and Notum) are needed to confirm this.

Al-Ubx is expressed in a single segment: a novel observation in an arthropod

The above data illustrate that in the mite A. longisetosus, Ubx is expressed in a single segment. The conserved role of Hox genes in specifying segments in arthropods suggests that Al-Ubx specifies the identity of a single segment in the opisthosoma. This is a novel observation for an arthropod, in that in all arthropod species observed to date, Ubx is expressed in several developing posterior segments. In insects, Ubx specifies the abdominal segments, and in some lineages, Ubx is expressed in the second and/or third thoracic segments (for example, [31-35]). In crustaceans, Ubx is also expressed in multiple posterior developing segments [36-38], as is Ubx in myriapods [39-42]. Direct expression patterns of Ubx in chelicerates have been observed in the spiders (Araneae) P. tepidariorum and Cupiennius salei, and also in the harvestman Phalangium opilio. In P. tepidariorum, the single identified Ubx orthologue is expressed from the second opisthosomal segment (O2) through the remaining posterior segments [21]. C. salei has two orthologues of Ubx, of which Ubx-1 is expressed from the anterior portion of O2 through the remaining posterior segments. Ubx-2 is expressed from the posterior half of O2 through the remaining posterior opisthosomal segments [16]. Ubx expression in P. opilio is similar to the spider expression patterns, where its anterior border of expression also lies in O2 [18]. Popadić and Nagy (2001) observed expression patterns of the Hox genes Ubx and abd-A simultaneously using the UbdA antibody in the scorpion Paruroctonus mesaensis and the horseshoe crab Limulus polyphemus. Staining with this antibody showed that Ubx has an early expression boundary in O2 in both species, which later moves anteriorly to be expressed in O1. These data indicate that the ancestral chelicerate expression boundary of Ubx lies in the second opisthosomal, or genital, segment. However, the UbdA data should be interpreted with caution, as this antibody detects both Ubx and abd-A expression. The Al-Ubx expression data indicate that this gene patterns a single segment, the second opisthosomal segment (Figure 2). This adds support to the hypothesis that the second opisthosomal segment was the ancestral anterior Ubx expression boundary in chelicerates. We, therefore, maintain that the first and second opisthosomal segments are the ones retained in A. longisetosus, due to this anterior expression boundary of Al-Ubx. However, unlike the Ubx expression patterns observed in other chelicerates, Al-Ubx was not observed to extend anteriorly or posteriorly in later stages.

Al-Abd-B is expressed in a single segment: a novel observation in an arthropod

The expression data for Al-Abd-B also indicate that this gene is expressed in a single segment, as well as in an early domain in the unsegmented telson (Figure 3). This is also a unique observation for arthropods, in that in many studied arthropods, Abd-B patterns multiple posterior segments during development (see [6] for review).
In the fly D. melanogaster, Abd-B functions to specify the fourth through the eighth abdominal segments via the expression of the m and r Abd-B isoforms [43]. Abd-B also has a role in specifying the genital region of D. melanogaster embryos [44,45]. In the beetle Tribolium castaneum, Abd-B also acts to specify the posterior ninth and tenth abdominal segments [46]. In the grasshopper Schistocerca gregaria, Abd-B is expressed in the eighth through the eleventh abdominal segments, as well as in the genital region [47]. In the thysanuran Thermobia domestica, Abd-B is expressed in the eighth through the tenth abdominal segments [48]. In the milkweed bug Oncopeltus fasciatus, Abd-B also has a genital-specifying role [49]. In crustaceans, Abd-B is also expressed in the genital segments [50-52]; it extends throughout all five segments of the posterior tagma (the pleon) of the isopod Porcellio scaber [51], but remains in the genital region of Artemia franciscana [50]. In the cirripede Sacculina carcini, Abd-B is expressed throughout the thorax and also in the vestigial abdomen [52]. For myriapods, Abd-B is expressed from the second leg-bearing segment posteriorly to the telson in the centipede Lithobius atkinsoni [42] and is expressed only in the posterior growth zone and the anal valves in the millipede Glomeris marginata; however, as G. marginata undergoes anamorphic growth (that is, more segments are added in post-embryonic stages), Abd-B may be expressed in these posterior segments at later stages [39]. These data, therefore, indicate that Abd-B had an ancestral role in patterning multiple posterior segments in arthropods, as well as a possible ancestral role in specifying the genital region. In the chelicerates, Abd-B expression has also been observed in the spider C. salei and the harvestman P. opilio. In C. salei, Abd-B has a later expression domain in the cells of the future genital opening of O2, consistent with the hypothesis that Abd-B had a role in genital patterning in the last common ancestor of arthropods, as well as with the hypothesis that Abd-B had a role in patterning the genital region in the last common ancestor of the protostomes and the deuterostomes ([19] and references therein). Our observations show that Al-Abd-B is expressed in what we interpret as the second opisthosomal segment, due to the expression of Al-Ubx in this segment (see above). However, we are unable to assess when and where the genital rudiments of A. longisetosus form, which would verify that this segment expresses both Al-Ubx and Al-Abd-B. This may be due to our methodology, or to the complex morphogenesis of the caudal bend during the growth of the opisthosoma. Expression of Al-Abd-B was also not observed in stages earlier than those shown in Figure 3A-D. We, therefore, maintain our interpretation that the two segments observed in the A. longisetosus opisthosoma are the first and second opisthosomal segments.

abdominal-A loss and posterior segmental reductions in arthropods

Like the spider mite Tetranychus urticae [27], A. longisetosus is likely missing the Hox gene abd-A, as Hox cluster sequencing and PCR surveys have yielded no abd-A orthologue (RH Thomas, unpublished results). T. urticae and A. longisetosus also display a two-segment pattern of en expression in their opisthosomas, indicating that a loss of abd-A and the reduction of the opisthosoma occurred at the base of the acariform mite lineage [22,27,30].
In all arthropods that retain an abd-A orthologue in their genome, abd-A is expressed in posterior regions overlapping Ubx and Abd-B expression (see [6] for review). In the cirripede crustacean S. carcini, the abdominal segments are never fully developed in the adult. However, an en expression study indicated that S. carcini patterned five abdominal segments in the developing vestigial abdomen, which are later removed following metamorphosis [53]. Interestingly, hybridization experiments failed to find an expression domain for abd-A in any region during the development of S. carcini [52], and a subsequent cytogenetic analysis found no putative abd-A orthologues in the S. carcini Hox cluster [54]. Pycnogonids (sea spiders) also have a reduced posterior tagma. A PCR survey of Hox genes failed to find an abd-A orthologue in the pycnogonid Endeis spinosa; it also found that the abd-A orthologue in the pycnogonid Nymphon gracile had an unusual degree of sequence divergence in its homeodomain, possibly due to relaxed selection [55]. The reduction of posterior segmentation and the absence of an abd-A orthologue in these three disparate arthropod groups display a surprising convergent correlation. There may be some trend in arthropod evolution whereby functional redundancy among Hox genes, in segments expressing multiple Hox genes, relaxes the selective pressure to retain one of the redundant genes in the genome. This selective pressure may also be reduced in arthropod lineages in which the posterior segments are reduced. Also, if selection is acting to reduce posterior segments, selection should act to retain only those posteriorly expressed Hox genes with multiple, non-redundant functions that are also highly pleiotropic. Ubx has functions that overlap with those of abd-A in arthropods (for example, [56,57]). Therefore, abd-A may have been a target of reduced selection to maintain its presence in the genomes of different arthropod groups during evolution. As Hox genes act to specify the identities of segments in arthropods, it should follow that the loss of Hox genes (that is, abd-A) followed the loss of segments. In A. longisetosus, it seems likely that segmental loss was facilitated via repression of the production of posterior segments arising from the posterior growth zone, rather than via the elimination of anterior segments. However, this is complicated, as A. longisetosus patterns posterior segments in a manner that is not currently well understood, in that it follows an anachronistic delineation pattern (that is, the later appearance of the L4 segment following the delineation of the first opisthosomal segment). Therefore, comparative functional studies are needed to answer these questions surrounding the loss of arthropod Hox genes and posterior segments.

Hox genes and chelicerate tagmosis

The co-expression of arthropod Hox genes has been shown to correlate with tagmatization, or the fusion of body segments to form distinct morphological units along the antero-posterior axis. Previous work has shown that the expression of Hox genes in spiders correlates with the prosomal and opisthosomal boundaries, with the genes labial (lab), proboscipedia (pb), Hox3, Deformed (Dfd), and Sex combs reduced (Scr) being expressed in the prosoma, and the remaining Hox genes, Antennapedia (Antp), Ubx, abd-A, and Abd-B, being expressed predominantly in the opisthosoma. Hox3 expression in P. tepidariorum and P.
opilio are notable exceptions, being expressed strongly in the pedipalpal and walking leg segments, and weakly throughout the opisthosoma [18,58] (Figure 4C-D). Also, in P. opilio, ftz is weakly expressed throughout the opisthosoma, and pb and Scr have weak expression domains in the telson and ninth opisthosomal segments, respectively [18] (Figure 4D).

[Figure 4 caption. (A) The known Hox gene expression domains for the mite A. longisetosus. Expression data for pb, Dfd, Scr, and Antp from [30]; expression data for Hox3 from [59] and Ftz expression from [60]. Question marks for Hox3, Dfd and Antp denote the unknown late-expression patterns in the Te (see text). (B) The known Hox gene expression domains for the spider C. salei. lab, Antp, Ubx-1 and 2 and abd-A expression from [16]; pb, Dfd-1 and 2 and Scr-1 and 2 expression from [61] (note that Scr-2 expression was not observed in L2); Hox3 expression from [62]; Ftz from [63]; Abd-B from [20]. (C) The known Hox expression domains for the spider P. tepidariorum. lab, Dfd, Scr, Antp, Ubx, and abd-A modified from [21], and pb and Hox3 from [58]. (D) The known Hox expression domains for the harvestman P. opilio, adapted from [18]. Shaded bars indicate weak expression; however, the shaded bar for Abd-B expression in A. longisetosus denotes its early expression in the Te and subsequent clearing from this tissue. Abbreviations: abd-A, abdominal-A; Abd-B, Abdominal-B; Antp, Antennapedia; Ch, chelicerae; Dfd, Deformed; ftz, fushi-tarazu; L1-L4, first through fourth walking legs; lab, labial; O1-O12, first through twelfth opisthosomal segments; pb, proboscipedia; Pp, pedipalps; Scr, Sex combs reduced; Te, telson; Ubx, Ultrabithorax.]

Antp also breaks the prosoma-opisthosoma boundary in A. longisetosus [30] and C. salei [16], with both having an anterior boundary in the posterior portion of the fourth walking leg segment. This is also seen in P. opilio, in which Antp is expressed in the entire fourth walking leg segment [18] (Figure 4). In comparison to other chelicerates, A. longisetosus differs in its utilization of Hox genes to pattern segments. Most notable is the expression of Ubx and Abd-B in a single segment of the opisthosoma (Figures 2 and 3). Also of note is that Hox3 and Dfd are expressed in both the prosoma and opisthosoma in A. longisetosus (Figure 4A), breaking the tagmatic boundary rule. This observation, coupled with the weak expression of Hox3 observed in the opisthosoma of P. tepidariorum and P. opilio, may indicate a conserved role of Hox3 in chelicerates, with the expression patterns in C. salei being derived. The remaining Hox genes are expressed in a similar manner to P. opilio, C. salei and P. tepidariorum, and do not seem to correlate with the borders of the pseudo-tagmata. The absence of abd-A in A. longisetosus may be tied to the expression patterns of Hox3 and Dfd in the opisthosoma, as they indicate that extra Hox input is needed in these segments. However, functional studies of these genes in A. longisetosus are needed before this can be confirmed. Comparative functional studies of segmentation and posterior Hox gene expression in mites will be necessary to reveal the selection pressures (that is, a reduction in segmentation, miniaturization, and so on) that have led to their loss of abd-A. Studies are also needed to elucidate how Hox3, Dfd and Antp expression patterns change in the opisthosoma throughout development. Telford and Thomas [59] show Al-Hox3 expression in the opisthosoma; however, this is at an early stage, and its late-stage expression patterns in the opisthosoma are unknown.
Likewise, Telford and Thomas [30] show Al-Dfd expression throughout the opisthosoma; however, its late-stage expression in the telson is unclear. This study also highlights Al-Antp expression in an embryo of the same age as those shown in Figures 2B and 3B. However, whether Al-Antp clears from the telson in later stages is also unclear. Therefore, further study is needed into the interactions and dynamic expression patterns of Hox genes in the mite opisthosoma.

Conclusions

The above data illustrate the reduced Hox gene input in the opisthosoma of the mite A. longisetosus by examining the expression of the A. longisetosus orthologues of Ubx and Abd-B. These two Hox genes are restricted in later stages to the same opisthosomal segment, namely the second opisthosomal segment. The reduced segmental composition of the A. longisetosus opisthosoma [22], coupled with the confirmed absence of abd-A in one acariform mite [27] and a likely loss of abd-A in A. longisetosus (RH Thomas, unpublished results), calls for further study into the evolution of the mite opisthosoma.
2017-06-17T13:29:32.761Z
2013-08-30T00:00:00.000
{ "year": 2013, "sha1": "1c7bc1629d55a547635dfeb19c449709e64b81ca", "oa_license": "CCBY", "oa_url": "https://evodevojournal.biomedcentral.com/track/pdf/10.1186/2041-9139-4-23", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4e0980bdc6e50b30eaa1dd2aaa2402bdd5b82bef", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
32099582
pes2o/s2orc
v3-fos-license
Impact of Obesity on Work Productivity in Different US Occupations

Objective: The aim of this study was to quantify the relationship between workers' body mass index and work productivity within various occupations. Methods: Data from two administrations (2014 and 2015) of the United States (US) National Health and Wellness Survey, an Internet-based survey administered to an adult sample of the US population, were used for this study (n = 59,772). Occupation was based on the US Department of Labor's 2010 Standardized Occupation Codes. Outcomes included work productivity impairment and indirect costs of missed work time. Results: Obesity had the greatest impact on work productivity in Construction, followed by Arts and Hospitality occupations. Outcomes varied across occupations; multivariable analyses found significant differences in work productivity impairment and indirect costs between normal weight and at least one obesity class. Conclusion: Obesity differentially impacted productivity and costs, depending upon occupation.

According to the World Health Organization, 1 obesity is defined as having a body mass index (BMI; calculated on the basis of height and weight) of at least 30.0 kg/m² and can be further subdivided into class I (BMI 30.0 to 34.9 kg/m²), class II (BMI 35.0 to 39.9 kg/m²), and class III (BMI ≥40.0 kg/m²). Approximately a quarter of US adult men (25.3%) and women (24.6%) are estimated to be in the obese weight category. 2 Further, obesity has been associated with a range of physical and psychiatric conditions, including, but not limited to, heart disease, cancer, type-2 diabetes, pain and joint-related disorders, such as osteoarthritis, 3-7 and depression. 8 Unsurprisingly, people who are obese are at risk for reduced life expectancy. 9-15 The impact of excess weight in the workplace has also been a domain of investigation, with a number of studies detailing the rising prevalence of obesity across industries and occupational groups. Jackson et al 16 reported that from 2004 to 2011, the age-adjusted prevalence of obesity increased in all industries in the US, although estimates differed by race. Specifically, African-American women were more likely to be obese than white women in every industry category, whereas the prevalence of obesity was higher for African-American (vs white) men only in the Health Care and Social Assistance, Education Services, Public Administration, and Manufacturing industry categories. When examined by occupational group, the highest age-standardized obesity prevalence was found for US adults working in the Motor Vehicle Operator occupational category (39.2%), with the lowest prevalence for those working in the Health Diagnosing and Treating Practitioner (15.4%) category. 17 Although preliminary studies suggest that obesity may differentially impact work productivity and costs, based on occupational requirements, there is a paucity of research examining the impact of obese weight status across occupations, and findings have thus far been mixed. A large-scale, cross-sectional Dutch population study reported that obesity was predictive of developing musculoskeletal symptoms, especially among workers whose jobs had low (vs high) physical workloads (ie, the extent to which the respondent's job requires repetitive motions, awkward body positioning, etc). 18
Yet, the researchers acknowledged that their findings could alternatively be explained by individuals with musculoskeletal symptoms tending to self-select into occupations with fewer physical job demands. Gates et al 19 found a significant relationship between excess weight and impaired productivity among a sample of Manufacturing employees. Specifically, individuals with a BMI of at least 35.0 kg/m² reported a health-related productivity loss of nearly 5.0% and needed additional time to complete physically demanding tasks. Finally, Cawley et al, 20 examining costs attributable to obesity-related absenteeism across a number of primarily office-based positions, found that costs differed by occupation, with Management and Professional occupations incurring the highest costs per worker. The extant research has primarily focused on how BMI impacts work productivity and costs without consideration of occupational requirements; results have consistently highlighted the negative effects of obesity on these outcomes. For instance, obese BMI has been associated with significantly greater absenteeism among US workers than normal BMI, after controlling for demographic characteristics (eg, age, gender, race). 21 Tunceli et al 22 found that excess weight was predictive of future workforce participation, with obese individuals less likely to be employed over time than normal-weight counterparts. In addition, a large-scale Canadian study found, among a cohort of 56,971 respondents, that obesity was an independent predictor of absenteeism and presenteeism. 23 Notably, obese individuals with cardiometabolic risk factors (ie, diabetes, hyperlipidemia, and/or hypertension) reported significantly greater impairments in productivity and higher medical expenditures than normal-weight individuals with the same risk factors, 24,25 which demonstrates the unique contribution of obesity to less favorable outcomes. Excess weight was also associated with impaired health status and work productivity, as well as increased health care resource utilization, among US workers, 26 which has serious implications for the societal burden of obesity, given that companies often cover the health insurance costs of their employees. Further, among employed US adults, annual direct (ie, medical expenditures) and indirect (ie, work productivity loss) costs totaled $73.1 billion, and nearly two-thirds of these costs were incurred by morbidly obese workers (BMI > 35.0 kg/m²). 27 Obesity additionally accounted for up to 12.6% of annual absenteeism and over $8 billion in associated costs. 21 Overall, previous studies have documented the prevalence of obesity by industry and occupational categories and examined the association between obesity and lost work productivity and associated costs. 16,17 However, these studies have tended to focus on one or a few occupations. 19,20 Thus, there exists a dearth of empirical research investigating the impact of obesity across varying occupational groups. Such research is critical, as excess weight may be associated with differing degrees of burden depending on profession and job responsibilities.

METHODS

Sample

All respondents from the 2014 and 2015 US National Health and Wellness Survey (NHWS), a self-administered, Internet-based questionnaire of adults (aged 18 years or older), who reported their occupation and had nonmissing weight data were included (n = 39,259). If a respondent completed the NHWS in multiple years, only the most recent data were included in this study.
The survey was divided into two parts. The primary component was the base survey, which included demographic, health behavior (eg, smoking), health history (eg, height and weight, current and previous medical conditions), and work productivity questions. The second section consisted of condition-specific (eg, diabetes) and non-condition-specific modules (eg, symptoms).

Weight Status
BMI was calculated on the basis of responses to the items "What is your height?" and "What is your weight?" BMI was coded into the following categories: normal-weight range (BMI 18.5 to 24.9 kg/m²), overweight (BMI 25.0 to 29.9 kg/m²), obese class I (BMI 30.0 to 34.9 kg/m²), obese class II (BMI 35.0 to 39.9 kg/m²), and obese class III (BMI ≥40.0 kg/m²).

Major Occupational Groups
Respondents of the NHWS were asked to provide their occupation as part of the in-depth demographic profile when they registered to join the Internet panel. Respondent occupations were first categorized into major occupational groups based on the 2010 Standard Occupational Classification and Coding Structure (SOC), which was developed by the US Department of Labor, Bureau of Labor Statistics (BLS).28 Occupational groups that included fewer participants were then merged. For example, the Installation, Maintenance, and Repair occupations, Building and Grounds Cleaning and Maintenance occupations, Farming, Fishing, and Forestry occupations, and Construction and Extraction occupations were merged into one occupational group, Construction/Installation/Maintenance/Repair/Agriculture.

Demographics and Health Characteristics
Participants reported their demographic and health characteristics, which included age, sex, marital status, race, education, household income, smoking status, alcohol use, and exercise behavior. This information was used to describe the sample and was included as covariates in the multivariable analyses.

Comorbidity Burden
The Charlson comorbidity index (CCI) was used to represent overall health by gauging the presence of a range of disparate health conditions (eg, HIV/AIDS, metastatic tumor, moderate/severe renal disease, diabetes, mild liver disease, ulcer disease, connective tissue disease, chronic pulmonary disease, dementia, etc). A higher CCI score indicates that the respondent has more health conditions and is, therefore, less healthy.29

Overall Work Productivity
Overall work productivity was derived using the Work Productivity and Activity Impairment-General Health (WPAI-GH) questionnaire, a six-item, validated instrument.30 Overall work impairment was measured by combining absenteeism (the self-reported number of work hours missed in the past week because of one's health divided by the total number of hours that one could have worked) and presenteeism (the self-reported level of impairment experienced while at work in the past seven days).

Indirect Costs
Indirect costs were calculated for each employed respondent by using median weekly income figures obtained from the BLS.31 For each respondent, an hourly rate was estimated by dividing the median weekly income by the length of the typical workweek. Next, the number of hours missed in the last week because of one's health (absenteeism) and the number of hours missed in the last week because of health impairment while at work (presenteeism) were each multiplied by the hourly rate to arrive at total lost wages. These figures were then multiplied by the average number of workweeks in a year to obtain annual estimates.
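To make these outcome derivations concrete, the sketch below implements the BMI classification, the WPAI-GH overall impairment score, and the indirect-cost calculation in Python. The combination rule for absenteeism and presenteeism follows the published WPAI scoring convention, and the 50-week annualization factor and example figures are assumptions for illustration, not values reported by the study.

```python
def bmi_class(weight_kg: float, height_m: float) -> str:
    """Classify BMI using the WHO cut-points cited in the text."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight (excluded)"
    if bmi < 25.0:
        return "normal"
    if bmi < 30.0:
        return "overweight"
    if bmi < 35.0:
        return "obese class I"
    if bmi < 40.0:
        return "obese class II"
    return "obese class III"

def overall_work_impairment(hours_missed: float, hours_worked: float,
                            presenteeism_0_10: float) -> float:
    """WPAI overall impairment: absenteeism plus presenteeism applied
    to the time actually spent at work (standard WPAI scoring rule)."""
    absenteeism = hours_missed / (hours_missed + hours_worked)
    presenteeism = presenteeism_0_10 / 10.0
    return absenteeism + (1.0 - absenteeism) * presenteeism

def annual_indirect_cost(median_weekly_income: float, workweek_hours: float,
                         absent_hours: float, presenteeism_hours: float,
                         workweeks_per_year: float = 50.0) -> float:
    """Hourly rate x lost hours, annualized; workweeks_per_year is assumed."""
    hourly_rate = median_weekly_income / workweek_hours
    weekly_loss = (absent_hours + presenteeism_hours) * hourly_rate
    return weekly_loss * workweeks_per_year

# Illustrative respondent: 8 h missed, 32 h worked, presenteeism rating 3/10.
print(bmi_class(102.0, 1.75))                       # obese class I
print(round(overall_work_impairment(8, 32, 3), 2))  # 0.44 -> 44% impairment
print(annual_indirect_cost(900.0, 40.0, 8.0, 9.6))  # 19800.0 USD per year
```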
Descriptive Statistics
All categorical variables were reported using frequencies and percentages. All continuous variables were reported using counts, means, medians, and standard deviations.

Multivariable Analyses
The independent variable was BMI category, with normal-weight BMI as the reference category. Separate generalized linear models (GLMs) for each occupational group were used to estimate the association between BMI and overall work productivity and indirect costs, controlling for age, sex, race, marital status, education, income, exercise, smoking, alcohol use, and CCI scores. To account for the skew of the outcome variables, a negative binomial distribution and log-link function were specified. Adjusted means (least-squares means evaluated at the mean of the covariates) for all outcomes were calculated using a maximum likelihood algorithm and are reported in their original metric.
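A minimal sketch of one such model with statsmodels is shown below, assuming a per-respondent DataFrame for a single occupational group; the file and column names are invented for illustration, and the exact estimation settings of the study are not reproduced.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per respondent in a single occupational group; column names assumed.
df = pd.read_csv("nhws_construction_group.csv")

model = smf.glm(
    "overall_work_impairment ~ C(bmi_class, Treatment('normal')) + age"
    " + C(sex) + C(race) + C(marital_status) + C(education) + C(income_band)"
    " + C(exercise) + C(smoking) + C(alcohol) + cci_score",
    data=df,
    # Negative binomial family with a log link, as stated in the methods.
    family=sm.families.NegativeBinomial(link=sm.families.links.Log()),
).fit()  # fitted by maximum likelihood
print(model.summary())
```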
Overall Work Productivity Impairment
In general, work productivity impairment was positively associated with increases in BMI class. For all 12 occupational groups, there was a significant difference (P < 0.05) in overall work productivity impairment between normal BMI and at least one obesity class. The Construction/Installation occupation had the highest level of work impairment [17.95% (normal BMI) to 37.21% (obese class III)], followed by Arts [14.11% (normal BMI) to 28.89% (obese class III)] and Hospitality [17.32% (normal BMI) to 26.85% (obese class III)], whereas the Legal occupational category (n = 738) had the lowest level of work impairment [11.66% (normal BMI) to 19.42% (obese class III); Fig. 1]. However, direct comparisons across occupation groups could not be made because the adjusted means were within the confidence intervals.

Indirect Costs
Indirect costs were also positively associated with BMI class. For each occupational group examined, there was a significant difference in indirect costs between normal-weight respondents and those in one or more of the obesity categories (P < 0.05).

DISCUSSION
Obesity imposes a significant health and economic burden in the US. The current study suggests that obesity has a negative impact in the workplace, which may differ by occupation. These findings reinforce the need for employers to evaluate the burden of obesity on work productivity and to try to address it. Almost one-third of the sample reported that they were obese, which is slightly lower than a recent estimate of 37.7% for US adults.32 Further, in the current study, nearly two-thirds (64.5%) of employed adult participants were overweight or obese. These results provide additional evidence of the considerable scope of the obesity epidemic among US workers. A number of differences emerged in demographic characteristics, with males generally less likely to be obese than females, and minority participants less likely than whites to report being in higher BMI categories. Lower income respondents (less than $25k per year) and those with less than a college degree were more likely to be overweight or obese than normal-weight participants. As might be expected, the comorbidity burden (ie, CCI scores) increased along with increases in BMI class, which was consistent with prior research showing the strong links between obesity and a variety of comorbidities, such as cardiovascular disease, type-2 diabetes, and psychiatric conditions.11,14,15 Occupation-based analyses revealed obesity to be most common among those working in Protective Services professions, consistent with previous findings that reported a sharp increase over time in the prevalence of obesity within this occupation.17

Despite this finding, the poorest outcomes tended to be concentrated among two other occupational groups, Construction/Installation/Agriculture and Hospitality, which reported the highest comorbidity burden (ie, CCI scores), the highest rate of overall work productivity impairment, and the greatest indirect costs. One can hypothesize that the occupations involving more physically demanding work are worst affected by obesity, compared with more sedentary occupations. Future studies may focus on identifying which factors relate to more burden for one occupational group versus another. Previous studies have reported a significant relationship between obesity and indirect costs via work productivity loss.26 The current study likewise found that, across most major occupational groups, indirect costs typically increased concomitantly with BMI class. In many cases, and in line with prior research,27 indirect costs for those in the obese class III group were higher than those incurred by normal-weight respondents, often by 50.0% or more. Overall, the current findings highlight the considerable burden of obesity among US working adults. Furthermore, these findings provide important clarification regarding how this burden may vary based upon a worker's respective occupation. These results can provide a better understanding of the economic consequences attributable to obesity and inform broad-based interventions targeting education and healthy weight loss for employees.

Limitations
Several limitations of the current study should be noted. First, all data were self-reported, and no objective verification of BMI class, health history information, or work productivity was possible. Thus, we cannot exclude the possibility that recall biases or socially desirable responding may have occurred. Second, the data used in this study were cross-sectional, which precludes the ability to infer causality between BMI class and the outcomes of interest. Disability-related costs and other non-wage-related variables were not accounted for in the indirect cost calculation; therefore, the current study may underestimate, or provide a very conservative estimate of, indirect costs. It is also possible that selection bias limited the representativeness of the sample. Specifically, preliminary bivariate analyses (not shown) indicated that NHWS respondents who did not provide occupational data, and were thus excluded from the study, systematically differed on demographics, health history, and outcomes from study participants (ie, those who provided occupational data). Finally, although the NHWS is demographically representative of the general US adult population with respect to age, sex, and race, it is unclear to what extent this sample generalizes to the specific population of obese adults or whether the sample accurately represents the characteristics of workers within each major occupational category examined.

CONCLUSION
Overall, the findings underscored the substantial economic burden of obesity among US workers. Generally speaking, increasing BMI class was positively associated with impaired work productivity and indirect costs. However, this study revealed that these effects were not uniform, with notable differences emerging based on participants' respective occupation.
The current study's findings are important in garnering a more complete understanding of the indirect economic impact of excess weight and in guiding broader occupation-specific interventions that target employee health.
Valproate-Induced Hepatic Dysfunction in Albino Rats and Protective Role of n-Butanol Extract of Centaurea sphaerocephala L.

The objective of the present study was to evaluate the protective effect of the n-butanol extract of Centaurea sphaerocephala (C. sphaerocephala) and of vitamin E against sodium valproate-induced hepatotoxicity and oxidative stress in male rats. Male rats were divided into eight equal groups treated with plant extract (50 mg/kg, 100 mg/kg), Vit. E (100 mg/kg) and VPA (300 mg/kg). At the end of the experiment, the animals were sacrificed, and blood and liver tissue samples were collected for biochemical and histological study. VPA-treated rats showed hepatic injury characterized by a significant increase in biochemical parameters (serum transaminases, cholesterol and triglycerides). VPA also induced oxidative stress, exhibited as a significant increase in MDA level and significant decreases in GSH levels and CAT and GPx activities. These effects were accompanied by histopathological changes in the liver. Pretreatment with the n-butanol extract of C. sphaerocephala reversed the alterations induced by VPA and reduced its toxic effects: the results showed a significant decrease in serum markers and hepatic lipid peroxidation, whereas the GSH level and the activities of the GPx and CAT enzymes were significantly increased. Histopathological observations correlated with the biochemical parameters. Since VPA-induced hepatotoxicity involves free radical production, the antioxidant and free radical scavenging properties of C. sphaerocephala would have provided the protection against hepatic damage.

INTRODUCTION
Valproic acid (VPA) is a well-established anticonvulsant drug used in the treatment of many forms of generalized epilepsy and psychiatric disorders to control epileptic seizures and regulate the mania associated with bipolar disorder1,2. VPA is well tolerated at therapeutic doses but has an inherent toxicity3. Two types of serious side effects limit the use of this drug: hepatotoxicity and teratogenicity4. Administration of VPA produces many metabolic and morphological aberrations in the liver5, and histopathological and biochemical studies indicate that VPA evokes hepatic necrosis, apoptosis and steatosis6. Furthermore, VPA increases intracellular reactive oxygen species (ROS) levels in several tissues, including liver, brain and small intestine7, but the mechanism by which VPA induces liver injury remains unknown8. Aberrant VPA biotransformation and/or alterations in natural antioxidants might contribute to the VPA-associated complications; indeed, the main cause of VPA hepatotoxicity has been suggested to be the generation of free radicals9.
Oxidative stress, as a result of compromised antioxidant capacity and/or increased production of reactive oxygen species (ROS), has also been proposed as one mechanism for VPA-induced hepatotoxicity10. Lipid peroxidation may be involved as an additional mechanism of VPA-induced liver damage in rats11. Injection of a single dose of VPA into rats resulted in a dose-dependent elevation of lipid peroxidation levels in plasma and liver12. Antioxidants are therefore primary candidates to counteract such toxic effects. Glutathione (GSH), as a major antioxidant and redox regulator, plays an important role in the defense against oxidants and electrophiles13. Consequently, any mechanism which removes ROS, prevents hepatic GSH depletion, or induces the activation and production of GSH-dependent enzymes may provide protection against hepatotoxicity in VPA-treated patients14. Cells can also be protected from oxygen-derived radical injury by naturally occurring free-radical scavengers and antioxidant pathways, including vitamins A, C and E, SOD, catalase and glutathione peroxidase15. Moreover, many therapeutic studies have turned to plants, since plants are a natural source of antioxidants and hence can reduce oxidative stress16. The genus Centaurea (Asteraceae) contains more than 500 species, 45 of which grow in Algeria, including 7 in the Sahara17,18. Many species of the genus Centaurea have been used in traditional medicine to cure various ailments (diabetes, diarrhea, rheumatism, malaria, hypertension)19. To our knowledge, no traditional uses or pharmacological studies have been reported so far for this species. Therefore, as part of our ongoing research program on the beneficial health effects of plants and herbs20,21, we investigated in the present study the protective effect of the n-butanol extract of Centaurea sphaerocephala, an Algerian endemic plant, and of vitamin E on VPA-induced liver damage in male rats.

Plant material and extraction procedure
Aerial parts of C. sphaerocephala were collected from the area of El Kala, Algeria (21 m, 36° 53′ 44″ N, 8° 26′ 35″ E) in May 2012 and authenticated on the basis of Quezel and Santa (1963)18 by Professor M. Kaabache, a specialist in the identification of Algerian Centaurea species (Ferhat Abbas University, Setif 1, Algeria). A voucher specimen (CSA0512-EK-ALG-65) was deposited in the Herbarium of the VARENBIOMOL research unit, Frères Mentouri University Constantine 1. The leaves and flowers (2000 g) of this plant were macerated for 24 h, three times, with methanol-water (70:30, v/v) at room temperature. After filtration, the filtrate was concentrated under vacuum (up to 35 °C); the remaining solution (400 mL) was dissolved in distilled H2O (800 mL) under magnetic stirring and kept at 4 °C overnight to precipitate the maximum amount of chlorophyll. After filtration, the resulting solution was extracted successively with chloroform (CHCl3), ethyl acetate (EtOAc) and n-butanol (n-BuOH). The organic solutions were dried over sodium sulfate (Na2SO4), filtered through common filter paper and concentrated under vacuum (up to 35 °C) to obtain the following extracts: CHCl3 (5 g), EtOAc (4.94 g) and n-BuOH (34 g).
Animals and Treatment
Male Wistar albino rats weighing 150-200 g were obtained from the Pasteur Institute (Algiers, Algeria). Animals were housed in plastic cages under controlled laboratory conditions of light/dark cycle (12 h/12 h), temperature (22 ± 2 °C) and relative humidity, with food and tap water. Rats were acclimatized for 2 weeks before the indicated treatments. All experimental procedures were performed between 8 and 10 a.m., and care was taken to avoid stressful conditions. All experimental assays were carried out in conformity with international guidelines for the care and use of laboratory animals, and the study protocol was approved by the Institutional Animal Ethical Committee. Animals were left for 10 days before being randomized into experimental groups. Rats were housed four per cage and were randomly divided into 8 groups (8 animals in each group): Group 1, non-treated, served as control; Groups 2 and 3 received plant extract (50 mg/kg and 100 mg/kg, respectively); Group 4 was treated with 300 mg/kg per day sodium valproate; Group 5 received vitamin E (100 mg/kg); Groups 6, 7 and 8 received, respectively, plant extract (50 and 100 mg/kg) or vitamin E (100 mg/kg) 1 hour before treatment with VPA (300 mg/kg). Treatments were given for 14 days by gavage. After treatment, blood samples were drawn from the caudal vena cava, collected in test tubes containing EDTA, and centrifuged to obtain serum for analysis of biochemical parameters. The rats were sacrificed by decapitation after deep ether anesthesia; livers were isolated to measure the levels of antioxidant enzymes and MDA and for histopathological studies.

Preparation of tissue samples
Livers were perfused with ice-cold 0.9% NaCl solution to remove blood cells, removed quickly and placed in the same solution. After blotting on filter paper, they were weighed and homogenized in ice-cold 1.015% KCl with the addition of 6 µl of 250 µM butylated hydroxytoluene to prevent the formation of new peroxides during the assay. The homogenization procedure was performed under standardized conditions. Homogenates (20%) were centrifuged, and the supernatant was kept on ice until assayed or stored at −80 °C.

Lipid peroxidation determination
Lipid peroxidation (LPO) was determined by measuring the formation of TBARS using the colorimetric method of Uchiyama22. 3 ml of phosphoric acid (1%) and 1 ml of thiobarbituric acid (TBA, 0.67%) aqueous solution were added to 0.5 ml of liver homogenate (20%) pipetted into a centrifuge tube. The mixture was heated for 45 min in a boiling water bath, then cooled to room temperature, after which 4 ml of n-butanol was added and mixed vigorously. After centrifugation, the absorbance was measured at 532 nm. MDA was used as the standard.

Measurement of reduced glutathione
Reduced glutathione (GSH) content in the liver was measured chemically according to the method described by Ellman23, using Ellman's reagent. This method is based on the reactive cleavage of 5,5′-dithiobis-(2-nitrobenzoic acid) by sulfhydryl groups to yield a yellow color with maximum absorbance at 412 nm against a reagent blank.
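As a computational aside, the sketch below shows how absorbance readings from the two assays above are typically converted to analyte amounts: MDA by interpolation on a standard curve, and GSH via the Beer-Lambert law. The TNB extinction coefficient of 13,600 M⁻¹ cm⁻¹ is the commonly cited literature value rather than one stated in this paper, and all volumes are illustrative.

```python
import numpy as np

def mda_from_standard_curve(a532_samples, a532_standards, mda_uM_standards):
    """Interpolate sample MDA (uM) from a linear MDA standard curve at 532 nm."""
    slope, intercept = np.polyfit(mda_uM_standards, a532_standards, 1)
    return (np.asarray(a532_samples) - intercept) / slope

def gsh_umol_per_g(a412, a412_blank, assay_vol_ml, homog_vol_assayed_ml,
                   homog_total_vol_ml, tissue_g, path_cm=1.0, eps_tnb=13600.0):
    """Beer-Lambert estimate of GSH from the Ellman (DTNB) reaction.
    eps_tnb (M^-1 cm^-1 at 412 nm) is the commonly used TNB value."""
    conc_M = (a412 - a412_blank) / (eps_tnb * path_cm)   # TNB formed ~ GSH, 1:1
    umol_in_assay = conc_M * (assay_vol_ml / 1000.0) * 1e6
    umol_total = umol_in_assay * homog_total_vol_ml / homog_vol_assayed_ml
    return umol_total / tissue_g

print(mda_from_standard_curve([0.21], [0.05, 0.20, 0.40], [1.0, 5.0, 10.0]))
print(round(gsh_umol_per_g(0.35, 0.02, 3.0, 0.5, 5.0, 0.25), 2))  # ~2.9 umol/g
```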
Evaluation of GPx activity
GPx activity in the liver was measured chemically according to the method described by Flohe24. This method is based on the reduction of H2O2 in the medium by GPx in the presence of GSH. Briefly, 0.2 ml of supernatant obtained from tissues, 0.4 ml of GSH (0.1 mM) and 0.2 ml of TBS solution (Tris 50 mM, NaCl 150 mM, pH 7.4) were added to tubes and mixed. After 5 min of incubation at 25 °C, 0.2 ml of H2O2 (1.3 mM) was added to the mixture. The reaction was stopped after 10 min by the addition of 1 ml of trichloroacetic acid (TCA, 1% w/v), and the tubes were then kept at 0-5 °C in an ice bath for 30 min. After centrifugation, 0.48 ml of supernatant was taken from each tube, and 2.2 ml of TBS solution and 0.32 ml of DTNB (1 mM) were added. The optical density was measured at 412 nm in the spectrophotometer after 5 min.

Evaluation of catalase activity
The enzymatic activity of catalase was measured as described by Claiborne (1985)25. The homogenate was centrifuged at 10,000 rpm for 45 min at 4 °C, and the final supernatant was the source used for the evaluation of catalase activity. The disappearance of H2O2 was monitored spectrophotometrically at 240 nm. Catalase activity was expressed as U/mg of protein. To express the antioxidant enzyme (GPx, catalase) activities per gram of protein, total protein concentration was determined colorimetrically using the method of Lowry (1951)26.

Plasma biochemical analysis
The liver marker enzymes aspartate transaminase (AST) and alanine transaminase (ALT), as well as total cholesterol and triglycerides, were estimated using commercial kits (Spinreact, Spain).

Histopathological examination
For histopathological analysis, hepatic tissue fragments were taken and fixed in 10% neutral formalin solution. The fixed specimens were then trimmed, washed and dehydrated in ascending grades of alcohol. The specimens were then embedded in paraffin, cut into 5 μm thick sections and stained with Harris hematoxylin and eosin for microscopic examination27.

Statistical analysis
Data are expressed as mean ± SD, and statistical inferences were based on Student's t-test comparing control and treated animals using GraphPad Prism 5 (version 5.01 with the 5.02 update). Statistical significance was accepted at P < 0.05.
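Two small calculations implied by the methods above are sketched here: a catalase activity estimate from the decay of A240 and the Student's t-test used for group comparisons. The H2O2 extinction coefficient of 43.6 M⁻¹ cm⁻¹ and the unit definition (1 U = 1 µmol H2O2 decomposed per minute) are conventional literature choices rather than values stated in this paper, and the AST readings are invented solely for illustration.

```python
import numpy as np
from scipy import stats

def catalase_U_per_mg(a240_initial, a240_final, minutes, assay_vol_ml,
                      protein_mg, path_cm=1.0, eps_h2o2=43.6):
    """One unit = 1 umol H2O2 decomposed per minute (common convention).
    eps_h2o2 (M^-1 cm^-1 at 240 nm) is the widely used literature value."""
    d_conc_M = (a240_initial - a240_final) / (eps_h2o2 * path_cm)
    umol_per_min = d_conc_M * (assay_vol_ml / 1000.0) * 1e6 / minutes
    return umol_per_min / protein_mg

print(round(catalase_U_per_mg(0.50, 0.20, 1.0, 3.0, 0.05), 1))  # illustrative

# Student's t-test, as in the statistical analysis section; the AST values
# below are invented for illustration only (n = 8 per group, as in the study).
control = np.array([74.2, 75.9, 73.8, 76.5, 75.0, 74.9, 76.1, 74.7])
vpa     = np.array([129.1, 131.5, 128.7, 132.0, 130.4, 129.8, 131.2, 129.9])
t, p = stats.ttest_ind(vpa, control)
print(f"t = {t:.2f}, P = {p:.2e}")  # significant at P < 0.05
```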
Impact of VPA, vitamin E and n-butanol extract of Centaurea sphaerocephala on serum transaminase levels
As shown in Figure 1, administration of a toxic dose of VPA (300 mg/kg) caused a significant increase in the liver enzymes AST and ALT (130.32 ± 2.11 U/l and 95.72 ± 4.14 U/l, respectively). This increase was statistically significant (P < 0.001) compared to the control group (75.14 ± 2.42 U/l and 66.63 ± 1.01 U/l, respectively). Animals pretreated with n-butanol extract (100 mg/kg) or Vit. E (100 mg/kg) showed a significant decrease (P < 0.01 and P < 0.001, respectively) in these liver enzymes compared to VPA-treated animals, while plasma levels of these enzymes in the group pretreated with extract (50 mg/kg) were significantly decreased (P < 0.05 and P < 0.01, respectively).

The protective effect of n-butanol extract of C. sphaerocephala and vitamin E on cholesterol and triglyceride levels
The VPA-treated rats exhibited a significant increase (P < 0.001) in serum cholesterol and triglyceride levels compared to the control group. Pretreatment with both doses of plant extract or with Vit. E (100 mg/kg) significantly decreased (P < 0.01) total cholesterol compared to the VPA group. A significant reduction in triglycerides was observed in rats pretreated with n-butanol extract (50 mg/kg, 100 mg/kg) (P < 0.01, P < 0.001) and Vit. E (P < 0.001) compared to VPA-treated rats (Figure 2).

The protective effect of n-butanol extract of C. sphaerocephala and vitamin E on VPA-induced lipid peroxidation in liver
The administration of VPA induced a significant increase (P < 0.01) in lipid peroxidation in liver tissue compared to control, while pretreatment with n-butanol extract (100 mg/kg) or Vit. E (100 mg/kg) produced a significant decrease (P < 0.01) in hepatic lipid peroxidation compared to the VPA group (Figure 3).

Effect of VPA, n-butanol extract of C. sphaerocephala and vitamin E on liver GSH levels
As shown in Figure 4, a significant decrease in liver GSH levels was observed in the VPA group (P < 0.001) compared to control (untreated) rats, whereas co-administration of plant extract (100 mg/kg) or Vit. E (100 mg/kg) with VPA significantly increased (P < 0.01) the GSH level compared to the VPA group; on the other hand, the group pretreated with 50 mg/kg showed a significant decrease (P < 0.05) in GSH level.

Effect of VPA, n-butanol extract of C. sphaerocephala and vitamin E on GPx activity in liver
As illustrated in Figure 5, VPA induced a significant decrease (P < 0.001) in GPx activity compared to the control (normal) group. Furthermore, a marked significant increase (P < 0.001) in GPx activity was found after co-treatment with plant extract (100 mg/kg, 50 mg/kg) or Vit. E (100 mg/kg) compared to the VPA group (Figure 5).

Effect of VPA, n-butanol extract of C. sphaerocephala and vitamin E on catalase activity in rats' liver
CAT activity was significantly decreased (P < 0.01) in rat liver tissue after administration of VPA (300 mg/kg) compared to control. Furthermore, n-butanol extract of C. sphaerocephala (100 mg/kg) and Vit. E (100 mg/kg) produced a significant increase (P < 0.05 and P < 0.01, respectively) in catalase activity compared to its activity in the VPA group (Figure 6).

Histological examination: effect of VPA and n-butanol extract of C. sphaerocephala on liver histology
As shown in Figure 7 (A), the livers of control (untreated) rats showed normal histological architecture. Livers of VPA-treated rats (300 mg/kg) showed dilatation and vascular congestion (D, a), steatosis (D, b) and hepatic necrosis (D, c), while liver sections of plant-extract-treated rats showed a normal histological picture closely approximating that of the control group (Figure 7 B, C). Sections belonging to groups pretreated or co-administered with VPA and Vit. E or with VPA and n-butanol extracts showed relatively normal structure compared to the VPA group (Figure 7 F, G and H).
DISCUSSION
The use of VPA as an anticonvulsant has been supported by clinicians but has been challenged due to its side effects and induced toxicity28, the most serious of these being hepatotoxicity29, teratogenicity30 and neurotoxicity31, which are associated with increased reactive oxygen species (ROS) formation32. The mechanism of hepatic injury has been studied extensively but is still unclear. Some authors have hypothesized that aberrant VPA metabolism, with the formation of toxic metabolites or the mediation of lipid peroxidation, might be the underlying mechanism of serious hepatic reactions33,34. Lipid peroxidation is one of the consequences of excessive ROS that causes cell damage, and it was shown that VPA induces lipid peroxidation in rat hepatocyte cultures35,36.

In the present study, administration of VPA to rats caused a significant increase in lipid peroxidation, as indicated by the significant increase in MDA level compared to the control group, suggesting that VPA activated the formation of free radicals in hepatic tissue. These results are confirmed by other findings demonstrating that VPA exposure stimulates the generation of ROS37,38. A study also reported elevated serum LPO levels in epileptic children receiving VPA therapy compared to the pretreatment group39, and another study reported increased plasma LPO levels in epileptic adults treated with VPA40.

It is well known that reduced glutathione (GSH) is a major antioxidant and redox regulator present in all cell types. It is the most abundant cellular thiol and plays an important role in the defense against oxidants and electrophiles41. It is also a substrate for glutathione peroxidase (GPx) and detoxifies foreign compounds and biotransformed drugs42. In our investigation, the GSH level and the CAT and GPx activities decreased in the livers of the VPA-treated group compared to control animals. The increased production of ROS caused inactivation of antioxidant enzymes, which reflects their consumption through oxidative stress. In agreement with this, the significant decrease of GSH content in VPA-treated rats suggests exhaustion of GSH stores and an increase in oxidative stress. These results are in agreement with other studies43,44. Also, erythrocyte GPx activity was decreased in patients treated with VPA45 and in rats administered VPA intraperitoneally46.

One of the most sensitive and dramatic indicators of hepatocyte injury is the release of intracellular enzymes such as AST and ALT after VPA administration; elevated activities of these enzymes indicate hepatocellular damage47. Our results showed that VPA administration caused severe acute liver damage in rats, demonstrated by the significant elevation of plasma AST and ALT levels, suggesting that excessive VPA might cause critical injury to the organ. These findings concur with the results of other studies48,49. Also, in the current study, the VPA-treated rats exhibited significantly higher cholesterol and triglyceride levels than the control rats. This increase is consistent with the finding of another study, which reported that administration of VPA caused a significant increase in the levels of the lipid profile (cholesterol, triglycerides, phospholipids and free fatty acids)11. Moreover, histological studies of VPA-induced toxicity have shown severe distortion of liver architecture, vascular congestion, and microvesicular steatosis with hepatic necrosis, which is in agreement with other studies50,51.
Plants produce significant amounts of antioxidants such as polyphenols, phenols and flavonoids. These compounds scavenge a wide range of free radicals, including the most active hydroxyl radical, which may initiate lipid peroxidation, and prevent the loss of the lipophilic (α-tocopherol) and hydrophilic (ascorbate) antioxidants by repairing tocopheryl and ascorbate radicals52.

In our study, administration of the n-butanol extract of C. sphaerocephala (50 mg/kg, 100 mg/kg) or of vitamin E (100 mg/kg) simultaneously with VPA to male rats resulted in normalization of the lipid peroxidation process as well as of the glutathione content and the glutathione peroxidase and catalase activities in rats' livers, permitting the prevention of hepatic dysfunction and maintaining normal levels of serum transaminases, cholesterol and triglycerides by inhibiting their hepatic leakage through the prevention of lipid peroxidation. The protective efficacy of C. sphaerocephala may thus be due to the presence of several active components. These results are in agreement with other studies demonstrating that the antioxidant and free radical scavenging properties of medicinal plant extracts provide protection against the hepatic damage caused by valproic acid53,54. Also, in this study we showed that treatment with C. sphaerocephala improved the histological changes in the liver caused by VPA.

CONCLUSION
The results of this study showed that VPA administration reduced antioxidants and increased lipid peroxidation, leading to organ damage. It was also observed that C. sphaerocephala exerted significant protection against VPA-induced toxicity through its ability to ameliorate lipid peroxidation via free radical scavenging activity, which enhanced the antioxidant defense system. This effect could be attributed to its antioxidant properties.

[Figure 7 legend, continued: (E), (F) liver sections of rats treated with VPA (300 mg/kg) plus C. sphaerocephala extract (50 or 100 mg/kg), respectively, showing conserved hepatocytes (×400); (G) liver sections of rats treated with VPA (300 mg/kg) plus vitamin E (100 mg/kg), showing a histological picture comparable to that of the control group with minimal damage of hepatocytes (×400).]
Ghosts from Ghosts in the BRST Formalism

We show that the Hamiltonian H_Q introduced in the course of the BRST analysis of a gauge theory may in fact be associated with an action that itself is gauge invariant. This action can then be treated using the BRST formalism. We illustrate this by considering the spinning particle and the first order Einstein-Hilbert action in 1 + 1 dimensions.

Introduction
The treatment of systems whose action involves non-physical degrees of freedom through the Hamiltonian-BRST procedure [1-3] is quite useful. (For reviews, see refs. [4-7].) In this approach, the contribution of non-physical degrees of freedom to physical processes is cancelled by systematically introducing additional non-physical degrees of freedom with opposite statistics. The first step in this procedure is to introduce a canonical pair of "ghosts" (θ_i, π_i), one for each first class constraint φ_i arising in the theory, and having different Grassmann character from φ_i. Next, a BRST operator Q is introduced (eq. (1)), in which Q_E = Q_E(q_i, p_i, θ_i, π_i) is the "extra" contribution to Q that ensures the condition of eq. (2), where {,}* is a Dirac bracket (DB) used to eliminate any second class constraints [5,7,8] that are present. Once Q has been found, a "BRST Hamiltonian" H_Q is constructed so that

    {Q, H_Q}* = 0    (3)

with the condition that, if the ghost fields were all to vanish, then H_Q reduces to H_C, the canonical Hamiltonian.

In this paper, we wish to note that the BRST action S_Q (eq. (4)) may itself possess gauge symmetries, much like the classical action (eq. (5)). These symmetries can be found either by examining the equations of motion that follow from the action [9] or by examining the action itself [10]. In both approaches, a symmetry generator G gives rise to a change in any dynamical variable A,

    δA = {A, G}*

that leaves the action invariant. We will consider two models in order to demonstrate how S_Q can possess gauge symmetries. First of all we shall examine the spinning particle [11], which possesses both a local "Bosonic" (non-Grassmann) and a "Fermionic" (Grassmann) symmetry. The BRST approach has been used to quantize this model [12-14], but the local gauge symmetries associated with the BRST Hamiltonian have apparently not been considered. The other model we shall look at is the first order Einstein-Hilbert action in 1 + 1 dimensions. The canonical structure of this model has been used to find a local gauge symmetry that is distinct from the manifest diffeomorphism invariance present in the model [15-17]. This novel symmetry has been used in conjunction with Faddeev-Popov path integral quantization [18]; we will consider the BRST approach to analyzing this model in section three.

The Spinning Particle
The action for a spinning particle is given by eq. (7) [11], in which φ^μ and e are Bosonic while ψ^μ, ψ_5 and χ are Fermionic; m is a mass parameter. Both of the approaches of refs. [9,10] lead to the generator of the gauge invariances associated with S_C in eq. (7), given in eq. (8). In eq. (8), p_A (π_A) is the canonical momentum associated with a Bosonic (Fermionic) coordinate variable A. Also,

    p_e = π_χ = 0    (9)

are primary first class constraints, eqs. (10) and (11) are secondary first class constraints, B(τ) (F(τ)) is a Bosonic (Fermionic) gauge function, and the variables ψ^μ, ψ_5 satisfy the corresponding DB. Following the procedure outlined in refs. [1-7], the BRST operator Q can be constructed, with f_i (b_i) being Fermionic (Bosonic) ghost fields; it subsequently follows that the BRST Hamiltonian is given by eq. (16). (Eqs. (15)-(18) also appear in refs. [12-14].)
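Since the displayed equations have not survived in this copy, a schematic of the standard construction may help orient the reader. The expression below is the generic rank-one BFV form of the BRST charge for a closed constraint algebra; it is offered as an illustration of the structure behind eqs. (1)-(2), with sign and ordering conventions that vary between references.

```latex
% Schematic BRST charge for first class constraints \phi_i satisfying
% \{\phi_i,\phi_j\}^* = C^k_{\;ij}\,\phi_k; conventions vary between references.
\begin{align}
  Q = \theta^i \phi_i + Q_E , \qquad
  Q_E = -\tfrac{1}{2}\,\theta^j \theta^i\, C^k_{\;ij}\,\pi_k , \qquad
  \{Q, Q\}^* = 0 .
\end{align}
```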
With H_Q given by eq. (16), we have the first order BRST action S_Q of eq. (19). We can now perform a canonical analysis of the action S_Q. It is apparent that, again, there are the primary first class constraints of eq. (9). Once more there is the secondary first class constraint of eq. (10), but now, in place of eq. (11), there is a different secondary first class constraint, and consequently there is now also a tertiary first class constraint (recalling eq. (10)). The formalism of ref. [10] can now be used to find the gauge generator associated with the local gauge invariances of S_Q of eq. (19); under the transformations of eq. (24) generated in this way, S_Q is left invariant.

Having established the presence of a local gauge symmetry in S_Q, we can now repeat the BRST procedure. We first of all find, using eq. (2), that a BRST operator associated with S_Q can be constructed, where f_i and b_i are new Fermionic (Bosonic) ghost fields. Again, from eq. (3), it follows that a BRST Hamiltonian associated with this operator exists, leading to the action S_Q of eq. (28). We now examine this action for possible gauge invariances. Again employing the approach of ref. [10], we find the generator of gauge symmetries that leaves S_Q in eq. (28) invariant, with B (F) being a Bosonic (Fermionic) gauge function. Since S_Q has a gauge symmetry, it too is subject to a BRST analysis.

The First Order Einstein-Hilbert Action in 1 + 1 Dimensions
Another example of a gauge theory whose associated BRST action itself possesses a gauge symmetry is provided by the first order Einstein-Hilbert action in 1 + 1 dimensions. The classical action for this model is given in eq. (30). If now h^{μν} = √(−g) g^{μν} and G^λ_{μν} = Γ^λ_{μν} − (1/2)(δ^λ_μ Γ^σ_{σν} + δ^λ_ν Γ^σ_{σμ}), then eq. (30) can be rewritten in first order form [15-18]. The primary constraints

    p_{ξ¹} = p_ξ = p_{ξ̄¹} = 0    (34)

obviously lead to secondary constraints, and these are all first class. Using these constraints, one finds that the gauge generator leads to transformations [15-18] in which ε^{01} = −ε^{10} = 1 and ω^{μν} is a symmetric gauge function. With suitable definitions, one then obtains a canonical Hamiltonian. Introducing now the Fermionic ghost fields f_a, f_b, f_c and F_a, F_b, F_c, it follows from eqs. (2, 3) that the BRST charge Q and the BRST Hamiltonian H_Q can be constructed; in particular, H_Q is given by eq. (42). The action associated with the BRST Hamiltonian H_Q obviously has primary constraints as well as the secondary constraints of eqs. (48, 49); with H_Q given by eq. (42), there are no tertiary constraints. The constraints of eqs. (48, 49) are all first class, and consequently S_Q itself possesses a gauge invariance, whose generator follows from the formalism of ref. [10].

Discussion
Cancellation of the effects due to the presence of non-physical degrees of freedom appearing in locally gauge invariant actions through the introduction of "ghost" fields is quite efficient. It is well understood that the BRST action of eq. (4), which takes the place of the classical action of eq. (5) upon introduction of these ghost fields, has a global gauge invariance on account of eq. (3) [6]; we have in this paper demonstrated that the BRST action itself might possess a local gauge invariance. Adding to H_Q a gauge-fixing term built from Q and a gauge function Γ′ is a form of "gauge fixing" [1-7]. This term leaves eq. (3) intact on account of eq. (2), and it has been demonstrated [1-7] that transition amplitudes are independent of the choice of Γ′; it may also break any local symmetry present in S_Q.
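The displayed form of the gauge-fixing term is likewise missing here; in the standard BFV treatment it is the bracket of the BRST charge with the gauge function, so that nilpotency of Q guarantees eq. (3) is preserved. The following is a schematic reconstruction under that assumption, not a quotation of the paper's equation:

```latex
% Schematic BFV gauge fixing: the graded Jacobi identity together with
% \{Q,Q\}^* = 0 ensures \{Q,\, H_Q + H_{gf}\}^* = 0 for any gauge function \Gamma'.
\begin{equation}
  H_{gf} = \{Q, \Gamma'\}^{*} , \qquad
  \{Q, H_{gf}\}^{*} = \tfrac{1}{2}\,\bigl\{\{Q,Q\}^{*},\, \Gamma'\bigr\}^{*} = 0 .
\end{equation}
```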
We note that, on account of eqs. (17, 45), H_Q + H_gf = 0 for both the spinning particle and the Einstein-Hilbert action in 1 + 1 dimensions if we choose Γ′ = −Γ. In the discussions of the spinning particle in refs. [1-14], Γ′ = 0. In this case, since the BRST action has a local gauge invariance, the path integral used in quantization is not well defined. One could either choose a suitable gauge fixing function Γ′ or reapply the BRST procedure and, after having arrived at an action involving "ghosts of ghosts", check to see if it is well defined; if it is not, a gauge fixing function can be introduced at this stage.

The novel "ghosts of ghosts" arising in the situation described above differ from the "new ghosts" that may arise in the course of applying the BV analysis of ref. [19]. These new BV ghosts arise whenever the gauge invariance present in the Lagrangian has reducible generators; they serve to eliminate any gauge invariance that would be present in the ghost sector if they were not included. In contrast, the "ghosts of ghosts" considered in this paper may occur even if the gauge generators of the classical action are not reducible; their purpose is to ensure that, if the BRST action arising upon applying the procedure of refs. [1-3] itself possesses a gauge invariance, then any superfluous degrees of freedom are eliminated in a consistent way.

We note that we had not expected the BRST action itself to be gauge invariant; in Yang-Mills theory this is not the case. It would appear that each BRST action must be examined individually for gauge invariance using the standard approaches of refs. [9,10]. Finally, the BRST approach of refs. [1-3] and the BV approach of ref. [19] are related; a discussion of their connection appears in ref. [20], where the gauge invariance present in the first order (Hamiltonian) form of the action is treated using both approaches and they are shown to be equivalent.
Transcriptional Regulation of Dental Epithelial Cell Fate

Dental enamel is the hardest tissue in the body and is produced by dental epithelial cells residing in the tooth. Their cell fates are tightly controlled by transcriptional programs that are facilitated by fate-determining transcription factors and chromatin regulators. Understanding the transcriptional program controlling dental cell fate is critical for our efforts to build and repair teeth. In this review, we describe the current understanding of the regulators essential for regeneration of dental epithelial stem cells and their progeny, which have been identified through transgenic mouse models. We first describe the development and morphogenesis of mouse dental epithelium, in which different subpopulations of epithelia, such as ameloblasts, contribute to enamel formation. Then, we describe the function of critical factors in stem cells or their progeny that drive enamel lineages. We also show that gene mutations of these factors are associated with dental anomalies in craniofacial diseases in humans. We further describe the function of the master regulators that govern dental lineages, in which the genetic removal of each factor switches dental cell fate to one generating hair. The distinct and related mechanisms responsible for this lineage plasticity are discussed. This knowledge will lead us to develop potential tools for bioengineering new teeth.

Introduction
Tooth bioengineering is of great interest because dental decay and tooth loss constitute major public health issues, and tooth anomalies are commonly found in many craniofacial diseases. Compared to the success of dental pulp stem cells (SC) in regenerative medicine [1], it has been a great challenge to regenerate dental enamel, the hardest tissue of the body. Dental enamel is produced by dental epithelial SCs and their progeny residing in the tooth [2], and their cell fate is controlled by a specific transcriptional program [3]. Transcription factors (TFs) are the ultimate regulators conducting cell-specific transcription in every biological process [4]. They are expressed in specific cell types and regulate the expression pattern. They recognize specific DNA sequences, called response elements or TF binding sites, and activate or repress gene expression [4]. Some TFs, called fate TFs, serve as the major drivers specifying cell fate [4-6] by orchestrating the fate-specific transcriptional program. Particular TFs possess the remarkable ability to reprogram one type of cell to another. The best-known example is the combination of four TFs, Oct4(Pou5f1)/Sox2/Nanog/Klf4, that converts somatic cells to a pluripotent state [7]. Even one TF can be sufficient to trans-differentiate somatic cells into another lineage. Myoblast determination protein (MyoD) converts fibroblasts to myoblasts [8]. The erythroid TF GATA-binding protein 1 (Gata1) changes myeloblasts to erythrocyte precursors [9]. The CCAAT/enhancer-binding protein (Cebpα or β) converts B lymphocytes to macrophages [10]. In addition to TFs, the current model of transcriptional regulation includes a role for chromatin regulators in specifying cell fate. They orchestrate gene transcription by controlling chromatin dynamics. For example, the chromatin remodeling complex switching defective/sucrose non-fermenting (SWI/SNF) controls cell lineages through enhancer maintenance [11].
Highly conserved chromatin modifying complexes, such as the nucleosome remodeling and deacetylation (NuRD) complex, are also associated with lineage commitment during early development [12]. The special AT-rich sequence binding protein (SATB1) modulates the NuRD complex to regulate chromatin architecture and has the ability to modulate the dental lineage [12]. The Mediator complex also controls cell lineage by facilitating gene transcription. The Mediator forms "super-enhancers" [13], which differ from typical enhancers in density and size. In super-enhancers, fate TFs are highly condensed to activate the transcription of cell identity genes [13]. For example, the Mediator complex maintains the cell fate of embryonic SCs (ESC) by regulating the four reprogramming TFs Oct4(Pou5f1)/Sox2/Nanog/Klf4 within these super-enhancers. Reduced expression of Mediator subunits induces ESC differentiation as a result of the loss of their pluripotent state following decreased expression of these four factors [13,14]. Mediator 1 (MED1) is one of the subunits of the multi-protein Mediator complex. Ablation of Med1 in vivo results in embryonic lethality in mice, but conditional Med1 null mice have been used to demonstrate its role in various cell lineages, including blood cells [15], T and B cells [16], and mammary epithelia [17,18]. Med1 controls epidermal lineages in skin, in which Med1 ablation in keratin 14 (Krt14)-expressing epithelia enhances epidermal and sebaceous lineages while abolishing hair fate, resulting in alopecia [19]. The same Med1 null mice convert the dental lineage to skin epithelia in the tooth [20,21].

Understanding the transcriptional program controlling dental cell fate is crucial to our efforts to build and repair teeth. Identification of the master regulators controlling dental transcriptional regulatory networks is necessary for successful manipulation of pluripotent or adult SCs to regenerate dental enamel for tooth bioengineering. Therefore, the control of enamel cell fate in tooth development and regeneration is the main theme of this review. A number of factors have been identified that control the cell fate of enamel-producing dental epithelium. In this review, we describe the current understanding of the TFs and chromatin regulators controlling dental cell fate. We first describe the development and morphogenesis of mouse dental epithelia in terms of (1) early development, (2) the different dental lineages towards subpopulations such as enamel-producing ameloblasts, and (3) the adult SCs in the incisor that regenerate dental epithelia postnatally. Then, we discuss the roles of critical TFs or chromatin regulators by focusing on (1) SCs and their renewal, (2) commitment to different lineages, and (3) lineage plasticity. We also discuss the clinical significance of these factors through their gene mutations causing dental defects in craniofacial diseases in humans. Our main focus is on the epithelial TFs that have the re-programming potential to regenerate enamel. Several signaling pathways, such as Wnt, FGF, TGFβ, and BMP, are important but are not covered here, as they have already been reviewed by others [22,23].

Initiation of Tooth Development
During embryonic development, tooth morphogenesis is initiated by a thickening of the dental epithelium to form a dental placode, followed by invagination into the mesenchyme in mice. Thereafter, tooth buds progress into the cap stage, and primary enamel knots are formed in the dental epithelium, leading to the tooth cusps.
Inner Enamel Epithelia (IEE) Lineage
IEE cells are important for tooth morphogenesis, as they eventually differentiate into enamel-producing ameloblasts. The basement membrane (BM) that lies between the epithelium and mesenchyme is critical for IEE differentiation and tooth morphogenesis [24,25]. Adhesion molecules such as LAMA5 and LAMA2 are important for IEE and tooth morphogenesis [26,27]. Mutations in LAMA3 or LAMB3 cause amelogenesis imperfecta in humans [28,29]. Nephronectin (NPNT) is an ECM protein possessing 5 EGF-like repeat domains and an RGD sequence that promotes proliferation and differentiation of IEE. NPNT, localizing in the BM of the developing tooth, reduces the number of SCs and increases cell proliferation, at least partially through the EGF signaling pathway [30].

Stratum Intermedium (SI) Lineage
Dental epithelial SCs also differentiate into the SI lineage, which is located adjacent to IEE cells and ameloblasts. SI cells support enamel mineralization by expressing alkaline phosphatase (ALPL) [20], which is essential for mineralization of the tooth and bone, as shown by the hypo-mineralization seen in conditional Alpl null mice [31-33]. SI cells also express Notch1, which is central to their differentiation. Notch signaling is induced by the Notch ligands Jag1 and Jag2, which are located in the adjacent IEE and ameloblasts [34]; Jag2-deficient mice also show enamel hypoplasia [35]. Single-cell RNA-seq and lineage tracing suggest that SI cells possess high lineage plasticity, as Notch1-expressing SI cells are converted to ameloblasts during injury-induced regeneration [23,36].

Outer Enamel Epithelia (OEE) and Stellate Reticulum (SR) Lineages
The OEE fuses with the IEE at the crown cervical margin and forms Hertwig's epithelial root sheath (HERS), which contributes to root formation in teeth [37,38]. A single-cell transcriptome study suggests that OEE cells control tooth size, whereas SR cells regulate the transport of nutrients in the incisor [39]. Unbiased clustering from single-cell analyses of 7-day-old mouse incisors indicates that IEE and OEE, or SI and SR, are not readily distinguishable by transcriptome. In addition, novel markers were identified, in which ATF3 marks both OEE and SR cells, whereas KRT15 labels only the OEE [39]. However, an independent single-cell transcriptome study of the incisor at 8 weeks demonstrates that the dental epithelia divide into more than the traditional categories, in which the OEE is clearly separated from the upper IEE and the IEE-OEE junctional region and is further divided into two groups (OEE-1 and OEE-2). The SR is close to the SI, and together they are categorized into three groups: SI, inner SR/SI, and outer SR [36].
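The unbiased clustering referred to above follows what is now a standard single-cell workflow; the sketch below reproduces its typical steps with the scanpy toolkit. The input path, parameter values, and the marker genes plotted are illustrative assumptions rather than details taken from the cited studies.

```python
import scanpy as sc

# Load a 10x-style count matrix for dissected incisor epithelium (path assumed).
adata = sc.read_10x_mtx("incisor_epithelium/filtered_feature_bc_matrix/")

# Basic quality filtering, normalization, and feature selection.
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable].copy()

# Dimensionality reduction, neighborhood graph, and unbiased Leiden clustering.
sc.pp.scale(adata, max_value=10)
sc.tl.pca(adata, svd_solver="arpack")
sc.pp.neighbors(adata, n_neighbors=15, n_pcs=30)
sc.tl.leiden(adata, resolution=0.5)
sc.tl.umap(adata)

# Inspect clusters against markers discussed in the text (Atf3, Krt15, Sox2).
sc.pl.umap(adata, color=["leiden", "Atf3", "Krt15", "Sox2"])
```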
Ameloblast Lineage
Ameloblasts are specialized epithelial cells responsible for the formation of enamel, the hardest tissue in the human body. Ameloblast differentiation goes through a series of sequential morphological changes [40]. IEE cells progress to presecretory ameloblasts, and signaling cues from dental mesenchymal cells facilitate further differentiation from presecretory to secretory ameloblasts. Secretory ameloblasts are polarized and secrete enamel matrix proteins, including amelogenin (Amelx) and ameloblastin (Ambn). Enamel crystal rods are formed and strengthened to mineralize the enamel matrix. After the enamel matrix is deposited, secretory ameloblasts differentiate into maturation ameloblasts. These cells are primarily responsible for ion transport and for the reabsorption of water and of peptides hydrolyzed from the enamel matrix proteins, orchestrating the fully mineralized enamel matrix. When enamel biomineralization is complete, ameloblasts subsequently undergo apoptosis. A single-cell transcriptome study indicates two different types of ameloblasts, distinguished by dentin sialophosphoprotein (Dspp) and Ambn [39]: the Dspp+ ameloblast modulates epithelial organization, whereas the Ambn+ ameloblast regulates enamel mineralization. Different TFs drive ameloblast differentiation at different stages, as we describe in Section 3.3.

Dental Epithelial Stem Cells (DESC)
Postnatally, adult SCs called dental epithelial SCs residing in the labial cervical loop (CL) regenerate dental epithelial cells for the continuously growing mouse incisors throughout the life of the mouse (Figure 1). Dental epithelial SCs share several characteristics with other adult SCs in regenerative tissues, such as a discrete niche and the ability to differentiate [3,41]. Dental epithelial SCs are supported by a microenvironment in the CL (the stem cell niche) that plays important roles in maintenance, proliferation, and cell fate decisions [42]. Dental epithelial SCs are identified by numerous SC markers, including Sox2 [43,44], Lrig1, Bmi1, and Gli1. Dental epithelial SCs give rise to all the dental epithelial cells, including IEE, OEE, SR, and SI, during tooth development. IEE subsequently differentiate into ameloblasts, which secrete enamel matrix proteins (Figure 1). Transit-amplifying (TA) IEE cells are highly proliferative and migrate from the cervical loop toward the distal end of the mouse incisor. Recent combinatory analyses with single-cell transcriptomics, in situ hybridization and lineage tracing [36,39] revise previous concepts of dental SCs, although part of the traditional classification of IEE, OEE, SR and SI is confirmed. For example, a new study demonstrates that a highly proliferative population in the IEE houses progenitor cells; however, these are distinct from the previously reported stem cells residing in the OEE, which are marked by Sox2, Gli1, Bmi1 and Lrig1 [36,39]. In this review, we will still use the traditional naming and markers but introduce recent modifications as appropriate.

The Role of TFs and Chromatin Regulators in Dental Epithelial Cell Fate
In this section, we describe the various TFs and chromatin regulators that control dental epithelia at different stages of differentiation and at different locations in the mouse mandible. We also provide information about the mutations of these factors associated with craniofacial diseases in humans, illustrating their clinical significance.

Epithelial Signal Centers at the Early Developmental Stage
During embryonic development, teeth are initiated from the dental lamina, a stripe of stratified epithelium first discovered at the sites of future tooth rows. Mouse embryonic dental laminae are characterized by the localized expression of several TFs and signaling molecules, called epithelial signal centers. Pitx2, a bicoid motif binding protein and a member of the paired-like homeobox family, arises in the dental epithelium, and its expression persists in the developing tooth [45].
Pitx2 plays important roles in the pattern formation and differentiation of the tooth [46]. Mutations in PITX2 are associated with Axenfeld-Rieger syndrome in humans, which presents with dental anomalies, including hypodontia and enamel hypoplasia [47]. Sox2 marks a dental epithelial signaling center through interaction with Pitx2 and Lef1 [48]. Foxi3 [49], Dlx2, Lef1, and p63 may also be responsible for driving dental fate [22]. Foxi3 inhibits enamel knot formation [50], as its deletion leads to supernumerary and incorrectly patterned cusps in the mouse [50]. The TF families of Pax, Msx, Lhx, and Runx are important during the early developmental stage, as tooth development is arrested at the bud stage when Pax9, Msx1, or Runx2 is deleted, and the dental lamina stage is disturbed when Msx1/2, Dlx1/2, and Lhx6/7 are mutated [22]. Mutations of PAX9 are associated with tooth agenesis in humans [51]. Nkx2-3, a member of the NK2 homeobox family of TFs, also plays a critical role in the early developmental stage: Nkx2-3 mediates p21 expression and ectodysplasin-A signaling in the enamel knot for cusp formation during tooth development [52].
NK2 homeobox families are tissue-specific, evolutionarily conserved TFs that regulate organ development, and Nkx2-3 has been identified as the dental-epithelium-specific Nkx factor through comparative microarray analyses. It may regulate dental SC fate, because blocking Nkx2-3 expands Sox2-expressing populations in a mouse organ culture system [52].

Adult SCs and Their Renewal

Sox2 has been recognized as a marker of dental epithelial SCs [36,43,53] and maintains competence for tooth formation [54]. Sox2 is critical for self-renewal of the SCs, as conditional deletion of Sox2 in the embryonic incisor epithelium leads to growth defects [54]. Tbx1 also controls proliferation and differentiation of dental SCs by modulating Pitx2 activation of p21. The deletion of Tbx1 leads to loss of enamel formation in mice [55]. Tbx1 is also a candidate gene for the 22q11.2 deletion syndrome causing dental defects in humans [56].

IEE/Ameloblast Lineage

Sox2 and Pitx2, which control dental SCs, also initiate the ameloblast lineage, since conditional knockout (KO) mice for Pitx2 [57] and Sox2 [54] have defects in ameloblast development. Mutations of Pitx2 are identified in Axenfeld-Rieger syndrome and tooth agenesis in humans [58]. AmeloD is a basic helix-loop-helix (bHLH) TF recently identified by screening a tooth germ complementary DNA (cDNA) library using a yeast two-hybrid system [59]. The cell-type-specific class II bHLH TFs activate or repress gene transcription and control the morphogenesis of various organs, including muscle, neurons, and blood cells [60]. For example, the muscle-specific MyoD, a member of this class of TFs, has the capability to trans-differentiate fibroblasts to myoblasts [8]. The dental AmeloD regulates ameloblast differentiation from IEE and HERS cells [61]. AmeloD acts as a suppressor of E-cadherin and promotes the migration of dental epithelia. AmeloD is also important for the progression of the SCs towards the enamel lineage, since AmeloD KO results in enamel hypoplasia [61] and deletion of E-cadherin affects the cell fate of SCs and their progeny [62]. Mechanistically, AmeloD represses E-cadherin expression by transcriptional regulation, in which it directly binds to the E-cadherin proximal promoter and recruits a chromatin repressive complex, including the repressive histone mark H3K27me3 and Ezh2, which is part of the PRC2 core complex [59]. The chromatin organizer and TF SATB1 controls the ameloblast lineage at the early presecretory stage. SATB1 is a cell-type-specific gene regulator, originally found in T cells, in which it regulates gene transcription by folding chromatin into loop domains; its deletion causes temporal and spatial mis-expression of numerous genes and arrests T-cell development [12]. In the tooth, SATB1 is expressed in presecretory ameloblasts and is essential to maintain ameloblast differentiation, cell polarity, and unidirectional secretion of matrix proteins [40]. Satb1 null mice show thin and hypo-mineralized enamel, in which Amelx transport to the apical secretory front and secretion into the enamel space are impeded, resulting in a massive cytoplasmic accumulation of Amelx [40]. The expression of SATB1 is increased when secretion and processing of matrix protein are accelerated by overexpression of an alternatively spliced Amelx, the leucine-rich Amelx peptide [63]. Epiprofin (Epfn)/Sp6 is a key factor promoting IEE differentiation as well as proliferation [64,65]. Epfn/Sp6 is present in ameloblasts, including IEE and secretory and mature ameloblasts, with increasing levels of expression [61].
A missense variant in Epfn/Sp6 is associated with amelogenesis imperfecta in humans [66]. Ablation of Epfn/Sp6 results in enamel defects during cusp and root formation in the mouse [65]. In contrast, over-expression of Epfn/Sp6 in Krt5-expressing epithelia induces ectopic enamel on the lingual side of the incisor, where control mice do not normally form enamel [67]. Epfn/Sp6 controls enamel formation and tooth morphogenesis through the interaction of epithelium and mesenchyme [67]. Double KO mice for Epfn/Sp6 and AmeloD show that the transcriptional regulation by these two factors is essential for epithelial cell invasion and cell proliferation [61].

Dental Lineage Plasticity

Dental epithelia are developmentally derived from ectoderm and separated from other ectodermal appendages such as skin epithelia. Dental and skin epithelia are distinct in their structure and function but share similar signaling pathways and transcriptional machinery. However, the checkpoints that specify the dental lineage rather than epidermal ones are not well understood. Several studies show that the master regulators Sox21, Med1, and Msx2 govern the ectodermal lineages. Genetic removal of each factor re-programs enamel-producing dental epithelia to epidermal/hair epithelia in transgenic mice, in which actual hair is generated in the case of Sox21 and Med1 null incisors but not with Msx2 deletion. Sox21 is a member of the SRY-Box (Sox) B group. Sox21 belongs to the SoxB2 protein family and functions as a transcriptional repressor, although the SoxB1 (Sox1-3) proteins are activators [72]. Sox21 was first found as a Sox2-associated factor [73]. The balance of transcriptional activation and repression is important for cell fates. For example, Sox21 repression of SoxB1 expression promotes neural differentiation [72]. Sox21 also regulates differentiation of the hair cuticle, and Sox21 null mice develop cyclic alopecia [74]. In teeth, Sox21 functions as a master TF governing ectodermal lineages, since conditional Sox21 null mice switch the cell fate of dental epithelia to one generating hair, resulting in severe enamel hypoplasia [75]. Sox21 null dental epithelial cells fail to commit to the ameloblast lineage. Instead, Sox21 ablation leads to the formation of a unique microenvironment promoting hair fate, because part of the dental epithelia is converted to mesenchymal-like cells through epithelial-mesenchymal transformation (EMT), which is supported by TGFβ [75]. These mesenchymal-like cells may generate a signal to stimulate epidermal differentiation, as hair papillae do in the skin [61]. In addition, Sox21 ablation decreased E-cadherin expression, which is essential to maintain dental lineages [75]. Hair is also generated in the incisor of Fam83h null mice, although it forms relatively normal enamel [76]. Truncation mutations of FAM83H cause autosomal dominant hypocalcified amelogenesis imperfecta in humans [76]. The mechanism by which Fam83h ablation generates hair in the incisor may be related to those for Sox21 and Med1, although Fam83H is not a TF. Msx2 also plays a critical role in controlling dental cell fate. Msx2 is a member of the family of divergent homeobox-containing genes. Msx2 was first reported as a transcriptional repressor [77]. It functions by forming heterodimers with other TFs such as CEBPα. Msx2 controls the cell fate of osteoblasts in bone and of the epithelium in ectodermal tissues such as skin, tooth, and mammary glands [78].
Msx2 antagonizes CEBPα and regulates the ameloblast lineage by controlling expression of amelogenin [79]. Global Msx2 KO mice do not form enamel in the normal location, and ectopic mineralization occurs in SR cells as a result of disturbed differentiation of both ameloblasts and SI at the maturation stage. Instead, Msx2-deficient OEE cells become highly proliferative and are transformed into epidermal cells. Epidermal and hair marker proteins accumulate in the SR layer, but actual hair is not generated [80]. Therefore, Msx2 is considered a master TF, but its function may depend on interactions with other, as yet unknown TFs. Med1 also controls the cell fate of dental epithelia. Med1 ablation inhibits Notch1-mediated SI differentiation and disrupts the amelogenesis essential for mineralization of the enamel matrix [20,21]. Med1 supports SI differentiation by directly facilitating Notch1-mediated gene transcription of Alpl, forming a complex with cleaved Notch1/Rbp-Jk on the Alpl promoter [20]. Instead, dental cells institute an epidermal program to generate ectopic hairs in the incisors. Sox2 expression persists beyond the CL and extends into the differentiation zone, such that the cells within this zone remain multi-potent and maintain stem cell potential [21]. These cells are induced to an epidermal fate, likely by the calcium present in dental tissues [21]. The KO mice for these master regulators Sox21, Msx2, and Med1 indicate the high lineage plasticity of dental epithelial cells. However, the epidermal fate is derived through different types of enamel epithelia: it is induced through IEE/ameloblasts, Notch1-expressing SI cells, and SR cells in KO mice for Sox21, Med1, and Msx2, respectively (Figure 2). These results for Sox21 and Med1 null mice suggest both common and distinct mechanisms underlying lineage plasticity. We propose that dental epithelial cells lacking these master regulators remain in an undifferentiated state and behave as pseudo stem cells in their location, where each factor is critical for its lineage. For example, Sox21-lacking and Med1-deficient dental epithelia fail to commit to their own fates of ameloblast and SI lineage, respectively.
Instead, they may maintain multi-potency, as shown by the stem cell marker Sox2 extending into the differentiation zones in both Med1 and Sox21 null teeth [21,75]. These cells may then be re-programmed to skin epithelia by stimulants present in their microenvironments. Sox21 and Med1 null teeth may utilize distinct stimulants, since hair is generated in different locations of the enamel organ. Sox21 null mice generate hair in the ameloblast zone [75], in which Sox21-deficient epithelial cells are converted to mesenchymal-like cells by EMT [75]. These mesenchymal cells may send a signal to induce epidermal fate, as hair papillae do in the skin. In contrast, the Med1 null incisor generates hair under the papillary layer, where calcium is abundantly supplied from blood vessels. The extracellular calcium is transported there for enamel mineralization, but it may induce epidermal fate in the case of the Med1-lacking tooth. Calcium induces an epidermal fate of Med1-lacking dental epithelial cells in culture [21], and a calcium gradient stimulates epidermal differentiation in the skin [81]. This re-programming may be driven by chromatin dynamics. Cell fate is supported by super-enhancers in ESCs and somatic cells [82,83], into which the Mediator complex and the fate TFs are densely incorporated [84]. Our recent results show that the same is true in dental epithelial SCs. Med1 may regulate cell fate by forming the super-enhancers into which dental enamel fate TFs are highly incorporated (unpublished observations). Med1 ablation blocks the enamel lineage, resulting in enamel hypoplasia [21] (Figure 3A, microCT panels), by disturbing this epigenetic regulation (unpublished observations). We present a model in which dental cell fate is controlled by epigenetic processes: (1) the Mediator complex containing Med1 (blue) forms super-enhancers, (2) several fate TFs such as Pitx2 or Sox21 (pink and green) are densely recruited into the super-enhancers, (3) the super-enhancers are linked to the promoters of dental-specific genes as some of the Mediator subunits bind to the general transcriptional complex (yellow), and (4) gene transcription for the enamel lineage is induced through RNA polymerase (PIC) (Figure 3B).
Therefore, fate TFs such as Sox21, or the chromatin regulator Med1, may be essential for enamel formation by controlling dental epithelial cell fate. Fate TFs are present in the specific locations of dental epithelia where they function. In contrast, Mediator complexes are ubiquitously expressed in all cells as universal transcriptional machinery supporting the function of these fate TFs. In fact, Med1 deletion from Krt14-expressing epithelia converts cell fate not only in dental epithelia but also in skin, where it controls the balance of three epidermal cell fates involving hair, sebaceous gland, and interfollicular epidermis [85]. Sox21 is not a universal TF but is present in both dental and skin epithelia, and Sox21 deletion from Krt14 epithelia also affects the cell fates of hair keratinocytes in the skin [74].
The functions of various TFs and chromatin regulators in the enamel organ are summarized in Table 1. The function is deduced from the mouse phenotypes of either global or conditional KO mouse models, or from enamel organ cultures lacking the factors. The localization within the enamel epithelia is shown. Their potential roles in DESCs are also proposed from histological or gene expression analyses of mouse or organ culture models, focusing on stem cell functions such as maintenance/proliferation and lineage commitment. The cell fate decision means that DESCs commit to the dental epithelial fate to produce the enamel but suppress non-dental ectodermal lineages towards hair and epidermis.

Table 1. The function of transcription factors and chromatin regulators in the enamel organ and their potential role in dental stem cells. The localization of these factors is also shown.

Conclusions

In summary, we have discussed recent progress in our understanding of the TFs that control dental epithelial cell fate. We also demonstrate the high plasticity of dental epithelial cells, which can be re-programmed to other lineages by manipulating the expression of master regulators. These factors are obvious candidates for cell re-programming, in which one factor or a combination of factors is capable of converting either induced pluripotent stem cells (iPS) or other somatic cells into dental epithelia to produce enamel. Further investigation of their mechanisms at the genetic level will advance our efforts to generate new teeth.
Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction

The robustness to distribution changes ensures that NLP models can be successfully applied in the realistic world, especially for information extraction tasks. However, most prior evaluation benchmarks have been devoted to validating pairwise matching correctness, ignoring the crucial measurement of robustness. In this paper, we present the first benchmark that simulates the evaluation of open information extraction models in the real world, where the syntactic and expressive distributions under the same knowledge meaning may drift variously. We design and annotate a large-scale testbed in which each example is a knowledge-invariant clique that consists of sentences with structured knowledge of the same meaning but with different syntactic and expressive forms. By further elaborating the robustness metric, a model is judged to be robust if its performance is consistently accurate on the overall cliques. We perform experiments on typical models published in the last decade as well as a popular large language model; the results show that the existing successful models exhibit a frustrating degradation, with a maximum drop of 23.43 F1 score. Our resources and code are available at https://github.com/qijimrc/ROBUST.

Introduction

Open Information Extraction (OpenIE) aims to extract n-ary knowledge tuples {(a1, p, a2, ..., an)} consisting of n arguments and one predicate from natural text in a domain-independent manner, and has served as a backbone benefiting NLP applications for many years (Liu et al., 2021; Pei et al., 2022; Chen et al., 2021). Due to its structural flexibility, the evaluation of OpenIE is a nontrivial problem, which in turn drives the advancement of the task. Early studies (Stanovsky and Dagan, 2016; Zhan and Zhao, 2020) evaluated extractions based on the lexical matching of syntactic heads between elements. To tackle the overly lenient metric, subsequent approaches (Lechelle et al., 2019; Bhardwaj et al., 2019; Gashteovski et al., 2022) propose the use of exact matching between tokens for delicate evaluation. Among these benchmarks, CaRB (Bhardwaj et al., 2019) adopts the all-pair matching table to compute the tuple match scores between extractions, which has been considered the de facto standard for evaluation. Research including these efforts has been devoted to evaluating the pairwise matching correctness between model extractions and golden facts on a sentence. However, the conventional evaluation benchmarks do not measure the robustness of models in the realistic open-world scenario, where the syntactic and expressive forms may vary under the same knowledge meaning (Qi et al., 2023). As shown in Figure 1, while the three sentences s1, s2, s3 contain the same structured knowledge (a1, p, a2, a3), the state-of-the-art model OpenIE6 successfully extracts facts (in green color) on sentence s1, but fails to predict arguments (in red color) on the other sentences due to the syntactic and expressive drifts. In this example, the sentence s1 comes from CaRB, which has a similar syntactic distribution to the training set, and existing benchmarks can only evaluate models on this limited target, attributing commendable scores (46.4/33.3) to it, rather than on other real-world samples. For accurate and faithful evaluation, we should measure the performance of models on sentences with various syntactic and expressive distributions under the same knowledge meaning (Zhong et al., 2022). Nevertheless, it is not trivial to construct a benchmark that satisfies the aforementioned conditions of encompassing both knowledge invariance and distributional shift. First, manual annotation of parallel texts that maintain the same knowledge meaning with different syntactic and expressive forms may result in sentences that are either too trivial or too artificial. Second, it is difficult to build a metric that measures the robustness while remaining compatible with existing benchmarks (e.g., Bhardwaj et al., 2019; Gashteovski et al., 2022) to ensure comparability. On the other hand, natural language paraphrasing is defined as producing sentences with different surface forms (syntactic and lexical) while conveying the same semantic meaning (Zhou and Bhat, 2021). Going beyond the pairwise correctness comparison, can we evaluate the robustness of models based on reliable paraphrases equipped with syntactic and expressive transformations?
In this paper, we introduce ROBUST, a Robust OpenIE Benchmark with Ubiquitous Syntactic Transformations, aiming to evaluate the robustness of OpenIE models. ROBUST is a large-scale human-annotated benchmark consisting of 1,272 robustness testing cliques, where each clique contains sentences with different syntactic and expressive variations while conveying the same underlying knowledge meaning, for a total of 4,971 sentences and 16,191 knowledge extractions. To obtain each clique, we first adopt a syntactically controllable paraphraser with diversified syntactic sampling and expressive filtering strategies to generate paraphrases for each sentence in CaRB. We then design a two-stage annotation pipeline in which human experts perform sentence correction and knowledge extraction for each individual paraphrase in the cliques. This data paradigm enables evaluation to go beyond pairwise matching to clique-wise comparisons. Based on the testbed structure, we calculate the robustness scores with respect to the worst performance within a clique and further analyze the performance variances on all cliques. This metric fairly reflects the robustness of models to distributional drifts and is also compatible with existing benchmarks, since it is calculated at the magnitude of a single sentence.

To explore the robustness of existing models, we implement typical OpenIE systems published in the past decade. The experimental results show a dramatic degradation in model performance on ROBUST, with an average drop of 18 percentage points in F1 scores, indicating that the robustness of existing successful models is far from satisfactory. We then further analyze the correlation between the variances of the model performance and the divergences of the syntactic distances on the cliques. The results show that the variance grows as the syntactic distance increases, and the fact that models behave with similar variance on most of the cliques also demonstrates the inner consistency of our benchmark. In addition, we evaluate a representative large language model, ChatGPT, for OpenIE. Experimental results demonstrate that ChatGPT achieves a remarkable performance that is comparable to the state-of-the-art model on CaRB (F1 score of 0.516 under the 10-shot setting), yet it still exhibits the robustness issue on ROBUST (F1 score of 0.275 under the 10-shot setting).

The ROBUST Benchmark

In this section, we describe the details of the benchmark construction. The benchmark consists of cliques based on syntactically diverse paraphrase generation and human annotation to ensure knowledge invariance and distributional shift, where both the syntactic transformations sampled from the real world and the human experience guarantee naturalness. We also provide details of annotations and strategies in Appendices A.1 and A.2.
Data Preparation

Paraphrase Generation. Considering the compatibility with previous benchmarks, we build our benchmark based on CaRB (Bhardwaj et al., 2019), which contains 1,272 general-domain sentences originating from OIE2016 (Stanovsky and Dagan, 2016) with high-quality n-tuple annotations. To build sufficient paraphrases, we adopt AESOP (Sun et al., 2021), a syntactically controllable paraphrasing model that generates paraphrases by specifying pruned target syntactic trees, which can be sampled diversely. The model used in our work is trained on parallel annotated data with two-level target syntactic trees. During generation, we first collect a set of constituency parse pairs {(T_i^Ps, T_i^Pt)} pruned at height 3 from ParaNMT-50M (Wieting and Gimpel, 2018). Then, for each sentence s with its constituency parse tree T, we obtain the 2 most similar parses {T'_i^Ps} by calculating weighted ROUGE scores between parse strings, and select the 5 top-ranked parses from {T_i^Pt} for each T'_i^Ps by sampling from a distribution. We thus generate 10 syntactically varying paraphrases for each sentence.

Diversified Expressive Filtering. Though different syntactic trees are specified in the paraphrase generation, we find that there are still similar expressions in the generated sentences. Therefore, we further filter the paraphrases with a heuristic search strategy to maintain the most diverse ones. For each clique composed of multiple sentence nodes, including an original sentence and multiple paraphrases, we first calculate the BLEU scores (Papineni et al., 2002) between all pairs of nodes. We then repeat the following simple strategy on paraphrase nodes until reaching the maximum acceptable number, to eliminate homogeneity: (1) find the pair of nodes with the largest score in the current clique; (2) remove a node if its length is less than 2/3 of the original sentence, otherwise remove the node with the highest sum of scores with all other nodes. As depicted in Figure 1, the remaining sentences s2 and s3 exhibit distinct syntactic structures and expressive forms compared to the original sentence s1. The detailed process with an example is shown in Appendix A.2.2.

Annotation

For each paraphrase within a clique, we further design a two-stage annotation pipeline based on human experts to perform sentence correction and structured knowledge extraction. All annotators undergo training with tutorials and pass a final examination, and our batch-wise sampling validation ensures an annotation accuracy of over 90%. Detailed annotation information, including annotators, platforms, and quality checking, can be found in Appendix A.1.

Paraphrase Annotation. While automatically generated paraphrases present syntactic and expressive variants, the correctness of the sentences cannot be fully guaranteed. To ensure the quality of the sentences, we perform a thorough paraphrase annotation with three types of corrections:

• Grammar Correcting: Correct grammatical mistakes in sentences to ensure fluency.
• Phrase Replacing: Replace the incorrect phrases in sentences to ensure correctness.
• Sentence Rewriting: Rewrite the entire sentence if it has a semantic difference from the original sentence.
All operations are required to preserve both the distinctiveness of the annotation from the original sentence and their semantic equivalence. Based on this paradigm, all paraphrases are guaranteed to differ from the original sentence in expression while retaining the same semantic meaning. As shown in Figure 2, the three sentences in the 1st column exhibit different syntactic and expressive forms. A detailed process is available in Appendix A.1.1.

Knowledge Annotation. In the second stage, we leverage human experts to annotate n-ary knowledge tuples on the paraphrases finished in the first stage. We design a guideline involving an iterative process to instruct annotators in extracting all possible facts from a sentence. By referring to the annotation of CaRB, in each iteration, we also divide the task of annotating into three steps: (1) recognizing the predicate, (2) finding the arguments for that predicate, and (3) optionally obtaining the time and location arguments for the tuple if possible. In particular, we distribute the complete clique to individual annotators to obtain extractions with the same structured knowledge meaning. This annotation process ensures the characteristics in CaRB (i.e., Completeness, Assertedness, Informativeness, and Atomicity) while maintaining consistency with the underlying knowledge. As illustrated in the fourth column of Figure 2, the extractions from different sentences correspond to the same underlying knowledge. A detailed annotation process is available in Appendix A.1.2.

Data Analysis

To understand the general characteristics of ROBUST, we provide quantitative statistics at different granularities in comparison to previous benchmarks. In contrast to the traditional analysis of words and sentences, we further investigate the syntactic phenomena on cliques to explain the robustness evaluation.
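To make the clique structure concrete, the following is a minimal sketch of one clique in plain Python; the field names and the example values are illustrative assumptions, not the released file format.

```python
# One hypothetical ROBUST clique: paraphrases that share the same
# underlying knowledge tuples. Field names are illustrative only.
clique = {
    "clique_id": 42,
    "sentences": [
        "In 1840 he took command of the regiment and held it for nearly 14 years.",
        "He was appointed to command his regiment in 1840, a post held for almost 14 years.",
    ],
    "tuples": [
        # (predicate, subject, object, time, place); time/place are optional
        ("took command of", "he", "the regiment", "in 1840", None),
        ("held", "he", "the post", "for nearly 14 years", None),
    ],
}

# Clique-wise evaluation scores a system on every sentence of the clique
# against the same gold tuples, instead of on one sentence in isolation.
for sentence in clique["sentences"]:
    print(sentence, "->", clique["tuples"])
```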
Syntactic Analysis The proposed benchmark measures the robustness of models on the drifts of linguistic observations.Therefore, the syntactic divergence in the clique is the key to ensuring robustness evaluation.We provide a thorough syntactic analysis of cliques to investigate the divergence.Metrics of Syntactic Correlation.In order to analyze the syntactic divergence in the cliques, we need a metric to measure the syntactic correlation between two sentences.A fast and effective algorithm is the HWS distance proposed in (Qi et al., 2023), which calculates the syntactic tree distance between two sentences based on a hierarchically weighted matching strategy, where smaller weights imply a greater focus on the comparison of skeletons.The value domain of this is [0, 1], where 1 indicates the farthest distance.However, we find that their method may lead to the overcounting problem for repeated consecutive spans 3 .We revise the original algorithm to solve the problem while maintaining efficiency.The details of the revised algorithm are shown in Appendix A.2.1 for ease of use. We additionally implement the algorithm of Convolutional Tree Kernel (CTK) similarity proposed in (Collins and Duffy, 2001) to fairly illustrate the syntactic phenomenon.In contrast to distance, it measures the similarity between a pair of tree structures by counting the number of tree fragments in common.The value domain of this algorithm is also [0, 1], where 1 means the maximum similarity.Intra-clique Syntactically Analysis.To exhaustively investigate the syntactic divergence on the cliques, we calculate the average syntactic distance/similarity in each individual clique based on the algorithms described above.The result is shown in Figure 3, where the horizontal axis and vertical axis are the output and the discounting weights of two algorithms, respectively. Overall, we observe that the values of syntactic distance and syntactic similarity are mainly scattered between [0.6, 0.9] and [0.0, 0.7], respectively, indicating that most of the cliques exhibit significant syntactic discrepancies.Another notable observation is that the distribution of the HWS scatter representing the distance is closer to 1 as the discount weight decreases, suggesting that the differences in syntactic skeletons are more significant in ROBUST. Inter-cliques Syntactically Analysis.Going be-3 For two strings s1s3s4 and s1s2s1 with consecutive span s1 in common (e.g, SVPNP and SVPNPVP), the resulting distance may increase with the repetition of span s1. yond the individual clique, we further explore the syntactic divergence over all cliques.As shown in Figure 4, we average the mean of clique-wise syntactic distance/similarity on all cliques, based on the linearly increased discounting weights.We find that the average similarity of syntactic trees on ROBUST decreases rapidly as the discounted weight of the algorithm increases.Considering that increasing the weights implies a reduced focus on the low-level tree fragments, this result suggests that ROBUST involves prominent variability in the high-level skeleton of syntactic trees. 
Experiments

In this section, we explore the robustness of existing successful OpenIE systems and further analyze the impact of different model architectures on robustness. We first introduce the proposed ROBUST metric, which calculates the robustness performance on a clique, and then extensively evaluate six typical models from three major categories and a large language model, ChatGPT. Furthermore, based on the clique structure, we analyze the correlation between the variances of the model performance and the syntactic divergences in cliques.

Evaluation Metrics

The existing widely used CaRB scorer computes pairwise matching scores based on extractions on a sentence. Though accurate, it still has limitations. We extend this scorer to cliques to calculate the robustness scores.

The CaRB Metric. To evaluate the correctness of system tuples, CaRB first creates an all-pair matching table, with each column as a system tuple and each row as a gold tuple, and computes precision and recall scores in each cell. Then, it calculates the overall recall R by averaging the maximum values of all rows and the overall precision P by averaging the one-to-one precisions between system tuples and gold tuples in the order of the best match score to the worst. Finally, the overall F1 is computed with R and P.

The ROBUST Metric. An OpenIE system is considered robust if it behaves consistently on sentences with the same underlying knowledge meaning but differing syntactic and expressive variations, indicating the preservation of knowledge invariance. Therefore, we naturally calculate the robustness scores of a model on each clique. Given a clique {s1, ..., sk} in ROBUST, we first calculate the P/R/F1 scores of the model on each sentence, and then select the scores from the sentence with the worst F1 as the ultimate robustness scores. As mentioned above, we can compute the pairwise P/R/F1 scores based on the CaRB scorer. It is noteworthy that the ROBUST evaluation metric is compatible with existing benchmarks because we calculate at the order of magnitude of one sentence, and we can directly compare our robustness scores with CaRB and others.

Evaluation Models

To exhaustively evaluate the robustness of existing paradigms, we select six typical OpenIE approaches from 3 categories. (1) Rule-based models, which adopt linguistic patterns to identify knowledge facts, including OpenIE4 (Christensen et al., 2011), ClauseIE (Del Corro and Gemulla, 2013), and OpenIE5 (Saha et al., 2017, 2018). (2) Independent NN-based models, which train neural networks from scratch with designed architectures, including RnnOIE (Stanovsky et al., 2018) and SpanOIE (Zhan and Zhao, 2020). (3) PLM-based models, which rely on a pre-trained language model usually trained on a large-scale text corpus, including OpenIE6 (Kolluru et al., 2020a), which introduces a novel iterative grid labeling architecture that treats OpenIE as a 2-D grid labeling task, producing extractions gradually based on BERT.

We also evaluate the OpenIE performance of ChatGPT. We use the Python API interface of the gpt-3.5-turbo version for all experiments. We perform few-shot experiments with manually constructed prompts and sampled demonstrations for the CaRB and ROBUST benchmarks. The prompt template is available in Appendix A.3.
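A minimal sketch of the ROBUST metric above, assuming a callable carb_score that returns (precision, recall, F1) for one sentence's system tuples against its gold tuples, could look as follows; the function names are ours, not the official scorer's.

```python
def robust_scores(clique_preds, clique_golds, carb_score):
    """P/R/F1 of the worst-F1 sentence in one clique.

    clique_preds/clique_golds: per-sentence lists of system/gold tuples;
    carb_score(pred, gold) -> (precision, recall, f1) for one sentence.
    """
    per_sentence = [carb_score(pred, gold)
                    for pred, gold in zip(clique_preds, clique_golds)]
    # A model is only as robust as its weakest paraphrase in the clique.
    return min(per_sentence, key=lambda prf: prf[2])

def benchmark_robust(cliques, carb_score):
    """Average the worst-case P/R/F1 over all cliques."""
    scores = [robust_scores(p, g, carb_score) for p, g in cliques]
    n = len(scores)
    return tuple(sum(s[i] for s in scores) / n for i in range(3))
```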
Results on Typical OIE Models

We run the source code of all baselines on both CaRB and ROBUST and compute the average scores across all samples. All results are shown in Table 2. Note that although the ROBUST scores are calculated in a different environment than CaRB, the comparison is still fair due to the calculation manner. Based on the results, we can see that current successful OpenIE systems experience a considerable performance decline on ROBUST across the board. Compared with CaRB, the average degradation for precision, recall, and the F1 score is 20%, 15%, and 18%, respectively. This observation suggests that research on the robustness of existing OpenIE models remains incomplete, as overly idealized evaluations encourage models to match fixed expressions strictly.

With a concrete comparison of model architectures, we find that the SpanOIE model demonstrates a relatively small decrease in all three scores compared to other models, indicating its robustness to syntactic transformations. This result suggests that the extraction strategy of enumerating geometric spans is, to some extent, independent of syntactic drift, making it less susceptible to sentence-form transformations.

Results on ChatGPT

We evaluate ChatGPT's OpenIE capability on CaRB and ROBUST. We randomly select 1/3/5/10 demonstrations from CaRB and prompt ChatGPT to extract knowledge tuples by incorporating these demonstrations. We exclude sentences that belong to the same clique as the demonstrations during extraction. The result shows that ChatGPT exhibits impressive capability on CaRB, attaining a 51.6 F1 score in the 10-shot setting, comparable to the supervised state-of-the-art model OpenIE6. However, it still faces the robustness problem, as evidenced by a decline in the F1^robust score to 27.5 on ROBUST in the same setting.

We also investigate the impact of the diversity of demonstrations on ChatGPT's performance. We first randomly select 100 pairs of cliques {(C_i, C_j) | C_i = (s_i^1, s_i^2, ...)} from ROBUST. For each sentence in clique C_i, we prompt ChatGPT by specifying 1/2/3/4 demonstrations from clique C_j. We then calculate the CaRB F1 score for each sentence (shown in blue), the average CaRB F1 score for all sentences (s_i^1, s_i^2, ...) (shown in orange), and the ROBUST F1^robust score on all sentences in clique C_j (shown in green). The results in Figure 6b show that the correctness and robustness of ChatGPT can be improved by giving more diversified demonstrations.
Detailed Analysis

In this section, we investigate the coherence among cliques in ROBUST, as well as the variations in model performance across different cliques.

Is the evaluation of model performance consistent across cliques? It is necessary to investigate whether our evaluation of the model is consistent across the majority of cliques in order to explore the internal consistency of our data samples. Based on the main results, we calculate the F1 score variance in each clique for three representative models, RnnOIE, SpanOIE, and OpenIE6. The distribution of the number of cliques based on variance is depicted in Figure 5a. We find that the majority of cliques exhibit relatively slight variances, indicating a high degree of consistency among robustness cliques. In addition, we sample 11 subsets with an interval of 100 from ROBUST and calculate Pearson's correlation coefficient between the average F1^robust of OpenIE6 on each subset and the number of cliques in each subset. This result is −0.1480, indicating a weak correlation between these two factors.

How does the syntactic divergence affect the performance of models? Benefiting from the data structure of ROBUST, we can further investigate the effect of syntactic divergence on the performance of models. Concretely, for each clique, we calculate the average HWS/CTK values between all pairs of sentences and the variance of F1 across all sentences. The result is shown in Figure 5.

Related Work

OpenIE Approaches. The OpenIE task was first proposed by Banko et al. (2007) and is a fundamental NLP task. Earlier models focused on statistical or rule-based methods to handle this task (Christensen et al., 2011; Schmitz et al., 2012; Del Corro and Gemulla, 2013; Angeli et al., 2015; Pal et al., 2016; Saha et al., 2017, 2018). Recently, with the rapid development of deep representation learning, many supervised neural models have been proposed for OpenIE. These approaches can be roughly classified into two lines. (1) Sequence labeling-based models: RnnOIE (Stanovsky et al., 2018) applies a BiLSTM transducer, extending deep semantic role labeling models to extract tuples. SenseOIE (Roy et al., 2019) leverages an ensemble of multiple unsupervised OpenIE systems' outputs and the lexical and syntactic information to improve performance. SpanRel (Jiang et al., 2020) represents the OpenIE task in a single format consisting of spans and relations between spans. SpanOIE (Zhan and Zhao, 2020) predicts the candidate relation spans and classifies all possible spans of the sentence as subject or object for each span. Multi2OIE (Ro et al., 2020) first predicts all relational arguments by BERT and then predicts the subject and object arguments associated with each relation using multi-headed attention. OpenIE6 (Kolluru et al., 2020a) provides an iterative grid labeling architecture, which treats OpenIE as a 2-D grid labeling task. (2) Sequence generative models: Neural Open IE (Cui et al., 2018) and Logician (Sun et al., 2018) generate OpenIE extractions by a seq2seq paradigm. IMoJIE (Kolluru et al., 2020b) leverages a BERT-based encoder and generates the next extraction fully conditioned on the extractions produced so far.

OpenIE Benchmarks.
Several benchmark datasets have been proposed to evaluate existing OpenIE approaches. OIE2016 (Stanovsky and Dagan, 2016) developed a method to create a large-scale OpenIE dataset using QA-SRL annotations (He et al., 2015), which was later found to be noisy, with missing extractions. After that, CaRB (Bhardwaj et al., 2019) and Re-OIE2016 (Zhan and Zhao, 2020) re-annotated the corpus to improve the dataset's quality for more accurate evaluation. Wire57 (Lechelle et al., 2019) provided high-quality expert annotations, but its size is too small to serve as a comprehensive test dataset, with only 57 sentences. DocOIE (Dong et al., 2021) argued that in reality a sentence usually exists as part of a document rather than standalone, and that contextual information can help models understand it better, and annotated a document-level OpenIE dataset. LSOIE (Solawetz and Larson, 2021) was built by converting the QA-SRL BANK 2.0 dataset (FitzGerald et al., 2018) to OpenIE, providing a significant improvement over previous work in terms of data quantity. BenchIE (Gashteovski et al., 2022) created a fact-based benchmark and framework for multi-faceted, comprehensive evaluation of OpenIE models in the multi-lingual setting.

Despite the widespread interest in these benchmarks and the promising results of the related OpenIE approaches, the traditional peer-to-peer matching-based evaluation cannot measure the robustness of those approaches in settings where the syntax and expression may vary under the same underlying meaning (Qi et al., 2023). This work fills the gap between traditional metrics and the missing robustness evaluation for OpenIE and calls for more efforts in this research area.

Conclusion and Future Work

In this work, we propose ROBUST, a large-scale human-annotated OpenIE benchmark consisting of 1,272 robustness testing cliques, where each clique contains sentences with different syntactic and expressive variations while conveying the same underlying knowledge meaning. We introduce our methodology for constructing the benchmark, including a syntactically and expressively diverse paraphrase generation and a two-stage manual annotation. A comprehensive analysis is then performed to demonstrate the consistency of the proposed data with the real world. We finally perform extensive experiments on existing successful models as well as a representative large language model, and the results show that the robustness of existing methods is far from satisfactory. The further detailed analysis demonstrates the substantial internal coherence of our benchmark, providing inspiration for the future development of robustness benchmarks.

Limitations

We have presented a dataset with metrics to evaluate the robustness of OpenIE models in this paper. However, there are still several limitations that need to be addressed in further study. First, a few studies explore pre-trained language models to perform zero-shot information extraction with advantages. Due to the lack of open-source code, we have not explored the robustness performance of these zero-shot models. Second, we think the robustness problem exists generally in the NLP community; we leave the extensive study of robustness examination for more domains and models to future work.
Ethical Considerations

There are two major considerations for conducting the evaluation of our proposed new benchmark. First, the source sentences are selected to be the same as CaRB, the original dev and test splits of OIE2016, from the open-domain sources of Wall Street Journal text and Wikipedia. All these data files are leveraged for research purposes, and the results will be publicly available. Second, the annotators in this research are paid a salary higher than the market average and are further allowed to choose flexible working hours. For data utilization, we will make all annotation results publicly available under the CC BY-SA 4.0 license (free for research use).

A.1 Annotation Details

We provide the following detailed annotation information. Who: For Task 1 and Task 2, we employed two separate annotation teams consisting of 6 and 9 students, respectively, who are all majoring in CS at universities. We ensured their professionalism through the tutorials and a final examination. Where: As both tasks are easy to read and write for annotators, we distributed the data directly without using a special annotation platform. Quality: We adopted a batched iterative annotation and evaluation process to ensure that the sampling accuracy is above 90%. License: We will release all annotation results under the CC BY-SA 4.0 license (free for research use).

A.1.1 Paraphrase Annotation Process

The goal of paraphrase annotation is to correct the automatically generated sentences from the models based on human intelligence. Overall, we adopt an iterative procedure combining human annotation with expert evaluation to ensure accuracy and efficiency. In each iteration, at least three human workers who are fluent in English reading and writing annotate a batch of samples, and then two domain experts check the annotation results on a random sample of 40% of the batch. The batch annotations are accepted only when the validation accuracy is greater than 90%. For the annotation of each paraphrase, the annotators are asked to correct syntactic or phrasal mistakes in the sentence, or semantic differences against the original sentence.

A.1.2 N-tuples Annotation Process

We leverage the same iterative annotation strategy as for the paraphrase annotation for OpenIE n-tuple annotation. In particular, we design an annotation flowchart for the workers following a similar process to CaRB, dividing the task into three steps: (1) identifying the relation, (2) identifying the arguments for that relation, and (3) optionally identifying the location and time attributes for the tuple. The same validation manner as for the paraphrase annotation is adopted to accept each annotation batch.

A.2.1 The Revised HWS Distance Algorithm

Algorithm 1 (HWS Distance). Input: constituency parses T1, T2 of sentences s1, s2; pruning height h; discount factor α. Output: syntactic distance d between s1 and s2. The algorithm first prunes both trees at height h to obtain T1^h, T2^h and takes their level-order traversal sequences q1, q2; it then initializes the total length and count l = 0, m = 0 and compares the sequences with nested loops over i = 2, ..., q1.len and j = 2, ..., q2.len.

The revised Hierarchically Weighted Syntactic Distance algorithm (HWS distance) is shown in Algorithm 1. We fix the over-counting problem for repeated consecutive spans while preserving efficiency, with the same time complexity as in the original work (Qi et al., 2023).
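Since the listing of Algorithm 1 is abbreviated above, the following is only a hedged re-imagining of a hierarchically weighted comparison in the same spirit: it compares the pruned trees level by level and discounts deeper levels by alpha, so that high-level skeletons dominate. It is not the authors' exact HWS algorithm and does not reproduce their over-counting fix.

```python
def levels(tree, h):
    """Label sequences of a nested-tuple tree pruned at height h, per level."""
    out, frontier = [], [tree]
    for _ in range(h):
        labels = [t[0] for t in frontier if not isinstance(t, str)]
        if not labels:
            break
        out.append(labels)
        frontier = [c for t in frontier if not isinstance(t, str) for c in t[1:]]
    return out

def seq_distance(a, b):
    """Normalized positional mismatch between two label sequences."""
    n = max(len(a), len(b), 1)
    same = sum(x == y for x, y in zip(a, b))
    return 1.0 - same / n

def hws_like_distance(t1, t2, h=3, alpha=0.5):
    """Weighted average of per-level distances; deeper levels are discounted."""
    l1, l2 = levels(t1, h), levels(t2, h)
    num = den = 0.0
    for d in range(max(len(l1), len(l2))):
        w = alpha ** d  # small alpha => the high-level skeleton dominates
        a = l1[d] if d < len(l1) else []
        b = l2[d] if d < len(l2) else []
        num += w * seq_distance(a, b)
        den += w
    return num / den if den else 0.0
```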
A.2.2 Diversified Filtering Process

We perform diversified filtering based on BLEU scores between all pairs of sentences in each set of generated paraphrases to maintain the most diverse paraphrases. For example, consider the following generated paraphrases:

ori: In 1840, he was appointed to command his regiment, a post he held for nearly fourteen years.
p1: 1840, the regiment's commander, which he held for nearly 14 years.
p2: In 1840 he took command of the regiment and held it for nearly 14 years.
p3: When he was 14 years old, he became a member of the regiment.
p4: 1840, the command of the regiment, which he held for nearly 14 years.
p5: The regiment, then, in 1840, the rank of captain, which he held for nearly 14 years.

As shown in Figure 7, we first calculate the BLEU scores between all pairs of paraphrases (shown on the edges). We then find the two sentences p1 and p4 with the maximum BLEU score. Because the lengths of these two sentences are larger than 2/3 of the original sentence, we then calculate the summation of scores from each of them to all other sentences, which results in sum(p1, p≠1) = 136.9 and sum(p4, p≠4) = 158.7, and we remove the sentence p4, which has the larger summation score. We repeat the strategy above to remove the sentence p1 and obtain 3 expressively diverse paraphrases.

Figure 7: After performing the diversified filtering, the 3 paraphrases p2, p3, and p5 are maintained.
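A minimal sketch of this filtering heuristic is shown below, assuming whitespace tokenization and NLTK's sentence-level BLEU with smoothing; it mirrors the described strategy rather than reproducing the authors' script.

```python
from itertools import combinations
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def pairwise_bleu(sents):
    """BLEU between all unordered sentence pairs (whitespace tokens)."""
    smooth = SmoothingFunction().method1
    toks = {s: s.split() for s in sents}
    return {(a, b): sentence_bleu([toks[a]], toks[b], smoothing_function=smooth)
            for a, b in combinations(sents, 2)}

def diversify(original, paraphrases, keep_n=3):
    """Iteratively drop the most redundant paraphrases until keep_n remain."""
    pool = list(paraphrases)
    limit = 2 / 3 * len(original.split())
    while len(pool) > keep_n:
        scores = pairwise_bleu(pool)
        a, b = max(scores, key=scores.get)      # the most similar pair
        short = [s for s in (a, b) if len(s.split()) < limit]
        if short:
            pool.remove(short[0])               # drop a too-short member
        else:                                   # drop the one closest to everyone else
            total = {s: sum(v for pair, v in scores.items() if s in pair)
                     for s in (a, b)}
            pool.remove(max(total, key=total.get))
    return pool
```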
Figure 7: After the diversified filtering, the 3 paraphrases p2, p3, p5 are maintained.

A.3.1 Prompt Design

We create a prompt template for the OpenIE task to query ChatGPT. An example of a 1-shot prompt is shown in Figure 8, where the highlighted demonstration and the variable <sentence> can be replaced with specified examples.
A.3.2 Performance with Syntactic Correlations

In this section, we further investigate the correlation between model performance and the syntactic distance between demonstrations and questions for the ChatGPT model. We first randomly sample a set of 100 pairs of cliques {(C1^i, C2^i) | i = 1, ..., 100} in ROBUST. Then for each pair, we select all examples in clique C1^i as demonstrations and all sentences in C2^i as questions to calculate the F1-robust score. For the syntactic correlations, we first calculate the averaged value a_i between question i and all sentences in C1^i, and then average over (a1, a2, ...) as the final correlation for the current clique pair. We divide the scores into several intervals and compute the average value within each interval to avoid abnormal values. The results, using both the HWS distance and the Tree Kernel similarity as the syntactic correlation, are shown in Figure 9. In the left figure, we can see that the F1-robust score of the model gradually increases as the average syntactic similarity of the two cliques increases. The same observation holds in the right figure for the averaged syntactic distance between two cliques. These results suggest that ChatGPT is sensitive to the syntactic distribution between questions and demonstrations, and that giving demonstrations with a similar syntactic distribution enhances the effectiveness of ChatGPT.

A.4 Error Analysis for OIE Systems

We conduct an error analysis for three typical OpenIE models, OpenIE4, SpanOIE, and OpenIE6, on a robustness clique. The model predictions with the CaRB and ROBUST scores are shown in Table 4. First, we can see that the sentences in the clique exhibit significant syntactic and expressive divergence, which implies that the constructed data source satisfies our expectation. Second, we find that all sentences in the clique have more than one gold extraction, while the OpenIE4 and OpenIE6 models predict the extractions insufficiently, which causes a lower recall. On the other hand, the SpanOIE model outputs predictions by enumerating all possible spans, which builds sufficient outputs regardless of syntactic features. This architecture gives SpanOIE a more consistent performance.
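The clique-pair evaluation above can be summarized in a short Python sketch. The scorer and distance function are passed in as parameters, since both (the benchmark's F1-robust scorer and the HWS distance) are stand-ins here rather than the paper's actual code.

# Correlate F1 on sampled clique pairs with the averaged syntactic
# distance between them, then bin into equal-width intervals.
import random
from statistics import mean

def clique_pair_points(cliques, score_fn, dist_fn, n_pairs=100, seed=0):
    """score_fn(demos, questions) -> F1; dist_fn(s1, s2) -> distance."""
    rng = random.Random(seed)
    points = []
    for _ in range(n_pairs):
        c1, c2 = rng.sample(cliques, 2)
        f1 = score_fn(c1, c2)
        # a_i: average distance from question i to all demonstrations,
        # then averaged over all questions in c2.
        dist = mean(mean(dist_fn(q, d) for d in c1) for q in c2)
        points.append((dist, f1))
    return points

def bin_average(points, n_bins=10):
    """Average F1 inside equal-width distance intervals."""
    dists = [d for d, _ in points]
    lo, hi = min(dists), max(dists)
    width = (hi - lo) / n_bins or 1.0
    bins = {}
    for d, f1 in points:
        k = min(int((d - lo) / width), n_bins - 1)
        bins.setdefault(k, []).append(f1)
    return {round(lo + (k + 0.5) * width, 3): round(mean(v), 3)
            for k, v in sorted(bins.items())}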
Figure 2: An example of a robustness clique consisting of three sentences from ROBUST, where the sentences exhibit syntactic and expressive variants while preserving the same structured knowledge. In contrast to conventional metrics, ROBUST measures the robustness score over all nodes of a clique.

Figure 3: The average syntactic distance/similarity in each clique, calculated using the HWS distance and Convolutional Tree Kernels, where the x-axis refers to the hierarchical discounting weights of the two algorithms.

Figure 4: The average syntactic distance/similarity over all cliques with the hierarchical discounting weights. Cliques containing only one point appear as a line with a value of 0 or 1.

Figure 5: (a) The distribution of the number of cliques against the variance of F1 scores in each clique. (b) The variance of F1 scores against the HWS distance. (c) The variance of F1 scores against the Convolutional Tree Kernel similarity. Both correlation values are divided into several intervals to avoid abnormal values.

The results in Figure 5 indicate a general trend whereby the variance of model performance decreases with increasing syntactic divergence. Combined with the main experiment results, which show the low performance of models on the overall benchmark, the observed degradation implies a consistent trend of poorer model performance in more open scenarios.
Figure 8: The 1-shot prompt to ChatGPT for the OpenIE task, where <sentence> corresponds to the query sentence.

Figure 9: The F1-robust scores of the OpenIE6 model with syntactic correlations between clique pairs.

The three sentences of the example clique (Figure 2): "Watson has served as Minority Leader since elected by his caucus in November 1998." "Since his election by his caucus in November 1998, Watson has been the Minority Leader." "Watson, who was elected by his caucus in November 1998, has served as Minority Leader since then."

Table 2: The performance of typical OpenIE systems on the CaRB and ROBUST benchmarks. The row ∆ represents the difference between the CaRB score and the ROBUST score (↓ means degradation from CaRB). Bold numbers refer to the highest score per metric or the highest difference per row (i.e., the highest ∆ for P, R, and F1).

prompt = "Open information extraction requires the extraction of all relations in the sentence, i.e., predicates, the subjects and objects corresponding to these relations, and the possible time and place elements. For example, in the sentence: Watson, who was elected by his caucus in November 1998, has served as Minority Leader since then. From this sentence, the following tuples can be extracted: (was elected by, Watson, his caucus in November 1998); (has served as, Watson, Minority Leader, since then). In these tuples, we always put the predicate first, the second is the subject corresponding to the predicate, the third is the object corresponding to the predicate (if there is none, it is not labeled), and the last two are time and place in that order, which can be omitted if there is none. Please follow the example above and extract all the relational tuples in the following sentence: <sentence> Please show the results in one line strictly in the form of the results above"
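A minimal Python sketch of how such a 1-shot prompt can be assembled from its parts; the instruction strings echo the prompt above, while build_prompt and the query sentence are our own illustrative choices, not from the paper's code.

# Assemble the 1-shot OpenIE prompt of Figure 8.
HEADER = ("Open information extraction requires the extraction of all "
          "relations in the sentence, i.e., predicates, the subjects and "
          "objects corresponding to these relations, and the possible "
          "time and place elements.")
FOOTER = ("Please follow the example above and extract all the relational "
          "tuples in the following sentence: {sentence} "
          "Please show the results in one line strictly in the form of "
          "the results above")

def build_prompt(sentence, demo_sentence, demo_tuples, tuple_format_note):
    demo = (f"For example, in the sentence: {demo_sentence} "
            f"From this sentence, the following tuples can be extracted: "
            f"{'; '.join(demo_tuples)}. {tuple_format_note}")
    return " ".join([HEADER, demo, FOOTER.format(sentence=sentence)])

prompt = build_prompt(
    sentence="He was born in 1942 in Boston.",  # illustrative query
    demo_sentence=("Watson, who was elected by his caucus in November "
                   "1998, has served as Minority Leader since then."),
    demo_tuples=["(was elected by, Watson, his caucus in November 1998)",
                 "(has served as, Watson, Minority Leader, since then)"],
    tuple_format_note=("In these tuples, we always put the predicate "
                       "first, then the subject, then the object, and "
                       "optionally time and place."))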
2023-05-24T01:16:32.833Z
2023-05-23T00:00:00.000
{ "year": 2023, "sha1": "c24c67fc0b7547be5306801c01ee6f9e7bab7ebc", "oa_license": "CCBY", "oa_url": "https://aclanthology.org/2023.emnlp-main.360.pdf", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "22effda87bbb911651962db86325ded233e6e069", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
65305165
pes2o/s2orc
v3-fos-license
Interference effect of impulse noise on noise immunity of communication and control channels in underground structures

The interference effect of impulse noise on the noise immunity of underground communication systems is investigated. A comparative analysis of the noise immunity of an optimal demodulator of discrete FM signals and of the system "wide band – limiter – narrow band", in the presence of impulse noise, has been carried out.

Introduction

The choice of the signal-forming method for solving communication and navigation problems in an underground mine must take several conflicting factors into account. The most important among them is the need to ensure the required operating range of the system and the required data transfer rate at a predetermined noise immunity. There are well-known notification systems that transfer signals through the rock massif and remain operable under alarm conditions. Such systems are used for data transfer with discrete frequency-modulated (FM) signals. The narrow passband of the underground medium and the significant variation of signal attenuation within the operating frequency band permit only the transmission of coded messages about an emergency mode. Therefore, one of the remaining basic problems is the choice of the mode of the signal being formed and of its processing. A comparative assessment of the efficiency of the signal-forming methods employed in underground communication systems is presented in [1], where, considering the influence of various sources of industrial interference, the choice again falls on discrete FM signals. When constructing a data transfer system through the rock massif, multiposition signals can also be used, which considerably increase communication efficiency [2,3]. However, when signals must be transferred over long distances exceeding 300 m, where the noise level increases, it is recommended to use the simpler discrete FM signals. In underground communication systems, discrete FM signals are often used for remote control of power objects. In this case, online monitoring is expected for a large number of process parameters read from various transducers distant from the data acquisition and processing device. Where wire communication channels cannot be used, wireless data transfer systems are recommended. An essential feature of the data transfer lines used is that they are multichannel; the number of channels must be a binary number corresponding to the number of transducers in the system. High efficiency of the source-data acquisition system can be achieved by using multiposition FM signals. It is well known that impulse noise considerably affects the noise immunity of FM data communication systems; research results on this problem are reviewed in [4]. The most effective method of impulse noise suppression when using FM is considered to be the application of a limiter. At present, however, paths comprising limiters have been examined only from the standpoint of their output energy responses.
For example, it is known [5,6] that the limiter suppresses the weaker signal's amplitude: the ratio of the greater signal to the lesser signal at the output is almost twice the corresponding ratio at the input, whereas if this ratio is of the order of unity, it is preserved at the output. The purpose of this study is a comparative analysis of the noise immunity of an optimal demodulator of discrete FM signals and of the system "wide band – limiter – narrow band" in the presence of impulse noise.

Results and Discussions

Let us consider the reception of binary FM signals that are orthogonal in the strengthened sense [5]; in this case, let us suppose that the impulse noise appears once per signal element, at a random time. Under these conditions, the signal at the receiver input can be represented as in (1): the sum of a useful signal of given amplitude and random initial phase, impulse noise characterized by the spectral density of its amplitudes, and white noise of given average power, with each transmitted symbol carried at the frequency of the i-th position.

Let us assess the noise immunity of the optimal demodulator as applied to the reception of binary FM signals under the combined influence of fluctuation noise and impulse noise. Let us approximate the impulse response of the broadband path by expression (2) on a finite time section, the impulse response being zero outside this section; the central frequency in (2) is the average frequency (usually the receiver's intermediate frequency). The received signal at the demodulator input can then be determined using the Duhamel integral.

If a harmonic signal with a rectangular envelope (the useful signal in (1)) is applied to the input of a bandpass filter with response (2), the envelope of the filter output consists of three parts [5]. The first part describes the build-up of the signal, during which its amplitude and phase vary in time. The second part represents the steady-state process of form (3) that carries the information about the i-th position of the FM signal. The third part, beginning at T, describes the decay of the useful signal, during which its amplitude falls to zero and its phase changes. It is therefore useful to insert protective gaps between the elements of the signal to increase the immunity to inter-element interference, although the transmission speed is then reduced relative to the initial signal transmission rate.

Taking the assumptions made into account, the received signal at the FM demodulator input takes the form (4): the steady-state component (3) gated by a switching function, plus an impulse of given amplitude arriving at a moment uniformly distributed between zero and T, plus the quadrature components of the fluctuation noise.

Let us use the well-known optimal decision scheme for the reception of discrete FM signals as the demodulator [5]. The i-th position of the signal is considered received if condition (5) holds, i.e., if the envelope at the output of the filter matched with the i-th position exceeds the envelope at the output of the filter matched with the j-th position by the time the information part of the element T has been received, with a signal of form (4) applied to their inputs. The reception scheme based on algorithm (5) is optimal when the additive interference is white noise alone. The synthesis of optimal decision rules for reception in the presence of impulse noise alone leads to the need for preliminary nonlinear processing of the received signals before a decision is made.
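For concreteness, a standard textbook form of the binary noncoherent FM model and decision rule consistent with this description can be written in LaTeX as follows; the notation (A, φ, ω_i, V_i) and the exact decomposition are our assumptions, not equations recovered from the paper.

% Assumed standard model of binary FM (FSK) reception under white
% and impulse noise; notation is illustrative, not the paper's.
\begin{align}
  x(t) &= A\cos(\omega_i t + \varphi) + n_{\mathrm{imp}}(t) + n(t),
  \qquad i \in \{1, 2\}, \\
  % Noncoherent decision: compare the matched-filter envelopes at
  % the end of the information part of the element, t = T.
  \hat{\imath} &= \arg\max_i V_i(T),
  \qquad V_i(T) = \Bigl|\int_0^{T} x(\tau)\, e^{-\mathrm{j}\omega_i \tau}\, d\tau\Bigr|.
\end{align}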
Usually a limiter is used in this case. A method for synthesizing optimal demodulation schemes under the simultaneous impact of impulse and fluctuation noise is currently unknown. In this connection, let us confine ourselves to a suboptimal scheme that is a superposition of the two optimal schemes. It consists of a broadband path (a filter with impulse response (2)), an amplitude limiter, and a narrowband path (in our case, a decision scheme implementing rule (5)). To evaluate the effectiveness of this scheme, its noise immunity must be compared with that of the optimal FM demodulator in the presence of impulse noise.

Next, let us find the error probability of the optimal demodulator with a signal of form (4) at its input, in the presence of white noise and impulse noise. In this case, each quadrature component in algorithm (5), calculated for the transmission of the i-th position, consists of three additive components, which may be identified as the useful signal, the impulse noise, and the fluctuation noise. The variance of the interference component for each reception path can be calculated accordingly, and the interference components can then be represented through independent normally distributed random values with zero means and unit variances.

As the second variant, let us consider the noise immunity of the scheme with the limiter embedded between the broadband path and the decision-making device. Such a scheme implements the suboptimal algorithm of discrete FM signal demodulation. Let us suppose that the impulse noise amplitude significantly exceeds the signal amplitude, as expressed by condition (6). Taking into account the properties of the limiter [4], let us suppose that, with condition (6) met, the useful signal is fully suppressed by the impulse noise within the impulse interval. The error probability calculated in this situation is an upper bound on the error probability of FM reception under the joint effect of impulse and fluctuation noise. Given this assumption, the received signal at the FM demodulator input takes a correspondingly modified form.

When calculating the noise immunity, let us consider a limiter of the impulse noise with a threshold equal to the useful signal amplitude. In this case, the useful signal and the fluctuation noise are not distorted, whereas the impulse noise amplitude is reduced to that of the useful signal, so the limited impulse component acts only within the impulse interval and equals zero for all other t. The results of the error probability calculations for the processing circuit without the limiter and with the limiter are presented in Table 1 for a discrete FM signal with fixed parameters.

Let us characterize the losses by comparing the error probability in the presence of impulse noise with that of the optimal demodulator under the action of white noise alone. It is well known that the error probability of such a demodulator for noncoherent reception of orthogonal signals is P = (1/2) exp(−h²/2), where h² is the signal-to-noise ratio. Similarly, let us introduce the ratio of the impulse interference energy to the noise intensity and assume that the corresponding condition on it is fulfilled, so that the required probability is achieved at the appropriate value of h². The error probabilities of the demodulator under study were calculated for several values of these ratios, and the results are shown in Tables 1 and 2. One may note good agreement between the modeling results and the theoretical estimates.
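As a quick numerical illustration of the benchmark formula above (our own example, not the paper's tabulated data), the following Python snippet evaluates P = (1/2) exp(−h²/2) for several signal-to-noise ratios:

# Benchmark error probability of a noncoherent orthogonal FM (FSK)
# demodulator in white noise alone: P = 0.5 * exp(-h^2 / 2).
# The h^2 values are illustrative, not taken from the paper.
import math

for h2 in (2, 4, 6, 8, 10):
    p = 0.5 * math.exp(-h2 / 2)
    print(f"h^2 = {h2:2d}  ->  P = {p:.2e}")

Against these values, the twofold increase in error probability noted in the conclusion for the limiter scheme is a modest loss compared with the degradation possible without limiting.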
For the scheme without the limiter, the noise immunity slowly deteriorates as the ratio of the impulse noise energy to the noise intensity rises, and there is a threshold value of this ratio beyond which communication becomes impossible.

Conclusion

The use of the limiter can significantly improve the noise immunity of reception in the presence of impulse noise. If condition (8) is satisfied, the error probability increases only twofold. Thus, circuits with a limiter can be successfully used for transmitting discrete FM signals under conditions of strong pulsed interference.
2019-02-17T14:16:56.094Z
2017-10-01T00:00:00.000
{ "year": 2017, "sha1": "0cfb3ce1a02cf27c4177540e60304dee48f64d60", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/87/8/082054", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "75cecc2e694e19574490d163fc47e9c7dbc9d312", "s2fieldsofstudy": [ "Engineering", "Environmental Science" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }